
Intermediate results of model inference change before they reach the next layer #4

@stevenycw777

Description


In a model generated by RUHMI for the RA8T2, I have observed that the memory content of a tensor is modified after it is produced but before it is consumed as an input by the following layer.

In the generated .c code, the output of my first convolutional layer is stored in:
model_6_tf_math_add_Add_model_6_tf_compat_v1_nn_conv1d_conv1d_Squeeze_Const_22_model_6_tf_compat_v1_nn_conv1d_conv1d_70034
This is located at memory address 0x2200ca50.

The same variable at the same memory location serves as the input to the second convolutional layer. However, right before the model runs the second convolutional layer, the content in that specific location is changed. This change causes the output of the second convolutional layer to be incorrect compared to the reference TFLite model.

The code snippets are shown below:

int8_t* model_6_tf_math_add_Add_model_..._nn_conv1d_conv1d_70034 = (int8_t *) &main_storage[0]; // 1,1,500,64 == 32000

arm_convolve_wrapper_s8(..., model_6_tf_math_add_Add_model_..._nn_conv1d_conv1d_70034); // receives the output of the first conv layer

// the first 16 values in model_6_tf_math_add_Add_model_..._nn_conv1d_conv1d_70034 are
// -3 24 -18 -32 -43 -8 0 27 -51 -32 -43 -2 14 -33 -13 22

...

int8_t* other_variable = model_6_tf_math_add_Add_model_..._nn_conv1d_conv1d_70034;

...

// the first 16 values in model_6_tf_math_add_Add_model_..._nn_conv1d_conv1d_70034 become
// 30 10 11 9 9 9 10 10 9 11 11 12 11 9 12 11

arm_convolve_wrapper_s8(&ctx, &conv_params, &quant_params, &input_dims,
  model_6_tf_math_add_Add_model_..._nn_conv1d_conv1d_70034, ...); // input to the second conv layer

I've noticed that the generated code declares a shared buffer for the intermediate results, and multiple variables refer to the same memory location:

// First Layer Output (Size: 32000 bytes)
int8_t* model_6_tf_math_add_Add_..._70034 = (int8_t *) &main_storage[0]; 

// Second Layer Output (Size: 16000 bytes)
int8_t* model_6_tf_nn_relu_Relu_..._70062 = (int8_t *) &main_storage[0];

Is this the root cause of the problem I encountered?

My environment is as follows:

  • Target Hardware: Renesas RA8T2 (MCB-RA8T2)

  • Compiler/IDE: e2 studio with LLVM

  • Acceleration: CMSIS-NN with Helium (MVE)

  • RUHMI Version: mera-2.5.0+pkg.3577-cp310-cp310-manylinux_2_27_x86_64.whl

Thank you for the assistance.
