@VecherVhatuX commented Sep 5, 2025

In erase_tensor_id, the reallocation size is computed from graph->tensor_num - 1, where tensor_num is a uint16_t. When tensor_num is 0, the subtraction evaluates to -1 after integer promotion, and the size computation converts that to a huge size_t value, so sys_realloc is asked for an unexpectedly enormous allocation; this was flagged as a potential integer overflow. It is a classic CWE-190: Integer Overflow or Wraparound.
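For illustration, here is a minimal standalone sketch of the arithmetic (not project code; uint16_t and sizeof(void*) stand in for graph->tensor_num and sizeof(ir_tensor_t*)):

#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t tensor_num = 0;
    // Integer promotion makes tensor_num - 1 evaluate to the int value -1;
    // the usual arithmetic conversions then turn that -1 into a size_t near
    // SIZE_MAX, so the computed allocation size becomes enormous.
    size_t size = sizeof(void*) * (tensor_num - 1);
    printf("%zu\n", size); // prints 18446744073709551608 on 64-bit targets
    return 0;
}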

Changes:

  • Cast (tensor_num - 1) to size_t before multiplying by sizeof(ir_tensor_t*) (see the sketch after this list)
  • Keep the original logic intact; no additional memory is freed
  • Ensure tensor_list shrinks safely, without wraparound
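A sketch of that change (the exact call-site expression is assumed from the description above; the surrounding assignment of the returned pointer is omitted):

// Before (flagged): with tensor_num == 0, the promoted int result -1
// is converted to size_t during the multiplication and wraps to a huge size
sys_realloc(graph->tensor_list, sizeof(ir_tensor_t*) * (graph->tensor_num - 1));

// After (proposed): cast the subtraction to size_t before the multiply
sys_realloc(graph->tensor_list, sizeof(ir_tensor_t*) * (size_t)(graph->tensor_num - 1));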

If the current fix is not preferred, I can also suggest an alternative version with additional checking:

#include <map> // required for the std::map used below

static int erase_tensor_id(ir_graph_t* graph, int16_t id)
{
    // SECURITY FIX: Add boundary validation to prevent integer underflow
    // Previously: (graph->tensor_num - 1) could become -1 when tensor_num=0
    // This would cast to size_t as 18446744073709551615, causing memory issues
    if (!graph || graph->tensor_num <= 0 || id < 0 || id >= graph->tensor_num) {
        return -1; // Return error for invalid parameters
    }

    ir_tensor_t* tensor_del = get_ir_graph_tensor(graph, id);
    std::map<int16_t, int16_t> old_new_id;
    int16_t j = 0;
    
    // Compact the tensor array by removing the target tensor
    for (int i = 0; i < graph->tensor_num; i++)  // Changed: use int instead of size_t to match tensor_num type
    {
        if (i == id) continue; // Skip the tensor to be deleted
        
        ir_tensor_t* tensor = get_ir_graph_tensor(graph, i);
        tensor->index = j;
        graph->tensor_list[j] = graph->tensor_list[i];
        old_new_id[i] = j++;
    }
    
    // Update all node references to use new tensor indices
    for (int i = 0; i < graph->node_num; i++)  // Changed: use int for consistency
    {
        ir_node_t* node = get_ir_graph_node(graph, i);
        for (int k = 0; k < node->input_num; k++)  // Fixed: renamed from 'j' to 'k' to avoid variable shadowing
        {
            node->input_tensors[k] = old_new_id[node->input_tensors[k]];
        }
        for (int k = 0; k < node->output_num; k++)  // Fixed: renamed from 'j' to 'k' to avoid variable shadowing
        {
            node->output_tensors[k] = old_new_id[node->output_tensors[k]];
        }
    }

    // SECURITY FIX: Decrement tensor count first (safe after boundary check above)
    // This eliminates the need for intermediate variables that caused type conversion warnings
    graph->tensor_num--;
    
    // Resize the tensor array to free unused memory
    if (graph->tensor_num > 0) {
        // Only reallocate if we still have tensors remaining
        size_t new_size = (size_t)graph->tensor_num * sizeof(ir_tensor_t*);  // Explicit cast to prevent warnings
        ir_tensor_t** new_tensor_list = (ir_tensor_t**)sys_realloc(graph->tensor_list, new_size);
        if (new_tensor_list) {
            graph->tensor_list = new_tensor_list;
        }
        // Note: If realloc fails, we continue with the old pointer (still valid)
    } else {
        // If no tensors remain, free the entire array
        sys_free(graph->tensor_list);
        graph->tensor_list = nullptr;
    }
    
    // Clean up the deleted tensor
    destroy_ir_tensor(graph, tensor_del);
    return 0; // Success
}
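One design note on this version's realloc failure path: because the array only shrinks, keeping the old (larger) allocation when sys_realloc fails is safe, since every surviving entry has already been compacted into the first tensor_num slots and only the unreclaimed tail is wasted. Freeing the list and nulling the pointer when the last tensor is removed also sidesteps a zero-size realloc, whose behavior is implementation-defined (assuming sys_realloc mirrors standard realloc semantics).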
