
mtmd : Fix 32-bit narrowing issue in export-lora and mtmd clip #14503


Merged: 5 commits merged into ggml-org:master on Jul 25, 2025

Conversation

kiwi142857 (Contributor)

Summary

Fixes narrowing conversion errors when building on 32-bit platforms due to implicit conversion from long long to size_t in initializer lists.

Under C++11, a narrowing conversion of a non-constant expression inside a braced initializer list is ill-formed and produces a compile error. Both export-lora.cpp and clip.cpp contain an initializer of the form:

gguf_get_n_tensors(...) * ggml_tensor_overhead(); // gguf_get_n_tensors returns a 64-bit integer (long long)

On 32-bit platforms, where size_t is a 32-bit unsigned int, the resulting long long cannot be implicitly converted to size_t inside the initializer list.
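
For illustration, here is a minimal self-contained sketch of the failing pattern. The struct and the two helpers are stand-ins for the ggml API (in the real code the initializer fills ggml_init_params), not the actual project code:

```cpp
#include <cstddef>
#include <cstdint>

// Stand-ins for the real calls: gguf_get_n_tensors() returns a 64-bit count,
// ggml_tensor_overhead() returns size_t.
int64_t get_n_tensors()   { return 8; }
size_t  tensor_overhead() { return 256; }

struct init_params {      // stand-in for ggml_init_params
    size_t mem_size;
    void * mem_buffer;
    bool   no_alloc;
};

int main() {
    // On a 32-bit target (size_t == unsigned int) the product int64_t * size_t
    // has type long long, and the braced initializer narrows it to size_t,
    // which C++11 rejects for a non-constant expression. On 64-bit, size_t
    // wins the arithmetic conversion and there is no narrowing.
    init_params p = { get_n_tensors() * tensor_overhead(), nullptr, true };
    (void)p;
    return 0;
}
```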

Fix

Added a static_cast<size_t> so the expression no longer narrows, keeping the build correct and the types consistent across 32-bit and 64-bit platforms.
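
Continuing the sketch above (stand-in names, not the actual diff), the cast makes the arithmetic happen in size_t so the initializer no longer narrows; whether the cast wraps the tensor count or the whole product is a detail best checked against the PR diff:

```cpp
// With the stand-ins from the previous sketch: casting before the multiplication
// gives the whole expression type size_t, so the braced initializer no longer
// narrows on 32-bit targets.
init_params make_params() {
    return {
        static_cast<size_t>(get_n_tensors()) * tensor_overhead(),
        /* mem_buffer = */ nullptr,
        /* no_alloc   = */ true,
    };
}
```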

Files affected

• tools/export-lora/export-lora.cpp
• tools/mtmd/clip.cpp

Notes

• This change is non-functional: it only affects build correctness, not runtime behavior.
• Confirmed to compile cleanly on both 32-bit and 64-bit targets with clang.

@ngxson ngxson changed the title [fix] Fix 32-bit narrowing issue in export-lora and mtmd clip mtmd : Fix 32-bit narrowing issue in export-lora and mtmd clip Jul 3, 2025
@CISC CISC merged commit 749e0d2 into ggml-org:master Jul 25, 2025
85 of 90 checks passed
taronaeo pushed a commit to taronaeo/llama.cpp-s390x that referenced this pull request Jul 25, 2025
…org#14503)

* [fix] Fix 32-bit narrowing issue in export-lora and mtmd clip

* Update export-lora.cpp

* Update clip.cpp

* Update export-lora.cpp

* format: use space to replace tab
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 25, 2025
* origin/master:
docs : update HOWTO‑add‑model.md for ModelBase and new model classes (ggml-org#14874)
ggml : remove invalid portPos specifiers from dot files (ggml-org#14838)
context : restore preemptive sched reset when LLAMA_SET_ROWS=0 (ggml-org#14870)
mtmd : fix 32-bit narrowing issue in export-lora and mtmd clip (ggml-org#14503)
rpc : check for null buffers in get/set/copy tensor endpoints (ggml-org#14868)
sched : fix multiple evaluations of the same graph with pipeline parallelism (ggml-org#14855)
musa: upgrade musa sdk to rc4.2.0 (ggml-org#14498)
sync : ggml
cmake : fix usage issues (ggml/1257)
ggml-cpu : remove stdlib include from repack.cpp (ggml/1276)
context : perform output reorder lazily upon access after sync (ggml-org#14853)
chat : fix kimi-k2 chat template (ggml-org#14852)
sycl: fixed semantics of block offset calculation (ggml-org#14814)
llama : fix MiniCPM inference after Granite Four changes (ggml-org#14850)
docs: add libcurl-dev install hint for Linux distros (ggml-org#14801)
metal : fix fusion across different encoders (ggml-org#14849)
sycl: fix undefined variable in work group size check (ggml-org#14843)
convert : text-only support for GLM-4.1V-9B-Thinking (ggml-org#14823)
CUDA: fix overflow in FA, tune performance (ggml-org#14840)
CUDA: fix compilation with GGML_CUDA_F16 (ggml-org#14837)