Name and Version
.\llama-cli.exe --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
Device 1: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\unat\llm\llamacpp\cuda12\ggml-cuda.dll
load_backend: loaded RPC backend from C:\Users\unat\llm\llamacpp\cuda12\ggml-rpc.dll
load_backend: loaded CPU backend from C:\Users\unat\llm\llamacpp\cuda12\ggml-cpu-icelake.dll
version: 5920 (d9b6910)
built with clang version 19.1.5 for x86_64-pc-windows-msvc
Operating systems
Windows
GGML backends
CUDA
Hardware
AMD Ryzen 9 7900X + 64 GB DDR5-6200
Models
Qwen3-32b (Unsloth's Qwen3-32B-128K-UD-Q6_K_XL)
Problem description & steps to reproduce
After updating to any version newer than b5920, generation speed drops from 36 t/s to 30 t/s, and power draw on the RTX 5090 drops from 226 W to 200 W.
First Bad Commit
I think the regression was introduced in b5922: https://github.com/ggml-org/llama.cpp/releases/tag/b5922
Relevant log output
**b5920**:
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 65536, n_keep = 0, n_prompt_tokens = 42
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 42, n_tokens = 42, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 42, n_tokens = 42
slot release: id 0 | task 0 | stop processing: n_past = 2567, truncated = 0
slot print_timing: id 0 | task 0 |
prompt eval time = 116.05 ms / 42 tokens ( 2.76 ms per token, 361.92 tokens per second)
eval time = 69321.83 ms / 2526 tokens ( 27.44 ms per token, 36.44 tokens per second)
total time = 69437.88 ms / 2568 tokens
srv update_slots: all slots are idle
**b5989**:
slot launch_slot_: id 0 | task 2107 | processing task
slot update_slots: id 0 | task 2107 | new prompt, n_ctx_slot = 65536, n_keep = 0, n_prompt_tokens = 42
slot update_slots: id 0 | task 2107 | need to evaluate at least 1 token for each active slot, n_past = 42, n_prompt_tokens = 42
slot update_slots: id 0 | task 2107 | kv cache rm [41, end)
slot update_slots: id 0 | task 2107 | prompt processing progress, n_past = 42, n_tokens = 1, progress = 0.023810
slot update_slots: id 0 | task 2107 | prompt done, n_past = 42, n_tokens = 1
slot release: id 0 | task 2107 | stop processing: n_past = 2716, truncated = 0
slot print_timing: id 0 | task 2107 |
prompt eval time = 402.88 ms / 1 tokens ( 402.88 ms per token, 2.48 tokens per second)
eval time = 86797.09 ms / 2675 tokens ( 32.45 ms per token, 30.82 tokens per second)
total time = 87199.96 ms / 2676 tokens
srv update_slots: all slots are idle
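As a sanity check on the logs above, the throughput figures can be recomputed directly from the reported `eval time` lines. This is just arithmetic over the numbers copied verbatim from the two runs, not part of the llama.cpp output:

```python
# Recompute tokens/second from the "eval time" lines in the two logs above.
# All input figures are copied verbatim from the b5920 and b5989 runs.

def tokens_per_second(time_ms: float, n_tokens: int) -> float:
    """Convert elapsed time in milliseconds and a token count to tokens/s."""
    return n_tokens / (time_ms / 1000.0)

# b5920: eval time = 69321.83 ms / 2526 tokens
tps_old = tokens_per_second(69321.83, 2526)
# b5989: eval time = 86797.09 ms / 2675 tokens
tps_new = tokens_per_second(86797.09, 2675)

print(f"b5920: {tps_old:.2f} t/s")  # matches the logged 36.44 t/s
print(f"b5989: {tps_new:.2f} t/s")  # matches the logged 30.82 t/s
print(f"slowdown: {(1 - tps_new / tps_old) * 100:.1f}%")
```

The recomputed rates match the logged values, and the regression works out to roughly a 15% drop in generation speed between the two builds.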