.\llama-server.exe --host 0.0.0.0 --port 8080 -m MiniMax-M2.5-UD-Q3_K_XL-00001-of-00004.gguf -ngl 99 -fa on --cache-type-k q8_0 --cache-type-v q8_0 --top-p 0.95 -t 1.0 --min_p 0.01 -np 1 --top_k 40 -ub 256 --batch-size 512 --jinja --no-mmap
load_backend: loaded RPC backend from C:\Users\AI Max\Downloads\llama-b8703-bin-win-vulkan-x64\ggml-rpc.dll
load_backend: loaded Vulkan backend from C:\Users\AI Max\Downloads\llama-b8703-bin-win-vulkan-x64\ggml-vulkan.dll
load_backend: loaded CPU backend from C:\Users\AI Max\Downloads\llama-b8703-bin-win-vulkan-x64\ggml-cpu-zen4.dll
build_info: b8703-5c4aae66e
system_info: n_threads = 1 (n_threads_batch = 1) / 32 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
Running without SSL
init: using 31 threads for HTTP server
start: binding port with default address family
main: loading model
srv load_model: loading model 'MiniMax-M2.5-UD-Q3_K_XL-00001-of-00004.gguf'
common_init_result: fitting params to device memory, for bugs during this step try to reproduce them with -fit off, or provide --verbose logs if the bug only occurs with -fit on
llama_params_fit_impl: projected to use 121861 MiB of device memory vs. 108781 MiB of free device memory
llama_params_fit_impl: cannot meet free memory target of 1024 MiB, need to reduce device memory by 14104 MiB
llama_params_fit_impl: context size reduced from 196608 to 87296 -> need 14121 MiB less memory in total
llama_params_fit_impl: entire model can be fit by reducing context
llama_params_fit: successfully fit params to free device memory
llama_params_fit: fitting params to free memory took 0.28 seconds
llama_model_load_from_file_impl: using device Vulkan0 (AMD Radeon(TM) 8060S Graphics) (unknown id) - 108782 MiB free
llama_model_loader: additional 3 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 53 key-value pairs and 809 tensors from MiniMax-M2.5-UD-Q3_K_XL-00001-of-00004.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = minimax-m2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_k i32 = 40
llama_model_loader: - kv 3: general.sampling.top_p f32 = 0.950000
llama_model_loader: - kv 4: general.sampling.temp f32 = 1.000000
llama_model_loader: - kv 5: general.name str = Minimax-M2.5
llama_model_loader: - kv 6: general.basename str = Minimax-M2.5
llama_model_loader: - kv 7: general.quantized_by str = Unsloth
llama_model_loader: - kv 8: general.size_label str = 256x4.9B
llama_model_loader: - kv 9: general.license str = other
llama_model_loader: - kv 10: general.license.name str = modified-mit
llama_model_loader: - kv 11: general.license.link str = https://github.com/MiniMax-AI/MiniMax...
llama_model_loader: - kv 12: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 13: general.base_model.count u32 = 1
llama_model_loader: - kv 14: general.base_model.0.name str = MiniMax M2.5
llama_model_loader: - kv 15: general.base_model.0.organization str = MiniMaxAI
llama_model_loader: - kv 16: general.base_model.0.repo_url str = https://huggingface.co/MiniMaxAI/Mini...
llama_model_loader: - kv 17: general.tags arr[str,2] = ["unsloth", "text-generation"]
llama_model_loader: - kv 18: minimax-m2.block_count u32 = 62
llama_model_loader: - kv 19: minimax-m2.context_length u32 = 196608
llama_model_loader: - kv 20: minimax-m2.embedding_length u32 = 3072
llama_model_loader: - kv 21: minimax-m2.feed_forward_length u32 = 1536
llama_model_loader: - kv 22: minimax-m2.attention.head_count u32 = 48
llama_model_loader: - kv 23: minimax-m2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 24: minimax-m2.rope.freq_base f32 = 5000000.000000
llama_model_loader: - kv 25: minimax-m2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 26: minimax-m2.expert_count u32 = 256
llama_model_loader: - kv 27: minimax-m2.expert_used_count u32 = 8
llama_model_loader: - kv 28: minimax-m2.expert_gating_func u32 = 2
llama_model_loader: - kv 29: minimax-m2.attention.key_length u32 = 128
llama_model_loader: - kv 30: minimax-m2.attention.value_length u32 = 128
llama_model_loader: - kv 31: minimax-m2.expert_feed_forward_length u32 = 1536
llama_model_loader: - kv 32: minimax-m2.rope.dimension_count u32 = 64
llama_model_loader: - kv 33: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 34: tokenizer.ggml.pre str = minimax-m2
llama_model_loader: - kv 35: tokenizer.ggml.tokens arr[str,200064] = ["Ā", "ā", "Ă", "ă", "Ą", "ą", ...
llama_model_loader: - kv 36: tokenizer.ggml.token_type arr[i32,200064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 37: tokenizer.ggml.merges arr[str,199744] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "e r...
llama_model_loader: - kv 38: tokenizer.ggml.bos_token_id u32 = 200034
llama_model_loader: - kv 39: tokenizer.ggml.eos_token_id u32 = 200020
llama_model_loader: - kv 40: tokenizer.ggml.unknown_token_id u32 = 200021
llama_model_loader: - kv 41: tokenizer.ggml.padding_token_id u32 = 200004
llama_model_loader: - kv 42: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 43: tokenizer.chat_template str = {# Unsloth template fixes #}\n{# -----...
llama_model_loader: - kv 44: general.quantization_version u32 = 2
llama_model_loader: - kv 45: general.file_type u32 = 12
llama_model_loader: - kv 46: quantize.imatrix.file str = MiniMax-M2.5-GGUF/imatrix_unsloth.gguf
llama_model_loader: - kv 47: quantize.imatrix.dataset str = unsloth_calibration_MiniMax-M2.5.txt
llama_model_loader: - kv 48: quantize.imatrix.entries_count u32 = 496
llama_model_loader: - kv 49: quantize.imatrix.chunks_count u32 = 81
llama_model_loader: - kv 50: split.no u16 = 0
llama_model_loader: - kv 51: split.tensors.count i32 = 809
llama_model_loader: - kv 52: split.count u16 = 4
llama_model_loader: - type f32: 373 tensors
llama_model_loader: - type q3_K: 173 tensors
llama_model_loader: - type q4_K: 232 tensors
llama_model_loader: - type q5_K: 20 tensors
llama_model_loader: - type q6_K: 11 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q3_K - Medium
print_info: file size = 94.33 GiB (3.54 BPW)
load: 0 unused tokens
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 200004 ('<fim_pad>')
load: - 200005 ('<reponame>')
load: - 200020 ('[e~[')
load: special tokens cache size = 54
load: token to piece cache size = 1.3355 MB
print_info: arch = minimax-m2
print_info: vocab_only = 0
print_info: no_alloc = 0
print_info: n_ctx_train = 196608
print_info: n_embd = 3072
print_info: n_embd_inp = 3072
print_info: n_layer = 62
print_info: n_head = 48
print_info: n_head_kv = 8
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 6
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 1536
print_info: n_expert = 256
print_info: n_expert_used = 8
print_info: n_expert_groups = 0
print_info: n_group_used = 0
print_info: causal attn = 1
print_info: pooling type = -1
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 5000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 196608
print_info: rope_yarn_log_mul = 0.0000
print_info: rope_finetuned = unknown
print_info: model type = 230B.A10B
print_info: model params = 228.69 B
print_info: general.name = Minimax-M2.5
print_info: vocab type = BPE
print_info: n_vocab = 200064
print_info: n_merges = 199744
print_info: BOS token = 200034 ']~!b['
print_info: EOS token = 200020 '[e~['
print_info: UNK token = 200021 ']!d~['
print_info: PAD token = 200004 '<fim_pad>'
print_info: LF token = 10 'Ċ'
print_info: FIM PRE token = 200001 '<fim_prefix>'
print_info: FIM SUF token = 200003 '<fim_suffix>'
print_info: FIM MID token = 200002 '<fim_middle>'
print_info: FIM PAD token = 200004 '<fim_pad>'
print_info: FIM REP token = 200005 '<reponame>'
print_info: EOG token = 200004 '<fim_pad>'
print_info: EOG token = 200005 '<reponame>'
print_info: EOG token = 200020 '[e~['
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false, direct_io = false)
load_tensors: offloading output layer to GPU
load_tensors: offloading 61 repeating layers to GPU
load_tensors: offloaded 63/63 layers to GPU
load_tensors: Vulkan0 model buffer size = 96266.43 MiB
load_tensors: Vulkan_Host model buffer size = 329.70 MiB
....................................................................................................
common_init_result: added <fim_pad> logit bias = -inf
common_init_result: added <reponame> logit bias = -inf
common_init_result: added [e~[ logit bias = -inf
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 87296
llama_context: n_ctx_seq = 87296
llama_context: n_batch = 512
llama_context: n_ubatch = 256
llama_context: causal_attn = 1
llama_context: flash_attn = enabled
llama_context: kv_unified = false
llama_context: freq_base = 5000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_seq (87296) < n_ctx_train (196608) -- the full capacity of the model will not be utilized
llama_context: Vulkan_Host output buffer size = 0.76 MiB
llama_kv_cache: Vulkan0 KV buffer size = 11231.69 MiB
llama_kv_cache: size = 11231.69 MiB ( 87296 cells, 62 layers, 1/1 seqs), K (q8_0): 5615.84 MiB, V (q8_0): 5615.84 MiB
llama_kv_cache: attn_rot_k = 1, n_embd_head_k_all = 128
llama_kv_cache: attn_rot_v = 1, n_embd_head_k_all = 128
sched_reserve: reserving ...
sched_reserve: resolving fused Gated Delta Net support:
sched_reserve: fused Gated Delta Net (autoregressive) enabled
sched_reserve: fused Gated Delta Net (chunked) enabled
sched_reserve: Vulkan0 compute buffer size = 198.38 MiB
sched_reserve: Vulkan_Host compute buffer size = 91.33 MiB
sched_reserve: graph nodes = 4719
sched_reserve: graph splits = 2
sched_reserve: reserve took 113.49 ms, sched copies = 1
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv load_model: initializing slots, n_slots = 1
no implementations specified for speculative decoding
slot load_model: id 0 | task -1 | speculative decoding context not initialized
slot load_model: id 0 | task -1 | new slot, n_ctx = 87296
srv load_model: prompt cache is enabled, size limit: 8192 MiB
srv load_model: use `--cache-ram 0` to disable the prompt cache
srv load_model: for more info see https://github.com/ggml-org/llama.cpp/pull/16391
srv init: init: --clear-idle requires --kv-unified, disabling
init: chat template, example_format: ']~b]system
You are a helpful assistant[e~[
]~b]user
Hello[e~[
]~b]ai
Hi there[e~[
]~b]user
How are you?[e~[
]~b]ai
<think>
'
srv init: init: chat template, thinking = 1
main: model loaded
main: server is listening on http://0.0.0.0:8080
main: starting the main loop...
srv update_slots: all slots are idle
srv log_server_r: done request: OPTIONS /v1/chat/completions 100.109.163.105 200
srv params_from_: Chat format: peg-native
slot get_availabl: id 0 | task -1 | selected slot by LRU, t_last = -1
srv get_availabl: updating prompt cache
srv load: - looking for better prompt, base f_keep = -1.000, sim = 0.000
srv update: - cache state: 0 prompts, 0.000 MiB (limits: 8192.000 MiB, 87296 tokens, 8589934592 est)
srv get_availabl: prompt cache update took 0.14 ms
slot launch_slot_: id 0 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> min-p -> ?xtc -> ?temp-ext -> dist
slot launch_slot_: id 0 | task 0 | processing task, is_child = 0
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 87296, n_keep = 0, task.n_tokens = 2924
slot update_slots: id 0 | task 0 | n_tokens = 0, memory_seq_rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 512, batch.n_tokens = 512, progress = 0.175103
slot update_slots: id 0 | task 0 | n_tokens = 512, memory_seq_rm [512, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 1024, batch.n_tokens = 512, progress = 0.350205
slot update_slots: id 0 | task 0 | n_tokens = 1024, memory_seq_rm [1024, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 1536, batch.n_tokens = 512, progress = 0.525308
slot update_slots: id 0 | task 0 | n_tokens = 1536, memory_seq_rm [1536, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 2048, batch.n_tokens = 512, progress = 0.700410
slot update_slots: id 0 | task 0 | n_tokens = 2048, memory_seq_rm [2048, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 2560, batch.n_tokens = 512, progress = 0.875513
slot update_slots: id 0 | task 0 | n_tokens = 2560, memory_seq_rm [2560, end)
reasoning-budget: activated, budget=2147483647 tokens
slot init_sampler: id 0 | task 0 | init sampler, took 1.05 ms, tokens: text = 2924, total = 2924
slot update_slots: id 0 | task 0 | prompt processing done, n_tokens = 2924, batch.n_tokens = 364
srv log_server_r: done request: POST /v1/chat/completions 100.109.163.105 200
reasoning-budget: deactivated (natural end)
slot print_timing: id 0 | task 0 |
prompt eval time = 23663.78 ms / 2924 tokens ( 8.09 ms per token, 123.56 tokens per second)
eval time = 4016.48 ms / 109 tokens ( 36.85 ms per token, 27.14 tokens per second)
total time = 27680.26 ms / 3033 tokens
slot release: id 0 | task 0 | stop processing: n_tokens = 3032, truncated = 0
srv update_slots: all slots are idle
Name and Version
b8703-5c4aae66e, prebuilt Vulkan x64 binary for Windows
Operating systems
Windows
GGML backends
Vulkan
Hardware
AMD Ryzen™ AI Max+ 395, Radeon 8060s
Models
MiniMax-M2.5-UD-Q3_K_XL.gguf
Problem description
MiniMax M2.5 output quality has noticeably degraded on b8703 compared to earlier builds (pre-April 2026). The opening think token is consistently missing from responses.
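On this setup the symptom shows up on any plain chat completion; a minimal request like the one below (the message content is illustrative) returns a reply that never opens with the template's <think> token:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}"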
Tested and ruled out (the exact runs are sketched below):
- LLAMA_ATTN_ROT_DISABLE=1 → attn_rot goes to 0 in the logs, issue persists
- removing KV cache quantization (f16 instead of q8_0) → attn_rot goes to 0, issue persists
- GDN log lines ("fused Gated Delta Net enabled") appear on both working and broken builds → likely not the cause
Last known working build: llama-b8338-bin-win-vulkan-x64, which predates the attn_rot merge.
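For completeness, the two diagnostic runs were of roughly this shape (a sketch, Windows cmd, all other flags identical to the command at the top of the log; I assume the environment variable from run 1 was still set during run 2, which would explain attn_rot also reading 0 there):

:: run 1: attn_rot disabled via environment variable, server flags unchanged
set LLAMA_ATTN_ROT_DISABLE=1
.\llama-server.exe --host 0.0.0.0 --port 8080 -m MiniMax-M2.5-UD-Q3_K_XL-00001-of-00004.gguf -ngl 99 -fa on --cache-type-k q8_0 --cache-type-v q8_0 --top-p 0.95 -t 1.0 --min_p 0.01 -np 1 --top_k 40 -ub 256 --batch-size 512 --jinja --no-mmap

:: run 2: default f16 KV cache (the two q8_0 --cache-type flags dropped)
.\llama-server.exe --host 0.0.0.0 --port 8080 -m MiniMax-M2.5-UD-Q3_K_XL-00001-of-00004.gguf -ngl 99 -fa on --top-p 0.95 -t 1.0 --min_p 0.01 -np 1 --top_k 40 -ub 256 --batch-size 512 --jinja --no-mmap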
First Bad Commit
No response
Relevant log output
See the full server log at the top of this report.