Name and Version
load_backend: loaded RPC backend from /mnt/10a35c9a-e885-401a-b71a-38f856f9bf0a/ai/llama.cpp_ggerganov/build/bin/libggml-rpc.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce GTX 1050 Ti (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /mnt/10a35c9a-e885-401a-b71a-38f856f9bf0a/ai/llama.cpp_ggerganov/build/bin/libggml-vulkan.so
load_backend: loaded CPU backend from /mnt/10a35c9a-e885-401a-b71a-38f856f9bf0a/ai/llama.cpp_ggerganov/build/bin/libggml-cpu-haswell.so
version: 6106 (5fd160b)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
GGML backends
Vulkan
Hardware
System Details Report
Report details
- Date generated: 2025-08-12 18:47:32
Hardware Information:
- Hardware Model: ASUSTeK COMPUTER INC. TUF B450M-PRO GAMING
- Memory: 64.0 GiB
- Processor: AMD Ryzen™ 7 2700 × 16
- Graphics: NVIDIA GeForce GTX 1050 Ti
- Disk Capacity: 6.5 TB
Software Information:
- Firmware Version: 2006
- OS Name: Ubuntu 25.04
- OS Build: (null)
- OS Type: 64-bit
- GNOME Version: 48
- Windowing System: Wayland
- Kernel Version: Linux 6.14.0-27-generic
Models
Unsloth/gpt-oss-20b: gpt-oss-20b-F16
Problem description & steps to reproduce
Answers come out as GGGGGGGG... or garbled words. It occurs randomly, and rebooting doesn't solve it. The same happens with Ollama, and with every model tested. A reproduction sketch follows.
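Since the corruption is random, there are no exact steps; for completeness, here is a minimal client that exercises the server as launched in the log below. This is a sketch assuming the OpenAI-compatible /v1/chat/completions endpoint on 127.0.0.1:8080 as in the log; the prompt and max_tokens are arbitrary. Repeating the request eventually produces the GGGG.../garbled answers:

```cpp
// Hypothetical repro client (build with: g++ repro.cpp -lcurl).
// Assumes llama-server is listening on 127.0.0.1:8080 as in the log below.
#include <curl/curl.h>
#include <cstdio>

// Print the raw response body so garbled tokens ("GGGG...") are visible.
static size_t on_body(char *data, size_t size, size_t nmemb, void *) {
    fwrite(data, size, nmemb, stdout);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    const char *payload =
        "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],"
        "\"max_tokens\":128}";

    struct curl_slist *hdrs =
        curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://127.0.0.1:8080/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```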
First Bad Commit
No response
Relevant log output
#CTRL+C
srv operator(): operator(): cleaning up before exit...
[New LWP 68015]
[New LWP 68014]
[New LWP 68013]
[New LWP 68012]
[New LWP 68011]
[New LWP 68010]
[New LWP 68009]
[New LWP 67996]
[New LWP 67993]
[New LWP 67991]
[New LWP 67990]
[New LWP 67989]
[New LWP 67973]
[New LWP 67972]
[New LWP 67971]
This GDB supports auto-downloading debuginfo from the following URLs:
<https://debuginfod.ubuntu.com>
Enable debuginfod for this session? (y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
Function(s) ^std::(move|forward|as_const|(__)?addressof) will be skipped when stepping.
Function(s) ^std::(shared|unique)_ptr<.*>::(get|operator) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|(forward_)?list|(unordered_|flat_)?(multi)?(map|set)|span)<.*>::(c?r?(begin|end)|front|back|data|size|empty) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|span)<.*>::operator.] will be skipped when stepping.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
__syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
warning: 56 ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S: No such file or directory
#0 __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
56 in ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S
#1 0x00007f82b669eae3 in __internal_syscall_cancel (a1=a1@entry=140199065545104, a2=<optimized out>, a3=<optimized out>, a4=a4@entry=0, a5=a5@entry=0, a6=a6@entry=4294967295, nr=202) at ./nptl/cancellation.c:49
warning: 49 ./nptl/cancellation.c: No such file or directory
#2 0x00007f82b669f237 in __futex_abstimed_wait_common64 (private=128, futex_word=0x7f82a37fe990, expected=<optimized out>, op=265, abstime=0x0, cancel=true) at ./nptl/futex-internal.c:57
warning: 57 ./nptl/futex-internal.c: No such file or directory
#3 __futex_abstimed_wait_common (futex_word=0x7f82a37fe990, expected=<optimized out>, clockid=0, abstime=0x0, private=128, cancel=true) at ./nptl/futex-internal.c:87
87 in ./nptl/futex-internal.c
#4 __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7f82a37fe990, expected=<optimized out>, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=128) at ./nptl/futex-internal.c:139
139 in ./nptl/futex-internal.c
#5 0x00007f82b66a4614 in __pthread_clockjoin_ex (threadid=140199065544384, thread_return=0x0, clockid=0, abstime=0x0, block=<optimized out>) at ./nptl/pthread_join_common.c:108
warning: 108 ./nptl/pthread_join_common.c: No such file or directory
#6 0x00007f82b6af2393 in std::thread::join() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#7 0x000061d15aac05d9 in main ()
[Inferior 1 (process 67970) detached]
terminate called without an active exception
Aborted (core dumped)
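For context on the final abort line: a minimal sketch, unrelated to llama.cpp's actual shutdown code, showing one common way the C++ runtime emits exactly "terminate called without an active exception" during a shutdown like the one above:

```cpp
// Minimal sketch (not llama.cpp code): if a std::thread is still joinable
// when its destructor runs, the destructor calls std::terminate(), the
// runtime prints "terminate called without an active exception", and the
// process dies with SIGABRT ("Aborted (core dumped)").
#include <chrono>
#include <thread>

int main() {
    std::thread worker([] {
        std::this_thread::sleep_for(std::chrono::seconds(10));
    });
    // main returns while `worker` is still joinable; ~thread() -> terminate().
    return 0;
}
```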
./llama-server -c 16384 --host 127.0.0.1 --port 8080 --jinja -m ../../../llm/gpt/gpt-oss-20b-F16/gpt-oss-20b-F16.gguf
load_backend: loaded RPC backend from /mnt/10a35c9a-e885-401a-b71a-38f856f9bf0a/ai/llama.cpp_ggerganov/build/bin/libggml-rpc.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce GTX 1050 Ti (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /mnt/10a35c9a-e885-401a-b71a-38f856f9bf0a/ai/llama.cpp_ggerganov/build/bin/libggml-vulkan.so
load_backend: loaded CPU backend from /mnt/10a35c9a-e885-401a-b71a-38f856f9bf0a/ai/llama.cpp_ggerganov/build/bin/libggml-cpu-haswell.so
build: 6106 (5fd160bb) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
system info: n_threads = 8, n_threads_batch = 8, total_threads = 16
system_info: n_threads = 8 (n_threads_batch = 8) / 16 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
main: binding port with default address family
main: HTTP server is listening, hostname: 127.0.0.1, port: 8080, http threads: 15
main: loading model
srv load_model: loading model '../../../llm/gpt/gpt-oss-20b-F16/gpt-oss-20b-F16.gguf'
llama_model_load_from_file_impl: using device Vulkan0 (NVIDIA GeForce GTX 1050 Ti) - 4096 MiB free
llama_model_loader: loaded meta data with 37 key-value pairs and 459 tensors from ../../../llm/gpt/gpt-oss-20b-F16/gpt-oss-20b-F16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gpt-oss
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Gpt-Oss-20B
llama_model_loader: - kv 3: general.basename str = Gpt-Oss-20B
llama_model_loader: - kv 4: general.quantized_by str = Unsloth
llama_model_loader: - kv 5: general.size_label str = 20B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 8: general.tags arr[str,2] = ["vllm", "text-generation"]
llama_model_loader: - kv 9: gpt-oss.block_count u32 = 24
llama_model_loader: - kv 10: gpt-oss.context_length u32 = 131072
llama_model_loader: - kv 11: gpt-oss.embedding_length u32 = 2880
llama_model_loader: - kv 12: gpt-oss.feed_forward_length u32 = 2880
llama_model_loader: - kv 13: gpt-oss.attention.head_count u32 = 64
llama_model_loader: - kv 14: gpt-oss.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: gpt-oss.rope.freq_base f32 = 150000.000000
llama_model_loader: - kv 16: gpt-oss.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: gpt-oss.expert_count u32 = 32
llama_model_loader: - kv 18: gpt-oss.expert_used_count u32 = 4
llama_model_loader: - kv 19: gpt-oss.attention.key_length u32 = 64
llama_model_loader: - kv 20: gpt-oss.attention.value_length u32 = 64
llama_model_loader: - kv 21: general.file_type u32 = 1
llama_model_loader: - kv 22: gpt-oss.attention.sliding_window u32 = 128
llama_model_loader: - kv 23: gpt-oss.expert_feed_forward_length u32 = 2880
llama_model_loader: - kv 24: gpt-oss.rope.scaling.type str = yarn
llama_model_loader: - kv 25: gpt-oss.rope.scaling.factor f32 = 32.000000
llama_model_loader: - kv 26: gpt-oss.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 27: general.quantization_version u32 = 2
llama_model_loader: - kv 28: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 29: tokenizer.ggml.pre str = gpt-4o
llama_model_loader: - kv 30: tokenizer.ggml.tokens arr[str,201088] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 31: tokenizer.ggml.token_type arr[i32,201088] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 32: tokenizer.ggml.merges arr[str,446189] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 33: tokenizer.ggml.bos_token_id u32 = 199998
llama_model_loader: - kv 34: tokenizer.ggml.eos_token_id u32 = 200002
llama_model_loader: - kv 35: tokenizer.ggml.padding_token_id u32 = 200017
llama_model_loader: - kv 36: tokenizer.chat_template str = {# Copyright 2025-present Unsloth. Ap...
llama_model_loader: - type f32: 289 tensors
llama_model_loader: - type f16: 98 tensors
llama_model_loader: - type mxfp4: 72 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = F16
print_info: file size = 12.83 GiB (5.27 BPW)
load: printing all EOG tokens:
load: - 199999 ('<|endoftext|>')
load: - 200002 ('<|return|>')
load: - 200007 ('<|end|>')
load: - 200012 ('<|call|>')
load: special_eog_ids contains both '<|return|>' and '<|call|>' tokens, removing '<|end|>' token from EOG list
load: special tokens cache size = 21
load: token to piece cache size = 1.3332 MB
print_info: arch = gpt-oss
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 2880
print_info: n_layer = 24
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 64
print_info: n_swa = 128
print_info: is_swa_any = 1
print_info: n_embd_head_k = 64
print_info: n_embd_head_v = 64
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 2880
print_info: n_expert = 32
print_info: n_expert_used = 4
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = yarn
print_info: freq_base_train = 150000.0
print_info: freq_scale_train = 0.03125
print_info: n_ctx_orig_yarn = 4096
print_info: rope_finetuned = unknown
print_info: model type = ?B
print_info: model params = 20.91 B
print_info: general.name = Gpt-Oss-20B
print_info: n_ff_exp = 2880
print_info: vocab type = BPE
print_info: n_vocab = 201088
print_info: n_merges = 446189
print_info: BOS token = 199998 '<|startoftext|>'
print_info: EOS token = 200002 '<|return|>'
print_info: EOT token = 199999 '<|endoftext|>'
print_info: PAD token = 200017 '<|reserved_200017|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 199999 '<|endoftext|>'
print_info: EOG token = 200002 '<|return|>'
print_info: EOG token = 200012 '<|call|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 0 repeating layers to GPU
load_tensors: offloaded 0/25 layers to GPU
load_tensors: CPU_Mapped model buffer size = 13141.28 MiB
....................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 16384
llama_context: n_ctx_per_seq = 16384
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: kv_unified = false
llama_context: freq_base = 150000.0
llama_context: freq_scale = 0.03125
llama_context: n_ctx_per_seq (16384) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.77 MiB
llama_kv_cache_unified_iswa: creating non-SWA KV cache, size = 16384 cells
llama_kv_cache_unified: CPU KV buffer size = 384.00 MiB
llama_kv_cache_unified: size = 384.00 MiB ( 16384 cells, 12 layers, 1/1 seqs), K (f16): 192.00 MiB, V (f16): 192.00 MiB
llama_kv_cache_unified_iswa: creating SWA KV cache, size = 640 cells
llama_kv_cache_unified: CPU KV buffer size = 15.00 MiB
llama_kv_cache_unified: size = 15.00 MiB ( 640 cells, 12 layers, 1/1 seqs), K (f16): 7.50 MiB, V (f16): 7.50 MiB
llama_context: Vulkan0 compute buffer size = 2120.89 MiB
llama_context: Vulkan_Host compute buffer size = 82.26 MiB
llama_context: graph nodes = 1446
llama_context: graph splits = 556 (with bs=512), 1 (with bs=1)
common_init_from_params: KV cache shifting is not supported for this context, disabling KV cache shifting
common_init_from_params: added <|endoftext|> logit bias = -inf
common_init_from_params: added <|return|> logit bias = -inf
common_init_from_params: added <|call|> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 16384
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv init: initializing slots, n_slots = 1
slot init: id 0 | task -1 | new slot n_ctx_slot = 16384
main: model loaded