./run-llama.sh
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes
build: 8094 (b55dcdef5) with GNU 15.2.1 for Linux x86_64
system info: n_threads = 16, n_threads_batch = 16, total_threads = 32
system_info: n_threads = 16 (n_threads_batch = 16) / 32 | CUDA : ARCHS = 890 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
Running without SSL
init: using 31 threads for HTTP server
start: binding port with default address family
main: loading model
srv load_model: loading model '/home/henry/AiModels/GLM-4.7-Flash-Q4_K_S.gguf'
common_init_result: fitting params to device memory, for bugs during this step try to reproduce them with -fit off, or provide --verbose logs if the bug only occurs with -fit on
llama_params_fit_impl: projected to use 17072 MiB of device memory vs. 7706 MiB of free device memory
llama_params_fit_impl: cannot meet free memory target of 418 MiB, need to reduce device memory by 9784 MiB
llama_params_fit_impl: context size set by user to 202752 -> no change
llama_params_fit_impl: with only dense weights in device memory there is a total surplus of 5215 MiB
llama_params_fit_impl: filling dense-only layers back-to-front:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 4060 Laptop GPU): 48 layers, 2072 MiB used, 5633 MiB free
llama_params_fit_impl: converting dense-only layers to full layers and filling them front-to-back with overflow to next device/system memory:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 4060 Laptop GPU): 48 layers (32 overflowing), 7242 MiB used, 463 MiB free
llama_params_fit: successfully fit params to free device memory
llama_params_fit: fitting params to free memory took 2.46 seconds
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4060 Laptop GPU) (0000:01:00.0) - 7706 MiB free
llama_model_loader: loaded meta data with 60 key-value pairs and 844 tensors from /home/henry/AiModels/GLM-4.7-Flash-Q4_K_S.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_p f32 = 0.950000
llama_model_loader: - kv 3: general.sampling.temp f32 = 1.000000
llama_model_loader: - kv 4: general.name str = Glm-4.7-Flash
llama_model_loader: - kv 5: general.basename str = Glm-4.7-Flash
llama_model_loader: - kv 6: general.quantized_by str = Unsloth
llama_model_loader: - kv 7: general.size_label str = 64x2.6B
llama_model_loader: - kv 8: general.license str = mit
llama_model_loader: - kv 9: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 10: general.base_model.count u32 = 1
llama_model_loader: - kv 11: general.base_model.0.name str = GLM 4.7 Flash
llama_model_loader: - kv 12: general.base_model.0.organization str = Zai Org
llama_model_loader: - kv 13: general.base_model.0.repo_url str = https://huggingface.co/zai-org/GLM-4....
llama_model_loader: - kv 14: general.tags arr[str,2] = ["unsloth", "text-generation"]
llama_model_loader: - kv 15: general.languages arr[str,2] = ["en", "zh"]
llama_model_loader: - kv 16: deepseek2.block_count u32 = 47
llama_model_loader: - kv 17: deepseek2.context_length u32 = 202752
llama_model_loader: - kv 18: deepseek2.embedding_length u32 = 2048
llama_model_loader: - kv 19: deepseek2.feed_forward_length u32 = 10240
llama_model_loader: - kv 20: deepseek2.attention.head_count u32 = 20
llama_model_loader: - kv 21: deepseek2.attention.head_count_kv u32 = 1
llama_model_loader: - kv 22: deepseek2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 23: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 24: deepseek2.expert_used_count u32 = 4
llama_model_loader: - kv 25: deepseek2.expert_group_count u32 = 1
llama_model_loader: - kv 26: deepseek2.expert_group_used_count u32 = 1
llama_model_loader: - kv 27: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 28: deepseek2.leading_dense_block_count u32 = 1
llama_model_loader: - kv 29: deepseek2.vocab_size u32 = 154880
llama_model_loader: - kv 30: deepseek2.attention.q_lora_rank u32 = 768
llama_model_loader: - kv 31: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 32: deepseek2.attention.key_length u32 = 576
llama_model_loader: - kv 33: deepseek2.attention.value_length u32 = 512
llama_model_loader: - kv 34: deepseek2.attention.key_length_mla u32 = 256
llama_model_loader: - kv 35: deepseek2.attention.value_length_mla u32 = 256
llama_model_loader: - kv 36: deepseek2.expert_feed_forward_length u32 = 1536
llama_model_loader: - kv 37: deepseek2.expert_count u32 = 64
llama_model_loader: - kv 38: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 39: deepseek2.expert_weights_scale f32 = 1.800000
llama_model_loader: - kv 40: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 41: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 42: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 43: tokenizer.ggml.pre str = glm4
llama_model_loader: - kv 44: tokenizer.ggml.tokens arr[str,154880] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 45: tokenizer.ggml.token_type arr[i32,154880] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 46: tokenizer.ggml.merges arr[str,321649] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 47: tokenizer.ggml.eos_token_id u32 = 154820
llama_model_loader: - kv 48: tokenizer.ggml.padding_token_id u32 = 154821
llama_model_loader: - kv 49: tokenizer.ggml.bos_token_id u32 = 154822
llama_model_loader: - kv 50: tokenizer.ggml.eot_token_id u32 = 154827
llama_model_loader: - kv 51: tokenizer.ggml.unknown_token_id u32 = 154820
llama_model_loader: - kv 52: tokenizer.ggml.eom_token_id u32 = 154829
llama_model_loader: - kv 53: tokenizer.chat_template str = [gMASK]<sop>\n{%- if tools -%}\n<|syste...
llama_model_loader: - kv 54: general.quantization_version u32 = 2
llama_model_loader: - kv 55: general.file_type u32 = 14
llama_model_loader: - kv 56: quantize.imatrix.file str = GLM-4.7-Flash-GGUF/imatrix_unsloth.gguf
llama_model_loader: - kv 57: quantize.imatrix.dataset str = unsloth_calibration_GLM-4.7-Flash.txt
llama_model_loader: - kv 58: quantize.imatrix.entries_count u32 = 607
llama_model_loader: - kv 59: quantize.imatrix.chunks_count u32 = 85
llama_model_loader: - type f32: 281 tensors
llama_model_loader: - type q8_0: 141 tensors
llama_model_loader: - type q4_K: 278 tensors
llama_model_loader: - type q5_K: 97 tensors
llama_model_loader: - type q6_K: 47 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Small
print_info: file size = 16.07 GiB (4.61 BPW)
load: 0 unused tokens
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 154820 ('<|endoftext|>')
load: - 154827 ('<|user|>')
load: - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
print_info: arch = deepseek2
print_info: vocab_only = 0
print_info: no_alloc = 0
print_info: n_ctx_train = 202752
print_info: n_embd = 2048
print_info: n_embd_inp = 2048
print_info: n_layer = 47
print_info: n_head = 20
print_info: n_head_kv = 1
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 576
print_info: n_embd_head_v = 512
print_info: n_gqa = 20
print_info: n_embd_k_gqa = 576
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 10240
print_info: n_expert = 64
print_info: n_expert_used = 4
print_info: n_expert_groups = 1
print_info: n_group_used = 1
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 202752
print_info: rope_yarn_log_mul = 0.0000
print_info: rope_finetuned = unknown
print_info: model type = 30B.A3B
print_info: model params = 29.94 B
print_info: general.name = Glm-4.7-Flash
print_info: n_layer_dense_lead = 1
print_info: n_lora_q = 768
print_info: n_lora_kv = 512
print_info: n_embd_head_k_mla = 256
print_info: n_embd_head_v_mla = 256
print_info: n_ff_exp = 1536
print_info: n_expert_shared = 1
print_info: expert_weights_scale = 1.8
print_info: expert_weights_norm = 1
print_info: expert_gating_func = sigmoid
print_info: vocab type = BPE
print_info: n_vocab = 154880
print_info: n_merges = 321649
print_info: BOS token = 154822 '[gMASK]'
print_info: EOS token = 154820 '<|endoftext|>'
print_info: EOT token = 154827 '<|user|>'
print_info: EOM token = 154829 '<|observation|>'
print_info: UNK token = 154820 '<|endoftext|>'
print_info: PAD token = 154821 '[MASK]'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 154838 '<|code_prefix|>'
print_info: FIM SUF token = 154840 '<|code_suffix|>'
print_info: FIM MID token = 154839 '<|code_middle|>'
print_info: EOG token = 154820 '<|endoftext|>'
print_info: EOG token = 154827 '<|user|>'
print_info: EOG token = 154829 '<|observation|>'
print_info: max token length = 1024
load_tensors: loading model tensors, this can take a while... (mmap = true, direct_io = false)
load_tensors: offloading output layer to GPU
load_tensors: offloading 46 repeating layers to GPU
load_tensors: offloaded 48/48 layers to GPU
load_tensors: CPU_Mapped model buffer size = 16209.10 MiB
load_tensors: CUDA0 model buffer size = 6458.73 MiB
warning: failed to mlock 448106496-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
...................................................................................................
common_init_result: added <|endoftext|> logit bias = -inf
common_init_result: added <|user|> logit bias = -inf
common_init_result: added <|observation|> logit bias = -inf
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 202752
llama_context: n_ctx_seq = 202752
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = enabled
llama_context: kv_unified = false
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: CUDA_Host output buffer size = 0.59 MiB
llama_kv_cache: CPU KV buffer size = 2944.48 MiB
llama_kv_cache: size = 2944.48 MiB (202752 cells, 47 layers, 1/1 seqs), K (q4_0): 2944.48 MiB, V (q4_0): 0.00 MiB
sched_reserve: reserving ...
sched_reserve: CUDA0 compute buffer size = 783.80 MiB
sched_reserve: CUDA_Host compute buffer size = 404.01 MiB
sched_reserve: graph nodes = 3317
sched_reserve: graph splits = 188 (with bs=512), 160 (with bs=1)
sched_reserve: reserve took 668.75 ms, sched copies = 1
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv load_model: initializing slots, n_slots = 1
no implementations specified for speculative decoding
slot load_model: id 0 | task -1 | speculative decoding context not initialized
slot load_model: id 0 | task -1 | new slot, n_ctx = 202752
srv load_model: prompt cache is enabled, size limit: 8192 MiB
srv load_model: use `--cache-ram 0` to disable the prompt cache
srv load_model: for more info see https://github.com/ggml-org/llama.cpp/pull/16391
init: chat template, example_format: '[gMASK]<sop><|system|>You are a helpful assistant<|user|>Hello<|assistant|></think>Hi there<|user|>How are you?<|assistant|><think>'
srv init: init: chat template, thinking = 1
main: model loaded
main: server is listening on http://127.0.0.1:28000
main: starting the main loop...
srv update_slots: all slots are idle
Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
srv params_from_: Chat format: GLM 4.5
slot get_availabl: id 0 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id 0 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> ?top-k -> ?typical -> ?top-p -> min-p -> ?xtc -> temp-ext -> dist
slot launch_slot_: id 0 | task 0 | processing task, is_child = 0
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 202752, n_keep = 0, task.n_tokens = 65370
slot update_slots: id 0 | task 0 | n_tokens = 0, memory_seq_rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 2048, batch.n_tokens = 2048, progress = 0.031329
slot update_slots: id 0 | task 0 | n_tokens = 2048, memory_seq_rm [2048, end)
[... 60 similar lines omitted: prompt processed in 2048-token batches, n_tokens = 4096 ... 63488, progress = 0.06 ... 0.97 ...]
slot update_slots: id 0 | task 0 | prompt processing progress, n_tokens = 65370, batch.n_tokens = 1882, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_tokens = 65370, batch.n_tokens = 1882
slot init_sampler: id 0 | task 0 | init sampler, took 16.86 ms, tokens: text = 65370, total = 65370
CUDA error: invalid argument
/home/henry/AiModels/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:97: CUDA error
current device: 0, in function launch_fattn at /home/henry/AiModels/llama.cpp/ggml/src/ggml-cuda/template-instances/../fattn-common.cuh:1015
cudaGetLastError()
[New LWP 717217]
[... 50 similar thread-attach lines omitted (LWP 717125-717216) ...]
[New LWP 717119]
This GDB supports auto-downloading debuginfo from the following URLs:
<https://debuginfod.archlinux.org>
Enable debuginfod for this session? (y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
0x00007a72cbc6d002 in ?? () from /usr/lib/libc.so.6
#0 0x00007a72cbc6d002 in ?? () from /usr/lib/libc.so.6
#1 0x00007a72cbc6116c in ?? () from /usr/lib/libc.so.6
#2 0x00007a72cbc611b4 in ?? () from /usr/lib/libc.so.6
#3 0x00007a72cbcd1d8f in wait4 () from /usr/lib/libc.so.6
#4 0x00007a72d205fbab in ggml_print_backtrace () from /home/henry/extracted-apps/bin/libggml-base.so.0
#5 0x00007a72d205fd10 in ggml_abort () from /home/henry/extracted-apps/bin/libggml-base.so.0
#6 0x00007a72d33196d1 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) () from /home/henry/extracted-apps/bin/libggml-cuda.so.0
#7 0x00007a72d362a426 in void launch_fattn<512, 16, 4>(ggml_backend_cuda_context&, ggml_tensor*, void (*)(char const*, char const*, char const*, char const*, char const*, int const*, float*, float2*, float, float, float, float, unsigned int, float, int, uint3, int, int, int, int, int, int, int, int, int, int, int, long, int, int, long, int, int, int, int, int, long), int, unsigned long, int, bool, bool, bool, int) () from /home/henry/extracted-apps/bin/libggml-cuda.so.0
#8 0x00007a72d362aad2 in void ggml_cuda_flash_attn_ext_mma_f16_case<576, 512, 16, 4>(ggml_backend_cuda_context&, ggml_tensor*) () from /home/henry/extracted-apps/bin/libggml-cuda.so.0
#9 0x00007a72d3328ad8 in ggml_cuda_compute_forward(ggml_backend_cuda_context&, ggml_tensor*) () from /home/henry/extracted-apps/bin/libggml-cuda.so.0
#10 0x00007a72d332d83b in ggml_cuda_graph_evaluate_and_capture(ggml_backend_cuda_context*, ggml_cgraph*, bool, bool, void const*) () from /home/henry/extracted-apps/bin/libggml-cuda.so.0
#11 0x00007a72d332fa34 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) () from /home/henry/extracted-apps/bin/libggml-cuda.so.0
#12 0x00007a72d207b943 in ggml_backend_sched_graph_compute_async () from /home/henry/extracted-apps/bin/libggml-base.so.0
#13 0x00007a72d4436270 in llama_context::graph_compute(ggml_cgraph*, bool) () from /home/henry/extracted-apps/bin/libllama.so.0
#14 0x00007a72d4438176 in llama_context::process_ubatch(llama_ubatch const&, llm_graph_type, llama_memory_context_i*, ggml_status&) () from /home/henry/extracted-apps/bin/libllama.so.0
#15 0x00007a72d443f14f in llama_context::decode(llama_batch const&) () from /home/henry/extracted-apps/bin/libllama.so.0
#16 0x00007a72d4440d0e in llama_decode () from /home/henry/extracted-apps/bin/libllama.so.0
#17 0x000062a7dcf0eb88 in server_context_impl::update_slots() ()
#18 0x000062a7dcf5a70f in server_queue::start_loop(long) ()
#19 0x000062a7dce6ca3c in main ()
[Inferior 1 (process 717118) detached]
[1] 717118 IOT instruction (core dumped) ./run-llama.sh
Name and Version
llama-server
version: 8094 (b55dcde)
built with GNU 15.2.1 for Linux x86_64
Operating systems
Linux
GGML backends
CUDA
Hardware
NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes
Models
GLM-4.7-Flash-Q4_K_S.gguf
Problem description & steps to reproduce
Run llama-server, then grow a single session to around 65k tokens of context. The crash then ensues.
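The contents of `run-llama.sh` aren't included; a plausible reconstruction from the log above might look like the following. The model path, port, context size, K-cache quantization, flash attention, and `--mlock` are read directly off the log; any remaining flags and their spellings are assumptions.

```shell
#!/bin/sh
# Hypothetical reconstruction of run-llama.sh, inferred from the log output.
llama-server \
  --model /home/henry/AiModels/GLM-4.7-Flash-Q4_K_S.gguf \
  --host 127.0.0.1 --port 28000 \
  --ctx-size 202752 \
  --flash-attn on \
  --cache-type-k q4_0 \
  --mlock \
  --threads 16
```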
First Bad Commit
Unknown. A git bisect is too compute- and time-expensive to run right now; I can do one, but it will have to wait for the weekend.
Relevant log output
Logs from a retry using OpenCode on a session that generated the crash (full log shown above).
Claude Sonnet 4.6 triage, which may contain hallucinated diagnostics:
ggml_cuda_flash_attn_ext_mma_f16_case<576, 512, 16, 4>
launch_fattn<512, 16, 4>
CUDA error: invalid argument
576 is kv_lora_rank (512) + rope_dim (64), i.e. MLA's non-power-of-two key head dimension. The hypothesis is that the flash-attention MMA kernel hits a CUDA launch-dimension constraint at ~65k sequence length with this head size, reportedly a known problem area in llama.cpp for the deepseek2/MLA architecture.
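The head-dimension arithmetic in that triage does check out against the GGUF metadata printed earlier (kv_lora_rank = 512, rope.dimension_count = 64, n_embd_head_k = 576). A quick sanity check:

```shell
# Values from the GGUF metadata in the log above.
kv_lora_rank=512   # deepseek2.attention.kv_lora_rank
rope_dim=64        # deepseek2.rope.dimension_count

head_k=$((kv_lora_rank + rope_dim))
echo "$head_k"                      # 576, matches n_embd_head_k in the log
echo "$((head_k & (head_k - 1)))"   # 512: non-zero, so 576 is not a power of two
```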