Misc. bug: Missing <think> tag in response (DeepSeek R1) #11861

Open
9chu opened this issue Feb 14, 2025 · 11 comments


9chu commented Feb 14, 2025

Name and Version

version: 4713 (a4f011e8)
built with x86_64-conda-linux-gnu-cc (Anaconda gcc) 11.2.0 for x86_64-conda-linux-gnu

I don't know whether it's a bug or not.

The latest Jinja chat template for the DeepSeek R1 model appends a <think>\n suffix to the prompt to force the model into thinking mode.
However, this causes all responses to lose the leading <think> tag, like this:

Image

I suggest manually prepending <think> to the response when add_generation_prompt = true.
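
For illustration, here is a minimal client-side sketch of that workaround, assuming llama-server's OpenAI-compatible /v1/chat/completions endpoint and the port from the command line below; the script and its names are hypothetical and not part of llama.cpp:

# Hypothetical client-side workaround (not part of llama.cpp): if the chat
# template already ends the prompt with "<think>\n", the model's reply starts
# mid-thought, so we re-attach the missing opening tag before displaying it.
import requests

def chat(prompt: str, base_url: str = "http://localhost:10000") -> str:
    resp = requests.post(
        f"{base_url}/v1/chat/completions",
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=600,
    )
    resp.raise_for_status()
    content = resp.json()["choices"][0]["message"]["content"]
    # A closing </think> without an opening <think> means the tag was consumed
    # by the prompt suffix; restore it so frontends can render the block.
    if "</think>" in content and "<think>" not in content:
        content = "<think>\n" + content
    return content

if __name__ == "__main__":
    print(chat("Why is the sky blue?"))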

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

libllama (core library)

Command line

numactl --interleave=0-1 ./llama-server -ngl 0 --mlock --no-mmap --numa numactl -t 62 --port 10000 --host 0.0.0.0 -m ../../../DeepSeek-R1-UD-IQ2_XXS/DeepSeek-R1-UD-IQ2_XXS-00001-of-00004.gguf --jinja --chat-template-file ../../models/templates/llama-cpp-deepseek-r1.jinja --reasoning-format deepseek

Problem description & steps to reproduce

  1. Running llama-server
  2. Chatting with DeepSeek R1

First Bad Commit

No response

Relevant log output

9chu changed the title from "Misc. bug: Lost <think> tag in response (DeepSeek R1)" to "Misc. bug: Missing <think> tag in response (DeepSeek R1)" on Feb 14, 2025
MoonRide303 (Contributor) commented Feb 14, 2025

I observed the same problem when I was playing with non-thinking models and making them think within <think> and </think> tags via instructions in the system message. It was somewhat working with the standard chat template, but when I tried adding the <think> tag into the template itself (at the beginning of the model response, as in the new deepseek-r1 chat template), the default server UI stopped rendering it properly.

@ngxson?


davidmroth commented Feb 15, 2025

I had the same issue, but once I upgraded to a release newer than b4706, the issue went away. Looks like PR #11607 resolved the problem.

I get both of my think tags (<think> and </think>).

Here is how I am calling it (using the shared library via llama-cpp-python):

Llama(
    # Unpack the dict so these are passed as keyword arguments to Llama()
    **{
        "model_path": "/data/weights/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf",
        "n_ctx": 16384,
        "n_gpu_layers": -1,
        "n_threads": 8,
        "top_k": 1,
        "top_p": 1.0,
        "flash_attn": True,
        "temperature": 0.0,
        "verbose": True,
    }
)
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23324 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 771 tensors from /data/weights/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 32B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 27648
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = deepseek-r1-qwen
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - kv  25:                          general.file_type u32              = 15
llama_model_loader: - kv  26:                      quantize.imatrix.file str              = /models_out/DeepSeek-R1-Distill-Qwen-...
llama_model_loader: - kv  27:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  28:             quantize.imatrix.entries_count i32              = 448
llama_model_loader: - kv  29:              quantize.imatrix.chunks_count i32              = 128
llama_model_loader: - type  f32:  321 tensors
llama_model_loader: - type q4_K:  385 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.48 GiB (4.85 BPW) 
init_tokenizer: initializing tokenizer for type 2
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151645 '<|Assistant|>' is not marked as EOG
load: control token: 151644 '<|User|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151646 '<|begin▁of▁sentence|>' is not marked as EOG
load: control token: 151643 '<|end▁of▁sentence|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151647 '<|EOT|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 64
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 27648
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.76 B
print_info: general.name     = DeepSeek R1 Distill Qwen 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 148848 'ÄĬ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: layer   0 assigned to device CUDA0
load_tensors: layer   1 assigned to device CUDA0
load_tensors: layer   2 assigned to device CUDA0
load_tensors: layer   3 assigned to device CUDA0
load_tensors: layer   4 assigned to device CUDA0
load_tensors: layer   5 assigned to device CUDA0
load_tensors: layer   6 assigned to device CUDA0
load_tensors: layer   7 assigned to device CUDA0
load_tensors: layer   8 assigned to device CUDA0
load_tensors: layer   9 assigned to device CUDA0
load_tensors: layer  10 assigned to device CUDA0
load_tensors: layer  11 assigned to device CUDA0
load_tensors: layer  12 assigned to device CUDA0
load_tensors: layer  13 assigned to device CUDA0
load_tensors: layer  14 assigned to device CUDA0
load_tensors: layer  15 assigned to device CUDA0
load_tensors: layer  16 assigned to device CUDA0
load_tensors: layer  17 assigned to device CUDA0
load_tensors: layer  18 assigned to device CUDA0
load_tensors: layer  19 assigned to device CUDA0
load_tensors: layer  20 assigned to device CUDA0
load_tensors: layer  21 assigned to device CUDA0
load_tensors: layer  22 assigned to device CUDA0
load_tensors: layer  23 assigned to device CUDA0
load_tensors: layer  24 assigned to device CUDA0
load_tensors: layer  25 assigned to device CUDA0
load_tensors: layer  26 assigned to device CUDA0
load_tensors: layer  27 assigned to device CUDA0
load_tensors: layer  28 assigned to device CUDA0
load_tensors: layer  29 assigned to device CUDA0
load_tensors: layer  30 assigned to device CUDA0
load_tensors: layer  31 assigned to device CUDA0
load_tensors: layer  32 assigned to device CUDA0
load_tensors: layer  33 assigned to device CUDA0
load_tensors: layer  34 assigned to device CUDA0
load_tensors: layer  35 assigned to device CUDA0
load_tensors: layer  36 assigned to device CUDA0
load_tensors: layer  37 assigned to device CUDA0
load_tensors: layer  38 assigned to device CUDA0
load_tensors: layer  39 assigned to device CUDA0
load_tensors: layer  40 assigned to device CUDA0
load_tensors: layer  41 assigned to device CUDA0
load_tensors: layer  42 assigned to device CUDA0
load_tensors: layer  43 assigned to device CUDA0
load_tensors: layer  44 assigned to device CUDA0
load_tensors: layer  45 assigned to device CUDA0
load_tensors: layer  46 assigned to device CUDA0
load_tensors: layer  47 assigned to device CUDA0
load_tensors: layer  48 assigned to device CUDA0
load_tensors: layer  49 assigned to device CUDA0
load_tensors: layer  50 assigned to device CUDA0
load_tensors: layer  51 assigned to device CUDA0
load_tensors: layer  52 assigned to device CUDA0
load_tensors: layer  53 assigned to device CUDA0
load_tensors: layer  54 assigned to device CUDA0
load_tensors: layer  55 assigned to device CUDA0
load_tensors: layer  56 assigned to device CUDA0
load_tensors: layer  57 assigned to device CUDA0
load_tensors: layer  58 assigned to device CUDA0
load_tensors: layer  59 assigned to device CUDA0
load_tensors: layer  60 assigned to device CUDA0
load_tensors: layer  61 assigned to device CUDA0
load_tensors: layer  62 assigned to device CUDA0
load_tensors: layer  63 assigned to device CUDA0
load_tensors: layer  64 assigned to device CUDA0
load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
load_tensors: offloading 64 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 65/65 layers to GPU
load_tensors:        CUDA0 model buffer size = 18508.35 MiB
load_tensors:   CPU_Mapped model buffer size =   417.66 MiB
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 16384
llama_init_from_model: n_ctx_per_seq = 16384
llama_init_from_model: n_batch       = 512
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 1
llama_init_from_model: freq_base     = 1000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (16384) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 16384, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 34: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 48: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 49: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 50: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 51: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 52: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 53: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 54: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 55: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 56: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 57: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 58: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 59: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 60: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 61: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 62: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 63: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init:      CUDA0 KV buffer size =  4096.00 MiB
llama_init_from_model: KV self size  = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_init_from_model:  CUDA_Host  output buffer size =     0.58 MiB
llama_init_from_model:      CUDA0 compute buffer size =   307.00 MiB
llama_init_from_model:  CUDA_Host compute buffer size =    42.01 MiB
llama_init_from_model: graph nodes  = 1991
llama_init_from_model: graph splits = 2
CUDA : ARCHS = 520,610,700,750 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 
Model metadata: {'quantize.imatrix.entries_count': '448', 'quantize.imatrix.dataset': '/training_dir/calibration_datav3.txt', 'quantize.imatrix.chunks_count': '128', 'quantize.imatrix.file': '/models_out/DeepSeek-R1-Distill-Qwen-32B-GGUF/DeepSeek-R1-Distill-Qwen-32B.imatrix', 'general.file_type': '15', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.add_bos_token': 'true', 'tokenizer.ggml.bos_token_id': '151646', 'general.architecture': 'qwen2', 'tokenizer.ggml.padding_token_id': '151643', 'general.basename': 'DeepSeek-R1-Distill-Qwen', 'qwen2.embedding_length': '5120', 'tokenizer.ggml.pre': 'deepseek-r1-qwen', 'general.name': 'DeepSeek R1 Distill Qwen 32B', 'qwen2.block_count': '64', 'general.type': 'model', 'general.size_label': '32B', 'qwen2.context_length': '131072', 'tokenizer.chat_template': "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}", 'qwen2.attention.head_count_kv': '8', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'gpt2', 'qwen2.feed_forward_length': '27648', 'qwen2.attention.layer_norm_rms_epsilon': '0.000010', 'qwen2.attention.head_count': '40', 'tokenizer.ggml.eos_token_id': '151643', 'qwen2.rope.freq_base': '1000000.000000'}
Available chat formats from metadata: chat_template.default
Using gguf chat template: {% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
Using chat eos_token: <|end▁of▁sentence|>
Using chat bos_token: <|begin▁of▁sentence|>

MoonRide303 (Contributor) commented Mar 7, 2025

I noticed another model that injects the <think> tag in its template but doesn't seem to be fully working, namely MistralThinker-v1.1 (template):

Image

I used b4837 and Q3_K_M quant from mradermacher, launched with

llama-server.exe -ngl 99 -m MistralThinker-v1.1.Q3_K_M.gguf --jinja -c 8192

@ochafik I know it's not a model from a major provider, but could you take a look at whether it's handled properly on the llama.cpp side?

ochafik (Collaborator) commented Mar 7, 2025

The problem is the trend of adding <think> at the end of the template / prompt to force the model into thinking mode (DeepSeek R1 Distill started doing it in an update to their original template - which some GGUFs still have - and I believe QwQ now also does it, see #12231).

In theory we shouldn’t output something that’s already in the prompt (kinda working as intended), but in practice we’ll have to special case this 👌.

LorenzoBoccaccia commented

The new QwQ template contains a think token in the assistant turn, so it doesn't get returned by the server API. This is fine if you manage the other side of the call, but for some frontends like ollama-webui it results in a broken think infodump in the UI.

We can't really special-case everything, as I've seen a fair share of such tags in templates. Maybe a fixed prepend parameter on the server, where a fixed string is added in front of every response?

MoonRide303 (Contributor) commented Mar 8, 2025

If major model providers like DeepSeek and Qwen are including the <think> tag at the beginning of the AI response in their templates, then it's no longer a "special case" but a normal use case that should be properly handled. Currently the UI for QwQ-32B (IQ3_XS quant from bartowski) is broken:

Image

Tested using b4855 (7ab3643), launched with llama-server.exe -ngl 99 -m QwQ-32B-IQ3_XS.gguf --jinja -c 4096.

ochafik (Collaborator) commented Mar 8, 2025

> If major model providers like DeepSeek and Qwen are including the <think> tag at the beginning of the AI response in their templates, then it's no longer a "special case" but a normal use case that should be properly handled.

@MoonRide303 I do plan on accommodating this, it's only a special case in the wider context of text generation from a prompt (which has always been about returning content generated after the prompt, esp. for streamed mode).

The missing opening <think> in the output is already handled for DeepSeek R1 in non-streamed mode. I plan on adding similar thought extraction for QwQ, but first I'm busy working on streamed mode (early details here). Hopefully everything will work smoothly soon, please bear with me 😅

I should note, however, that <think> aside, DeepSeek's original template was full of... challenges (semantic & syntactic), which I spent considerable effort working around (see #11607). We may need to draw the line at some point re/ how many workarounds we're happy to do, and escalate to model / template authors.
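
For context, the thought extraction mentioned above amounts, conceptually, to something like the following sketch (illustrative Python, not llama.cpp's actual C++ chat-handling code; the function and field names are invented for the example):

import re

# Illustrative sketch of <think>/</think> extraction in non-streamed mode; the
# real logic lives in llama.cpp's C++ chat handling, and the field names used
# here (reasoning_content / content) are assumptions for the example.
THINK_RE = re.compile(r"^\s*(?:<think>)?(.*?)</think>\s*", re.DOTALL)

def split_reasoning(raw: str) -> dict:
    m = THINK_RE.match(raw)
    if m:
        return {"reasoning_content": m.group(1).strip(),
                "content": raw[m.end():].lstrip()}
    return {"reasoning_content": None, "content": raw}

# Works whether or not the opening <think> made it into the output:
print(split_reasoning("I should greet back.</think>Hello!"))
print(split_reasoning("<think>I should greet back.</think>Hello!"))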

ochafik (Collaborator) commented Mar 8, 2025

> We can't really special-case everything, as I've seen a fair share of such tags in templates. Maybe a fixed prepend parameter on the server, where a fixed string is added in front of every response?

@LorenzoBoccaccia The simplest way I can think of is to let model-specific chat handlers set that prepend variable when they detect a trailing <think> in the prompt, or to create new chat-format enums that carry the implicit think-start semantics.
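
A rough sketch of that idea, purely for illustration (Python pseudocode; llama.cpp's chat handlers are C++, and the ChatParams / prepare / postprocess names here are invented):

from dataclasses import dataclass

@dataclass
class ChatParams:
    prompt: str
    thinking_forced_open: bool = False  # the prompt already opened a <think> block
    response_prefix: str = ""           # string to re-attach to the model output

def prepare(prompt: str) -> ChatParams:
    params = ChatParams(prompt=prompt)
    if prompt.rstrip().endswith("<think>"):
        # The template forced thinking mode: the model will not emit the tag
        # itself, so remember to prepend it when post-processing the output.
        params.thinking_forced_open = True
        params.response_prefix = "<think>\n"
    return params

def postprocess(params: ChatParams, generated: str) -> str:
    return params.response_prefix + generated

p = prepare("<|User|>Hi<|Assistant|><think>\n")
print(postprocess(p, "The user greets me.</think>Hello!"))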

ggerganov (Member) commented

> If major model providers like DeepSeek and Qwen are including the <think> tag at the beginning of the AI response in their templates, then it's no longer a "special case" but a normal use case that should be properly handled.

Well regardless of who came up with it, it seems like a really stupid idea to me. It's just a matter of time before different think modes are introduced and I don't even want to imagine what the Jinja template would look like for these.

> We may need to draw the line at some point re/ how many workarounds we're happy to do, and escalate to model / template authors.

Definitely, we should just not support these templates and focus on getting the basics right for now.

MoonRide303 (Contributor) commented

> If major model providers like DeepSeek and Qwen are including the <think> tag at the beginning of the AI response in their templates, then it's no longer a "special case" but a normal use case that should be properly handled.

> Well regardless of who came up with it, it seems like a really stupid idea to me. It's just a matter of time before different think modes are introduced and I don't even want to imagine what the Jinja template would look like for these.

> We may need to draw the line at some point re/ how many workarounds we're happy to do, and escalate to model / template authors.

> Definitely, we should just not support these templates and focus on getting the basics right for now.

Both QwQ-32B and DeepSeek-R1 using this technique are SotA open weight models in their weight classes, so... maybe it would be good to figure out a way to support templates like that? Aren't both working just fine in transformers, without any ugly hacks?

@ochafik
Copy link
Collaborator

ochafik commented Mar 10, 2025

> Well regardless of who came up with it, it seems like a really stupid idea to me. It's just a matter of time before different think modes are introduced and I don't even want to imagine what the Jinja template would look like for these.

@ggerganov I wish I could unsee DeepSeek R1 Distill's template (even before the <think> update). And it's not just them; Microsoft is also causing templating headaches (see this thread).

> Aren't both working just fine in transformers, without any ugly hacks?

@MoonRide303 I'd love for someone to confirm. My experience in #11607 is that the official template is broken and does not make R1 Distill Qwen good at tool calling, as it leaves the prompt dangling after tool call results (which is why I wrote an alternative template, though I still added a workaround to fix the original template).

> Both QwQ-32B and DeepSeek-R1 using this technique are SotA open weight models in their weight classes, so... maybe it would be good to figure out a way to support templates like that?

I'm thinking of ways we can help model authors write SotA templates. Might get round to compiling an online template analyzer w/ Minja + WASM.

But anyway, I digress: QwQ is gonna start reporting thoughts w/ #12297 in non-streaming mode.

And streaming is on its way with promises of fixing everyone's sorrows (except mine; it's... a joyous mess 😓).
