Bug: Tokenization adds a space to the first non-special token #8584

Closed
newsletternewsletter opened this issue Jul 19, 2024 · 8 comments
Labels
bug-unconfirmed medium severity Used to report medium severity bugs in llama.cpp (e.g. Malfunctioning Features but still useable)

Comments

@newsletternewsletter

newsletternewsletter commented Jul 19, 2024

What happened?

When tokenizing text with LlamaSharp, a C# library directly using llama.cpp, a space is added to the first non-special token.
Unfortunately, this changes the prompt templates and even breaks some of my use cases (e.g. stripping the last tokens then using anti-prompts in LlamaSharp).

In particular, it would be good if the following invariant always applied:
detokenize(tokenize("text", addBos:false)) == "text"

But this is not the case:
llama-tokenize.exe --no-bos -m "gemma-1.1-2b-it.Q6_K.gguf" -p "<pad>text<eos>"

     0 -> '<pad>'
  2793 -> ' text'
     1 -> '<eos>'

Token 2793 -> ' text' should have been 1082 -> 'text'.

Is there a way to fix this behavior, i.e. to tell llama.cpp not to add any spaces when tokenizing and to just process the raw text?
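For illustration, the failing invariant can be reproduced with a toy two-entry vocabulary (a Python sketch, not llama.cpp's actual tokenizer; the ids 1082 and 2793 are taken from the output above):

```python
# Toy sketch of why an automatic space prefix breaks the round-trip
# invariant detokenize(tokenize(s)) == s. Not llama.cpp's real code;
# only the two relevant vocab entries are modeled.

PIECES = {1082: "text", 2793: " text"}          # ids from the log above
VOCAB = {piece: tid for tid, piece in PIECES.items()}

def tokenize(s: str, add_space_prefix: bool) -> list[int]:
    if add_space_prefix:
        s = " " + s        # what SPM-style tokenizers do by default
    return [VOCAB[s]]      # pretend the whole string is a single piece

def detokenize(ids: list[int]) -> str:
    return "".join(PIECES[i] for i in ids)

print(tokenize("text", add_space_prefix=True))                         # [2793]
print(detokenize(tokenize("text", add_space_prefix=True)) == "text")   # False
print(detokenize(tokenize("text", add_space_prefix=False)) == "text")  # True
```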

Name and Version

version: 3412 (3807c3d)
built with MSVC 19.29.30154.0 for x64

What operating system are you seeing the problem on?

Windows 10

Relevant log output

$ llama-tokenize.exe --no-bos -m "gemma-1.1-2b-it.Q6_K.gguf" -p "<pad>text<eos>"

llama_model_loader: loaded meta data with 24 key-value pairs and 164 tensors from gemma-1.1-2b-it.Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma
llama_model_loader: - kv   1:                               general.name str              = gemma-1.1-2b-it
llama_model_loader: - kv   2:                       gemma.context_length u32              = 8192
llama_model_loader: - kv   3:                     gemma.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          gemma.block_count u32              = 18
llama_model_loader: - kv   5:                  gemma.feed_forward_length u32              = 16384
llama_model_loader: - kv   6:                 gemma.attention.head_count u32              = 8
llama_model_loader: - kv   7:              gemma.attention.head_count_kv u32              = 1
llama_model_loader: - kv   8:     gemma.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                 gemma.attention.key_length u32              = 256
llama_model_loader: - kv  10:               gemma.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 18
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   37 tensors
llama_model_loader: - type q6_K:  127 tensors
llm_load_vocab: special tokens cache size = 4
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 2.51 B
llm_load_print_meta: model size       = 1.91 GiB (6.56 BPW)
llm_load_print_meta: general.name     = gemma-1.1-2b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
llama_model_load: vocab only - skipping tensors
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 0.0
llama_new_context_with_model: freq_scale = 1
     0 -> '<pad>'
  2793 -> ' text'
     1 -> '<eos>'
@newsletternewsletter newsletternewsletter added bug-unconfirmed medium severity Used to report medium severity bugs in llama.cpp (e.g. Malfunctioning Features but still useable) labels Jul 19, 2024
@shibe2
Contributor

shibe2 commented Jul 19, 2024

This is a known problem with the SentencePiece tokenizer. Different implementations deal with it in different ways, and within llama.cpp developers have different opinions about the matter; it can even be argued that adding the space is the correct behavior. I proposed making it configurable in #3664, but so far we haven't agreed on a path forward.

The space is added here: https://github.com/ggerganov/llama.cpp/blob/3d0e4367d99087892e355ddbeebd232a0b2f40de/src/llama.cpp#L16450-L16453
You can disable or remove this code, then you'll need to add the space on client side for models that need it.
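If the automatic prefix is disabled, the client has to add the space back itself for models that expect it. A rough Python sketch of that client-side rule (hypothetical helper, not part of llama.cpp or LlamaSharp; the condition mirrors the linked code):

```python
# Hypothetical client-side helper: re-add the SPM space prefix only where
# the model actually expects it, instead of letting the tokenizer do it.

def client_side_prefix(raw_text: str, add_space_prefix: bool,
                       is_prev_special: bool) -> str:
    """Prepend a space only when the model wants a prefix and the
    previous token was a special token (the rule in the linked code)."""
    if add_space_prefix and is_prev_special:
        return " " + raw_text
    return raw_text

# After '<pad>' (a special token), the prefix would be re-added:
print(client_side_prefix("text", add_space_prefix=True, is_prev_special=True))
```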

@steampunque

To solve this problem I added an enable_space_prefix flag to the tokenize call:

static std::vector<llama_vocab::id> llama_tokenize_internal(const llama_vocab & vocab, std::string raw_text, bool add_special, bool parse_special,
                                                            bool enable_space_prefix) {

    // ...

    // prefix with space if previous is special
    if (vocab.tokenizer_add_space_prefix && is_prev_special && enable_space_prefix) {
        raw_text = " " + raw_text;
    }

Then I created a separate tokenizer entry which turns off adding spaces:

int32_t llama_tokenize(
    const struct llama_model * model,
                  const char * text,
                     int32_t   text_len,
                 llama_token * tokens,
                     int32_t   n_tokens_max,
                        bool   add_special,
                        bool   parse_special) {
    auto res = llama_tokenize_internal(model->vocab, std::string(text, text_len), add_special, parse_special, true);
    if (n_tokens_max < (int) res.size()) {
        // LLAMA_LOG_ERROR("%s: too many tokens\n", __func__);
        return -((int) res.size());
    }

    for (size_t i = 0; i < res.size(); i++) {
        tokens[i] = res[i];
    }

    return res.size();
}

int32_t llama_tokenize_nospace(
    const struct llama_model * model,
                  const char * text,
                     int32_t   text_len,
                 llama_token * tokens,
                     int32_t   n_tokens_max,
                        bool   add_special,
                        bool   parse_special) {
    auto res = llama_tokenize_internal(model->vocab, std::string(text, text_len), add_special, parse_special, false);
    if (n_tokens_max < (int) res.size()) {
        // LLAMA_LOG_ERROR("%s: too many tokens\n", __func__);
        return -((int) res.size());
    }

    for (size_t i = 0; i < res.size(); i++) {
        tokens[i] = res[i];
    }

    return res.size();
}

It's annoying to maintain this patch, because it breaks whenever anything in this file is touched, but it is guaranteed to get rid of the unwanted spaces.

@newsletternewsletter
Author

> This is a known problem with SentencePiece tokenizer. Different implementations deal with it in different ways. In llama.cpp, developers have different opinions about the matter. I.e. it can be argued that adding space is the correct behavior. I proposed to make it flexible in #3664, but so far we haven't agreed on a path forward.
>
> The space is added here:
>
> https://github.com/ggerganov/llama.cpp/blob/3d0e4367d99087892e355ddbeebd232a0b2f40de/src/llama.cpp#L16450-L16453
>
> You can disable or remove this code, then you'll need to add the space on client side for models that need it.

Very interesting! I presume that adding tokenizer.ggml.add_space_prefix to the model's metadata and setting it to false (or using a KV override) should set vocab.tokenizer_add_space_prefix to false and therefore disable this behavior, without touching the llama.cpp binary used by LlamaSharp?

@compilade
Collaborator

This has already been fixed for new conversions in #8248 (but there's currently another problem, #7897, which prevents conversion of instruct Gemma models; I'll try to fix it soon).

Otherwise, you should be able to use --override-kv tokenizer.ggml.add_space_prefix=bool:false, which will not add a space prefix, although this doesn't work with llama-tokenize (because it doesn't handle the --override-kv flag).
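Conceptually, the override layers a value over whatever the GGUF file stores before the vocab is set up; when the key is absent (as in older conversions), llama.cpp falls back to a default of true. A rough sketch of those semantics (not the actual loader code):

```python
# Rough sketch of KV-override semantics (not llama.cpp's actual loader):
# command-line overrides win over values stored in the GGUF file, and a
# missing key falls back to a default (true here, per the behavior above).

def effective_kv(file_kv: dict, overrides: dict) -> dict:
    merged = dict(file_kv)
    merged.update(overrides)   # overrides take precedence
    return merged

file_kv = {}  # old conversions don't store tokenizer.ggml.add_space_prefix
overrides = {"tokenizer.ggml.add_space_prefix": False}

kv = effective_kv(file_kv, overrides)
print(kv.get("tokenizer.ggml.add_space_prefix", True))  # False
```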

@newsletternewsletter
Author

It's interesting that the default value for tokenizer.ggml.add_space_prefix is true even for models that were converted before this key existed and could not have known about it, but as long as the added spaces can be disabled, I'm happy!

@newsletternewsletter
Author

Unfortunately, the KV override does not seem to work. Using Phi-3.1-mini-128k-instruct for this test:

"llama-cli.exe" -v --verbose-prompt -co -s 0 -c 4096 -p "<|system|>\nBe concise!<|end|>\n<|user|>\nWho are you?<|end|>\n<|assistant|>\n" -e -sp -m "Phi-3.1-mini-128k-instruct.Q6_K.gguf" --override-kv tokenizer.ggml.add_space_prefix=bool:false

tokenizes the prompt as

 32006 -> '<|system|>'
  1522 -> ' Be'
  3022 -> ' conc'
   895 -> 'ise'
 29991 -> '!'
 32007 -> '<|end|>'
 32010 -> '<|user|>'
 11644 -> ' Who'
   526 -> ' are'
   366 -> ' you'
 29973 -> '?'
 32007 -> '<|end|>'
 32001 -> '<|assistant|>'

Token 1522 -> ' Be' should have been 3629 -> 'Be' and 11644 -> ' Who' should have been 22110 -> 'Who'.

Patching the model by adding tokenizer.ggml.add_space_prefix and setting it to false in the KV metadata with gguf-py/scripts/gguf_new_metadata.py yields the correct result:

 32006 -> '<|system|>'
  3629 -> 'Be'
  3022 -> ' conc'
   895 -> 'ise'
 29991 -> '!'
 32007 -> '<|end|>'
 32010 -> '<|user|>'
 22110 -> 'Who'
   526 -> ' are'
   366 -> ' you'
 29973 -> '?'
 32007 -> '<|end|>'
 32001 -> '<|assistant|>'

Interestingly, the patched model ignores the KV override when setting it to true with --override-kv tokenizer.ggml.add_space_prefix=bool:true. I would have expected the same behavior as with the unpatched model.

Is this a bug?

@ggerganov
Member

> Is this a bug?

Yes, it's addressed in #8614

@newsletternewsletter
Author

Very good! Thank you!
