Special tokens didn't tokenize correctly #837
earzamastsev started this conversation in General · Replies: 0 comments
I'm trying to use the OpenAI-compatible API with a Vicuna LLM:

```bash
python3 -m llama_cpp.server --n_gpu_layers 43 --model ./models/vicuna-13b-v1.5.Q8_0.gguf --port 8010 --host 0.0.0.0 --chat_format vicuna
```
Then I send a request to the `/v1/chat/completions` endpoint:
```json
{
  "max_tokens": 1024,
  "temperature": 0.1,
  "messages": [
    {
      "content": "Hello, what is your name?",
      "role": "user"
    },
    {
      "content": "My name is AI-asisstant",
      "role": "assistant"
    },
    {
      "content": "Can you repeat your name please?",
      "role": "user"
    }
  ]
}
```
Then I check the final prompt and the final tokens. This is what I see:
```
PROMPT: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello, what is your name? ASSISTANT: My name is AI-asisstant</s>USER: Can you repeat your name please? ASSISTANT:

PROMPT TOKENS: [1, 319, 13563, 1546, 263, 12758, 1404, 322, 385, 23116, 21082, 20255, 29889, 450, 20255, 4076, 8444, 29892, 13173, 29892, 322, 1248, 568, 6089, 304, 278, 1404, 29915, 29879, 5155, 29889, 3148, 1001, 29901, 15043, 29892, 825, 338, 596, 1024, 29973, 319, 1799, 9047, 13566, 29901, 1619, 1024, 338, 319, 29902, 29899, 25101, 303, 424, 829, 29879, 29958, 11889, 29901, 1815, 366, 12312, 596, 1024, 3113, 29973, 319, 1799, 9047, 13566, 29901]
```
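To confirm what those ids decode to, the suspicious span can be detokenized directly (a sketch, assuming the same model file; `vocab_only=True` avoids loading the weights):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/vicuna-13b-v1.5.Q8_0.gguf", vocab_only=True)

# The ids 829, 29879, 29958 sit where a single EOS token (id 2) is expected;
# detokenizing them is expected to yield the literal text b'</s>'.
print(llm.detokenize([829, 29879, 29958]))
```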
I can see that the special token `</s>` was not converted to its token id (it should be token id 2); instead it was tokenized as plain text, producing [829, 29879, 29958]. Is this a bug? I saw a similar issue discussed in the llama.cpp GitHub repo: ggml-org/llama.cpp#1812
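A minimal sketch to reproduce the difference with the tokenizer directly (assuming a llama-cpp-python version whose `Llama.tokenize` accepts the `special` flag):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/vicuna-13b-v1.5.Q8_0.gguf", vocab_only=True)

text = b"My name is AI-asisstant</s>USER:"

# Tokenized as plain text: "</s>" is split into pieces
# such as [829, 29879, 29958] instead of the EOS id.
print(llm.tokenize(text, add_bos=False))

# With special-token parsing enabled, "</s>" should become the single EOS id 2.
print(llm.tokenize(text, add_bos=False, special=True))
```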