Header prompt displayed using Llama3.1 with ollama #1484

Hello,

I'm using the `llama3.1:latest` model with `ollama` and I'm having trouble correctly initializing the `chatPromptTemplate` variable. I used this GitHub issue to initialize the variable: #1035

Here is my `.env.local` file:

But `<|start_header_id|>assistant<|end_header_id|>` appears on every response:

Can you help me make it disappear by modifying the `chatPromptTemplate` variable? Thanks in advance.
You're missing the initial generation prompt from your `chatPromptTemplate`. I'd recommend setting a `tokenizer` in your model config instead. If that's not possible, you can fix your `chatPromptTemplate` directly. I didn't try it but that should work, let me know!
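For context: the "generation prompt" is the empty assistant header that ends the prompt and cues the model to start its reply. A sketch of the expected Llama 3.1 chat layout (the token names match the stop strings used in the configs below):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{model generates from here}
```

If a template stops before that trailing assistant header, the model tends to emit the header itself, which would explain the artifact described above.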
Hello @nsarrazin, I tried the tokenizer solution. Here is the corresponding `.env.local` file:

```env
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=<hf-token>
PUBLIC_APP_NAME=<name>
MODELS=`[
  {
    "name": "Ollama | Llama3.1",
    "tokenizer": {
      "tokenizerUrl": "https://huggingface.co/nsarrazin/llama3.1-tokenizer/resolve/main/tokenizer.json",
      "tokenizerConfigUrl": "https://huggingface.co/nsarrazin/llama3.1-tokenizer/raw/main/tokenizer_config.json"
    },
    "parameters": {
      "temperature": 0.1,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 3072,
      "max_new_tokens": 1024,
      "stop": ["<|end_of_text|>", "<|eot_id|>"]
    },
    "endpoints": [
      {
        "type": "ollama",
        "url": "http://ollama:11434",
        "ollamaName": "llama3.1:latest"
      }
    ]
  }
]`
```

But this solution didn't work; here's the corresponding log:
So I tried the 2nd solution, which was to modify the `chatPromptTemplate`. Here is the corresponding `.env.local` file:

```env
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=<hf-token>
PUBLIC_APP_NAME=<name>
MODELS=`[
  {
    "name": "Ollama | Llama3.1",
    "chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}<|start_header_id|>assistant<|end_header_id|>\n\n",
    "parameters": {
      "temperature": 0.1,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 3072,
      "max_new_tokens": 1024,
      "stop": ["<|end_of_text|>", "<|eot_id|>"]
    },
    "endpoints": [
      {
        "type": "ollama",
        "url": "http://ollama:11434",
        "ollamaName": "llama3.1:latest"
      }
    ]
  }
]`
```

And this second solution works! Thank you for your help.
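As an aside, one way to sanity-check a hand-written template against ollama directly is the generate endpoint's raw mode, which bypasses ollama's own prompt templating. A hypothetical check, reusing the container hostname and model name from the config above:

```sh
# Send a fully formatted Llama 3.1 prompt, bypassing ollama's built-in template
curl http://ollama:11434/api/generate -d '{
  "model": "llama3.1:latest",
  "prompt": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nHello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
  "raw": true,
  "stream": false
}'
```

If the reply comes back without a leaked `<|start_header_id|>assistant<|end_header_id|>` prefix, the template itself is sound.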
Glad that it worked! I'll see why the first solution didn't work; it should have worked 👀
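For reference, the tokenizer route is supposed to work because chat-ui fetches the linked tokenizer files and builds the prompt from the chat template they embed, rather than from `chatPromptTemplate`. A heavily abridged, illustrative sketch of the relevant `tokenizer_config.json` fields (the real `chat_template` is a long Jinja string, truncated here):

```json
{
  "bos_token": "<|begin_of_text|>",
  "eos_token": "<|eot_id|>",
  "chat_template": "{{- bos_token }}{%- for message in messages %} ... {%- endfor %}"
}
```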