Labels: bug (Something isn't working), models (This issue is related to model performance/reliability)
Description
I keep getting this error after adding a llama.cpp inference endpoint locally. Adding this block to the model's configuration causes the error:
"endpoints": [
  {
    "url": "http://localhost:8080",
    "type": "llamacpp"
  }
]
I'm not sure how to fix it.
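For reference, here is the same endpoint entry with an explicit accessToken field added, since that is the field the validation error below points at ([0, "endpoints", 0, "accessToken"]). The token value is just a placeholder I made up; I don't know whether providing it inline like this is the intended configuration for a local llama.cpp endpoint.
"endpoints": [
  {
    "url": "http://localhost:8080",
    "type": "llamacpp",
    "accessToken": "placeholder-not-a-real-token"
  }
]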
ZodError: [
{
"code": "too_small",
"minimum": 1,
"type": "string",
"inclusive": true,
"exact": false,
"message": "String must contain at least 1 character(s)",
"path": [
0,
"endpoints",
0,
"accessToken"
]
}
]
at get error [as error] (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:538:31)
at ZodArray.parse (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:638:22)
at C:\Users\SRU\Desktop\chatui\src\lib\server\models.ts:75:40
at async instantiateModule (file:///C:/Users/SRU/Desktop/chatui/node_modules/vite/dist/node/chunks/dep-529
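The trace points at src/lib/server/models.ts:75, which is where the MODELS value from the env file is parsed, and the error path [0, "endpoints", 0, "accessToken"] refers to the first endpoint of the first model. In my config below, HF_TOKEN is commented out; if the endpoint's accessToken falls back to HF_TOKEN when it is not set explicitly (only a guess on my part), then giving it a non-empty value might be what is missing, e.g.:
# Guess: give HF_TOKEN a non-empty value so the endpoint accessToken has something to fall back to
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx  # placeholder, not a real token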
Full Config:
# Use .env.local to change these variables
# DO NOT EDIT THIS FILE WITH SENSITIVE DATA
MONGODB_URL=mongodb://localhost:27017/
MONGODB_DB_NAME=chat-ui
MONGODB_DIRECT_CONNECTION=false
COOKIE_NAME=hf-chat
HF_TOKEN=#hf_<token> from https://huggingface.co/settings/token
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENAI_API_KEY=#your openai api key here
HF_ACCESS_TOKEN=#LEGACY! Use HF_TOKEN instead
# used to activate search with web functionality. disabled if none are defined. choose one of the following:
YDC_API_KEY=#your docs.you.com api key here
SERPER_API_KEY=#your serper.dev api key here
SERPAPI_KEY=#your serpapi key here
SERPSTACK_API_KEY=#your serpstack api key here
USE_LOCAL_WEBSEARCH=#set to true to parse google results yourself, overrides other API keys
SEARXNG_QUERY_URL=# where '<query>' will be replaced with query keywords see https://docs.searxng.org/dev/search_api.html eg https://searxng.yourdomain.com/search?q=<query>&engines=duckduckgo,google&format=json
WEBSEARCH_ALLOWLIST=`[]` # if it's defined, allow websites from only this list.
WEBSEARCH_BLOCKLIST=`[]` # if it's defined, block websites from this list.
# Parameters to enable open id login
OPENID_CONFIG=`{
"PROVIDER_URL": "",
"CLIENT_ID": "",
"CLIENT_SECRET": "",
"SCOPES": ""
}`
# /!\ legacy openid settings, prefer the config above
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_SCOPES="openid profile" # Add "email" for some providers like Google that do not provide preferred_username
OPENID_PROVIDER_URL=https://huggingface.co # for Google, use https://accounts.google.com
OPENID_TOLERANCE=
OPENID_RESOURCE=
# Parameters to enable a global mTLS context for client fetch requests
USE_CLIENT_CERTIFICATE=false
CERT_PATH=#
KEY_PATH=#
CA_PATH=#
CLIENT_KEY_PASSWORD=#
REJECT_UNAUTHORIZED=true
MODELS=`[
{
"name": "mistralai/Mistral-7B-Instruct-v0.1",
"displayName": "mistralai/Mistral-7B-Instruct-v0.1",
"description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["</s>"]
},
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
}
]`
OLD_MODELS=`[]`
PUBLIC_ORIGIN=#https://huggingface.co
PUBLIC_SHARE_PREFIX=#https://hf.co/chat
PUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable
PUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable
PUBLIC_ANNOUNCEMENT_BANNERS=`[
{
"title": "Code Llama 70B is available! 🦙",
"linkTitle": "try it",
"linkHref": "https://huggingface.co/chat?model=codellama/CodeLlama-70b-Instruct-hf"
}
]`
PARQUET_EXPORT_DATASET=
PARQUET_EXPORT_HF_TOKEN=
PARQUET_EXPORT_SECRET=
RATE_LIMIT= # requests per minute
MESSAGES_BEFORE_LOGIN=# how many messages a user can send in a conversation before having to login. set to 0 to force login right away
APP_BASE="" # base path of the app, e.g. /chat, left blank as default
PUBLIC_APP_NAME=ChatUI # name used as title throughout the app
PUBLIC_APP_ASSETS=chatui # used to find logos & favicons in static/$PUBLIC_APP_ASSETS
PUBLIC_APP_COLOR=blue # can be any of tailwind colors: https://tailwindcss.com/docs/customizing-colors#default-color-palette
PUBLIC_APP_DESCRIPTION=# description used throughout the app (if not set, a default one will be used)
PUBLIC_APP_DATA_SHARING=#set to 1 to enable options & text regarding data sharing
PUBLIC_APP_DISCLAIMER=#set to 1 to show a disclaimer on login page
PUBLIC_APP_DISCLAIMER_MESSAGE="Disclaimer: AI is an area of active research with known problems such as biased generation and misinformation. Do not use this application for high-stakes decisions or advice."
LLM_SUMMERIZATION=true
EXPOSE_API=true
ENABLE_ASSISTANTS=false #set to true to enable assistants feature
ALTERNATIVE_REDIRECT_URLS=`[]` #valid alternative redirect URLs for OAuth
WEBHOOK_URL_REPORT_ASSISTANT=#provide webhook url to get notified when an assistant gets reported
ALLOWED_USER_EMAILS=`[]` # if it's defined, only these emails will be allowed to use the app