[install-help]: Embedding server issue #178
Comments
Nobody to help me? Thanks in advance.
Hello. The error message shown will be improved too, but it means that the internal embedding server could not start for some reason. The embedder's log files in the docker container can give us a clue:
docker exec -it nc_app_context_chat_backend bash
tail /nc_app_context_chat_backend_data/embedding_server_*
Hi, thank you for your help. The log files related to the embedding server are all empty...
Ah, I'm sorry, the path for the logs is incorrect. The correct command is:
docker exec nc_app_context_chat_backend cat /nc_app_context_chat_backend_data/logs/embedding_server_*
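(A caveat for anyone copying this: `docker exec` does not invoke a shell, so the `*` glob is passed literally to `cat`. If the command above errors with "No such file", wrapping it in `sh -c` lets the glob expand inside the container; a minimal sketch, assuming `sh` exists in the image:)

```bash
# Run through a shell so the glob expands inside the container
docker exec nc_app_context_chat_backend sh -c 'cat /nc_app_context_chat_backend_data/logs/embedding_server_*'
```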
Ah okay. And does your system have AVX support?
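(For anyone else hitting this, a quick way to check is to look at the CPU flags on the host; empty output from the sketch below means the CPU advertises no AVX support:)

```bash
# List any AVX-related CPU flags advertised by the host CPU
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
```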
I have a QNAP TS-464, no AVX support in Docker... so I have tried to change the port in the config.yml, but with no success.
My config:
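(For anyone comparing their own setup, the relevant section can be printed from inside the container; the config file path here is an assumption based on the data directory used earlier in this thread, so adjust it to your installation:)

```bash
# Print the embedding-related settings from the container's config;
# the path is an assumed location, not verified against this image
docker exec nc_app_context_chat_backend sh -c 'grep -n -A3 -i "embedding" /nc_app_context_chat_backend_data/config.yaml'
```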
Well, that's a bummer. We don't support systems without AVX.
Which IP has been used here? Can you try with
It's the IP of my host, but I have tried localhost, 127.0.0.1, 0.0.0.0... same result. This is not a manual setup; I installed the Context Chat Backend directly from the Nextcloud application.
Is it to execute in the container, or do I rebuild a new image?
Can you confirm that nothing is running on port 6787, and that nothing on your system, like SELinux or AppArmor, is preventing the binding of the port?
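(A quick way to check both, assuming `ss` is available on the host and that the `aa-status`/`getenforce` tools are installed where AppArmor/SELinux would be in use:)

```bash
# Is anything already listening on port 6787?
ss -tlnp | grep 6787
# Is AppArmor or SELinux active on the host?
aa-status 2>/dev/null || echo "AppArmor tooling not available"
getenforce 2>/dev/null || echo "SELinux tooling not available"
```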
Also, the logs should show the exception message if the embedding server does not start. Would you mind upgrading context_chat_backend to 4.3.0 and posting the logs?
To execute in the container. Unfortunately, it has to be done after every update. We might change things in the long run to automate it for people wanting to customize the build process of llama-cpp-python.
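(For readers in the same non-AVX situation, the customization referred to here is reinstalling llama-cpp-python inside the container with the AVX code paths disabled, using llama-cpp-python's standard CMAKE_ARGS mechanism. A rough sketch; the exact CMake flag names differ across llama.cpp versions, with newer releases using GGML_* instead of LLAMA_*, so treat them as placeholders:)

```bash
# Inside the container (docker exec -it nc_app_context_chat_backend bash):
# rebuild llama-cpp-python without AVX instructions; the flag names are
# version-dependent placeholders, not verified against this image
CMAKE_ARGS="-DLLAMA_AVX=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF" \
  pip install --force-reinstall --no-cache-dir llama-cpp-python
```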
Describe the issue
Hello,
Before asking, I think I have read all the threads about this Nextcloud application, here and elsewhere…
I use the latest version of Nextcloud AIO on a QNAP TS-464 NAS.
For the AI integration, I use the OpenAI connector application with a paid Mistral AI account.
Because my current installation of the Context Chat Backend does not seem to work ("Failed request (500): Embedding Request Error: Error: the embedding server is not responding"), I wonder: what is this embedding server? I have seen the related configuration in the config.yaml, but do I need an external application (server) to use it? Is it an Ollama or LocalAI instance, or an internal server in the application itself?
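(The replies above clarify it: the embedding server is an internal process started by the backend itself, not an external Ollama or LocalAI instance. For anyone triaging the same 500 error, a quick reachability probe, assuming the port 6787 discussed in this thread and that curl is present in the image:)

```bash
# Probe the internal embedding server from inside the backend container;
# the port and path are assumptions taken from this thread, adjust to your config
docker exec nc_app_context_chat_backend curl -sv http://localhost:6787/
```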
Thanks in advance for your answers.
Setup Details (please complete the following information):