
Allowing Local LLMs #134


Open

panchi64 opened this issue Apr 25, 2025 · 3 comments

Comments

@panchi64

I'd love to be able to use local LLMs instead of having to use OpenAI or Anthropic. Are there any plans to add support for locally hosted LLMs?

@rezwanahmedsami

I'd love to be able to use local LLMs instead of having to use OpenAI or Anthropic. Are there any plans to add support for locally hosted LLMs?

Yeah, that would be good.

@ahreeeman

This should do it:

  1. Clone repo & create env files:

git clone https://github.com/kortix-ai/suna.git
cd suna/backend && cp .env.example .env
cd ../frontend && cp .env.example .env.local

  2. Point Suna at your local Ollama:

Ollama speaks an OpenAI-compatible API on port 11434. Set two variables in backend/.env (leave OPENAI_API_KEY as any non-empty string):
OPENAI_API_BASE=http://localhost:11434/v1
MODEL_TO_USE=llama3:instruct

  3. Minimal docker-compose (Jetson-friendly). Create suna-stack.yml in the repo root:

services:
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    network_mode: host        # host networking so the host-networked backend can reach Redis on localhost:6379
  ollama:
    image: ollama/ollama:latest
    command: ["serve"]
    volumes:
      - ~/.ollama:/root/.ollama   # reuse the host's model cache
    network_mode: host
  backend:
    build: ./backend
    env_file: ./backend/.env
    depends_on: [redis]
    network_mode: host        # reaches Ollama on localhost:11434
  frontend:
    build: ./frontend
    env_file: ./frontend/.env.local
    depends_on: [backend]
    network_mode: host

Run everything:

docker compose -f suna-stack.yml up -d
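
Once the stack is up, a rough way to sanity-check it (llama3:instruct is just the example model from step 2; substitute whatever you actually pulled):

# pull the model into the Ollama container's cache (first run only)
docker compose -f suna-stack.yml exec ollama ollama pull llama3:instruct

# confirm Ollama's OpenAI-compatible endpoint answers on the host
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3:instruct", "messages": [{"role": "user", "content": "hello"}]}'

# watch the backend to make sure requests hit Ollama and not a cloud provider
docker compose -f suna-stack.yml logs -f backend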

@sahariarpku

I tried with another hosted API, the one from SiliconFlow, and I get this error:

services.llm.LLMRetryError: Failed to make API call after 3 attempts. Last error: litellm.AuthenticationError: AnthropicException - b'{"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}'
2025-04-28 04:04:47,750 - INFO - Running in local development mode - billing checks are disabled
2025-04-28 04:04:48,488 - INFO - Starting thread execution for thread 228ed5c0-66c2-4fef-94de-d0b71a89dafe
2025-04-28 04:04:48,691 - INFO - Thread 228ed5c0-66c2-4fef-94de-d0b71a89dafe token count: 9720/120000 (8.1%)
2025-04-28 04:04:48,691 - INFO - Automatic summarization disabled. Skipping token count check and summarization.

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

2025-04-28 04:04:49,665 - WARNING - Error on attempt 1/3: litellm.AuthenticationError: AnthropicException - b'{"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}'

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

2025-04-28 04:04:55,927 - WARNING - Error on attempt 2/3: litellm.AuthenticationError: AnthropicException - b'{"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}'
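
From the traceback it looks like LiteLLM is still routing the request to Anthropic (AnthropicException / invalid x-api-key) rather than to the SiliconFlow endpoint, so the configured model is probably what needs changing. A rough sketch of what backend/.env might look like for an OpenAI-compatible provider, reusing the variable names from the comment above and assuming Suna passes MODEL_TO_USE straight through to LiteLLM (the base URL and model name below are placeholders I have not verified against SiliconFlow's docs):

# backend/.env (sketch only; variable names taken from the earlier comment)
OPENAI_API_BASE=https://api.siliconflow.cn/v1        # placeholder base URL, check SiliconFlow's docs
OPENAI_API_KEY=sk-your-siliconflow-key
MODEL_TO_USE=openai/Qwen/Qwen2.5-72B-Instruct        # "openai/" prefix tells LiteLLM to use its OpenAI-compatible route

If MODEL_TO_USE still names an Anthropic model, LiteLLM will keep sending the request to Anthropic's API with whatever x-api-key it finds, which would match the error above.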
