Fix support for hardware accelerated embedding generation via ollama #2008


Merged · 3 commits · Feb 4, 2025
6 changes: 3 additions & 3 deletions docs/how-to/embeddings.md
@@ -52,17 +52,17 @@ To get set up:

```diff
 QDRANT_ENCODER=vector_search.encoders.litellm.LiteLLMEncoder
-LITELLM_API_BASE=http://docker.for.mac.host.internal:11434
+LITELLM_API_BASE=http://docker.for.mac.host.internal:11434/v1/
 QDRANT_DENSE_MODEL=<ollama model name>
```

```diff
-_Note_ - "LITELLM_API_BASE=http://docker.for.mac.host.internal:11434" is Mac-specific - if you are using another OS you will need to figure out what your host machine's Docker address is.
+_Note_ - "LITELLM_API_BASE=http://docker.for.mac.host.internal:11434/v1/" is Mac-specific - if you are using another OS you will need to figure out what your host machine's Docker address is.
```

Sample .env file configuration on Mac:

```diff
 QDRANT_ENCODER=vector_search.encoders.litellm.LiteLLMEncoder
-LITELLM_API_BASE=http://docker.for.mac.host.internal:11434
+LITELLM_API_BASE=http://docker.for.mac.host.internal:11434/v1/
 QDRANT_DENSE_MODEL=all-minilm
```

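Why the `/v1/` suffix matters: Ollama exposes an OpenAI-compatible API under `/v1`, and with the provider default switched to `openai` (see the settings change below), LiteLLM expects the base URL to point at that OpenAI-compatible root. Here is a quick way to sanity-check the endpoint before wiring it into the app — a minimal sketch, assuming Ollama runs locally on port 11434 and `all-minilm` has already been pulled:

```python
# Smoke test for Ollama's OpenAI-compatible embeddings endpoint.
# Assumptions: Ollama runs on localhost:11434 and `ollama pull all-minilm`
# has been run. From inside Docker on a Mac, swap localhost for
# docker.for.mac.host.internal.
import requests

API_BASE = "http://localhost:11434/v1"

resp = requests.post(
    f"{API_BASE}/embeddings",
    json={"model": "all-minilm", "input": ["hello world"]},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()["data"]
print(f"{len(data)} embedding(s), {len(data[0]['embedding'])} dimensions")
```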
2 changes: 1 addition & 1 deletion main/settings.py
@@ -816,7 +816,7 @@ def get_all_config_keys():

```diff
 LITELLM_TOKEN_ENCODING_NAME = get_string(
     name="LITELLM_TOKEN_ENCODING_NAME", default=None
 )
-LITELLM_CUSTOM_PROVIDER = get_string(name="LITELLM_CUSTOM_PROVIDER", default="ollama")
+LITELLM_CUSTOM_PROVIDER = get_string(name="LITELLM_CUSTOM_PROVIDER", default="openai")
 LITELLM_API_BASE = get_string(name="LITELLM_API_BASE", default=None)
```


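The default provider moves from `ollama` to `openai`, so LiteLLM talks to Ollama through its OpenAI-compatible client rather than the native `ollama` integration. Roughly, the call the encoder ends up making looks like the following — an illustrative sketch, with the model name and base URL assumed for a local dev setup:

```python
# Illustrative call only; the model name and api_base are assumptions for
# a local Ollama setup and mirror the docs change above.
from litellm import embedding

response = embedding(
    model="all-minilm",                      # model served by Ollama
    input=["hardware accelerated embeddings"],
    api_base="http://localhost:11434/v1/",   # Ollama's OpenAI-compatible endpoint
    custom_llm_provider="openai",            # route via the OpenAI-compatible client
)
vector = response.to_dict()["data"][0]["embedding"]
print(len(vector))
```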
17 changes: 9 additions & 8 deletions vector_search/encoders/litellm.py
@@ -28,11 +28,12 @@ def embed_documents(self, documents):

```diff
         return [result["embedding"] for result in self.get_embedding(documents)["data"]]

     def get_embedding(self, texts):
-        if settings.LITELLM_CUSTOM_PROVIDER and settings.LITELLM_API_BASE:
-            return embedding(
-                model=self.model_name,
-                input=texts,
-                api_base=settings.LITELLM_API_BASE,
-                custom_llm_provider=settings.LITELLM_CUSTOM_PROVIDER,
-            ).to_dict()
-        return embedding(model=self.model_name, input=texts).to_dict()
+        config = {
+            "model": self.model_name,
+            "input": texts,
+        }
+        if settings.LITELLM_CUSTOM_PROVIDER:
+            config["custom_llm_provider"] = settings.LITELLM_CUSTOM_PROVIDER
+        if settings.LITELLM_API_BASE:
+            config["api_base"] = settings.LITELLM_API_BASE
+        return embedding(**config).to_dict()
```
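The refactor is also a behavior fix: the old code forwarded `api_base` and `custom_llm_provider` only when *both* settings were present, silently ignoring either one on its own. The new version builds the kwargs incrementally, so each setting takes effect independently. A standalone sketch of the pattern (the helper name is hypothetical, not part of the repo):

```python
# Standalone sketch of the kwargs-building pattern from the diff above.
# `build_embedding_kwargs` is a hypothetical helper for illustration.

def build_embedding_kwargs(model_name, texts, custom_provider=None, api_base=None):
    config = {"model": model_name, "input": texts}
    if custom_provider:  # forwarded even when api_base is unset
        config["custom_llm_provider"] = custom_provider
    if api_base:  # forwarded even when custom_provider is unset
        config["api_base"] = api_base
    return config

# Previously this case fell through to the bare default call and the
# api_base was dropped; now it is passed along:
print(build_embedding_kwargs("all-minilm", ["hi"], api_base="http://localhost:11434/v1/"))
```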