
Commit 497ca15

Updating docs (#629)
## Description

Updates to intro and Quickstart

## PR Type

📚 Documentation

## Checklist

- [X] I have added unit tests that prove my fix/feature works
- [X] New and existing tests pass locally
- [X] Documentation was updated where necessary
- [X] I have read and followed the [contribution guidelines](https://github.com/mozilla-ai/any-llm/blob/main/CONTRIBUTING.md)
1 parent 59eb8c6 commit 497ca15

File tree

2 files changed

+59
-54
lines changed


docs/index.md

Lines changed: 36 additions & 7 deletions
````diff
@@ -1,22 +1,51 @@
 <p align="center">
   <picture>
-    <img src="./images/any-llm-logo.png" width="20%" alt="Project logo"/>
+    <img src="./images/any-llm-logo.png" width="20%" alt="any-llm logo"/>
   </picture>
+  <p align="center"> <b>Stop rewriting code for every LLM provider </b></p>
 </p>
 
 `any-llm` is a Python library providing a single interface to different llm providers.
 
-### Demo
+```python
+from any_llm import completion
+
+# Using the messages format
+response = completion(
+    model="gpt-4o-mini",
+    messages=[{"role": "user", "content": "What is Python?"}],
+    provider="openai"
+)
+print(response)
+
+# Switch providers without changing your code
+response = completion(
+    model="claude-sonnet-4-5-20250929",
+    messages=[{"role": "user", "content": "What is Python?"}],
+    provider="anthropic"
+)
+print(response)
+```
+
+### Why any-llm
+- Switch providers in one line
+- Consistent error handling across providers
+- Simple API, powerful features
+
+[View supported providers →](./providers.md)
 
-Try `any-llm` in action with our interactive chat demo that showcases streaming completions and provider switching:
+### Getting Started
 
-**[📂 Run the Demo](https://github.com/mozilla-ai/any-llm/tree/main/demos/chat#readme)**
+**[Get started in 5 minutes →](./quickstart.md)** - Install the library and run your first API call.
 
-The demo features real-time streaming responses, multiple provider support, and collapsible "thinking" content display.
 
-### Getting Started
+### Demo
+
+Try `any-llm` in action with our interactive chat demo:
+
+**[📂 Run the Demo](https://github.com/mozilla-ai/any-llm/tree/main/demos/chat#readme)**
 
-Refer to the [Quickstart](./quickstart.md) for instructions on installation and usage.
+Features: real-time streaming responses, multiple provider support, and collapsible "thinking" content display.
 
 ### API Documentation
 
````
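The "single interface, switch providers in one line" idea that the new intro demonstrates can be sketched without the library itself. The following is a toy, hypothetical dispatch table, not any-llm's actual implementation; the backend functions and their string return values exist only for illustration:

```python
# Toy sketch of a provider-agnostic completion interface.
# Hypothetical stand-in; NOT any-llm's actual code.

def _openai_backend(model, messages):
    # Stand-in for a real OpenAI API call.
    return f"[openai:{model}] {messages[-1]['content']}"

def _anthropic_backend(model, messages):
    # Stand-in for a real Anthropic API call.
    return f"[anthropic:{model}] {messages[-1]['content']}"

_BACKENDS = {"openai": _openai_backend, "anthropic": _anthropic_backend}

def completion(model, messages, provider):
    """One call signature routed to whichever backend `provider` names."""
    try:
        backend = _BACKENDS[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider!r}") from None
    return backend(model, messages)

# Switching providers changes only the `provider` argument:
msgs = [{"role": "user", "content": "What is Python?"}]
print(completion("gpt-4o-mini", msgs, provider="openai"))
print(completion("claude-sonnet-4-5-20250929", msgs, provider="anthropic"))
```

The real library additionally normalizes responses and errors across providers; the sketch only shows the dispatch shape.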
docs/quickstart.md

Lines changed: 23 additions & 47 deletions
````diff
@@ -3,59 +3,41 @@
 ### Requirements
 
 - Python 3.11 or newer
-- API_KEYS to access to whichever LLM you choose to use.
+- API keys for your chosen LLM provider
 
 ### Installation
+```bash
+pip install any-llm-sdk[all]  # Install with all provider support
+```
 
-#### Direct Usage
+#### Installing Specific Providers
 
-In your pip install, include the [supported providers](./providers.md) that you plan on using, or use the `all` option if you want to install support for all `any-llm` supported providers.
+If you want to install a specific provider from our [supported providers](./providers.md):
 
 ```bash
 pip install any-llm-sdk[mistral]  # For Mistral provider
 pip install any-llm-sdk[ollama]  # For Ollama provider
 # install multiple providers
 pip install any-llm-sdk[mistral,ollama]
-# or install support for all providers
-pip install any-llm-sdk[all]
 ```
 
 #### Library Integration
 
-If you're integrating `any-llm` into your own library that others will use, you only need to install the base package:
-
-```bash
-pip install any-llm-sdk
-```
-
-In this scenario, the end users of your library will be responsible for installing the appropriate provider dependencies when they want to use specific providers. `any-llm` is designed so that you'll only encounter exceptions at runtime if you try to use a provider without having the required dependencies installed.
-
-Those exceptions will clearly describe what needs to be installed to resolve the issue.
-
-Make sure you have the appropriate API key environment variable set for your provider. Alternatively,
-you could use the `api_key` parameter when making a completion call instead of setting an environment variable.
-
-```bash
-export MISTRAL_API_KEY="YOUR_KEY_HERE"  # or OPENAI_API_KEY, etc
-```
-
-### Basic Usage
-
-`any-llm` provides two main approaches for working with LLM providers, each optimized for different use cases:
-
-#### Option 1: Direct API Functions
+If you're building a library, install just the base package (`pip install any-llm-sdk`) and let your users install provider dependencies.
 
-[`completion`][any_llm.completion] and [`acompletion`][any_llm.acompletion] provide a unified interface across all providers - perfect for simple use cases and quick prototyping.
+> **API Keys:** Set your provider's API key as an environment variable (e.g., `export MISTRAL_API_KEY="your-key"`) or pass it directly using the `api_key` parameter.
 
-**Recommended approach:** Use separate `provider` and `model` parameters:
+### Your First API Call
 
 ```python
 import os
 
 from any_llm import completion
 
-# Make sure you have the appropriate environment variable set
-assert os.environ.get('MISTRAL_API_KEY')
+# Make sure you have the appropriate API key set
+api_key = os.environ.get('MISTRAL_API_KEY')
+if not api_key:
+    raise ValueError("Please set MISTRAL_API_KEY environment variable")
 
 # Recommended: separate provider and model parameters
 response = completion(
@@ -66,26 +48,19 @@ response = completion(
 print(response.choices[0].message.content)
 ```
 
-**Alternative syntax:** You can also use the combined `provider:model` format:
+### Advanced: Using the AnyLLM Class
 
-```python
-response = completion(
-    model="mistral:mistral-small-latest",  # <provider_id>:<model_id>
-    messages=[{"role": "user", "content": "Hello!"}]
-)
-```
-
-#### Option 2: AnyLLM Class
-
-For advanced use cases that require provider reuse, metadata access, or more control over configuration:
+For applications making multiple requests with the same provider, use the `AnyLLM` class to avoid repeated provider instantiation:
 
 ```python
 import os
 
 from any_llm import AnyLLM
 
-# Make sure you have the appropriate environment variable set
-assert os.environ.get('MISTRAL_API_KEY')
+# Make sure you have the appropriate API key set
+api_key = os.environ.get('MISTRAL_API_KEY')
+if not api_key:
+    raise ValueError("Please set MISTRAL_API_KEY environment variable")
 
 llm = AnyLLM.create("mistral")
 
@@ -113,9 +88,7 @@ print(f"Supports tools: {metadata.completion}")
 - Building applications that make multiple requests with the same provider
 - You want to avoid repeated provider instantiation overhead
 
-The provider_id should be specified according to the [provider ids supported by any-llm](./providers.md).
-The `model_id` portion is passed directly to the provider internals: to understand what model ids are available for a provider,
-you will need to refer to the provider documentation or use our [`list_models`](./api/list_models.md) API if the provider supports that API.
+**Finding model names:** Check the [providers page](./providers.md) for provider IDs, or use the [`list_models`](./api/list_models.md) API to see available models for your provider.
 
 ### Streaming
 
@@ -171,6 +144,9 @@ def get_weather(location: str, unit: str = "F") -> str:
     Args:
         location: The city or location to get weather for
         unit: Temperature unit, either 'C' or 'F'
+
+    Returns:
+        Current weather description
     """
     return f"Weather in {location} is sunny and 75{unit}!"
 
````
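The Library Integration paragraph this diff condenses describes a pattern worth making concrete: provider dependencies are imported only when used, and a missing package surfaces as a runtime exception that says what to install. A minimal sketch of that pattern, under the assumption of lazy imports plus an install hint (the helper name and its error text are hypothetical, not any-llm's actual code):

```python
import importlib

def load_provider_module(provider: str, module: str, extra: str):
    """Import a provider's SDK lazily, raising a clear install hint if absent.

    Hypothetical helper for illustration only; not any-llm's real API.
    """
    try:
        return importlib.import_module(module)
    except ImportError:
        # Tell the end user exactly which extra resolves the problem.
        raise ImportError(
            f"Provider {provider!r} requires the {module!r} package. "
            f"Install it with: pip install any-llm-sdk[{extra}]"
        ) from None
```

With this shape, a library depending only on the base package defers the dependency decision to its end users, exactly as the section recommends.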