
Commit 02b849b

docs: sync Core Integrations API reference (togetherai) on Docusaurus (#9930)
* Sync Core Integrations API reference (togetherai) on Docusaurus

* remove old integration: together_ai

---------

Co-authored-by: anakin87 <[email protected]>
1 parent f4e22b1 commit 02b849b

File tree: 4 files changed, +68 -428 lines changed

docs-website/reference/integrations-api/together_ai.md renamed to docs-website/reference/integrations-api/togetherai.md

Lines changed: 17 additions & 107 deletions
@@ -1,23 +1,23 @@
 ---
 title: "Together AI"
-id: integrations-together-ai
+id: integrations-togetherai
 description: "Together AI integration for Haystack"
-slug: "/integrations-together-ai"
+slug: "/integrations-togetherai"
 ---
 
-<a id="haystack_integrations.components.generators.together_ai.generator"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator"></a>
 
-## Module haystack\_integrations.components.generators.together\_ai.generator
+## Module haystack\_integrations.components.generators.togetherai.generator
 
-<a id="haystack_integrations.components.generators.together_ai.generator.TogetherAIGenerator"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator.TogetherAIGenerator"></a>
 
 ### TogetherAIGenerator
 
 Provides an interface to generate text using an LLM running on Together AI.
 
 Usage example:
 ```python
-from haystack_integrations.components.generators.together_ai import TogetherAIGenerator
+from haystack_integrations.components.generators.togetherai import TogetherAIGenerator
 
 generator = TogetherAIGenerator(model="deepseek-ai/DeepSeek-R1",
                                 generation_kwargs={
@@ -27,7 +27,7 @@ generator = TogetherAIGenerator(model="deepseek-ai/DeepSeek-R1",
 print(generator.run("Who is the best Italian actor?"))
 ```
 
-<a id="haystack_integrations.components.generators.together_ai.generator.TogetherAIGenerator.__init__"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator.TogetherAIGenerator.__init__"></a>
 
 #### TogetherAIGenerator.\_\_init\_\_
 
@@ -77,7 +77,7 @@ variable or set to 30.
 - `max_retries`: Maximum retries to establish contact with Together AI if it returns an internal error, if not set it is
 inferred from the `OPENAI_MAX_RETRIES` environment variable or set to 5.
 
-<a id="haystack_integrations.components.generators.together_ai.generator.TogetherAIGenerator.to_dict"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator.TogetherAIGenerator.to_dict"></a>
 
 #### TogetherAIGenerator.to\_dict
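
A minimal sketch of setting the timeout and retry behaviour described in the hunk above explicitly at construction time, rather than relying on the `OPENAI_TIMEOUT` / `OPENAI_MAX_RETRIES` environment variables. The `timeout` parameter name, the model, and the values are assumptions based on the parameter list above; the Together AI API key is assumed to be available in the environment.

```python
from haystack_integrations.components.generators.togetherai import TogetherAIGenerator

# Explicit values instead of the OPENAI_TIMEOUT / OPENAI_MAX_RETRIES defaults.
# Model name and values are illustrative; the API key is read from the environment.
generator = TogetherAIGenerator(
    model="deepseek-ai/DeepSeek-R1",
    timeout=30.0,
    max_retries=5,
)
```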

@@ -91,7 +91,7 @@ Serialize this component to a dictionary.
 
 The serialized component as a dictionary.
 
-<a id="haystack_integrations.components.generators.together_ai.generator.TogetherAIGenerator.from_dict"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator.TogetherAIGenerator.from_dict"></a>
 
 #### TogetherAIGenerator.from\_dict
 
@@ -110,7 +110,7 @@ Deserialize this component from a dictionary.
 
 The deserialized component instance.
 
-<a id="haystack_integrations.components.generators.together_ai.generator.TogetherAIGenerator.run"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator.TogetherAIGenerator.run"></a>
 
 #### TogetherAIGenerator.run
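
A minimal sketch of the `to_dict` / `from_dict` round trip documented in the two hunks above (the model name is illustrative, and the API key is assumed to be set in the environment):

```python
from haystack_integrations.components.generators.togetherai import TogetherAIGenerator

generator = TogetherAIGenerator(model="deepseek-ai/DeepSeek-R1")  # illustrative model

# Serialize to a plain dict (e.g. for storing a pipeline definition) and rebuild from it.
data = generator.to_dict()
restored = TogetherAIGenerator.from_dict(data)
assert isinstance(restored, TogetherAIGenerator)
```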

@@ -142,7 +142,7 @@ A dictionary with the following keys:
 - `meta`: A list of metadata dictionaries containing information about each generation,
 including model name, finish reason, and token usage statistics.
 
-<a id="haystack_integrations.components.generators.together_ai.generator.TogetherAIGenerator.run_async"></a>
+<a id="haystack_integrations.components.generators.togetherai.generator.TogetherAIGenerator.run_async"></a>
 
 #### TogetherAIGenerator.run\_async
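
A minimal sketch of consuming the `replies` and `meta` keys returned by `run`, as documented in the hunk above (prompt and model are illustrative; the API key is assumed to be set in the environment):

```python
from haystack_integrations.components.generators.togetherai import TogetherAIGenerator

generator = TogetherAIGenerator(model="deepseek-ai/DeepSeek-R1")  # illustrative

result = generator.run("Who is the best Italian actor?")
print(result["replies"][0])  # generated text
print(result["meta"][0])     # per-reply metadata: model name, finish reason, token usage
```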

@@ -174,11 +174,11 @@ A dictionary with the following keys:
 - `meta`: A list of metadata dictionaries containing information about each generation,
 including model name, finish reason, and token usage statistics.
 
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator"></a>
+<a id="haystack_integrations.components.generators.togetherai.chat.chat_generator"></a>
 
-## Module haystack\_integrations.components.generators.together\_ai.chat.chat\_generator
+## Module haystack\_integrations.components.generators.togetherai.chat.chat\_generator
 
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator.TogetherAIChatGenerator"></a>
+<a id="haystack_integrations.components.generators.togetherai.chat.chat_generator.TogetherAIChatGenerator"></a>
 
 ### TogetherAIChatGenerator
 
@@ -204,7 +204,7 @@ For more details on the parameters supported by the Together AI API, refer to th
 
 Usage example:
 ```python
-from haystack_integrations.components.generators.together_ai import TogetherAIChatGenerator
+from haystack_integrations.components.generators.togetherai import TogetherAIChatGenerator
 from haystack.dataclasses import ChatMessage
 
 messages = [ChatMessage.from_user("What's Natural Language Processing?")]
@@ -220,7 +220,7 @@ print(response)
 >>'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})]}
 ```
 
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator.TogetherAIChatGenerator.__init__"></a>
+<a id="haystack_integrations.components.generators.togetherai.chat.chat_generator.TogetherAIChatGenerator.__init__"></a>
 
 #### TogetherAIChatGenerator.\_\_init\_\_
 
@@ -271,7 +271,7 @@ If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable,
 - `http_client_kwargs`: A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`.
 For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/`client`).
 
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator.TogetherAIChatGenerator.to_dict"></a>
+<a id="haystack_integrations.components.generators.togetherai.chat.chat_generator.TogetherAIChatGenerator.to_dict"></a>
 
 #### TogetherAIChatGenerator.to\_dict
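
A minimal sketch of passing `http_client_kwargs`, as documented in the hunk above, to configure the underlying `httpx` client (the specific options and the model name are illustrative):

```python
from haystack_integrations.components.generators.togetherai import TogetherAIChatGenerator

# Keyword arguments forwarded to httpx.Client / httpx.AsyncClient.
chat_generator = TogetherAIChatGenerator(
    model="deepseek-ai/DeepSeek-R1",  # illustrative; API key read from the environment
    http_client_kwargs={"timeout": 60.0, "verify": True},
)
```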

@@ -285,93 +285,3 @@ Serialize this component to a dictionary.
 
 The serialized component as a dictionary.
 
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator.TogetherAIChatGenerator.from_dict"></a>
-
-#### TogetherAIChatGenerator.from\_dict
-
-```python
-@classmethod
-def from_dict(cls, data: dict[str, Any]) -> "OpenAIChatGenerator"
-```
-
-Deserialize this component from a dictionary.
-
-**Arguments**:
-
-- `data`: The dictionary representation of this component.
-
-**Returns**:
-
-The deserialized component instance.
-
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator.TogetherAIChatGenerator.run"></a>
-
-#### TogetherAIChatGenerator.run
-
-```python
-@component.output_types(replies=list[ChatMessage])
-def run(messages: list[ChatMessage],
-        streaming_callback: Optional[StreamingCallbackT] = None,
-        generation_kwargs: Optional[dict[str, Any]] = None,
-        *,
-        tools: Optional[ToolsType] = None,
-        tools_strict: Optional[bool] = None)
-```
-
-Invokes chat completion based on the provided messages and generation parameters.
-
-**Arguments**:
-
-- `messages`: A list of ChatMessage instances representing the input messages.
-- `streaming_callback`: A callback function that is called when a new token is received from the stream.
-- `generation_kwargs`: Additional keyword arguments for text generation. These parameters will
-override the parameters passed during component initialization.
-For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create).
-- `tools`: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
-If set, it will override the `tools` parameter provided during initialization.
-- `tools_strict`: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
-the schema provided in the `parameters` field of the tool definition, but this may increase latency.
-If set, it will override the `tools_strict` parameter set during component initialization.
-
-**Returns**:
-
-A dictionary with the following key:
-- `replies`: A list containing the generated responses as ChatMessage instances.
-
-<a id="haystack_integrations.components.generators.together_ai.chat.chat_generator.TogetherAIChatGenerator.run_async"></a>
-
-#### TogetherAIChatGenerator.run\_async
-
-```python
-@component.output_types(replies=list[ChatMessage])
-async def run_async(messages: list[ChatMessage],
-                    streaming_callback: Optional[StreamingCallbackT] = None,
-                    generation_kwargs: Optional[dict[str, Any]] = None,
-                    *,
-                    tools: Optional[ToolsType] = None,
-                    tools_strict: Optional[bool] = None)
-```
-
-Asynchronously invokes chat completion based on the provided messages and generation parameters.
-
-This is the asynchronous version of the `run` method. It has the same parameters and return values
-but can be used with `await` in async code.
-
-**Arguments**:
-
-- `messages`: A list of ChatMessage instances representing the input messages.
-- `streaming_callback`: A callback function that is called when a new token is received from the stream.
-Must be a coroutine.
-- `generation_kwargs`: Additional keyword arguments for text generation. These parameters will
-override the parameters passed during component initialization.
-For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create).
-- `tools`: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
-If set, it will override the `tools` parameter provided during initialization.
-- `tools_strict`: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
-the schema provided in the `parameters` field of the tool definition, but this may increase latency.
-If set, it will override the `tools_strict` parameter set during component initialization.
-
-**Returns**:
-
-A dictionary with the following key:
-- `replies`: A list containing the generated responses as ChatMessage instances.
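
The removed hunk above documents `run` and `run_async` with the same parameters and return shape; a minimal sketch of the asynchronous call path (model, prompt, and setup are illustrative, and the API key is assumed to be set in the environment):

```python
import asyncio

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.togetherai import TogetherAIChatGenerator


async def main() -> None:
    # Illustrative model; the Together AI API key is read from the environment.
    chat_generator = TogetherAIChatGenerator(model="deepseek-ai/DeepSeek-R1")
    messages = [ChatMessage.from_user("What's Natural Language Processing?")]
    # Same parameters and return shape as run(), awaited in async code.
    result = await chat_generator.run_async(messages)
    print(result["replies"][0].text)


asyncio.run(main())
```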

0 commit comments