Automatic JSONL logging for LLM APIs. Wrap your OpenAI or OpenRouter client, and every request gets logged to a file. No code changes needed.
| Package | Language | Install |
|---|---|---|
| vottur | Python 3.12+ | `pip install vottur` |
| vottur | Node.js 18+ | `npm install vottur` |
Both packages produce identical JSONL output. Mix and match in your stack.
Vottur wraps your SDK client and intercepts every API call. When you call `chat.completions.create()`, it:
- Captures the request (model, messages, parameters)
- Passes it through to the real SDK
- Captures the response (content, tokens, latency)
- Logs everything to a JSONL file in a background thread
Your code stays exactly the same. The response object is unchanged. Vottur just watches and logs.
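Conceptually, the interception looks something like the sketch below. This is a simplified illustration, not vottur's actual implementation; the class and function names here are hypothetical.

```python
import json
import os
import queue
import threading
import time

class LoggingCompletions:
    """Hypothetical stand-in for vottur's wrapper around completions."""

    def __init__(self, inner, log_queue):
        self._inner = inner      # the real SDK's completions object
        self._queue = log_queue  # hand-off point to the background writer

    def create(self, **kwargs):
        start = time.monotonic()
        response = self._inner.create(**kwargs)  # pass through to the real SDK
        self._queue.put({
            "model": kwargs.get("model"),                     # captured request
            "input": {"messages": kwargs.get("messages")},
            "latency_ms": (time.monotonic() - start) * 1000,  # captured timing
        })
        return response  # the caller sees the unmodified response

def writer_loop(log_queue, path=".vottur/logs.jsonl"):
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a") as f:
        while True:  # drain the queue; file I/O never blocks the caller
            f.write(json.dumps(log_queue.get()) + "\n")
            f.flush()

log_queue = queue.Queue()
threading.Thread(target=writer_loop, args=(log_queue,), daemon=True).start()
```

In real use, none of this machinery is visible; you create the client and call it as usual: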
**Python**

```python
from vottur import create_client

client = create_client()
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
# Logged to .vottur/logs.jsonl automatically
```

**TypeScript**

```typescript
import { createClient } from 'vottur';

const client = createClient();
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
// Logged to .vottur/logs.jsonl automatically
```

- Transparent - response objects are unchanged, all methods work normally
- Non-blocking - logging happens in a background thread, zero latency impact
- Zero dependencies - the Python core uses only the standard library
- Streaming support - works with streaming responses, logs on completion
- Session tracking - group related requests with session IDs (see the sketch after this list)
- Compatible - same JSONL format across Python and TypeScript
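The shared format makes cross-language tooling straightforward. As a sketch of session tracking in practice, the following script groups log records by their `session_id` field (assuming the default `.vottur/logs.jsonl` path):

```python
import json
from collections import defaultdict

sessions = defaultdict(list)
with open(".vottur/logs.jsonl") as f:
    for line in f:
        record = json.loads(line)
        sessions[record.get("session_id")].append(record)

# One line per session: its ID and how many requests it contains
for session_id, records in sessions.items():
    print(f"{session_id}: {len(records)} requests")
```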
For multi-agent systems, track parent-child relationships with `_spawnedBy`. Vottur exposes `trace_id` on every response:

```python
# Root orchestrator
orchestrator = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Plan the task"}],
    _name="orchestrator",
)

# Child agent - spawned by orchestrator
worker = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Execute subtask"}],
    _name="worker",
    _spawnedBy=orchestrator.trace_id,  # Links to parent
)
```

This creates a hierarchy in your logs where `spawned_by` matches the parent's `trace_id`:

```jsonl
{"trace_id": "tr_abc...", "name": "orchestrator", "spawned_by": null}
{"trace_id": "tr_def...", "name": "worker", "spawned_by": "tr_abc..."}
```

This works at any depth of nesting.
Each line is a JSON object:

```json
{
  "trace_id": "tr_abc123",
  "session_id": "sess_xyz789",
  "timestamp": "2025-01-15T10:30:00.000Z",
  "latency_ms": 1234.5,
  "model": "gpt-5.2",
  "name": "greeting",
  "spawned_by": "tr_parent",
  "input": {
    "messages": [{"role": "user", "content": "Hello!"}]
  },
  "output": {
    "content": "Hello! How can I help you?",
    "finish_reason": "stop"
  },
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 8,
    "total_tokens": 18
  },
  "streaming": false
}
```

Vottur works with:

- OpenAI SDK - OpenAI, Azure OpenAI, Ollama, any OpenAI-compatible API
- OpenRouter SDK - All OpenRouter models
- Raw HTTP - Direct fetch/httpx wrapper for any API
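Whichever provider you use, every record carries `model` and `usage` fields, so basic cost accounting takes only a few lines. A sketch, again assuming the default log path:

```python
import json
from collections import Counter

totals = Counter()
with open(".vottur/logs.jsonl") as f:
    for line in f:
        record = json.loads(line)
        usage = record.get("usage") or {}
        totals[record["model"]] += usage.get("total_tokens", 0)

# Total tokens per model, highest first
for model, tokens in totals.most_common():
    print(f"{model}: {tokens} tokens")
```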
License: MIT