
Vottur

Automatic JSONL logging for LLM APIs. Wrap your OpenAI or OpenRouter client once, and every request is logged to a file. No other code changes needed.

Packages

Package   Language       Install
vottur    Python 3.12+   pip install vottur
vottur    Node.js 18+    npm install vottur

Both packages produce identical JSONL output. Mix and match in your stack.

How it works

Vottur wraps your SDK client and intercepts every API call. When you call chat.completions.create(), it:

  1. Captures the request (model, messages, parameters)
  2. Passes it through to the real SDK
  3. Captures the response (content, tokens, latency)
  4. Logs everything to a JSONL file in a background thread

Your code stays exactly the same. The response object is unchanged. Vottur just watches and logs.
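
Conceptually, this is a thin delegating proxy. The sketch below illustrates the pattern only; it is not Vottur's actual source, and LoggingProxy and _writer are illustrative names:

import json
import os
import queue
import threading
import time

class LoggingProxy:
    """Delegate attribute access to the wrapped SDK object, timing and
    logging any create() call on the way through."""

    def __init__(self, wrapped, log_queue):
        self._wrapped = wrapped
        self._log_queue = log_queue

    def __getattr__(self, name):
        attr = getattr(self._wrapped, name)
        if callable(attr) and name == "create":
            def create_and_log(*args, **kwargs):
                start = time.monotonic()
                response = attr(*args, **kwargs)  # step 2: pass through to the real SDK
                self._log_queue.put({             # step 4: handed to the writer thread
                    "model": kwargs.get("model"),
                    "latency_ms": (time.monotonic() - start) * 1000,
                })
                return response                   # response object unchanged
            return create_and_log
        if not callable(attr) and hasattr(attr, "__dict__"):
            # Wrap nested namespaces so client.chat.completions is intercepted too
            return LoggingProxy(attr, self._log_queue)
        return attr

def _writer(log_queue, path=".vottur/logs.jsonl"):
    """Drain the queue, appending one JSON object per line."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a") as f:
        while True:
            f.write(json.dumps(log_queue.get()) + "\n")
            f.flush()

# Usage sketch:
#   from openai import OpenAI
#   log_queue = queue.Queue()
#   threading.Thread(target=_writer, args=(log_queue,), daemon=True).start()
#   client = LoggingProxy(OpenAI(), log_queue)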

Quick Start

Python

from vottur import create_client

client = create_client()

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
# Logged to .vottur/logs.jsonl automatically

TypeScript

import { createClient } from 'vottur';

const client = createClient();

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
// Logged to .vottur/logs.jsonl automatically

Features

  • Transparent - response objects are unchanged, and all SDK methods work normally
  • Non-blocking - logging happens on a background thread, adding no blocking work to the request path
  • Zero dependencies - the core Python package uses only the standard library
  • Streaming support - works with streaming responses and logs once the stream completes (see the sketch after this list)
  • Session tracking - group related requests with session IDs
  • Compatible - identical JSONL format across Python and TypeScript
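
For example, streaming goes through the standard OpenAI SDK interface unchanged; in a sketch like the one below, the log record would be written once the stream has been consumed (presumably with streaming set to true in the record):

from vottur import create_client

client = create_client()

stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in stream:
    # Some chunks (e.g. the final one) may carry no content delta
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
# Logged to .vottur/logs.jsonl after the stream ends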

Agent hierarchy tracking

For multi-agent systems, track parent-child relationships with the _spawnedBy parameter. Vottur exposes a trace_id on every response:

# Root orchestrator
orchestrator = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Plan the task"}],
    _name="orchestrator",
)

# Child agent - spawned by orchestrator
worker = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Execute subtask"}],
    _name="worker",
    _spawnedBy=orchestrator.trace_id,  # Links to parent
)

This creates a hierarchy in your logs, where each child's spawned_by matches its parent's trace_id:

{"trace_id": "tr_abc...", "name": "orchestrator", "spawned_by": null}
{"trace_id": "tr_def...", "name": "worker", "spawned_by": "tr_abc..."}

Works with any depth of nesting.
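
For instance, a worker can spawn its own sub-agent by passing its trace_id along. A sketch reusing the worker from above:

# Grandchild agent - spawned by the worker, two levels below the root
sub_worker = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Handle one step of the subtask"}],
    _name="sub_worker",
    _spawnedBy=worker.trace_id,  # links to the worker, not the orchestrator
)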

Log format

Each line is a JSON object:

{
  "trace_id": "tr_abc123",
  "session_id": "sess_xyz789",
  "timestamp": "2025-01-15T10:30:00.000Z",
  "latency_ms": 1234.5,
  "model": "gpt-5.2",
  "name": "greeting",
  "spawned_by": "tr_parent",
  "input": {
    "messages": [{"role": "user", "content": "Hello!"}]
  },
  "output": {
    "content": "Hello! How can I help you?",
    "finish_reason": "stop"
  },
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 8,
    "total_tokens": 18
  },
  "streaming": false
}
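
Since each line is standalone JSON, the logs are easy to analyze with nothing but the standard library. For example, a short script to tally total token usage per model (the path assumes the default log location):

import json
from collections import Counter

totals = Counter()
with open(".vottur/logs.jsonl") as f:
    for line in f:
        record = json.loads(line)
        usage = record.get("usage") or {}
        totals[record["model"]] += usage.get("total_tokens", 0)

for model, tokens in totals.most_common():
    print(f"{model}: {tokens} tokens")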

Supported APIs

  • OpenAI SDK - OpenAI, Azure OpenAI, Ollama, any OpenAI-compatible API
  • OpenRouter SDK - All OpenRouter models
  • Raw HTTP - Direct fetch/httpx wrapper for any API

License

MIT
