As LiteLLM just merged a change to always take the pricing from OpenRouter's usage response, I don't think this project is useful anymore and will archive it.

As of early October 2025, I noticed a few problematic things with OpenRouter's pricing API:
- The load balancing is based on the "cost" of a model but openrouter does not explain how they turn the prompt cost + completion cost + caching cost + image cost into a single "cost" value.
- The `throughput` and `latency` are not returned by the endpoints API call (see `curl https://openrouter.ai/api/v1/models/anthropic/claude-sonnet-4.5/endpoints | jq`). That means I can't get the information needed to know which provider would be used. Similarly, if you use `anthropic/claude-sonnet-4.5:nitro`, your query goes to the highest-throughput provider, but the API does not provide the means to know which one that would be.

So yeah, the script works, but as long as OpenRouter does not fix the above it can be imprecise: by just a bit or by a lot, depending on how you use OpenRouter.
I reached out to OpenRouter to see if they're willing to address this. If you happen to be someone of influence, don't hesitate to reach out!

Edit: after checking with the team by email, they seem to be working on a fix. Will update when they update me.
A Python script that automatically syncs model pricing data from OpenRouter to Langfuse's pricing page. This tool fetches the latest pricing information from OpenRouter's API and creates corresponding models in Langfuse with accurate pricing data.
You might also be interested in my other script: LiteLLM Proxy OpenRouter Price Updater.
This script was created with assistance from aider.chat.
- Automatic Sync: Fetches pricing data from OpenRouter and creates models in Langfuse
- Clean Management: Automatically deletes previously created models before adding new ones
- Continuous Operation: Safe to run repeatedly; handles updates gracefully
- Dry Run Mode: Test operations without making actual changes
- Reset Mode: Option to only delete script-managed models without creating new ones
- Rate Limiting: Built-in rate limiting to respect API limits
- Progress Tracking: Visual progress bars and detailed logging
- Flexible Configuration: Environment variables or CLI arguments
- Tested with Langfuse v3
- Python 3.7+
- Clone this repository:

```
git clone <repository-url>
cd <repository-name>
```

- Install required dependencies:

```
pip install click requests tqdm loguru
```

The script requires Langfuse credentials to sync models. You can provide these via environment variables or CLI arguments.
```
export LANGFUSE_PUBLIC_KEY="your_public_key"
export LANGFUSE_SECRET_KEY="your_secret_key"
export LANGFUSE_HOST="https://your-langfuse-instance.com"
```

Alternatively, pass credentials directly:

```
python script.py --langfuse-public-key="your_key" --langfuse-secret-key="your_secret" --langfuse-host="https://your-instance.com"
```

Sync all OpenRouter models to Langfuse:

```
python script.py
```

Test the sync without making actual changes:

```
python script.py --dry
```

Delete only the script-managed models (useful for cleanup):

```
python script.py --reset
```

Set a specific start date for model pricing (dd-mm-yyyy format):

```
python script.py --start-date="01-12-2024"
```

Add delays between API calls to respect rate limits:

```
python script.py --rate-limit-delay=0.5
```

- Fetch Data: Retrieves current model pricing from OpenRouter API
- Clean Slate: Identifies and deletes previously created models (those starting with `openrouter_script_`)
- Create Models: Creates new Langfuse models with current pricing data
- Smart Naming: Handles duplicate model names and special cases (e.g., "thinking" models)
- Tokenizer Detection: Automatically assigns appropriate tokenizers based on model provider
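The flow above can be sketched as a small orchestration function. The callables here (`fetch_models`, `list_langfuse_models`, `delete_model`, `create_model`) are hypothetical stand-ins for the real API calls, not the script's actual function names:

```python
import time

PREFIX = "openrouter_script_"

def sync(fetch_models, list_langfuse_models, delete_model, create_model,
         dry_run=False, rate_limit_delay=0.0):
    """Delete previously script-created models, then recreate them
    from the latest OpenRouter pricing data."""
    # Clean slate: only touch models this script created earlier.
    stale = [m for m in list_langfuse_models()
             if m["modelName"].startswith(PREFIX)]
    for model in stale:
        if not dry_run:
            delete_model(model["id"])
        time.sleep(rate_limit_delay)

    # Create fresh models with current pricing.
    created = 0
    for model in fetch_models():
        if not dry_run:
            create_model(model)
        created += 1
        time.sleep(rate_limit_delay)
    return len(stale), created
```

Because the API calls are injected, the same function serves dry runs and tests: pass recording stubs instead of real HTTP clients.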
The script creates models in Langfuse with:
- Naming Pattern: `openrouter_script_{canonical_slug}`
- Match Pattern: Regex to match the OpenRouter model ID
- Pricing: Input/output token prices from OpenRouter
- Tokenizer: Automatically selected (Claude for Anthropic models, OpenAI for others)
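Assembling such a model definition might look like this. The payload field names are illustrative, not the script's exact schema; the input is assumed to carry the fields returned by OpenRouter's `/api/v1/models` endpoint (`id`, `canonical_slug`, `pricing`):

```python
import re

def build_langfuse_model(or_model: dict) -> dict:
    """Turn one OpenRouter model entry into a Langfuse model definition."""
    pricing = or_model["pricing"]
    is_anthropic = or_model["id"].startswith("anthropic/")
    return {
        # Naming pattern marks the model as script-managed.
        "modelName": f"openrouter_script_{or_model['canonical_slug']}",
        # Regex that matches the OpenRouter model ID exactly.
        "matchPattern": f"(?i)^{re.escape(or_model['id'])}$",
        # OpenRouter reports prices as strings (USD per token).
        "inputPrice": float(pricing["prompt"]),
        "outputPrice": float(pricing["completion"]),
        # Claude tokenizer for Anthropic models, OpenAI otherwise.
        "tokenizer": "claude" if is_anthropic else "openai",
    }
```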
Script-managed models are identified by:
- Model name starts with `openrouter_script_`
- `isLangfuseManaged` is set to `False`
This ensures the script only manages its own models and won't interfere with manually created ones.
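Both conditions together reduce to a small predicate (a sketch, assuming the Langfuse API returns models as dicts carrying these two fields):

```python
def is_script_managed(model: dict) -> bool:
    """Return True only for models this script created itself."""
    return (
        model.get("modelName", "").startswith("openrouter_script_")
        and model.get("isLangfuseManaged") is False
    )
```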
| Option | Description | Default |
|---|---|---|
| `--langfuse-public-key` | Langfuse public key | From `LANGFUSE_PUBLIC_KEY` env var |
| `--langfuse-secret-key` | Langfuse secret key | From `LANGFUSE_SECRET_KEY` env var |
| `--langfuse-host` | Langfuse host URL | From `LANGFUSE_HOST` env var |
| `--dry` | Dry run mode (no actual changes) | `False` |
| `--reset` | Only delete script-managed models | `False` |
| `--start-date` | Start date for models (dd-mm-yyyy) | Today's date |
| `--rate-limit-delay` | Delay between API calls (seconds) | `0` |
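The CLI-flag-or-environment-variable fallback for the credentials can be expressed as a small helper (a sketch; the real script wires this through click):

```python
import os

def resolve_credential(cli_value, env_var):
    """Prefer the CLI flag; fall back to the environment variable."""
    if cli_value:
        return cli_value
    value = os.environ.get(env_var)
    if not value:
        raise SystemExit(f"Missing credential: pass a CLI flag or set {env_var}")
    return value
```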
The script creates detailed logs in `script.log` with:
- Automatic rotation (10 MB files)
- 7-day retention
- INFO level and above
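With loguru (already a dependency), that setup is a one-liner; the exact call in the script may differ slightly:

```python
from loguru import logger

# Rotate at 10 MB, keep rotated files for 7 days, log INFO and above.
logger.add("script.log", rotation="10 MB", retention="7 days", level="INFO")
```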
This script is designed for continuous operation:
- Safe Re-runs: Each execution cleans up its previous models before creating new ones
- No Conflicts: Only manages models it created (identified by naming pattern)
- Incremental Updates: Handles pricing changes and new models automatically
You can safely run this script on a schedule (e.g., daily) to keep your Langfuse pricing data up-to-date with OpenRouter.
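For example, a daily crontab entry (paths here are placeholders for your own setup):

```shell
# Run the sync every day at 03:00; adjust the paths to your checkout.
0 3 * * * cd /path/to/repo && /usr/bin/python3 script.py >> cron.log 2>&1
```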
- API Failures: Graceful handling of network issues and API errors
- Rate Limiting: Built-in delays to respect API limits
- Data Validation: Skips models with missing or invalid pricing data
- Detailed Logging: Comprehensive error reporting and debugging information
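The pricing validation step could look like this (a sketch; the field names follow OpenRouter's `pricing` object, and the exact rejection rules in the script may differ):

```python
def has_valid_pricing(or_model: dict) -> bool:
    """True if the model carries usable prompt and completion prices."""
    pricing = or_model.get("pricing") or {}
    try:
        prompt = float(pricing.get("prompt", ""))
        completion = float(pricing.get("completion", ""))
    except (TypeError, ValueError):
        # Missing or non-numeric prices: skip the model.
        return False
    # Negative prices are treated as invalid too.
    return prompt >= 0 and completion >= 0
```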
Issues and pull requests are welcome! Please feel free to:
- Report bugs
- Suggest new features
- Submit improvements
- Improve documentation
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
- Created with assistance from aider.chat
- Uses OpenRouter API for model pricing data
- Integrates with Langfuse for LLM observability and pricing management