A benchmarking framework for Python optimization and operations research solvers. Compare performance across OR-Tools, Pyomo, SciPy, solvOR, and PuLP with an interactive Marimo dashboard.
- Multi-solver support: Unified interface for OR-Tools, Pyomo, SciPy, solvOR, and PuLP
- Problem library: Pre-built LP, MIP, and NLP problem definitions
- Scalable benchmarks: Test solvers across varying problem sizes (variables, constraints, data)
- Interactive UI: Marimo-powered dashboard for visual benchmarking
- Metrics collection: Solve time, setup time, memory usage, solution quality
- Export & reporting: Results in CSV/JSON with comparison charts
| Solver | Problem types | Status |
|---|---|---|
| OR-Tools | LP, MIP, CP | Planned |
| Pyomo | LP, MIP, NLP | Planned |
| SciPy | LP, MIP, NLP | Planned |
| solvOR | LP, MIP, CP, NLP, Graph | Planned |
| PuLP | LP, MIP | Planned |
| HiGHS | LP, MIP, QP | Planned |
| CVXPY | LP, QP, SOCP, SDP | Planned |
| Gurobi | LP, MIP, QP, MIQP | Planned |
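All backends plug into a single interface. As a rough sketch of what such an adapter layer could look like (class and method names here are illustrative, not benchmORk's actual API):

```python
# Hypothetical sketch of a unified solver interface; names are
# illustrative, not benchmORk's actual API.
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SolveResult:
    objective: float
    solve_time: float  # seconds


class BaseSolver(ABC):
    """Common interface each backend adapter would implement."""

    @abstractmethod
    def solve(self, problem: dict) -> SolveResult: ...


class DummySolver(BaseSolver):
    """Stand-in backend: 'solves' by summing the problem coefficients."""

    def solve(self, problem: dict) -> SolveResult:
        start = time.perf_counter()
        objective = sum(problem["coefficients"])
        return SolveResult(objective, time.perf_counter() - start)


result = DummySolver().solve({"coefficients": [1.0, 2.0, 3.0]})
print(result.objective)  # 6.0
```

With a shape like this, the runner only ever talks to `BaseSolver`, so adding a backend means writing one adapter class.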
Requires uv for package management.
```bash
# Clone the repository
git clone https://github.com/StevenBtw/benchmORk.git
cd benchmORk

# Install all dependencies
uv sync
```

```bash
# Run a quick benchmark
python -m benchmork.runner --config configs/quick.yaml

# Run with specific solvers
python -m benchmork.runner --solvers ortools pulp --problem knapsack
```

The benchmark dashboard is built with Marimo, a reactive Python notebook.
```bash
# Launch the interactive dashboard
uv run marimo run app/main.py
```

This opens a browser-based dashboard where you can:
- Select a problem type (LP, MIP), which automatically filters the available solvers
- Choose solvers to compare (multi-select from available solvers)
- Adjust problem size with sliders
- Run benchmarks and view results with timing charts
- Compare performance across different solver/problem combinations
For development mode with hot-reloading:
```bash
uv run marimo edit app/main.py
```

```python
from benchmork.problems import Knapsack
from benchmork.runner import BenchmarkRunner
from benchmork.solvers import ORToolsSolver, PuLPSolver

# Define a problem
problem = Knapsack(n_items=100, capacity=500)

# Run benchmark
runner = BenchmarkRunner(
    solvers=[ORToolsSolver(), PuLPSolver()],
    problems=[problem],
)
results = runner.run()

# View results
print(results.summary())
```

**Linear programming (LP)**

- Transportation problem
- Diet problem
- Blending problem
- Production planning
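Each of these reduces to standard LP form. As a point of reference for what a backend has to do, here is a tiny balanced transportation instance solved directly with SciPy's `linprog`, one of the planned backends (the instance data is made up for illustration):

```python
# Illustrative only: a 2-supplier, 3-customer transportation instance
# solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

# x is the flattened shipment matrix, row-major: x[i * 3 + j]
cost = np.array([[8, 6, 10],
                 [9, 12, 13]])
supply = [20, 30]
demand = [10, 25, 15]

A_eq, b_eq = [], []
for i in range(2):  # each supplier ships exactly its supply
    row = np.zeros(6)
    row[i * 3:(i + 1) * 3] = 1
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(3):  # each customer receives exactly its demand
    col = np.zeros(6)
    col[j::3] = 1
    A_eq.append(col)
    b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=(0, None), method="highs")
print(res.fun)  # minimal total shipping cost
```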
**Mixed-integer programming (MIP)**

- Knapsack problem
- Assignment problem
- Traveling Salesman (TSP)
- Bin packing
- Set covering / Set partitioning
- Facility location
- Vehicle routing (VRP)
- Job shop scheduling
- Graph coloring
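For instance, a small 0/1 knapsack can be solved directly with SciPy's `milp`, one of the planned backends; the item data below is illustrative:

```python
# Illustrative only: a 6-item 0/1 knapsack via scipy.optimize.milp.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

values = np.array([10, 13, 18, 31, 7, 15])
weights = np.array([1, 1, 2, 4, 1, 3])
capacity = 7

res = milp(
    c=-values,  # milp minimizes, so negate to maximize total value
    constraints=LinearConstraint(weights[np.newaxis, :], ub=capacity),
    integrality=np.ones_like(values),  # all variables integer
    bounds=Bounds(0, 1),               # 0/1 decision per item
)
best_value = -res.fun
print(best_value)  # value of the best feasible packing
```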
**Nonlinear programming (NLP)**

- Rosenbrock function
- Portfolio optimization (Markowitz)
- Rastrigin function
- Ackley function
- Sphere function
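As a baseline for the NLP set, the Rosenbrock benchmark can be minimized with SciPy's `minimize`, using the `rosen` helper that ships with `scipy.optimize`:

```python
# Illustrative only: minimizing the Rosenbrock function with SciPy's
# general-purpose NLP solver (one of the planned backends).
from scipy.optimize import minimize, rosen, rosen_der

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res = minimize(rosen, x0, jac=rosen_der, method="BFGS")
print(res.x)    # converges to the global minimum at [1, 1, 1, 1, 1]
print(res.fun)  # objective value near 0
```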
**Constraint satisfaction (CP/SAT)**

- N-Queens
- Sudoku
- Graph coloring (SAT encoding)
- Random 3-SAT
- Pigeonhole principle
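For scale, here is a plain-Python backtracking count of N-Queens solutions, the kind of naive baseline a CP/SAT backend gets compared against:

```python
# Illustrative only: counting N-Queens solutions by backtracking,
# tracking attacked columns and diagonals as sets.
def count_nqueens(n: int) -> int:
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1  # all queens placed
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square attacked by an earlier queen
            total += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total

    return place(0, frozenset(), frozenset(), frozenset())

print(count_nqueens(6))  # 4 distinct solutions
```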
**Pathfinding**

- Maze solving
- Grid navigation
**Graph algorithms**

- Shortest path (weighted/unweighted)
- All-pairs shortest paths
- Maximum flow
- Minimum cost flow
- Minimum spanning tree
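The weighted shortest-path case has a compact stdlib-only baseline; graph-capable backends such as solvOR would be benchmarked against something like this Dijkstra sketch:

```python
# Illustrative only: Dijkstra's algorithm with a binary heap.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} -> {node: distance}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already relaxed
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```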
**Exact cover**

- Pentomino tiling
- Sudoku (exact cover encoding)
- N-Queens (exact cover encoding)
**Scheduling and assignment**

- Quadratic Assignment (QAP)
- Flow shop scheduling
- Permutation flow shop
**Black-box optimization**

- Hyperparameter tuning (synthetic)
- Noisy function optimization
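A random-search baseline gives a floor for these noisy objectives; the sketch below minimizes a noise-perturbed sphere function with a fixed seed so the run is reproducible:

```python
# Illustrative only: random search on a noisy sphere function,
# whose noiseless optimum sits at the origin.
import random

def noisy_sphere(x, rng):
    """Sphere function plus small Gaussian observation noise."""
    return sum(xi * xi for xi in x) + rng.gauss(0, 0.01)

rng = random.Random(42)  # fixed seed for reproducibility
best_x, best_f = None, float("inf")
for _ in range(2000):
    x = [rng.uniform(-5, 5) for _ in range(3)]
    f = noisy_sphere(x, rng)
    if f < best_f:
        best_x, best_f = x, f

print(best_f)  # small value, close to the noise floor
```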
Benchmark configurations are defined in YAML:
```yaml
# configs/standard.yaml
solvers:
  - ortools
  - pulp
  - pyomo

problems:
  - type: knapsack
    sizes: [100, 500, 1000, 5000]
  - type: transportation
    sizes: [10x10, 50x50, 100x100]

metrics:
  - solve_time
  - setup_time
  - memory_peak
  - solution_value
```

See CONTRIBUTING.md for development setup and guidelines.
Apache 2.0 License, free for personal and commercial use.