This project runs Ruby benchmarks on different cloud providers. Use it to find the fastest cloud instance for your Ruby apps.
Results are posted to https://speedshop.co/rubybench.
Cloud providers offer many instance types, and these instance types can vary significantly in single-thread "straight line" performance. They also vary quite a bit in cost: it may be worth running a slower, older instance type if it's significantly cheaper.
This project is designed to help you answer a few questions:
- What instance type is the fastest on my chosen cloud? And by how much?
- How does my cloud stack up against others? Test AWS, Azure, and other clouds side by side.
- Is my instance type cost-performant? Would switching to a slower, cheaper instance (or a faster, pricier one) be worth it?
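To make the cost question concrete, here's a rough Ruby sketch of the comparison. The benchmark times and hourly prices below are made-up placeholders, not real measurements:

```ruby
# Rough cost-performance comparison. The times and prices below are
# hypothetical placeholders, not real benchmark results.
def cost_per_suite_run(total_seconds, price_per_hour)
  # Cost to complete the whole benchmark suite once on this instance.
  total_seconds / 3600.0 * price_per_hour
end

instances = [
  { alias: "fast", total_seconds: 100.0, price_per_hour: 0.04 },
  { alias: "slow", total_seconds: 130.0, price_per_hour: 0.03 },
]

instances.each do |i|
  cost = cost_per_suite_run(i[:total_seconds], i[:price_per_hour])
  puts format("%s: %.1fs, $%.6f per suite run", i[:alias], i[:total_seconds], cost)
end
```

In this made-up example the slower instance completes the suite more cheaply, so it may be the better value when wall-clock time doesn't matter.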
The tool runs ruby-bench, the official Ruby benchmark suite, on each instance type, then creates an HTML report with charts and tables.
- You create a config file listing which instance types to test
- The tool spins up cloud instances via Terraform
- Each instance runs the ruby-bench suite
- Results upload to S3
- The tool generates an HTML report comparing all results
- Ruby 3.4+
- Fish shell
- Terraform
- Docker
- gum (for pretty output)
- Cloud provider credentials (AWS, Azure)
Test the system on your local machine using Docker:
```shell
./bench-new/master/run.fish -c bench-new/config/example.json --local-orchestrator
```

This runs benchmarks in a local Docker container. Use it to test before spinning up real cloud servers.
Create a config file (see bench-new/config/aws-full.json for an example):
```json
{
  "ruby_version": "3.4.7",
  "runs_per_instance_type": 3,
  "aws": [
    {
      "instance_type": "c8g.medium",
      "alias": "c8g"
    },
    {
      "instance_type": "c7g.medium",
      "alias": "c7g"
    },
    {
      "instance_type": "c6g.medium",
      "alias": "c6g"
    }
  ]
}
```

Each instance is an object with `instance_type` (the cloud provider's instance type name) and `alias` (a short name used in results folders and reports).
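Before spinning up instances, you can sanity-check a config's shape with a short Ruby sketch. This helper is illustrative only, not part of the tool:

```ruby
require "json"

# Minimal validation of the config shape described above.
# Illustrative only; not part of the tool itself.
def validate_config(config)
  errors = []
  errors << "ruby_version must be a string" unless config["ruby_version"].is_a?(String)
  runs = config["runs_per_instance_type"]
  errors << "runs_per_instance_type must be a positive integer" unless runs.is_a?(Integer) && runs.positive?
  %w[aws azure local].each do |provider|
    Array(config[provider]).each do |inst|
      errors << "#{provider} entries need instance_type and alias" unless inst["instance_type"] && inst["alias"]
    end
  end
  errors
end

# Usage: errors = validate_config(JSON.parse(File.read("your-config.json")))
```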
You'll then need to set environment variables for your AWS credentials. See the Terraform configuration for details.
Then run:
```shell
./bench-new/master/run.fish -c your-config.json
```

The tool creates all needed cloud resources. Results appear in the results/ folder when done.
After a benchmark run completes:
```shell
ruby site/generate_report.rb
```

This creates site/public/index.html with your results.
The config file controls what gets benchmarked:
| Field | Description |
|---|---|
| `ruby_version` | Ruby version to test (e.g., "3.4.7") |
| `runs_per_instance_type` | How many times to run benchmarks per instance |
| `aws` | Array of AWS instance objects |
| `azure` | Array of Azure instance objects |
| `local` | Array of local Docker instance objects |
| `task_runners.count` | Number of parallel task runners per instance type (default: 1) |
Each instance object has:
- `instance_type`: The cloud provider's instance type name (or "docker" for local)
- `alias`: A short name used in results folders and reports
```json
{
  "ruby_version": "3.4.7",
  "runs_per_instance_type": 3,
  "aws": [
    { "instance_type": "c8g.medium", "alias": "c8g" },
    { "instance_type": "c7g.medium", "alias": "c7g" }
  ],
  "azure": [
    { "instance_type": "Standard_D2pls_v6", "alias": "d2pls-v6" },
    { "instance_type": "Standard_D2als_v6", "alias": "d2als-v6" }
  ]
}
```

Master Script (bench-new/master/run.fish)
The main entry point. It sets up infrastructure, starts the orchestrator, and collects results.
Orchestrator (bench-new/orchestrator/)
A Rails app that gives tasks to workers and tracks progress. Workers ask it for work and send back results.
Task Runner (bench-new/task-runner/)
Runs on each cloud instance. Claims tasks, runs benchmarks, and uploads results.
Site Generator (site/generate_report.rb)
Reads result files and creates an HTML report with charts.
Results land in `results/<run-id>/`. Each instance gets its own folder named `<alias>-<run-number>/` containing:
- `output.json` - Structured benchmark results
- `output.csv` - CSV format of results
- `output.txt` - Raw benchmark output
- `metadata.json` - Info about the run (provider, instance type, ruby version)
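A short Ruby sketch can walk a run's result folders and summarize them. The metadata key names used here are assumptions based on the description above:

```ruby
require "json"

# Summarize completed runs by reading each metadata.json under
# results/<run-id>/. The metadata field names are assumptions.
def summarize(run_dir)
  Dir.glob(File.join(run_dir, "*", "metadata.json")).sort.map do |path|
    meta = JSON.parse(File.read(path))
    {
      folder: File.basename(File.dirname(path)), # e.g. "c8g-1"
      provider: meta["provider"],
      instance_type: meta["instance_type"],
      ruby_version: meta["ruby_version"],
    }
  end
end
```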
The HTML report shows:
- A summary table ranking instances by total time
- Charts showing individual benchmark times
- 95% confidence intervals for each measurement
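For reference, a normal-approximation 95% interval can be sketched in a few lines of Ruby. The report generator's exact method may differ:

```ruby
# 95% confidence interval for a benchmark's mean time, using a normal
# approximation (1.96 standard errors). Needs at least two samples.
# The actual report generator may compute this differently.
def confidence_interval(times)
  n = times.length.to_f
  mean = times.sum / n
  variance = times.sum { |t| (t - mean)**2 } / (n - 1)
  half_width = 1.96 * Math.sqrt(variance / n)
  [mean - half_width, mean + half_width]
end

lo, hi = confidence_interval([1.02, 0.98, 1.01])
# The interval brackets the sample mean (~1.003s).
```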
The master script cleans up infrastructure when done. If something goes wrong, run:
```shell
cd bench-new/nuke
./nuke.fish
```

This destroys all cloud resources created by the tool.
- Fork this repo
- Create a feature branch
- Make your changes
- If you changed the orchestrator, run the orchestrator's tests:
```shell
cd bench-new/orchestrator && bin/rails test
```
- Open a pull request