content/blog/2025-10-02-50-million-zsets/index.md (5 additions, 5 deletions)
@@ -2,12 +2,12 @@
title="How Valkey 8.1 Handles 50 Million Sorted Set Inserts"
date=2025-10-02 00:00:01
description= """
The latest Valkey 8.1 release introduces a redesigned hash table and other optimizations that promise lower memory usage and higher throughput. In this post, we put Valkey 8.1 under pressure by benchmarking it against Valkey 8.0, inserting 50 million members into a sorted set and measuring memory consumption and throughput along the way.
When you run infrastructure at scale, the smallest efficiencies compound into massive savings. Sorted sets (ZSETs) are the backing data structure for far more than leaderboards. They're used for time‑ordered feeds, priority queues, recommendation rankings and more. Each entry carries per‑item overhead; when you're inserting tens of millions of items, those bytes accumulate into gigabytes. The latest Valkey 8.1 release introduces a [redesigned hash table](https://valkey.io/blog/valkey-8-1-0-ga/) and other optimizations that promise lower memory usage and higher throughput. In this post, we put Valkey 8.1 under pressure by benchmarking it against Valkey 8.0, inserting 50 million members into a sorted set and measuring memory consumption and throughput along the way.
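To make the per-entry overhead concrete, here's a tiny illustration (not taken from our benchmark; the key and member names are invented) of a time-ordered feed backed by a ZSET, using the redis-py client, which speaks to Valkey as well:

```python
# Illustrative only: a time-ordered feed stored as a sorted set,
# with the publish timestamp as the score. Every member/score pair
# like this carries per-entry overhead inside the server.
import time

import redis  # redis-py talks to Valkey over the same protocol

r = redis.Redis(host="localhost", port=6379)
r.zadd("feed:user:42", {"post:1001": time.time()})

# Newest-first page of the feed
print(r.zrevrange("feed:user:42", 0, 9, withscores=True))
```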
@@ -28,7 +28,7 @@ To provide a fair comparison, both Valkey 8.0 and Valkey 8.1.1 were run on the
* 250,000 item batch size per pipeline
* Metrics collected every 1,000,000 inserts (flush between runs)
The [benchmark code is open sourced](https://github.com/momentohq/sorted-set-benchmark) and straightforward to reproduce: it connects to both servers, flushes the test key, performs batched inserts, and records `used_memory`, `used_memory_rss`, total elapsed time, and throughput after each million inserts. This repeatability mirrors the ethos of Valkey's community - every optimization is measurable, and anyone can verify the results.
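The repository is the source of truth, but as a rough sketch of the loop it implements, the flush/insert/measure cycle looks something like this in Python with redis-py (structure and names here are illustrative, not copied from the repo):

```python
# Sketch of the benchmark loop: batched ZADDs via a pipeline,
# with memory and throughput sampled every million inserts.
import time

import redis

BATCH = 250_000          # items per pipeline, matching the setup above
CHECKPOINT = 1_000_000   # report metrics every million inserts
TOTAL = 50_000_000
KEY = "zset:bench"

r = redis.Redis(host="localhost", port=6379)
r.delete(KEY)            # flush the test key between runs

start = time.time()
for i in range(0, TOTAL, BATCH):
    pipe = r.pipeline(transaction=False)
    for j in range(i, i + BATCH):
        pipe.zadd(KEY, {f"member:{j}": float(j)})
    pipe.execute()

    done = i + BATCH
    if done % CHECKPOINT == 0:
        mem = r.info("memory")
        elapsed = time.time() - start
        print(done, mem["used_memory"], mem["used_memory_rss"],
              round(elapsed, 1), round(done / elapsed))
```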
## Memory Usage – 8.1 vs 8.0
@@ -38,7 +38,7 @@ Valkey 8.1's redesigned dictionary structure cuts [roughly 20–30 bytes per k
At 1 million inserts, Valkey 8.0 used ~95 MB while Valkey 8.1 used ~81 MB. As the ZSET grew, the gap widened. By 10 million inserts, 8.0 consumed 1.06 GB versus 0.77 GB for 8.1 - **a 27% reduction**. At the end of the run (50 million inserts), 8.1 used 3.77 GB compared to 4.83 GB on 8.0, saving 1.06 GB (≈22%).
These numbers align with the release notes. Valkey's 8.1 announcement highlights lower per‑key overheads and improved data structure handling; Linuxiac notes that 8.1's architectural changes can reduce memory footprints by [approximately 20 bytes per KV pair](https://linuxiac.com/valkey-8-1-in-memory-data-store-unleashes-10-faster-throughput), with each pair normally consuming 100 bytes (a 20% reduction!). Our results confirm a similar improvement on large ZSET workloads.
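As a back-of-envelope check of our own (not a figure from either announcement), the end-of-run gap works out to roughly the same per-entry saving:

```python
# Our arithmetic, using the measurements above
saved_bytes = 4.83e9 - 3.77e9      # ≈ 1.06 GB less memory on 8.1 at 50M inserts
print(saved_bytes / 50_000_000)    # ≈ 21 bytes saved per ZSET entry
```

About 21 bytes per entry, squarely in the 20–30 byte range quoted above.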
## Throughput and Total Time
@@ -60,6 +60,6 @@ These benefits don't just apply to leaderboards. Time‑ordered feeds (such as a
As an operator who lives and breathes real‑time data systems, I'm continually amazed by the pace of innovation in the Valkey community. We didn't tune any configuration knobs to achieve these results - Valkey 8.1's efficiency is built in, and the improvements materialized instantly once we upgraded. On a 50 million‑entry benchmark, the new release used up to 27% less memory while delivering about 8% higher throughput, and completed the workload seven seconds faster than its predecessor. Those deltas may seem small in isolation, but at hyperscale they compound into transformative savings and more resilient services.
If you're running sorted-set-heavy workloads - whether leaderboards, feeds, queues or scoring engines - I encourage you to upgrade to Valkey 8.1 and run this benchmark yourself. Interested in how Valkey 8.1 stacked up against its competitors? [So were we](https://www.gomomento.com/blog/valkey-vs-redis-memory-efficiency-at-hyperscale/)!
Our code is [open source](https://github.com/momentohq/sorted-set-benchmark) and available to run on your own hardware. I think you'll be pleasantly surprised by how much headroom you gain and how effortlessly Valkey handles pressure. The power of open source and community‑driven engineering continues to shine through in every release.