buffer: remove async tick and improve perf by 48% #44966
Conversation
Force-pushed from 07ed225 to 214f1c8 (compare)
@anonrig Did you run the benchmark against …

@RafaelGSS I ran the command only on …
Force-pushed from 214f1c8 to fe750c0 (compare)
Benchmark CI: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1195/ Results: …

I updated the benchmark result and the title according to the changes in the benchmark.
BridgeAR left a comment
I am uncertain whether it's really worth it, as it will likely only improve the situation for very small entries, which won't be the common case.

If it's really an API that is called millions of times with very small values, it's a different story. Using promise `.then()` has the downside of not providing async stack traces. Thus, I am -0.5.
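To make the trade-off concrete, here is a minimal sketch of the two resolution strategies being weighed. The function names are hypothetical illustrations, not the PR's actual code:

```javascript
// Variant A: defer resolution by one tick (the "async tick" the PR title
// says it removes). The extra scheduling costs time on tiny payloads.
function textWithTick(buf) {
  return new Promise((resolve) => {
    process.nextTick(() => resolve(buf.toString('utf8')));
  });
}

// Variant B: resolve via a .then() chain on an already-settled promise.
// Cheaper, but the .then() callback does not show up in V8's async stack
// traces the way awaited frames do -- which is the concern raised above.
function textWithThen(buf) {
  return Promise.resolve().then(() => buf.toString('utf8'));
}
```

Both variants resolve to the same string; the difference is only in scheduling overhead and debuggability.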
```js
const { Blob } = require('buffer');
// …
const bench = common.createBenchmark(main, {
  bytes: [0, 8, 128],
```
128 bytes seems rather small. I guess this is mainly used for bigger values such as 128 KiB, 1 MiB and similar.
mcollina left a comment
lgtm
The benchmark CI didn't show any perf improvement with this PR 🤔
Force-pushed from fe750c0 to b0c0746 (compare)
My branch was not up to date (by 35 commits). I rebased, built main and the branch from scratch, force-pushed to this branch, and removed …
Can we run the benchmark on the v18 and v16 branches instead of main? I suspect the V8 version bump fixed this regression.
Benchmark CI (main): https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1196/ N.B.: some of the above CIs are queued; the links will 404 until they start.
BridgeAR left a comment
The benchmark shows no improvement:

```
                                                             confidence improvement accuracy (*)    (**)   (***)
blob/blob-text.js operation='arrayBuffer' n=1000000 bytes=0                 1.80 %       ±4.26% ±5.68%  ±7.40%
blob/blob-text.js operation='arrayBuffer' n=1000000 bytes=128              -0.31 %       ±4.68% ±6.22%  ±8.10%
blob/blob-text.js operation='arrayBuffer' n=1000000 bytes=8                -2.30 %       ±5.96% ±7.93% ±10.32%
blob/blob-text.js operation='text' n=1000000 bytes=0                       -3.52 %       ±5.14% ±6.83%  ±8.90%
blob/blob-text.js operation='text' n=1000000 bytes=128                      0.61 %       ±6.97% ±9.27% ±12.07%
blob/blob-text.js operation='text' n=1000000 bytes=8                       -1.03 %       ±2.79% ±3.71%  ±4.85%
```
The patch does not apply cleanly to older versions and therefore can't automatically be checked against them.

If this really changed anything, it would only be noticeable for super tiny blobs, and those should not be of any concern. As such, I recommend sticking to what we have.
Sorry for the late reply. I rebuilt this change on both v16 and v18, and I could not reproduce the same benchmark result on either branch. Therefore, I'm closing this pull request and opening a pull request only for the benchmark: #44990. I would appreciate anyone who can help me understand how I got the initial 640% and 40% diff on benchmarks. Nevertheless, thank you all for the reviews and comments.
I ran the benchmarks only for `blob.text()` since it's the only relevant one, but added more for future reference.
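For reference, local comparisons like the ones discussed in this thread are typically run from a Node.js source checkout with the project's compare tooling against two builds. The binary paths below are illustrative placeholders:

```shell
# Compare an old and a new local build on the blob benchmarks
# (./node-main and ./node-pr are placeholder paths to two built binaries).
node benchmark/compare.js --old ./node-main --new ./node-pr \
  --filter blob-text blob > compare.csv
Rscript benchmark/compare.R < compare.csv
```

The R script is what produces the confidence/improvement table quoted above; runs without significance stars are indistinguishable from noise.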