proposal: expand the go1 bench suite #20384
I suggest
Seems reasonable. Maybe x/benchmarks/compiler? Although if we have that, why do we also need the go1 benchmarks? Should we move them over to seed it?
No. For benchmarks to be useful they need to not change over time. I think we should leave the go1 benchmarks frozen as they are and put new stuff in
Then we will have two competing sets of benchmarks and two separate places I have to go to benchmark my compiler changes. There should be only one. (In fact, we already have that: x/benchmarks has a build benchmark measuring toolchain performance, but we also have x/tools/cmd/compilebench.)
But "not change over time" is orthogonal to where they are stored. Move them or add to them; as long as they're not altered, and the tooling only tries to compare benchmarks that were available both before and after, it shouldn't be an issue. That makes me think of something else. Let's say someone designs a new, very good benchmark (whatever that means). It might be valuable to run it against previous versions of Go to compare its performance historically. That becomes more difficult if the benchmarks are stored
If these are third-party-authored, please put them outside the main repo (x/benchmarks is fine). Also please hook into the existing x/benchmarks framework instead of making new standalone ones, so that all the standard data that x/benchmarks benchmarks report comes for free. But x/benchmarks is the place, probably. go1 is fine for new smaller tests (roughly same size as the ones there) that are important and locally authored. |
Closing since that should make clear where things go (for Josh's case, x/benchmarks). |
[moved from #16192]
It's great to have a standard compiler bench suite. But the go1 suite is deep (fmt) and not particularly broad. Though it is slow and inconvenient to run lots of benchmarks, it is also very useful to have a large variety of code to validate compiler changes that have non-obvious trade-offs, like inlining and code layout.
I propose that we expand the go1 bench suite. I propose that we do it by pulling from real world code that people have filed performance bugs about, preferring medium-sized benchmarks. (That is, not so slow they only execute once in a second, but doing enough non-trivial work that they should be decently robust.) We would get IP permission and do just enough cleanup to make them fit the go1 mold. Two potential examples include #16192 and #16122.
We could do it ad hoc (just send a CL when we see a good candidate) or accumulate a list and occasionally evaluate it. If the latter, the list could be accumulated here or with GitHub labels. I don't feel strongly about either of those decisions; input welcome.
cc @davecheney @navytux