Description
[moved from #16192]
It's great to have a standard compiler benchmark suite, but the go1 suite is deep (e.g. fmt) rather than broad. Although running many benchmarks is slow and inconvenient, having a wide variety of code is very useful for validating compiler changes with non-obvious trade-offs, such as inlining and code layout.
I propose that we expand the go1 bench suite by pulling from real-world code that people have filed performance bugs about, preferring medium-sized benchmarks: not so slow that they execute only once per second, but doing enough non-trivial work that they should be decently robust. We would get IP permission and do just enough cleanup to make the code fit the go1 mold. Two potential examples are #16192 and #16122.
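To make "the go1 mold" concrete, here is a hypothetical sketch (not taken from either linked issue) of what a cleaned-up candidate might look like: a standard `testing.B` benchmark that does enough work per iteration to be robust. The workload (`json`-encoding a slice) and the name `BenchmarkJSONEncode` are illustrative assumptions, not the actual code from the performance bugs. The `main` wrapper using `testing.Benchmark` is only there so the sketch runs standalone; in the real suite the function would live in a `_test.go` file.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"testing"
)

// BenchmarkJSONEncode is a hypothetical "medium-sized" benchmark:
// each iteration does non-trivial work (encoding 1000 ints), but an
// iteration still completes in well under a second.
func BenchmarkJSONEncode(b *testing.B) {
	data := make([]int, 1000)
	for i := range data {
		data[i] = i
	}
	b.ResetTimer() // exclude setup from the measured time
	for i := 0; i < b.N; i++ {
		var buf bytes.Buffer
		if err := json.NewEncoder(&buf).Encode(data); err != nil {
			b.Fatal(err)
		}
	}
}

func main() {
	// testing.Benchmark runs the function outside `go test`,
	// choosing b.N automatically; useful for a standalone demo.
	r := testing.Benchmark(BenchmarkJSONEncode)
	fmt.Println(r)
}
```

In the go1 suite proper, such a function would be picked up by `go test -bench` alongside the existing benchmarks.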
We could do this ad hoc (just send a CL when we see a good candidate) or accumulate a list and evaluate it periodically. If the latter, the list could be kept here or via GitHub labels. I don't feel strongly about either of those decisions; input welcome.