cmd/compile: clarify whether "-l" and "-N" compiler flags are actually supported #49390
(CC @FiloSottile, @bradfitz, @josharian, @cuonglm, @cherrymui)
I think the question is what "supported" means. I would say "supported" means "the flag does what is specified in the doc, and will continue to do so". The flag's specified behavior could be printing extra debugging information, altering the compiler's behavior, or making the compiler slower (or faster), up to and including generating broken programs. It doesn't mean all generated programs have the same expected behaviors (like, a test is expected to pass). That said, I think Action 2 above is a reasonable approach. That also said, https://golang.org/doc/go1compat#tools says tools can change (although it is unlikely that we'll intentionally break it). I don't think we want to declare anything stronger than that (so what would "fully supported" actually mean?).
Action 1 is a non-starter, as it would prevent us from writing tests that protect the performance characteristics of low-level code (like the proximate cause of this bug, whether UDP send/recv allocates). Action 2 seems perfectly reasonable to me. It's how we deal with a similar problem, namely alloc tests that fail under the race detector. I will note that build tags are annoyingly verbose to use for skipping tests. Related in multiple ways: #36477. (For our purposes, we can have helpers in an internal package.)
I agree that that's the heart of the question. To me, "supported" means that users can reasonably build (and test) their programs with the flag set, and that `go test all` passes reliably in that configuration.
@josharian, course of action (1) would not prevent us from writing tests that protect performance characteristics. It would only prevent us from running those tests by default as part of the package's tests. Other options that would remain available include guarding the test behind an explicit flag or environment variable, or running it only on builders.
FWIW, I think a flag- or environment-guarded test is likely to be less annoying to write than one guarded by a build constraint, although it's more likely to slip through until the TryBots are run during code review.
@bcmills all those options make running the test difficult or rare. I don't want to have to explain that when developing package net you need to export an envvar to pretend to be a builder, or that running the package net tests is insufficient. And the track record for using benchmarks to prevent regressions is terrible. The way you make it not terrible is with tooling. And we have tooling: it's `testing.AllocsPerRun`. Running with -N is the unusual case. Let's support it, but let's not have the tail wag the dog.
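For context, the kind of test at stake looks roughly like this. This is a minimal sketch, not an excerpt from package net: the test name and body are hypothetical, but `testing.AllocsPerRun` is the real helper, and a count like this typically changes when inlining and optimization are disabled with `-N -l`.

```go
package net_test

import "testing"

// TestUDPSendAllocs is a hypothetical allocation test in the style used
// throughout std. testing.AllocsPerRun averages the number of heap
// allocations over 100 calls of the function.
func TestUDPSendAllocs(t *testing.T) {
	allocs := testing.AllocsPerRun(100, func() {
		// ... perform one UDP send/receive here ...
	})
	// The expected count assumes the compiler inlines and optimizes as
	// usual; building with -gcflags='-N -l' changes the result.
	if allocs > 0 {
		t.Fatalf("got %v allocs per run; want 0", allocs)
	}
}
```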
That's why I suggest a flag: then you don't need to export a variable to pretend to be a builder; you just pass a flag to `go test`.
And the failure message in the TryBot and the default skip message can even tell you exactly which flag to set to run it. I agree that the track record for using benchmarks to prevent regressions is not good, but I also don't think it's unreasonable to expect developers to consider allocations, run the benchmarks, and check for regressions when making a change to performance-sensitive code. A TryBot failure that reminds you to run the benchmarks does not seem at all unreasonable to me.
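A flag-guarded skip of the kind being described might look like this. A sketch only: the `-allocstest` flag name and the test are hypothetical, not an agreed API.

```go
package net_test

import (
	"flag"
	"testing"
)

// allocsTest is a hypothetical opt-in flag; the test below is skipped
// unless the developer runs `go test -allocstest`.
var allocsTest = flag.Bool("allocstest", false, "run allocation-sensitive tests")

func TestUDPSendAllocsGuarded(t *testing.T) {
	if !*allocsTest {
		// The skip message names the exact flag to set, as suggested above.
		t.Skip("allocation-sensitive test; run with -allocstest (and without -N/-l)")
	}
	// ... allocation checks as in the earlier sketch ...
}
```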
@bcmills thanks. I would think all (non-default) flags have the same support level, at least in theory. `-N` and `-l` are not more supported than `-B` (again, theoretically). We can still have tests for them to ensure that each does what it should do, namely `-N` disables optimization and `-l` disables inlining (just as `-B` disables bounds checks), and nothing more than that (e.g. none of them should crash the compiler or generate a binary that doesn't run). But "deprecated" (in your Action 3) doesn't sound like what we want, though. (I interpret "deprecated" as "you should not use it; it may be removed in the future".)
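(For concreteness: these compiler flags are passed through `-gcflags`, e.g. `go build -gcflags='-N -l' .` to disable optimization and inlining for the packages named on the command line, or `go test -gcflags='all=-N -l' net` to apply them to every package in the build; `-B` is passed the same way.)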
My understanding is the same. I don't see a problem with writing tests that only work if the code is optimized, especially when testing the compiler itself, so I think the only question is how tests should detect that fact. I don't think a build tag is quite right, but it's there and it's easy to use and I can't think of anything else that is simple, so I'm OK with it.
If we use a build tag there is no API change here and this doesn't have to go through the proposal process. We can just drop a couple of files in internal/testenv and use that in the standard library. Other packages will have to make their own choices, but other packages are less likely to write tests that are affected by this and are also less likely to actually run their tests with `-N` or `-l`.
I don't think that addresses the problem: if the build tag is internal to the standard library, then either the test is excluded by default (a situation @josharian wants to avoid), or it fails when users run the tests with `-N` or `-l` set.
These are not tests of the compiler itself; they are tests of packages like `net`.
ObPedantic: they are in effect tests of the compiler, expressed as tests of other packages: they are testing that the compiler is behaving as the package expects. I'm not suggesting that the build tag is internal to the standard library. A build tag is a build tag. I'm suggesting that in internal/testenv we check for the "noopt" build tag and provide a function that the standard library can call to see whether it is running in a non-optimized build. Then when a builder that uses `-N -l` runs the standard library tests, it can set the noopt build tag, and the affected tests will be skipped. It seems to me that this completely addresses the problem with the standard library. It doesn't help at all with people running their own tests outside of the standard library. I'm suggesting that if those people want to test their code with `-N -l`, they can adopt the same kind of convention themselves. The only reason we would need to define an official build tag would be if non-test code needs to change. As far as I know, it does not.
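A minimal sketch of what that internal/testenv helper could look like, assuming the mechanism described above; the names `OptimizationOff` and `SkipIfOptimizationOff` and the wiring are illustrative, not a final API:

```go
// In internal/testenv (sketch).
package testenv

import "testing"

// optimizationOff is flipped to true by a tiny file that is compiled only
// under the "noopt" build tag, e.g.:
//
//	//go:build noopt
//
//	package testenv
//
//	func init() { optimizationOff = true }
var optimizationOff bool

// OptimizationOff reports whether the test binary was built with
// optimization and inlining disabled.
func OptimizationOff() bool { return optimizationOff }

// SkipIfOptimizationOff skips the test if optimization is disabled.
func SkipIfOptimizationOff(t testing.TB) {
	if OptimizationOff() {
		t.Skip("skipping test: optimization is disabled")
	}
}
```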
As I stated in the original post, the tests for packages in `std` may be run by end users as part of `go test all`. The specific case I am concerned about is something like:

1. A user observes unexpected behavior in a program that depends on packages in `std`.
2. To get more useful output from a debugger, they decide to rebuild with optimization and inlining disabled.
3. To do that, they add the `-N` and `-l` compiler flags to the `GOFLAGS` environment variable.
4. While diagnosing, they run `go test all` to check whether the problem is in one of their dependencies, and tests in `std` fail.
In step (4), the user, in the course of diagnosing a problem in their own program, ends up running the tests for packages in `std` in a configuration in which they systematically fail. That is: either the user needs to know to do something else in step (3), or the test needs to do something else in step (4), or we need some documentation to explain to the user what they did "wrong" to cause the tests to fail.
Thanks. I guess that I have a hard time caring very much about that case. I think that building with `-N -l` and then running `go test all` is going to be rare. I suggest that we use a standard-library-specific build tag and internal/testenv function as I suggested, and we make that internal/testenv function responsible for skipping tests that require an optimized build.
A testenv function sounds good and would let us patch it in BoringCrypto to always skip, to centralize the workarounds for the BoringCrypto-introduced allocations. /cc @rolandshoemaker
This proposal has been added to the active column of the proposals project.
@ianlancetaylor wrote:

> I suggest that we use a standard-library-specific build tag and internal/testenv function, and we make that internal/testenv function responsible for skipping tests that require an optimized build.

Is that the answer? Does anyone object to that as the answer?
It sounds like that's the answer.
Based on the discussion above, this proposal seems like a likely accept.
No change in consensus, so accepted. 🎉
Change https://go.dev/cl/423256 mentions this issue.
Update, Dec 15 2021: The answer appears to be the approach @ianlancetaylor suggested: a standard-library-specific `noopt` build tag, checked through an internal/testenv helper that skips tests requiring an optimized build. - @rsc
I've seen a number of changes lately where test failures on the `linux-amd64-noopt` builder are worked around by skipping tests based on the presence of `-noopt` in the `GO_BUILDER_NAME` environment variable: https://cs.opensource.google/search?q=%22-noopt%22%20case:yes&ss=go%2Fgo
Looking at the git history, it appears that all of these environment-based skips were added within the past year. (As far as I can tell, this is a recent shift, not a long-standing practice.)
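The skips found by that search follow roughly this shape (a sketch of the pattern, not a verbatim excerpt from any one test):

```go
package example_test

import (
	"os"
	"strings"
	"testing"
)

// skipIfNooptBuilder skips t on builders whose name contains "-noopt",
// i.e. builders that compile with -gcflags='-N -l'.
func skipIfNooptBuilder(t *testing.T) {
	if strings.Contains(os.Getenv("GO_BUILDER_NAME"), "-noopt") {
		t.Skip("skipping on noopt builder; test assumes an optimized build")
	}
}
```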
Those tests are for packages within `std`, and thus may be run as part of `go test all` within a user's module. Users are not expected to set `GO_BUILDER_NAME`, but may reasonably set any supported compiler flag in the `GOFLAGS` environment variable. `go test all` should be a reasonable thing for users to do, and should pass reliably for any supported build configuration.

That leads to the central question of this issue: are the `-l` and `-N` flags still a supported way for Go users to build their programs?

To me, the existence of the `linux-amd64-noopt` builder suggests that they are. However, if they are supported, then the tests in `std` should also pass in that supported configuration when run by users, and today they systematically do not.

I propose that we take one of the following courses of action. I don't have a strong preference as to which one.
1. Declare that the `-l` and `-N` flags are fully supported. Remove the test skips based on `GO_BUILDER_NAME` containing `-noopt`, and fix the tests throughout the project so that they pass by default when the `-l` and/or `-N` flags are set. (Perhaps convert the tests to benchmarks, and/or skip them when `GO_BUILDER_NAME` is not set.)

2. Declare that the `-l` and `-N` flags are fully supported, and also define and document a build tag (such as `noopt`) that users can set (and check for) to explicitly disable tests that assume optimized builds; see the sketch after this list. Switch the tests to use that build constraint instead of `GO_BUILDER_NAME`. (The `noopt` build tag would be conceptually similar to the `purego` tag discussed in #23172, "doc: mention "purego" build tag convention somewhere".)

3. Declare that the `-l` and `-N` flags are deprecated but remain available for one-off debugging, and document that in the `cmd/compile` documentation and a release note. Continue to run the builder, and allow builder-based skips going forward.