
compiletest: assert that debugger provided for debuginfo tests and any tests actually collected for run #145326


Open
wants to merge 1 commit into base: master

Conversation

@klensy (Contributor) commented Aug 12, 2025

I ran the tests on a machine that, for some reason, had no debuggers installed, and x.py test tests/debuginfo happily reported that it successfully ran 0 of 0 tests. That is bad.

Added two asserts: that the debuginfo suite actually found at least one debugger, and that a non-zero number of tests was collected for the current config.
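
For illustration, here is a minimal sketch of what the two checks could look like; it is not the actual PR diff, and `Mode`, `Config`, and the field names are simplified stand-ins for the real compiletest types.

```rust
// Hedged sketch of the two assertions described above (illustrative names only).

#[derive(PartialEq)]
enum Mode {
    DebugInfo,
    // ... other suites elided
}

struct Config {
    mode: Mode,
    /// Debuggers compiletest managed to locate (gdb/lldb/cdb), if any.
    available_debuggers: Vec<String>,
}

fn check_collected_tests(config: &Config, collected: &[String]) {
    // Check 1: the debuginfo suite is meaningless without at least one debugger.
    if config.mode == Mode::DebugInfo {
        assert!(
            !config.available_debuggers.is_empty(),
            "tests/debuginfo was requested, but no debugger (gdb/lldb/cdb) was found"
        );
    }

    // Check 2: collecting zero tests for the current configuration almost
    // certainly means something is misconfigured, not a successful run.
    assert!(
        !collected.is_empty(),
        "no tests were collected for the current configuration"
    );
}

fn main() {
    let config = Config {
        mode: Mode::DebugInfo,
        available_debuggers: vec!["gdb".to_string()],
    };
    // Panics if no debugger was found or if no tests were collected.
    check_collected_tests(&config, &["issue-14411".to_string()]);
}
```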

Feel free to suggest better wording for errors.

@rustbot (Collaborator) commented Aug 12, 2025

r? @Mark-Simulacrum

rustbot has assigned @Mark-Simulacrum.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

On Aug 12, 2025, @rustbot added the following labels: A-compiletest (Area: The compiletest test runner), A-testsuite (Area: The testsuite used to check the correctness of rustc), S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties), T-bootstrap (Relevant to the bootstrap subteam: Rust's build system (x.py and src/bootstrap)).
@rustbot (Collaborator) commented Aug 12, 2025

Some changes occurred in src/tools/compiletest

cc @jieyouxu

@Mark-Simulacrum (Member)

Hm, so on the one hand, this feels reasonable. On the other hand, if you genuinely don't have any debuggers I think this will cause e.g. x.py test to fail -- and that doesn't seem great.

I think if you're explicitly asking to run debugger-using tests it makes more sense to treat that as a failure...

@rust-lang/bootstrap (for lack of a better group), any opinions on this change? Do we think failing with zero tests found is right?

@jieyouxu (Member)

IMO this feels a bit weird (but that's due to how the debuginfo suite is implemented, which is IMO suboptimal in the first place). I've wanted to change the debuginfo setup so that it doesn't report 0 of 0, but instead properly reports all the tests as ignored.

@Mark-Simulacrum (Member)

Ah, so this wouldn't actually panic if we reported all the tests as ignored? That seems like a good reason not to do it, then...

@jieyouxu (Member)

IIRC how it currently works is a hack: if you don't have a debugger available (and don't explicitly specify one via --debugger or something), the test suite will silently pass with 0 of 0 tests run...

@Kobzol (Member) commented Aug 22, 2025

> Hm, so on the one hand, this feels reasonable. On the other hand, if you genuinely don't have any debuggers I think this will cause e.g. x.py test to fail -- and that doesn't seem great.

I don't think we should aim for x test to just work on every computer by default, even if the environment is missing crucial components. I very much doubt that many people (successfully) run the whole x test suite on their local machines; it's very expensive to run. I'm basing that on recent Zulip discussions and my own experience.

I'd rather error out here - if you run debuginfo tests and you don't have a debugger, it should be a failure.

If we want to make this situation a success, we should remove debuginfo tests from x test by default if you don't have a debugger. But if you explicitly run a debuginfo suite without a debugger, it should IMO be a loud error.

@Mark-Simulacrum (Member)

I think we agree in spirit; I'm not sure the current proposal (fail when no tests ran) feels right, though. Is it OK to run the debuginfo tests with just a single version of gdb? Presumably yes, at least in their current formulation. But it seems clear that this might run a very small number of tests in practice, especially if it's an old version.

Does running 1/50 tests, with the rest ignored due to a missing lldb/cdb/gdb (or a wrong version), seem like a reasonable "success"?

@Kobzol (Member) commented Aug 22, 2025

> Does running 1/50 tests, with the rest ignored due to a missing lldb/cdb/gdb (or a wrong version), seem like a reasonable "success"?

I would say so, as long as it clearly tells you that the rest was skipped for some specific reason that you might then want to fix locally to run more tests (e.g. by installing a debugger).

@Mark-Simulacrum (Member)

So, then it seems like:

> FIXME: not really true, few tests in tests/debuginfo do not require any debugger

Should imply we drop that panic/assertion from this PR :)

@klensy (Contributor, author) commented Aug 22, 2025

Maybe print something like "warning: tests that require debuggers foo, bar will be skipped; only tests that require debugger baz will run" if at least one debugger is present.
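
As a rough illustration of that suggestion, assuming compiletest knows which debuggers it located and which it did not (the function name and the example lists below are made up):

```rust
// Sketch of the proposed warning; names and lists are illustrative only.

fn warn_about_missing_debuggers(found: &[&str], missing: &[&str]) {
    // Only warn when some, but not all, debuggers were located; the
    // "none found" case is what the assertion in this PR turns into an error.
    if !found.is_empty() && !missing.is_empty() {
        eprintln!(
            "warning: tests that require {} will be skipped; \
             only tests that require {} will run",
            missing.join(", "),
            found.join(", ")
        );
    }
}

fn main() {
    // e.g. only gdb was located on this machine
    warn_about_missing_debuggers(&["gdb"], &["lldb", "cdb"]);
}
```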

@klensy (Contributor, author) commented Aug 22, 2025

> So, then it seems like:
>
> FIXME: not really true, few tests in tests/debuginfo do not require any debugger
>
> Should imply we drop that panic/assertion from this PR :)

Yes, things like https://github.com/rust-lang/rust/blob/master/tests/debuginfo/issue-14411.rs don't require a debugger, but I remember there were some issues with running it anyway; I'll try to check what's wrong again.

@klensy (Contributor, author) commented Aug 22, 2025

Or, by default, require all debuggers and make any error locating them a hard error, and add an option that explicitly excludes some debuggers, so we don't rely on whether they happen to be found or not. Used/unused debuggers should still be logged.

@Mark-Simulacrum (Member)

> all debuggers required

This is impossible, right? E.g., cdb is Windows-only, I think? It also doesn't get into the versioning question -- but I wouldn't generally expect it to be easy to get a full set of debuggers to run the full test suite on one machine, at least not as a user. Maybe we should punt on this kind of check until some rework of the debuginfo tests (e.g., I could imagine trying to build lldb if that's not too hard -- we do have the LLVM sources already -- so we have a reliable version, or similar stories that make it easier to get a baseline check; I could also imagine linting the DWARF information in generated binaries/libraries rather than using off-the-shelf debuggers).

@klensy (Contributor, author) commented Aug 23, 2025

No, I mean that "by default all debuggers are required" means you should disable unused ones manually via compiletest/bootstrap/config.toml/etc. So when the debuginfo tests run, they fail if a debugger is not set/not found/etc.; the user then reviews this and unsets the ones they don't want to use, so they have full information about why some tests were skipped.

@jieyouxu (Member) commented Aug 23, 2025

I think the current behavior (that all debuggers are required and you can opt out of certain ones) is very awkward. I wonder if it would be less annoying to run the test suite if:

  1. We allow the user to control which debuggers' tests run via something like ./x test tests/debuginfo --debuggers=cdb,lldb,gdb. I don't know; I was never able to successfully run the "run all of {cdb, lldb, gdb}" behavior.
  2. We have explicit bootstrap configuration to thread through exactly which debuggers will be used.
  3. We hard-error if the user explicitly requested one of the debuggers but no debugger path was specified.

But perhaps before that, we may need to revisit what use cases we want the debuginfo test suite to cover, because IME this has been a very awkward test suite for contributors to run. I think ./x test tests/debuginfo is itself a bit weird: depending on what debuggers you happen to have in the local environment, you could be implicitly running one or more subsets of the tests, or even none.

For (2), it's because, for instance, you can have multiple MSVC toolchains, but IIRC compiletest currently tries to be "smart" and discover a cdb on its own (?)
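
For illustration, a hypothetical sketch of points (1) and (3) above: parse an explicit --debuggers list and hard-error when a requested debugger has no configured path. The DebuggerPaths struct, the function, and the example path are invented for this sketch and are not existing compiletest or bootstrap options.

```rust
// Hypothetical sketch only; not an existing compiletest/bootstrap interface.

struct DebuggerPaths {
    gdb: Option<String>,
    lldb: Option<String>,
    cdb: Option<String>,
}

fn resolve_requested_debuggers(requested: &str, paths: &DebuggerPaths) -> Vec<String> {
    requested
        .split(',')
        .map(|name| {
            let name = name.trim();
            let path = match name {
                "gdb" => paths.gdb.as_deref(),
                "lldb" => paths.lldb.as_deref(),
                "cdb" => paths.cdb.as_deref(),
                other => panic!("unknown debugger `{other}` in --debuggers"),
            };
            // Point (3): requesting a debugger that has no configured path is
            // a hard error rather than a silently ignored subset of tests.
            path.unwrap_or_else(|| {
                panic!("--debuggers requested `{name}`, but no path for it is configured")
            })
            .to_string()
        })
        .collect()
}

fn main() {
    let paths = DebuggerPaths {
        gdb: Some("/usr/bin/gdb".to_string()),
        lldb: None,
        cdb: None,
    };
    // Succeeds for "gdb"; would panic for "lldb" or "cdb" with this configuration.
    println!("using: {:?}", resolve_requested_debuggers("gdb", &paths));
}
```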


Basically, I don't want to land this assertion, not because I feel it's wrong per se, but rather because IMHO the current testing devex of the debuginfo test suite needs a redesign, and needing this assertion to plug the situation where no tests get run is a symptom of that.
