test(pathfinder): preserve logging info but require opt-in behavior #1034
Conversation
|
Auto-sync is disabled for ready-for-review pull requests in this repository. Workflows must be run manually.
Force-pushed from dcd4f86 to 95296f5.
|
/ok to test
|
I think it'd be unfortunate if this is merged:
What is the motivation for this PR?
Before this PR:
With this PR:
|
The motivation is that I see large amounts of output in the summary when I run
The effect is that I ignore it entirely because I would never use this information to debug the library. A bunch of it is "print the value of a variable", which I can do inside the interpreter using
I don't think I follow why per-test logging would lead to more overlooking. I think it is noisier, but I don't understand how the new output is significantly worse than the old. Personally, I overlook this output every time, because the signal-to-noise ratio is very low. Can we at least make the summary opt-in? It's really cumbersome to have to sift through all the output every time I run the test suite.
|
There's also an argument for using built-in tools instead of rolling our own custom fixtures: it's less maintenance overall because they are simpler to understand and operate. The current fixture-returning-a-function approach seems unnecessarily complex.
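To make the objection concrete, here is a rough sketch of what a fixture-returning-a-function setup can look like; the fixture name and details are illustrative guesses, not the actual cuda_pathfinder code:

# conftest.py (sketch; fixture name is a guess for illustration)
import pytest


def pytest_configure(config):
    config.custom_info = []  # mirrors the config.custom_info seen in the conftest.py patch further down


@pytest.fixture
def info_summary_append(request):
    # The fixture returns a function; tests call it to record messages that a
    # pytest_terminal_summary hook prints at the end of the run.
    def _append(msg):
        request.config.custom_info.append(msg)

    return _append


# test_*.py (sketch)
def test_example(info_summary_append):
    hdr_dir = "/usr/include"  # placeholder value
    info_summary_append(f"{hdr_dir=!r}")  # vs. simply logging.info(f"{hdr_dir=!r}")

The built-in alternative is a plain logging.info() call plus pytest's standard log options, with no project-specific plumbing.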
|
Wait, also, logging will show up on failed tests by default, and isn't that where we want to see the summary anyway?
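For reference, this is standard pytest behavior, independent of this repository: captured log records of level WARNING and above are shown in the report of a failing test, and the capture threshold can be lowered with --log-level=INFO (or log_level = INFO in the ini file). A minimal illustration:

import logging


def test_shows_captured_logs_on_failure():
    logging.warning("hdr_dir=None")  # captured silently while the test runs
    # With --log-level=INFO (or log_level = INFO), records emitted via
    # logging.info(...) are captured here as well.
    assert False, "demo failure: the report includes a 'Captured log call' section"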
|
Would it work for your purposes if we tied the
We could tell the SWQA teams to always run with
Another idea:
Could you help me understand your situation more by giving me a reproducer for this:
|
You need a python3.13 or python3.14 free-threaded build for that command to work (along with the
It's more subtle: It can be really important to see which
I'll play with your command, thanks!
|
The
I want to have a way to enable this that explicitly opts in and leaves things as-is by default.
That's at odds with why I chose the INFO summary route ("It's much harder to guess information than to ignore"). Let's please try a little harder not to just replace one problem with another. I agree we shouldn't create a distracting flood in your development workflow.
|
I'm experimenting with this PR as-is, except that I added two prints:

diff --git a/cuda_pathfinder/tests/test_find_nvidia_headers.py b/cuda_pathfinder/tests/test_find_nvidia_headers.py
index ac00291c5..b60f50d8d 100644
--- a/cuda_pathfinder/tests/test_find_nvidia_headers.py
+++ b/cuda_pathfinder/tests/test_find_nvidia_headers.py
@@ -72,6 +72,7 @@ def test_supported_headers_site_packages_ctk_consistency():
@pytest.mark.parametrize("libname", SUPPORTED_HEADERS_CTK.keys())
def test_find_ctk_headers(libname):
+ print(f"\nLOOOK test_find_ctk_headers({libname=!r})", flush=True)
hdr_dir = find_nvidia_header_directory(libname)
logging.info(f"{hdr_dir=!r}")
if hdr_dir:
diff --git a/cuda_pathfinder/tests/test_load_nvidia_dynamic_lib.py b/cuda_pathfinder/tests/test_load_nvidia_dynamic_lib.py
index 6fa3a2930..38e497a2e 100644
--- a/cuda_pathfinder/tests/test_load_nvidia_dynamic_lib.py
+++ b/cuda_pathfinder/tests/test_load_nvidia_dynamic_lib.py
@@ -95,6 +95,7 @@ def _get_libnames_for_test_load_nvidia_dynamic_lib():
@pytest.mark.parametrize("libname", _get_libnames_for_test_load_nvidia_dynamic_lib())
def test_load_nvidia_dynamic_lib(libname):
+ print(f"\nLOOOK test_load_nvidia_dynamic_lib({libname=!r})", flush=True)
# We intentionally run each dynamic library operation in a child process
# to ensure isolation of global dynamic linking state (e.g., dlopen handles).
# Without child processes, loading/unloading libraries during testing couldI'm using this (Ubuntu 24.04 WSL2 on Windows 11 24H2): With that I see the below. Each test is run 201 times. What could explain that? (It's NOT this PR I'm sure. Also not my prints: |
|
Dumping an alternative tiny patch here for now:

diff --git a/cuda_pathfinder/tests/conftest.py b/cuda_pathfinder/tests/conftest.py
index cfef9a954..378c82eec 100644
--- a/cuda_pathfinder/tests/conftest.py
+++ b/cuda_pathfinder/tests/conftest.py
@@ -9,7 +9,16 @@ def pytest_configure(config):
     config.custom_info = []


+def _cli_has_flag(args, flag):
+    return any(arg == flag or arg.startswith(flag + "=") for arg in args)
+
+
 def pytest_terminal_summary(terminalreporter, exitstatus, config):  # noqa: ARG001
+    if not config.getoption("verbose"):
+        return
+    if _cli_has_flag(config.invocation_params.args, "--iterations"):
+        return
+
     if config.custom_info:
         terminalreporter.write_sep("=", "INFO summary")
         for msg in config.custom_info:

It does two things:
- it only prints the INFO summary when running in verbose mode (-v), and
- it skips the summary entirely when --iterations is on the command line.
There should be no more flooding, which I assume was in large part due to the surprising (to me) 201 repeats.
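For completeness, with that patch applied the hook would look roughly like this; the body of the for loop is outside the diff hunk above, so the final write call is an assumption rather than the actual code:

def _cli_has_flag(args, flag):
    return any(arg == flag or arg.startswith(flag + "=") for arg in args)


def pytest_terminal_summary(terminalreporter, exitstatus, config):  # noqa: ARG001
    if not config.getoption("verbose"):
        return  # summary only for verbose runs (-v and above)
    if _cli_has_flag(config.invocation_params.args, "--iterations"):
        return  # skip when tests are being run repeatedly
    if config.custom_info:
        terminalreporter.write_sep("=", "INFO summary")
        for msg in config.custom_info:
            terminalreporter.write_line(msg)  # loop body assumed; not shown in the hunk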
|
This feels like an overly specific solution to support a single QA testing scenario. Why are we special-casing that scenario? Isn't it more common for a developer to simply run the tests?
|
@rwgk If you want to put up that patch, I can close this one. My objections are noted here, so I don't think we need to keep going back and forth if the outcome is going to be that patch.
|
Will do, thanks. I'd really appreciate it if we can try this and see how it goes. I'm committed to actively helping you if the feature is still causing issues in your workflow after we try the small patch. The feature has been very useful to me. I'm usually running with
This PR attempts to allow developers to opt in to the info-level logging when running cuda_pathfinder tests, instead of the logging always showing up in a summary, including when terminating tests early using ^C.
Now, if you want the extra info, you have to pass --log-cli-level=INFO. The main difference now is that the logging will be live instead of in a summary. I would argue this is more useful when you've opted into the logging, since the logs will appear immediately below the test name and thus can be easily associated with the logging invocation.
Finally, this also requires less fixture hacking and less pytest configuration.
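As a concrete illustration of the opt-in workflow described above (the test body is a simplified stand-in, not actual cuda_pathfinder code):

import logging


def test_find_headers_example():
    hdr_dir = "/usr/local/cuda/include"  # stand-in for find_nvidia_header_directory(...)
    logging.info(f"{hdr_dir=!r}")  # silent in a plain `pytest` run


# Opt in to live INFO output:
#   pytest --log-cli-level=INFO cuda_pathfinder/tests/
# Each record is then printed directly under the header of the test that emitted
# it, instead of being collected into a terminal summary at the end of the run.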