gh-130363: add a test resource to mark tests that need an idle system #130508


Open
wants to merge 1 commit into main
Conversation

rossburton
Contributor

rossburton commented Feb 24, 2025

Some tests are very sensitive to timing and will fail on a loaded system, typically because there are small windows for timeouts to trigger in.

Some of these are poorly implemented tests which can be improved, but others may genuinely have strict timing requirements.

Add a test resource so that these tests can be marked as such, and only run when the system is known to be idle.
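A hedged sketch of what such a gate could look like, not the PR's actual implementation: `os.getloadavg()` is a real (Unix-only) API, but the `"idle"` concept, the helper name, and the 0.5 load threshold are illustrative assumptions. In CPython's own test suite the natural mechanism would be `test.support.requires(...)`, which raises `ResourceDenied` when a resource is not enabled via `python -m test -u <resource>`.

```python
# Illustrative sketch only: the resource name and threshold are
# assumptions, not taken from the PR.
import os
import unittest


def system_looks_idle(threshold=0.5):
    """Return True if the 1-minute load average is below `threshold`."""
    try:
        load1, _load5, _load15 = os.getloadavg()
    except (AttributeError, OSError):
        # Load averages unavailable (e.g. Windows): assume not idle.
        return False
    return load1 < threshold


class TimingSensitiveTests(unittest.TestCase):
    @unittest.skipUnless(system_looks_idle(), "requires an idle system")
    def test_timer_accuracy(self):
        ...  # timing-sensitive assertions would go here
```

With a regrtest-style resource, the skip decision would instead live in the test runner, so the same marked test is skipped by default and opted into with a command-line flag.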

@vstinner
Member

vstinner commented Feb 24, 2025

Some tests are very sensitive to timing and will fail on a loaded system

Which tests? Do you have examples?

The Python test suite is reliable on busy or heavily loaded systems.

@rossburton
Contributor Author

The latest example which triggered this:

FAIL: test_timerfd_initval (test.test_os.TimerfdTests.test_timerfd_initval)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.13/test/test_os.py", line 4166, in test_timerfd_initval
    self.assertAlmostEqual(next_expiration, initial_expiration, places=self.CLOCK_RES_PLACES)
    ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 0.24403756000000001 != 0.25 within 3 places (0.0059624399999999855 difference)

The test is checking how accurate the timer is.
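For context on the failure above: `assertAlmostEqual(a, b, places=3)` passes only when `round(a - b, 3) == 0`, i.e. the values may differ by at most roughly 0.0005. The ~0.006 s drift in the traceback is an order of magnitude over that budget:

```python
# Demonstrates the places=3 tolerance that the traceback above tripped.
import unittest

tc = unittest.TestCase()

# Within tolerance: round(0.25 - 0.2503, 3) == 0.0, so this passes.
tc.assertAlmostEqual(0.25, 0.2503, places=3)

# The values from the traceback: round(-0.00596..., 3) != 0, so this fails.
try:
    tc.assertAlmostEqual(0.24403756, 0.25, places=3)
except AssertionError as exc:
    print("fails:", exc)
```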

@vstinner
Member

Would it be possible to tune TimerfdTests to accept a larger time difference, rather than skipping the test?

@rossburton
Contributor Author

rossburton commented Feb 24, 2025

Isn't the point of the test to check the timings though? Would it be considered a pass if the request was a 0.1 second timer and a bug meant it waited for 0.1 hours instead?

Found an existing bug for that specific test on another Linux system: #126112.

@mhsmith
Member

mhsmith commented Feb 24, 2025

Found an existing bug for that specific test on another Linux system: #126112

These are two different timerfd tests. The one quoted above isn't really testing the timer at all – it never expires. It's really testing that a few lines of Python code can run within 1 millisecond. This has already been increased to 10 ms on Android emulators (see the comment at the top of the test class), and I think it wouldn't be unreasonable to extend that to all platforms.
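A hedged sketch of the pattern being discussed, not the exact CPython test: arm a timerfd with `os.timerfd_settime()` and immediately read it back with `os.timerfd_gettime()`. The comparison effectively times how fast the settime/gettime round trip runs, since the timer never expires during the test (these `os.timerfd_*` functions exist on Linux from Python 3.13).

```python
# Sketch under the assumptions above; values mirror the 0.25 s timer
# from the traceback but are otherwise illustrative.
import os
import time

if hasattr(os, "timerfd_create"):  # Linux, Python 3.13+
    fd = os.timerfd_create(time.CLOCK_MONOTONIC)
    try:
        initial = 0.25
        os.timerfd_settime(fd, initial=initial, interval=initial)
        next_expiration, interval = os.timerfd_gettime(fd)
        # next_expiration has already counted down by however long the
        # settime() -> gettime() round trip took; on a loaded system
        # that gap alone can exceed a 1 ms (places=3) tolerance.
        assert 0 < next_expiration <= initial
    finally:
        os.close(fd)
```

This is why the failure scales with system load rather than with the timer's correctness: the assertion bounds scheduler latency, not timer drift.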

By comparison, the test mentioned in #126112 has a much wider timing margin – 125 ms if I understand correctly – so either it's caused by something else, or my understanding is wrong.
