Conversation

@Liam-DeVoe (Contributor) commented Jul 16, 2025

Hypothesis is ~basically thread safe as of 6.135.32 (see HypothesisWorks/hypothesis#4451). This changes the check for hypothesis to be version-based instead of unconditional. Probably shouldn't be merged until Hypothesis is tested on a few more repositories in the wild.

I won't be offended if you close or don't use this PR if it's less effort for you that way, I just had it locally and figured I would PR.
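The version gate described above could be sketched roughly like this, using only the stdlib. The helper names are illustrative and this is not the PR's actual code; the threshold constant comes from the 6.135.32 release mentioned in the PR description:

```python
from importlib.metadata import PackageNotFoundError, version

# First hypothesis release with the thread-safety work (per the PR text).
THREAD_SAFE_HYPOTHESIS = (6, 135, 32)


def parse_version(v: str) -> tuple:
    # Parse a plain "X.Y.Z" version string into a comparable tuple.
    return tuple(int(part) for part in v.split(".")[:3])


def hypothesis_is_thread_safe() -> bool:
    # True when the installed hypothesis is new enough that its tests
    # no longer need to be unconditionally marked thread-unsafe.
    try:
        installed = version("hypothesis")
    except PackageNotFoundError:
        return True  # hypothesis not installed, so there is nothing to guard
    return parse_version(installed) >= THREAD_SAFE_HYPOTHESIS
```

Tuple comparison makes the check robust without pulling in `packaging`, at the cost of ignoring pre-release suffixes.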

@ngoldbaum (Collaborator)

I triggered CI and it looks like tests are failing: https://github.com/Quansight-Labs/pytest-run-parallel/actions/runs/16329573950/job/46129022108?pr=96#step:7:67

Since hypothesis itself is using pytest-run-parallel to debug this stuff, there probably isn't much point in adding hypothesis uses in the tests here, but it looks like you need to update the test that we have now.

@Liam-DeVoe (Contributor, Author) commented Jul 16, 2025

tox is pinning hypothesis to 6.113.0 since we dropped support for python 3.8 in 6.114.0 (https://hypothesis.readthedocs.io/en/latest/changelog.html#v6-114-0), I'll remove that job? or would you prefer to keep it and conditionally run the hypothesis test on 3.9+?

@ngoldbaum (Collaborator)

> I'll remove that job? or would you prefer to keep it and conditionally run the hypothesis test on 3.9+?

I think we can drop 3.8 support in the next release that adds 3.14 support but let’s wait for @lysnikolaou to confirm.

@lysnikolaou (Member) commented Jul 17, 2025

> I think we can drop 3.8 support in the next release that adds 3.14 support.

Sounds good to me! We should probably do a release before all of this gets merged, and then a proper minor release that includes all of this and also drops 3.8 support.

@ngoldbaum (Collaborator) commented Jul 17, 2025

OK cool, let's drop Python 3.8 support here and add 3.14 support in #84. Should the AST parsing issue for numpy.testing.assert_warns be fixed at this point?

@ngoldbaum (Collaborator)

Test failure looks a little like a pypy-specific thread safety issue in hypothesis.

@Liam-DeVoe (Contributor, Author) commented Jul 18, 2025

Thanks, I think I see how this could happen in general: HypothesisWorks/hypothesis#4476.

Self-note:

  • update hypothesis version check when merged

@Liam-DeVoe (Contributor, Author) commented Jul 21, 2025

pypy failure is a DeadlineExceeded. I've opened HypothesisWorks/hypothesis#4478 for discussion.

We can easily work around this here with @settings(deadline=None), even before/if it gets resolved upstream.
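`@settings(deadline=None)` is hypothesis's real API for disabling the per-example deadline; the property below is just a toy example to show where the decorator goes:

```python
from hypothesis import given, settings, strategies as st


# Disabling the per-example deadline avoids spurious DeadlineExceeded
# failures on slow interpreters like PyPy, especially under the extra
# contention of running tests in parallel threads.
@settings(deadline=None)
@given(st.integers())
def test_roundtrip(x):
    assert int(str(x)) == x


test_roundtrip()  # hypothesis runs the property when the function is called
completed = True
```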

@ngoldbaum (Collaborator) commented Jul 21, 2025

@lysnikolaou and I chatted, and we'd like to add a new configuration knob that people can use to disable running hypothesis tests; we'll default to running them. The knob is just in case enabling hypothesis tests surfaces failures in buggy code or tests that were previously masked by skipping hypothesis tests. That will let people quickly update to the newest pytest-run-parallel version and then fix their code and tests at their leisure.

I'll be adding a similar feature when I finish up #84, so you can crib off of me once that's done, hopefully in the next few days.
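A rough sketch of what such a knob could look like as a pytest command-line option. The flag name and the stub parser are purely illustrative, not pytest-run-parallel's actual interface; in a real plugin the `parser` would be pytest's own:

```python
class _StubParser:
    """Minimal stand-in for pytest's Parser, just so the sketch runs."""

    def __init__(self):
        self.options = {}

    def addoption(self, name, **kwargs):
        self.options[name] = kwargs


def pytest_addoption(parser):
    # Default is to run hypothesis tests in parallel; the flag opts out,
    # e.g. while downstream fixes failures that were previously masked.
    parser.addoption(
        "--mark-hypothesis-thread-unsafe",
        action="store_true",
        default=False,
        help="Treat hypothesis tests as thread-unsafe and run them serially.",
    )


parser = _StubParser()
pytest_addoption(parser)
```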

@Liam-DeVoe (Contributor, Author)

That makes sense to me, I wouldn't want a hypothesis bug to block downstream from updating pytest-run-parallel 👍

@ngoldbaum (Collaborator)

> I'll be adding a similar feature when I finish up #84, so you can crib off of me once that's done, hopefully in the next few days.

This is done now, see the PR: #84

@ngoldbaum (Collaborator)

I triggered the tests - looks like everything is OK.

I noticed some oddness on the 3.14 CI which I missed when we added it, so I opened #99 to fix that.

I'm not sure why github is showing the 3.8 tests as being required to pass. Maybe if you rebase on main once #99 is merged that will clear it? I'm not sure if protected branch settings require that to be fixed or if we can just merge.

@ngoldbaum (Collaborator) left a comment

Just one small docs suggestion.

@ngoldbaum (Collaborator)

LGTM, @lysnikolaou did you want to look at this again before merging?

@Liam-DeVoe (Contributor, Author)

> I'm not sure why github is showing the 3.8 tests as being required to pass.

I'm not sure, yeah... maybe GitHub doesn't like a non-member modifying CI workflows. Worst case, you can try PRing this.

@lysnikolaou (Member) left a comment

LGTM too! Let's merge it! Thanks @tybug!

@lysnikolaou (Member)

> I'm not sure why github is showing the 3.8 tests as being required to pass.
>
> I'm not sure yeah...maybe github doesn't like a non-member modifying CI workflows. Worst case you can try PRing this

That's because those were required checks according to our branch protection rules. I've removed them, so all good now. Thanks again!

@lysnikolaou lysnikolaou merged commit 3b7c72e into Quansight-Labs:main Jul 24, 2025
10 checks passed
@Liam-DeVoe (Contributor, Author)

Thanks Nathan and Lysandros! I'll add a pytest-run-parallel CI job to Hypothesis whenever pytest-run-parallel cuts a release 👀

@ngoldbaum (Collaborator)

I'd like to quickly get a fix in for #100 today, but other than that I think we're ready for a 0.6.0 release.

@lysnikolaou (Member)

Hey @tybug, I tried to release 0.6.0 today but it seems that 3.14 CI is failing because of hypothesis. Could you give that a look?

Here's the test report: https://github.com/Quansight-Labs/pytest-run-parallel/actions/runs/16649513802/job/47118238943#step:5:204

@Liam-DeVoe (Contributor, Author)

looking into it now

@Liam-DeVoe (Contributor, Author)

initial reaction: I don't know if this is the fault of hypothesis, or a bug in 3.14t that happens to be induced by hypothesis. The base error looks pathlib-internal:

E           and: '    @property'
E           and: '    def _raw_path(self):'
E           and: '        paths = self._raw_paths'
E           and: '        if len(paths) == 1:'
E           and: '            return paths[0]'
E           and: '        elif paths:'
E           and: '            # Join path segments from the initializer.'
E           and: '>           path = self.parser.join(*paths)'
E           and: '                   ^^^^^^^^^^^^^^^^^^^^^^^^'
E           and: "E           TypeError: join() missing 1 required positional argument: 'a'"

https://github.com/python/cpython/blob/438cbd857a875efc105b4215b1ae131e67af37e1/Lib/pathlib/__init__.py#L332
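The traceback is consistent with `self._raw_paths` becoming empty between the `elif paths:` truthiness check and the starred call (e.g. if another thread mutated it): unpacking an empty sequence into `join()` leaves its first positional argument unfilled. A quick stdlib demonstration of the same error shape, not a claim about the actual root cause:

```python
import os

# os.path.join is declared as join(a, *p), so unpacking an empty
# sequence into it reproduces the TypeError from the CI traceback.
paths = []
try:
    os.path.join(*paths)
    message = None
except TypeError as exc:
    message = str(exc)
```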

@ngoldbaum (Collaborator) commented Jul 31, 2025

It looks like pathlib did have some changes in the 3.14 release cycle. Maybe we need to make a CPython bug report. Ping @barneygale in case you're interested in a possible pathlib bug in 3.14 found by hypothesis.

@Liam-DeVoe (Contributor, Author)

Rerunning the tests with HYPOTHESIS_NO_TRACEBACK_TRIM=1 set may elicit a better stack trace here.
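For reference, HYPOTHESIS_NO_TRACEBACK_TRIM is a real hypothesis environment variable that keeps hypothesis's internal frames in reported tracebacks; rerunning would look roughly like this (the test selector is illustrative):

```shell
# Keep hypothesis's internal frames in reported tracebacks.
export HYPOTHESIS_NO_TRACEBACK_TRIM=1
# Then rerun the failing selection, e.g.:
# python -m pytest -k test_runs_hypothesis_in_parallel
```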

@Liam-DeVoe (Contributor, Author)

I can't reproduce this locally on macOS 3.14t. But since it's pathlib-based, it's possible this only reproduces on Linux. I also ran the hypothesis test suite under 3.14t (again, on macOS) and didn't see this error occur.

 ~/Desktop/Liam/coding/pytest-run-parallel λ python3.14t -m pytest -k test_runs_hypothesis_in_parallel --skip-thread-unsafe true
=========================================================================== test session starts ===========================================================================
platform darwin -- Python 3.14.0rc1, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/tybug/Desktop/Liam/coding/pytest-run-parallel
configfile: pyproject.toml
plugins: xdist-3.8.0, hypothesis-6.136.6, run-parallel-0.5.1.dev0
collected 47 items / 46 deselected / 1 selected                                                                                                                           
Collected 0 items to run in parallel

tests/test_run_parallel.py .                                                                                                                                        [100%]

==================================================================== 1 passed, 46 deselected in 0.06s =====================================================================
 ~/Desktop/Liam/coding/pytest-run-parallel λ 

@lysnikolaou (Member)

I cannot reproduce this locally either, and rerunning the tests on CI made them pass. It looks like a thread safety issue in pathlib, so I'm going ahead and releasing. If an error happens, we can always push a bugfix release fairly quickly.
