Run full test suite on Windows #2708

Conversation
Maybe we shouldn't run on
@shyuep These lines don't work on Windows. Is there any point trying to fix that?

```sh
for pkg in cmd_line/*;
do echo "$(pwd)/cmd_line/$pkg/Linux_64bit" >> "$GITHUB_PATH";
done
```

No tests fail without them (presumably because tests are skipped if CLI commands can't be found), so we could just skip those lines on Windows.
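One way to skip them (a sketch, not taken from this PR) is to guard the loop on the `RUNNER_OS` environment variable that GitHub Actions sets on every runner:

```sh
# RUNNER_OS is "Linux", "Windows" or "macOS" on GitHub-hosted runners.
if [ "$RUNNER_OS" != "Windows" ]; then
  for pkg in cmd_line/*; do
    # $pkg already includes the cmd_line/ prefix from the glob.
    echo "$(pwd)/$pkg/Linux_64bit" >> "$GITHUB_PATH"
  done
fi
```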
There is no need for full tests on Windows or macOS.
Why not on Windows? In my experience, a large number of errors only show up on Windows. I think we should try to catch and fix them early.
… and windows
delete .github/workflows/test-{max,win}.yml

force-pushed from bf9f97e to 0e2d074
force-pushed from 0e2d074 to 464504a
I'd be pro testing on more platforms too (Windows especially, since we have seen some odd errors there in the past). I don't feel the need for this on every PR, although I am not against that either, but on the main branch, sure.
Because some CLI tools are not available on Windows. If you want to compile all the CLI tools for all platforms and keep them updated, sure. But I only care that those work on Linux at a minimum.

Agreed, I should have been clearer. My proposal was that all the tests that can easily be run on Windows should be. For stuff that doesn't install, I agree we should skip it (like I did in f89b31b).
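Skipping tests whose CLI dependencies are missing can be done with pytest's standard markers (a generic sketch; the binary name below is a placeholder, not quoted from f89b31b):

```python
import shutil
import sys

import pytest


# Skip when the required binary isn't on PATH (e.g. never compiled for this OS).
# "some_cli_tool" is a hypothetical name, not an actual pymatgen dependency.
@pytest.mark.skipif(shutil.which("some_cli_tool") is None, reason="some_cli_tool not found on PATH")
def test_needs_cli_tool():
    ...


# Or skip on Windows outright for tools that only ship Linux binaries.
@pytest.mark.skipif(sys.platform == "win32", reason="CLI tool not available on Windows")
def test_not_on_windows():
    ...
```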
I was chatting with @janosh a bit. Our group has also had some issues with pymatgen on Windows in the past. I think that was mostly on the installation side, when trying to install pymatgen via Anaconda (I generally prefer miniconda environments).
@sgbaird raises a good point too: our test suite doesn't actually test our packaging at all. Packaging problems have caused several issues in the past (I'm thinking specifically of one issue we had with Linux wheels, and another we've hit a few times with missing files), none of which are caught by our current CI.
I am not sure how packaging can be tested. We already automate most of it, and missing files happen rarely, only when someone adds a new config file but forgets to add it to META or setup.py. Those usually affect a very small number of people.
@shyuep Something like this maybe?

```python
from glob import glob
from os.path import dirname, exists

import pytest

ROOT = dirname(dirname(__file__))
package_sources_path = f"{ROOT}/pymatgen.egg-info/SOURCES.txt"


@pytest.mark.skipif(
    not exists(package_sources_path),
    reason="No pymatgen.egg-info/SOURCES.txt file, run pip install . to create",
)
def test_egg_sources():
    """Check that all source and data files under pymatgen/**/* are listed in pymatgen.egg-info/SOURCES.txt."""
    with open(package_sources_path) as file:
        sources = file.read()

    src_files = [x for x in glob(f"{ROOT}/pymatgen/**/*", recursive=True) if "tests" not in x]
    for filepath in src_files:
        rel_path = filepath.split(f"{ROOT}/pymatgen/")[1]
        assert rel_path in sources
```

I added a similar test in CompRhys/aviary#46 after we repeatedly forgot to package some JSON data/embedding files.
I wouldn't do it as a unittest. That is a violation of what a unittest means. I would add it as a release check in the invoke tasks.py file.
@shyuep Happy to add it to tasks.py.

Do you mean we're not testing the code base itself but its deployment? I would say let's test whatever gives us peace of mind.
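A minimal sketch of what such a release check could look like as an invoke task (the task name and the reuse of the SOURCES.txt comparison above are assumptions):

```python
from glob import glob
from os.path import dirname, exists

from invoke import task

ROOT = dirname(__file__)


@task
def check_sources(ctx):  # ctx is required by invoke's task signature
    """Release check: abort if any file under pymatgen/ is missing from SOURCES.txt."""
    sources_path = f"{ROOT}/pymatgen.egg-info/SOURCES.txt"
    if not exists(sources_path):
        raise SystemExit("No SOURCES.txt found, run `pip install .` first")
    with open(sources_path) as file:
        sources = file.read()
    for filepath in glob(f"{ROOT}/pymatgen/**/*", recursive=True):
        if "tests" in filepath:
            continue
        rel_path = filepath.split(f"{ROOT}/pymatgen/")[1]
        if rel_path not in sources:
            raise SystemExit(f"{rel_path} missing from SOURCES.txt, aborting release")
```

It could then run as `invoke check-sources` (invoke dashes the underscore) alongside the other release steps in tasks.py.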
Can't one test the packaging also simply by having a workflow that does a `pip install`?
@mkhorton We could do that too. Though considering we're at <80% coverage, there's no guarantee we would catch files missing from the package this way.
Fair. I was really thinking of e.g. the one issue we had with Linux installs a while back. It seems like installation issues are ones that can affect large numbers of people, but are also ones that have slipped through our CI without being noticed. With that said, we usually hear about it fairly quickly if it happens, so we can fix it...
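A sketch of the kind of check being proposed here (every command below is an assumption about how one might wire it up, not something from this PR): build the distribution, install the wheel into a clean environment, and import/test away from the source tree so packaging gaps actually surface:

```sh
# Build sdist + wheel with the PyPA build frontend.
pip install build
python -m build  # writes dist/*.tar.gz and dist/*.whl

# Install the wheel (not the source tree) into the test environment.
pip install dist/*.whl

# Import from a directory outside the repo so the local source tree
# can't shadow the installed package, then run the tests against it.
cd "$(mktemp -d)"
python -c "import pymatgen; print(pymatgen.__file__)"
pytest --pyargs pymatgen  # only works if tests ship inside the package
```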
@janosh I would say unittests are meant to test code functionality, not code packaging or deployment. Regardless of packaging, someone working from the GitHub fork will always have working code if the unittests pass. Missing package files are a deployment/release issue. I am all for implementing checks in the release process to ensure that we do not have packaging issues. I don't think we need to put them in unittests because this is not a code functionality issue.
reported in h5py/h5py#2110

```
Current thread 0x00001910 (most recent call first):
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 1174 in exec_module
  File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 991 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1042 in _handle_fromlist
  File "c:\hostedtoolcache\windows\python\3.8.10\x64\lib\site-packages\h5py\__init__.py", line 56 in <module>
```
Now that we auto-split tests onto 10 parallel runners using pytest-split (#2704), we could run all tests on all platforms (ubuntu, mac, windows). Before, the largest test suite (don't even think it was the full one) was run only on ubuntu.

Still a WIP. CI is expected to fail due to install errors from BoltzTrap2 on Windows and due to commenting out:
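For context, a sketch of what the per-runner pytest-split invocation typically looks like (`--splits`, `--group` and `--durations-path` are pytest-split's documented flags; the durations-file path is an assumption):

```sh
# Test durations are recorded once with `pytest --store-durations`, then each of
# the 10 parallel runners (k = 1..10, usually fed in from the CI matrix) runs:
pytest --splits 10 --group "$k" --durations-path test_files/.pytest-split-durations
```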