Conversation

@janosh
Member

@janosh janosh commented Oct 25, 2022

Now that we auto-split tests onto 10 parallel runners using pytest-split (#2704), we could run all tests on all platforms (ubuntu, mac, windows).

Before, the largest test suite (I don't even think it was the full one) was run only on Ubuntu.

Still a WIP. CI is expected to fail due to install errors from BoltzTrap2 on Windows and due to commenting out:

# for pkg in cmd_line/*;
#   do echo "$(pwd)/cmd_line/$pkg/Linux_64bit" >> "$GITHUB_PATH";
# done

@janosh
Member Author

janosh commented Oct 25, 2022

Actually, maybe we shouldn't run on macOS, since it's limited to 5 concurrent runners (see the docs on GH Actions usage limits) and so would come with significant wait times. It's also unlikely to surface errors that wouldn't show up on Ubuntu.

@janosh
Member Author

janosh commented Oct 25, 2022

@shyuep These lines don't work on Windows. Is there any point trying to fix that? Linux_64bit sounds like the binaries wouldn't run on Windows anyway?

for pkg in cmd_line/*;
  do echo "$(pwd)/cmd_line/$pkg/Linux_64bit" >> "$GITHUB_PATH";
done

No tests fail without them (presumably because tests are skipped if CLI commands can't be found). So we could just skip those lines on Windows.
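One way to skip those lines on Windows would be to guard the step on the runner OS. This is just a sketch: `RUNNER_OS` and `GITHUB_PATH` are environment variables GitHub Actions sets for each step, and the path form assumes `$pkg` already includes the `cmd_line/` prefix from the glob.

```shell
# Only prepend the bundled Linux binaries to PATH on non-Windows runners.
# Note: each $pkg from the glob already starts with "cmd_line/".
if [ "$RUNNER_OS" != "Windows" ]; then
  for pkg in cmd_line/*; do
    echo "$(pwd)/$pkg/Linux_64bit" >> "$GITHUB_PATH"
  done
fi
```

Since the tests that need these binaries already skip themselves when the CLI commands aren't found, the guard only avoids polluting `GITHUB_PATH` with paths that can't work.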

@shyuep
Member

shyuep commented Oct 25, 2022

There is no need for full tests on windows or macOS.

@janosh
Member Author

janosh commented Oct 25, 2022

Why not on windows? From my experience, there's a large number of errors that only show up on Windows. I think we should try to catch and fix them early.

… and windows

delete .github/workflows/test-{max,win}.yml
@coveralls

coveralls commented Oct 25, 2022

Coverage Status

Coverage decreased (-70.2%) to 0.0% when pulling f6b3b00 on single-ci-tests-file into 455c17a on master.

@janosh janosh force-pushed the single-ci-tests-file branch from bf9f97e to 0e2d074 on October 25, 2022 22:32
@janosh janosh force-pushed the single-ci-tests-file branch from 0e2d074 to 464504a on October 25, 2022 22:43
@mkhorton
Member

I'd be pro testing on more platforms too (Windows especially, since we have seen some odd errors in the past).

I don't feel the need for this on every PR, though I'm not against that either; on the main branch, sure.

@janosh janosh changed the title from "Run all tests on all platforms" to "Run full test suite on Windows" on Oct 25, 2022
@shyuep
Member

shyuep commented Oct 25, 2022

Because some CLI tools are not available on windows. If you want to compile all the CLI tools for all platforms and keep them updated, sure. But I only care that those work on Linux at the minimum.

@janosh
Member Author

janosh commented Oct 25, 2022

> Because some CLI tools are not available on windows. If you want to compile all the CLI tools for all platforms and keep them updated, sure. But I only care that those work on Linux at the minimum.

Agreed, I should have been clearer. My proposal was that all tests which can easily be run on Windows should be. For stuff that doesn't install, I agree we should skip it (like I did in f89b31b).

@sgbaird

sgbaird commented Oct 26, 2022

I was chatting with @janosh a bit. Our group has also had some issues with pymatgen on Windows in the past; I think those were mostly on the installation side, when trying to install pymatgen via Anaconda (I generally prefer miniconda environments).

@mkhorton
Member

@sgbaird raises a good point too; in our test suite, we don't actually test our packaging at all. Packaging has actually caused several issues in the past (I'm thinking specifically of one issue we had with Linux wheels, and another we've had a few times with missing files), none of which are caught by our current CI.

@shyuep
Member

shyuep commented Oct 27, 2022

I am not sure how packaging can be tested. We already automate most of it, and missing files happen only rarely, when someone added a new config file but forgot to add it to META or setup.py. Those usually affect a very small number of people.

@janosh
Member Author

janosh commented Oct 27, 2022

@shyuep Something like this maybe?

from glob import glob
from os.path import dirname, exists

import pytest

ROOT = dirname(dirname(__file__))
package_sources_path = f"{ROOT}/pymatgen.egg-info/SOURCES.txt"


@pytest.mark.skipif(
    not exists(package_sources_path),
    reason="No pymatgen.egg-info/SOURCES.txt file, run pip install . to create",
)
def test_egg_sources():
    """Check that all source and data files under pymatgen/**/* are listed in pymatgen.egg-info/SOURCES.txt."""
    with open(package_sources_path) as file:
        sources = file.read()

    src_files = [x for x in glob(f"{ROOT}/pymatgen/**/*", recursive=True) if "tests" not in x]

    for filepath in src_files:
        rel_path = filepath.split(f"{ROOT}/pymatgen/")[1]
        assert rel_path in sources

I added a similar test in CompRhys/aviary#46 after we repeatedly forgot to package some JSON data/embedding files.

@shyuep
Member

shyuep commented Oct 27, 2022

I wouldn't do it as a unittest. That is a violation of what a unittest means. I would add it as a release check in the invoke tasks.py file.
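To sketch what that could look like as a release check rather than a unittest: a minimal standalone function (the name `check_egg_sources` and the return-a-list-of-missing-files interface are illustrative, not existing pymatgen code) that a task in the invoke tasks.py could call and fail the release on if anything is missing.

```python
from glob import glob
from os.path import isfile


def check_egg_sources(root: str) -> list[str]:
    """Return files under <root>/pymatgen (excluding tests) that are
    missing from <root>/pymatgen.egg-info/SOURCES.txt."""
    with open(f"{root}/pymatgen.egg-info/SOURCES.txt") as file:
        sources = file.read()

    missing = []
    for path in glob(f"{root}/pymatgen/**/*", recursive=True):
        # skip test files and bare directories
        if "tests" in path or not isfile(path):
            continue
        rel_path = path.split(f"{root}/pymatgen/")[1]
        if rel_path not in sources:
            missing.append(rel_path)
    return missing
```

A hypothetical invoke task would then just be `@task`-decorated wrapper that raises if `check_egg_sources(...)` returns a non-empty list, keeping the check out of the unittest suite as suggested.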

@janosh
Member Author

janosh commented Oct 27, 2022

@shyuep Happy to add it to tasks.py.

> That is a violation of what a unittest means.

Do you mean we're not testing the code base itself but its deployment? I would say let's test whatever gives us peace of mind.

@mkhorton
Member

Can't one test the packaging also simply by having a workflow that does a pip install or similar of the latest version, and run tests against that?
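A rough sketch of such a workflow step, under the assumption that pymatgen's tests ship inside the package so pytest's `--pyargs` mode can collect them from the installed copy (which may not hold depending on what the wheel includes):

```shell
# Sketch: install the published package into a clean environment and run
# the test suite against the installed copy rather than the checkout.
pip install --upgrade pymatgen pytest
pytest --pyargs pymatgen  # assumes tests ship with the package
```

The key point is testing the artifact users actually receive, not the repository checkout.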

@janosh
Member Author

janosh commented Oct 27, 2022

@mkhorton We could do that too. Though considering we're at <80% coverage, there's no guarantee we would catch files missing from the package this way.

@mkhorton
Member

Fair. I was really thinking e.g. of the one issue we had with Linux installs a while back. It seems like installation issues can affect large numbers of people, but they are also the ones that have slipped through our CI without being noticed. With that said, we usually hear about it fairly quickly if it happens, so we can fix it...

@shyuep
Member

shyuep commented Oct 27, 2022

@janosh I would say unittests are meant to test code functionality, not code packaging or deployment. Regardless of packaging, someone working from the GitHub fork will always have working code if the unittests pass.

Missing package files are a deployment/release issue. I am all for implementing checks on the release process to ensure that we do not have packaging issues. I don't think we need to put it in unittests because it is not a code functionality issue.

reported in h5py/h5py#2110

Current thread 0x00001910 (most recent call first):
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 1174 in exec_module
  File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 991 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1042 in _handle_fromlist
  File "c:\hostedtoolcache\windows\python\3.8.10\x64\lib\site-packages\h5py\__init__.py", line 56 in <module>
@janosh janosh enabled auto-merge November 1, 2022 03:34
@janosh janosh merged commit c1f1e95 into master Nov 1, 2022
@janosh janosh deleted the single-ci-tests-file branch November 1, 2022 03:38