TEST: Final nose purge #891

Merged
merged 7 commits on Feb 19, 2020
18 changes: 0 additions & 18 deletions .azure-pipelines/windows.yml
@@ -36,24 +36,6 @@ jobs:
python -m pip install .[$(CHECK_TYPE)]
SET NIBABEL_DATA_DIR=%CD%\\nibabel-data
displayName: 'Install nibabel'
- script: |
mkdir for_testing
cd for_testing
cp ../.coveragerc .
nosetests --with-doctest --with-coverage --cover-package nibabel nibabel ^
-I test_data ^
-I test_environment ^
-I test_euler ^
-I test_giftiio ^
-I test_netcdf ^
-I test_pkg_info ^
-I test_quaternions ^
-I test_scaling ^
-I test_scripts ^
-I test_spaces ^
-I test_testing
displayName: 'Nose tests'
condition: and(succeeded(), eq(variables['CHECK_TYPE'], 'nosetests'))
- script: |
mkdir for_testing
cd for_testing
21 changes: 0 additions & 21 deletions .travis.yml
@@ -27,10 +27,6 @@ python:

jobs:
include:
# Old nosetests - Remove soon
- python: 3.7
env:
- CHECK_TYPE="nosetests"
# Basic dependencies only
- python: 3.5
env:
@@ -127,23 +123,6 @@ script:
cd doc
make html;
make doctest;
elif [ "${CHECK_TYPE}" == "nosetests" ]; then
# Change into an innocuous directory and find tests from installation
mkdir for_testing
cd for_testing
cp ../.coveragerc .
nosetests --with-doctest --with-coverage --cover-package nibabel nibabel \
-I test_data \
-I test_environment \
-I test_euler \
-I test_giftiio \
-I test_netcdf \
-I test_pkg_info \
-I test_quaternions \
-I test_scaling \
-I test_scripts \
-I test_spaces \
-I test_testing
elif [ "${CHECK_TYPE}" == "test" ]; then
# Change into an innocuous directory and find tests from installation
mkdir for_testing
4 changes: 0 additions & 4 deletions azure-pipelines.yml
@@ -34,7 +34,3 @@ jobs:
py38-x64:
PYTHON_VERSION: '3.8'
PYTHON_ARCH: 'x64'
nosetests:
PYTHON_VERSION: '3.6'
PYTHON_ARCH: 'x64'
CHECK_TYPE: 'nosetests'
1 change: 0 additions & 1 deletion dev-requirements.txt
@@ -1,4 +1,3 @@
# Requirements for running tests
-r requirements.txt
nose
pytest
2 changes: 1 addition & 1 deletion doc/source/devel/advanced_testing.rst
@@ -25,7 +25,7 @@ Long-running tests
Long-running tests are not enabled by default, and can be resource-intensive. To run these tests:

* Set environment variable ``NIPY_EXTRA_TESTS=slow``;
* Run ``nosetests``.
* Run ``pytest nibabel``.

Note that some tests may require a machine with >4GB of RAM.

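A minimal sketch (not part of this diff) of running the long-running tests described above from Python, assuming ``NIPY_EXTRA_TESTS`` is read when pytest collects the test modules and using the ``nibabel.test()`` wrapper added later in this PR::

    import os

    import nibabel as nib

    # Enable the long-running tests (see advanced_testing.rst above)
    os.environ["NIPY_EXTRA_TESTS"] = "slow"

    # Roughly equivalent to: NIPY_EXTRA_TESTS=slow pytest --pyargs nibabel
    nib.test()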
2 changes: 1 addition & 1 deletion doc/source/devel/make_release.rst
@@ -79,7 +79,7 @@ Release checklist

* Make sure all tests pass (from the nibabel root directory)::

nosetests --with-doctest nibabel
pytest --doctest-modules nibabel

* Make sure you are set up to use the ``try_branch.py`` - see
https://github.com/nipy/nibotmi/blob/master/install.rst#trying-a-set-of-changes-on-the-buildbots
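For comparison, a hedged sketch of the same doctest check via the ``nibabel.test()`` helper added later in this PR (the helper appends ``--doctest-modules`` when ``doctests=True``)::

    import nibabel as nib

    # Roughly equivalent to: pytest --doctest-modules --pyargs nibabel
    code = nib.test(doctests=True)  # returns a pytest.ExitCode
    print("all tests passed" if code == 0 else "failures: %s" % code)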
4 changes: 2 additions & 2 deletions doc/source/installation.rst
@@ -90,7 +90,7 @@ Requirements
* h5py_ (optional, for MINC2 support)
* PyDICOM_ 0.9.9 or greater (optional, for DICOM support)
* `Python Imaging Library`_ (optional, for PNG conversion in DICOMFS)
* nose_ 0.11 or greater and pytest_ (optional, to run the tests)
* pytest_ (optional, to run the tests)
* sphinx_ (optional, to build the documentation)

Get the development sources
@@ -128,7 +128,7 @@ module to see if everything is fine. It should look something like this::
>>>


To run the nibabel test suite, from the terminal run ``nosetests nibabel`` or
To run the nibabel test suite, from the terminal run ``pytest nibabel`` or
``python -c "import nibabel; nibabel.test()"``.

To run an extended test suite that validates ``nibabel`` for long-running and
125 changes: 97 additions & 28 deletions nibabel/__init__.py
@@ -36,29 +36,6 @@
For more detailed information see the :ref:`manual`.
"""

# Package-wide test setup and teardown
_test_states = {
# Numpy changed print options in 1.14; we can update docstrings and remove
# these when our minimum for building docs exceeds that
'legacy_printopt': None,
}

def setup_package():
""" Set numpy print style to legacy="1.13" for newer versions of numpy """
import numpy as np
from distutils.version import LooseVersion
if LooseVersion(np.__version__) >= LooseVersion('1.14'):
if _test_states.get('legacy_printopt') is None:
_test_states['legacy_printopt'] = np.get_printoptions().get('legacy')
np.set_printoptions(legacy="1.13")

def teardown_package():
""" Reset print options when tests finish """
import numpy as np
if _test_states.get('legacy_printopt') is not None:
np.set_printoptions(legacy=_test_states.pop('legacy_printopt'))


# module imports
from . import analyze as ana
from . import spm99analyze as spm99
@@ -92,13 +69,105 @@ def teardown_package():
from . import streamlines
from . import viewers

from numpy.testing import Tester
test = Tester().test
bench = Tester().bench
del Tester

from .pkg_info import get_pkg_info as _get_pkg_info


def get_info():
return _get_pkg_info(os.path.dirname(__file__))


def test(label=None, verbose=1, extra_argv=None,
doctests=False, coverage=False, raise_warnings=None,
timer=False):
"""
Run tests for nibabel using pytest

The protocol mimics the ``numpy.testing.NoseTester.test()``.
Not all features are currently implemented.

Parameters
----------
label : None
Unused.
verbose: int, optional
Verbosity value for test outputs. Positive values increase verbosity, and
negative values decrease it. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to pytest.
doctests: bool, optional
If True, run doctests in module. Default is False.
coverage: bool, optional
If True, report coverage of nibabel code. Default is False.
(This requires the
`coverage module <https://nedbatchelder.com/code/modules/coveragehtml>`_).
raise_warnings : None
Unused.
timer : False
Unused.

Returns
-------
code : ExitCode
Returns the result of running the tests as a ``pytest.ExitCode`` enum
"""
import pytest
args = []

if label is not None:
raise NotImplementedError("Labels cannot be set at present")

try:
verbose = int(verbose)
except ValueError:
pass
else:
if verbose > 0:
args.append("-" + "v" * verbose)
elif verbose < 0:
args.append("-" + "q" * -verbose)

if extra_argv:
args.extend(extra_argv)
if doctests:
args.append("--doctest-modules")
if coverage:
args.extend(["--cov", "nibabel"])
if raise_warnings:
raise NotImplementedError("Warning filters are not implemented")
if timer:
raise NotImplementedError("Timing is not implemented")

args.extend(["--pyargs", "nibabel"])

return pytest.main(args=args)


def bench(label=None, verbose=1, extra_argv=None):
"""
Run benchmarks for nibabel using pytest

The protocol mimics the ``numpy.testing.NoseTester.bench()``.
Not all features are currently implemented.

Parameters
----------
label : None
Unused.
verbose: int, optional
Verbosity value for test outputs. Positive values increase verbosity, and
negative values decrease it. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to pytest.

Returns
-------
code : ExitCode
Returns the result of running the tests as a ``pytest.ExitCode`` enum
"""
from pkg_resources import resource_filename
config = resource_filename("nibabel", "benchmarks/pytest.benchmark.ini")
args = []
if extra_argv is not None:
args.extend(extra_argv)
args.extend(["-c", config])
return test(label, verbose, extra_argv=args)
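A short usage sketch for the two wrappers above (illustrative calls, not part of the diff); the comments show the pytest command line each call assembles::

    import nibabel as nib

    # test(): roughly `pytest -vv --doctest-modules --cov nibabel --pyargs nibabel`
    nib.test(verbose=2, doctests=True, coverage=True)

    # extra_argv is passed straight through to pytest, e.g. stop at the first failure
    nib.test(extra_argv=["-x"])

    # bench(): roughly `pytest -c <nibabel>/benchmarks/pytest.benchmark.ini --pyargs nibabel`
    nib.bench()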
8 changes: 2 additions & 6 deletions nibabel/benchmarks/bench_array_to_file.py
@@ -5,13 +5,9 @@
import nibabel as nib
nib.bench()

If you have doctests enabled by default in nose (with a noserc file or
environment variable), and you have a numpy version <= 1.6.1, this will also
run the doctests, let's hope they pass.
Run this benchmark with::

Run this benchmark with:

nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_load_save.py
pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_load_save.py
"""

import sys
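The ``<path>`` placeholders above refer to the installed ``nibabel/benchmarks`` directory. A hedged sketch of resolving them programmatically, using the same ``pkg_resources`` lookup that the new ``bench()`` wrapper relies on (the subprocess invocation is illustrative)::

    import subprocess
    import sys

    from pkg_resources import resource_filename

    ini = resource_filename("nibabel", "benchmarks/pytest.benchmark.ini")
    bench_file = resource_filename("nibabel", "benchmarks/bench_array_to_file.py")

    # Equivalent to: pytest -c <path>/pytest.benchmark.ini <path>/bench_array_to_file.py
    subprocess.run([sys.executable, "-m", "pytest", "-c", ini, bench_file], check=False)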
8 changes: 2 additions & 6 deletions nibabel/benchmarks/bench_arrayproxy_slicing.py
@@ -5,13 +5,9 @@
import nibabel as nib
nib.bench()

If you have doctests enabled by default in nose (with a noserc file or
environment variable), and you have a numpy version <= 1.6.1, this will also
run the doctests, let's hope they pass.
Run this benchmark with::

Run this benchmark with:

nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_arrayproxy_slicing.py
pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_arrayproxy_slicing.py
"""

from timeit import timeit
8 changes: 2 additions & 6 deletions nibabel/benchmarks/bench_fileslice.py
@@ -3,13 +3,9 @@
import nibabel as nib
nib.bench()

If you have doctests enabled by default in nose (with a noserc file or
environment variable), and you have a numpy version <= 1.6.1, this will also
run the doctests, let's hope they pass.
Run this benchmark with::

Run this benchmark with:

nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_fileslice.py
pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_fileslice.py
"""

import sys
8 changes: 2 additions & 6 deletions nibabel/benchmarks/bench_finite_range.py
@@ -5,13 +5,9 @@
import nibabel as nib
nib.bench()

If you have doctests enabled by default in nose (with a noserc file or
environment variable), and you have a numpy version <= 1.6.1, this will also
run the doctests, let's hope they pass.
Run this benchmark with::

Run this benchmark with:

nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_finite_range
pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_finite_range.py
"""

import sys
8 changes: 2 additions & 6 deletions nibabel/benchmarks/bench_load_save.py
@@ -5,13 +5,9 @@
import nibabel as nib
nib.bench()

If you have doctests enabled by default in nose (with a noserc file or
environment variable), and you have a numpy version <= 1.6.1, this will also
run the doctests, let's hope they pass.
Run this benchmark with::

Run this benchmark with:

nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_load_save.py
pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_load_save.py
"""

import sys
8 changes: 2 additions & 6 deletions nibabel/benchmarks/bench_streamlines.py
@@ -5,13 +5,9 @@
import nibabel as nib
nib.bench()

If you have doctests enabled by default in nose (with a noserc file or
environment variable), and you have a numpy version <= 1.6.1, this will also run
the doctests, let's hope they pass.
Run this benchmark with::

Run this benchmark with:

nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_streamlines.py
pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_streamlines.py
"""

import numpy as np
4 changes: 4 additions & 0 deletions nibabel/benchmarks/pytest.benchmark.ini
@@ -0,0 +1,4 @@
[pytest]
python_files = bench_*.py
python_functions = bench_*
addopts = --capture=no
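The config above makes pytest collect ``bench_*.py`` files and ``bench_*`` functions, and ``--capture=no`` lets the timing output print directly. A minimal, hypothetical benchmark module that this config would pick up::

    # bench_example.py -- hypothetical file, matched by `python_files = bench_*.py`
    from timeit import timeit

    import numpy as np

    def bench_sum():  # matched by `python_functions = bench_*`
        arr = np.ones((1000, 1000))
        # --capture=no in addopts lets this print reach the terminal
        print("sum of 1e6 floats: %.4f s" % timeit(arr.sum, number=100))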
6 changes: 2 additions & 4 deletions nibabel/cmdline/tests/test_utils.py
@@ -5,8 +5,6 @@
Test running scripts
"""

from numpy.testing import assert_raises

import pytest

import nibabel as nib
@@ -196,10 +194,10 @@ def test_main():
-7.24879837e+00]).astype(dtype="float32")]),
('DATA(md5)', ['0a2576dd6badbb25bfb3b12076df986b', 'b0abbc492b4fd533b2c80d82570062cf'])])

with assert_raises(SystemExit):
with pytest.raises(SystemExit):
np.testing.assert_equal(main(test_names, StringIO()), expected_difference)

test_names_2 = [pjoin(data_path, f) for f in ('standard.nii.gz', 'standard.nii.gz')]

with assert_raises(SystemExit):
with pytest.raises(SystemExit):
assert main(test_names_2, StringIO()) == "These files are identical."
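The change above is the general nose-to-pytest migration pattern applied throughout this PR: ``numpy.testing.assert_raises`` is replaced by ``pytest.raises`` as a context manager. A standalone sketch of the pattern (the function and test are hypothetical)::

    import pytest

    def parse_positive(value):
        number = int(value)
        if number <= 0:
            raise ValueError("expected a positive integer")
        return number

    def test_parse_positive_rejects_zero():
        # pytest.raises is the drop-in replacement for assert_raises
        with pytest.raises(ValueError):
            parse_positive("0")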