
Commit 3a38200

Merge pull request #891 from effigies/test/final_nose_purge

TEST: Final nose purge

2 parents 9e512bc + 154b941

22 files changed: +184 −178 lines

.azure-pipelines/windows.yml
Lines changed: 0 additions & 18 deletions

@@ -36,24 +36,6 @@ jobs:
           python -m pip install .[$(CHECK_TYPE)]
           SET NIBABEL_DATA_DIR=%CD%\\nibabel-data
         displayName: 'Install nibabel'
-      - script: |
-          mkdir for_testing
-          cd for_testing
-          cp ../.coveragerc .
-          nosetests --with-doctest --with-coverage --cover-package nibabel nibabel ^
-            -I test_data ^
-            -I test_environment ^
-            -I test_euler ^
-            -I test_giftiio ^
-            -I test_netcdf ^
-            -I test_pkg_info ^
-            -I test_quaternions ^
-            -I test_scaling ^
-            -I test_scripts ^
-            -I test_spaces ^
-            -I test_testing
-        displayName: 'Nose tests'
-        condition: and(succeeded(), eq(variables['CHECK_TYPE'], 'nosetests'))
       - script: |
           mkdir for_testing
           cd for_testing

.travis.yml
Lines changed: 0 additions & 21 deletions

@@ -27,10 +27,6 @@ python:
 
 jobs:
   include:
-    # Old nosetests - Remove soon
-    - python: 3.7
-      env:
-        - CHECK_TYPE="nosetests"
     # Basic dependencies only
     - python: 3.5
       env:

@@ -140,23 +136,6 @@ script:
         cd doc
         make html;
         make doctest;
-    elif [ "${CHECK_TYPE}" == "nosetests" ]; then
-        # Change into an innocuous directory and find tests from installation
-        mkdir for_testing
-        cd for_testing
-        cp ../.coveragerc .
-        nosetests --with-doctest --with-coverage --cover-package nibabel nibabel \
-            -I test_data \
-            -I test_environment \
-            -I test_euler \
-            -I test_giftiio \
-            -I test_netcdf \
-            -I test_pkg_info \
-            -I test_quaternions \
-            -I test_scaling \
-            -I test_scripts \
-            -I test_spaces \
-            -I test_testing
     elif [ "${CHECK_TYPE}" == "test" ]; then
         # Change into an innocuous directory and find tests from installation
         mkdir for_testing

azure-pipelines.yml
Lines changed: 0 additions & 4 deletions

@@ -34,7 +34,3 @@ jobs:
       py38-x64:
         PYTHON_VERSION: '3.8'
         PYTHON_ARCH: 'x64'
-      nosetests:
-        PYTHON_VERSION: '3.6'
-        PYTHON_ARCH: 'x64'
-        CHECK_TYPE: 'nosetests'

dev-requirements.txt
Lines changed: 0 additions & 1 deletion

@@ -1,4 +1,3 @@
 # Requirements for running tests
 -r requirements.txt
-nose
 pytest

doc/source/devel/advanced_testing.rst
Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ Long-running tests
 Long-running tests are not enabled by default, and can be resource-intensive. To run these tests:
 
 * Set environment variable ``NIPY_EXTRA_TESTS=slow``;
-* Run ``nosetests``.
+* Run ``pytest nibabel``.
 
 Note that some tests may require a machine with >4GB of RAM.

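A sketch of how a ``NIPY_EXTRA_TESTS``-style switch is typically consumed by a test suite. This is an illustration only, not nibabel's exact implementation; the helper name ``extra_tests_enabled`` is hypothetical:

```python
import os

def extra_tests_enabled(kind, env=None):
    """Return True if `kind` appears in the NIPY_EXTRA_TESTS variable.

    Tests guarded by this check skip themselves unless the user opts in,
    e.g. with NIPY_EXTRA_TESTS=slow in the environment.
    """
    env = os.environ if env is None else env
    return kind in env.get('NIPY_EXTRA_TESTS', '').split(',')

# The variable accepts a comma-separated list, so several categories can
# be enabled at once:
print(extra_tests_enabled('slow', env={'NIPY_EXTRA_TESTS': 'slow,bench'}))  # True
print(extra_tests_enabled('slow', env={}))                                  # False
```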
doc/source/devel/make_release.rst
Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ Release checklist
 
 * Make sure all tests pass (from the nibabel root directory)::
 
-    nosetests --with-doctest nibabel
+    pytest --doctest-modules nibabel
 
 * Make sure you are set up to use the ``try_branch.py`` - see
   https://github.com/nipy/nibotmi/blob/master/install.rst#trying-a-set-of-changes-on-the-buildbots
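``pytest --doctest-modules`` collects docstring examples as tests, covering the same ground as the old ``nosetests --with-doctest``. A self-contained sketch of the kind of docstring it exercises, checked here with the stdlib ``doctest`` machinery that pytest wraps (the ``scale`` function is a made-up example):

```python
import doctest

def scale(x, factor=2):
    """Scale a number by a factor.

    >>> scale(3)
    6
    >>> scale(3, factor=10)
    30
    """
    return x * factor

# Run the docstring examples the same way a doctest collector would
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for t in finder.find(scale, name="scale"):
    runner.run(t)
print(runner.failures)  # → 0 when all examples pass
```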

doc/source/installation.rst
Lines changed: 2 additions & 2 deletions

@@ -90,7 +90,7 @@ Requirements
 * h5py_ (optional, for MINC2 support)
 * PyDICOM_ 0.9.9 or greater (optional, for DICOM support)
 * `Python Imaging Library`_ (optional, for PNG conversion in DICOMFS)
-* nose_ 0.11 or greater and pytest_ (optional, to run the tests)
+* pytest_ (optional, to run the tests)
 * sphinx_ (optional, to build the documentation)

@@ -128,7 +128,7 @@ module to see if everything is fine. It should look something like this::
 >>>
 
 
-To run the nibabel test suite, from the terminal run ``nosetests nibabel`` or
+To run the nibabel test suite, from the terminal run ``pytest nibabel`` or
 ``python -c "import nibabel; nibabel.test()``.
 
 To run an extended test suite that validates ``nibabel`` for long-running and

nibabel/__init__.py
Lines changed: 97 additions & 28 deletions

@@ -36,29 +36,6 @@
 For more detailed information see the :ref:`manual`.
 """
 
-# Package-wide test setup and teardown
-_test_states = {
-    # Numpy changed print options in 1.14; we can update docstrings and remove
-    # these when our minimum for building docs exceeds that
-    'legacy_printopt': None,
-}
-
-def setup_package():
-    """ Set numpy print style to legacy="1.13" for newer versions of numpy """
-    import numpy as np
-    from distutils.version import LooseVersion
-    if LooseVersion(np.__version__) >= LooseVersion('1.14'):
-        if _test_states.get('legacy_printopt') is None:
-            _test_states['legacy_printopt'] = np.get_printoptions().get('legacy')
-        np.set_printoptions(legacy="1.13")
-
-def teardown_package():
-    """ Reset print options when tests finish """
-    import numpy as np
-    if _test_states.get('legacy_printopt') is not None:
-        np.set_printoptions(legacy=_test_states.pop('legacy_printopt'))
-
-
 # module imports
 from . import analyze as ana
 from . import spm99analyze as spm99

@@ -92,13 +69,105 @@ def teardown_package():
 from . import streamlines
 from . import viewers
 
-from numpy.testing import Tester
-test = Tester().test
-bench = Tester().bench
-del Tester
-
 from .pkg_info import get_pkg_info as _get_pkg_info
 
 
 def get_info():
     return _get_pkg_info(os.path.dirname(__file__))
+
+
+def test(label=None, verbose=1, extra_argv=None,
+         doctests=False, coverage=False, raise_warnings=None,
+         timer=False):
+    """
+    Run tests for nibabel using pytest
+
+    The protocol mimics that of ``numpy.testing.NoseTester.test()``.
+    Not all features are currently implemented.
+
+    Parameters
+    ----------
+    label : None
+        Unused.
+    verbose : int, optional
+        Verbosity value for test outputs. Positive values increase verbosity,
+        and negative values decrease it. Default is 1.
+    extra_argv : list, optional
+        List with any extra arguments to pass to pytest.
+    doctests : bool, optional
+        If True, run doctests in module. Default is False.
+    coverage : bool, optional
+        If True, report coverage of NumPy code. Default is False.
+        (This requires the
+        `coverage module <https://nedbatchelder.com/code/modules/coveragehtml>`_.)
+    raise_warnings : None
+        Unused.
+    timer : False
+        Unused.
+
+    Returns
+    -------
+    code : ExitCode
+        Returns the result of running the tests as a ``pytest.ExitCode`` enum
+    """
+    import pytest
+    args = []
+
+    if label is not None:
+        raise NotImplementedError("Labels cannot be set at present")
+
+    try:
+        verbose = int(verbose)
+    except ValueError:
+        pass
+    else:
+        if verbose > 0:
+            args.append("-" + "v" * verbose)
+        elif verbose < 0:
+            args.append("-" + "q" * -verbose)
+
+    if extra_argv:
+        args.extend(extra_argv)
+    if doctests:
+        args.append("--doctest-modules")
+    if coverage:
+        args.extend(["--cov", "nibabel"])
+    if raise_warnings:
+        raise NotImplementedError("Warning filters are not implemented")
+    if timer:
+        raise NotImplementedError("Timing is not implemented")
+
+    args.extend(["--pyargs", "nibabel"])
+
+    return pytest.main(args=args)
+
+
+def bench(label=None, verbose=1, extra_argv=None):
+    """
+    Run benchmarks for nibabel using pytest
+
+    The protocol mimics that of ``numpy.testing.NoseTester.bench()``.
+    Not all features are currently implemented.
+
+    Parameters
+    ----------
+    label : None
+        Unused.
+    verbose : int, optional
+        Verbosity value for test outputs. Positive values increase verbosity,
+        and negative values decrease it. Default is 1.
+    extra_argv : list, optional
+        List with any extra arguments to pass to pytest.
+
+    Returns
+    -------
+    code : ExitCode
+        Returns the result of running the tests as a ``pytest.ExitCode`` enum
+    """
+    from pkg_resources import resource_filename
+    config = resource_filename("nibabel", "benchmarks/pytest.benchmark.ini")
+    args = []
+    if extra_argv is not None:
+        args.extend(extra_argv)
+    args.extend(["-c", config])
+    return test(label, verbose, extra_argv=args)
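The verbosity handling in the new ``test()`` shim above is easy to check in isolation. The helper below (``build_pytest_args``, a hypothetical name introduced for illustration) mirrors the flag-building logic from the diff without actually invoking pytest:

```python
def build_pytest_args(verbose=1, extra_argv=None, doctests=False, coverage=False):
    """Translate NoseTester-style arguments into a pytest argument list,
    following the logic of the new nibabel.test() shim."""
    args = []
    try:
        verbose = int(verbose)
    except ValueError:
        pass
    else:
        if verbose > 0:
            args.append("-" + "v" * verbose)      # 1 -> -v, 2 -> -vv, ...
        elif verbose < 0:
            args.append("-" + "q" * -verbose)     # -1 -> -q, -2 -> -qq, ...
    if extra_argv:
        args.extend(extra_argv)
    if doctests:
        args.append("--doctest-modules")
    if coverage:
        args.extend(["--cov", "nibabel"])
    args.extend(["--pyargs", "nibabel"])          # find tests from the installed package
    return args

print(build_pytest_args(verbose=2, doctests=True))
# → ['-vv', '--doctest-modules', '--pyargs', 'nibabel']
```

Note that ``--pyargs nibabel`` is what lets the suite run against the *installed* package from any directory, replacing the old "change into an innocuous directory" dance in the CI scripts.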

nibabel/benchmarks/bench_array_to_file.py
Lines changed: 2 additions & 6 deletions

@@ -5,13 +5,9 @@
     import nibabel as nib
     nib.bench()
 
-If you have doctests enabled by default in nose (with a noserc file or
-environment variable), and you have a numpy version <= 1.6.1, this will also
-run the doctests, let's hope they pass.
+Run this benchmark with::
 
-Run this benchmark with:
-
-    nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_load_save.py
+    pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_load_save.py
 """
 
 import sys

nibabel/benchmarks/bench_arrayproxy_slicing.py
Lines changed: 2 additions & 6 deletions

@@ -5,13 +5,9 @@
     import nibabel as nib
     nib.bench()
 
-If you have doctests enabled by default in nose (with a noserc file or
-environment variable), and you have a numpy version <= 1.6.1, this will also
-run the doctests, let's hope they pass.
+Run this benchmark with::
 
-Run this benchmark with:
-
-    nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_arrayproxy_slicing.py
+    pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_arrayproxy_slicing.py
 """
 
 from timeit import timeit

nibabel/benchmarks/bench_fileslice.py
Lines changed: 2 additions & 6 deletions

@@ -3,13 +3,9 @@
     import nibabel as nib
    nib.bench()
 
-If you have doctests enabled by default in nose (with a noserc file or
-environment variable), and you have a numpy version <= 1.6.1, this will also
-run the doctests, let's hope they pass.
+Run this benchmark with::
 
-Run this benchmark with:
-
-    nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_fileslice.py
+    pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_fileslice.py
 """
 
 import sys

nibabel/benchmarks/bench_finite_range.py
Lines changed: 2 additions & 6 deletions

@@ -5,13 +5,9 @@
     import nibabel as nib
     nib.bench()
 
-If you have doctests enabled by default in nose (with a noserc file or
-environment variable), and you have a numpy version <= 1.6.1, this will also
-run the doctests, let's hope they pass.
+Run this benchmark with::
 
-Run this benchmark with:
-
-    nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_finite_range
+    pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_finite_range.py
 """
 
 import sys

nibabel/benchmarks/bench_load_save.py
Lines changed: 2 additions & 6 deletions

@@ -5,13 +5,9 @@
     import nibabel as nib
     nib.bench()
 
-If you have doctests enabled by default in nose (with a noserc file or
-environment variable), and you have a numpy version <= 1.6.1, this will also
-run the doctests, let's hope they pass.
+Run this benchmark with::
 
-Run this benchmark with:
-
-    nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_load_save.py
+    pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_load_save.py
 """
 
 import sys

nibabel/benchmarks/bench_streamlines.py
Lines changed: 2 additions & 6 deletions

@@ -5,13 +5,9 @@
     import nibabel as nib
     nib.bench()
 
-If you have doctests enabled by default in nose (with a noserc file or
-environment variable), and you have a numpy version <= 1.6.1, this will also run
-the doctests, let's hope they pass.
+Run this benchmark with::
 
-Run this benchmark with:
-
-    nosetests -s --match '(?:^|[\\b_\\.//-])[Bb]ench' /path/to/bench_streamlines.py
+    pytest -c <path>/benchmarks/pytest.benchmark.ini <path>/benchmarks/bench_streamlines.py
 """
 
 import numpy as np

nibabel/benchmarks/pytest.benchmark.ini
Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+[pytest]
+python_files = bench_*.py
+python_functions = bench_*
+addopts = --capture=no
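The ini above redirects pytest's collection from the default ``test_*`` names to ``bench_*`` names, and ``--capture=no`` lets benchmark timings print to the console. A minimal sketch of a module this config would collect (the module and function names are hypothetical, not part of the commit):

```python
# bench_example.py -- collected because it matches python_files = bench_*.py
from timeit import timeit

def bench_list_append():
    # Collected because the name matches python_functions = bench_*
    elapsed = timeit("xs.append(1)", setup="xs = []", number=100_000)
    # With addopts = --capture=no, this output reaches the console
    print(f"100k list appends: {elapsed:.4f}s")
```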

nibabel/cmdline/tests/test_utils.py
Lines changed: 2 additions & 4 deletions

@@ -5,8 +5,6 @@
 Test running scripts
 """
 
-from numpy.testing import assert_raises
-
 import pytest
 
 import nibabel as nib

@@ -196,10 +194,10 @@ def test_main():
                 -7.24879837e+00]).astype(dtype="float32")]),
         ('DATA(md5)', ['0a2576dd6badbb25bfb3b12076df986b', 'b0abbc492b4fd533b2c80d82570062cf'])])
 
-    with assert_raises(SystemExit):
+    with pytest.raises(SystemExit):
         np.testing.assert_equal(main(test_names, StringIO()), expected_difference)
 
     test_names_2 = [pjoin(data_path, f) for f in ('standard.nii.gz', 'standard.nii.gz')]
 
-    with assert_raises(SystemExit):
+    with pytest.raises(SystemExit):
         assert main(test_names_2, StringIO()) == "These files are identical."
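``pytest.raises`` is a drop-in replacement for ``numpy.testing.assert_raises`` as a context manager, and additionally exposes the caught exception for inspection:

```python
import pytest

# pytest.raises passes when (and only when) the expected exception is raised...
with pytest.raises(SystemExit) as excinfo:
    raise SystemExit(2)

# ...and the ExceptionInfo object lets the test inspect it afterwards
assert excinfo.value.code == 2
```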
