Start moving to pytest #1723
@@ -0,0 +1,13 @@
+from mypy.test.helpers import PytestSuite
+import inspect
+import pytest
+
+def pytest_addoption(parser):
+    parser.addoption('--update-testcases', action='store_true',
+                     dest='UPDATE_TESTCASES')
+
+def pytest_pycollect_makeitem(collector, name, obj):
+    if (inspect.isclass(obj) and issubclass(obj, PytestSuite) and
+            obj is not PytestSuite):
+        print(name)
+        obj.collect_tests()

Review comment (on `obj is not PytestSuite`): Can you avoid the … Ah, in fact I think what it would take to make this work cleanly without that condition is just to add a …

Review comment (on `print(name)`): This looks like a print that you probably didn't mean to leave in.

Review comment (on `obj.collect_tests()`): Huh, does this work? The documented interface for the … Empirically it seems to work... oh, I see. What's happening is that this implementation has the side effect of setting up a bunch of little methods on the class with names that look like tests; then you return None here, so pytest figures your implementation gave up and falls back to the built-in implementations of this hook, of which the final fallback sees those methods and makes tests of them. This is a little awkward but it does work. At a minimum it deserves a comment here explaining what's going on, because it isn't using the API provided by pytest in the documented manner. Alternatively, I think it wouldn't actually be hard to take the core logic you've written and put it into the form of using the documented API. ... I went and read some examples and found myself with a long comment, so I'll put the balance of it in a comment directly on the PR, which feels a little more spacious.
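The fallback mechanism the reviewer describes (set test-named methods on the class, return None from the hook, and let name-based collection find them) can be sketched in miniature with stdlib unittest, which shares that name-based collection behavior. This is a hypothetical illustration, not code from the PR: `DataCase` and `collect_tests` here are made-up stand-ins, and the sketch binds each case with a default argument, whereas the PR's `func(self, test)` receives the case through a pytest fixture.

```python
import unittest


class DataCase:
    """Made-up stand-in for a data-driven test case."""
    def __init__(self, name, text):
        self.name = name
        self.text = text


def collect_tests(cls, cases):
    # Attach one method per data case. Any collector that scans for
    # names starting with 'test' (unittest here, pytest's fallback in
    # the PR) will then pick them up as ordinary tests.
    for case in cases:
        def func(self, case=case):  # default argument binds the loop variable
            self.assertEqual(case.text, case.text.strip())
        setattr(cls, case.name, func)


class TestStripped(unittest.TestCase):
    pass


collect_tests(TestStripped,
              [DataCase('testFoo', 'foo'), DataCase('testBar', 'bar')])

suite = unittest.TestLoader().loadTestsFromTestCase(TestStripped)
print(suite.countTestCases())  # 2
```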
@@ -6,14 +6,14 @@
 from os import remove, rmdir
 import shutil

-from typing import Callable, List, Tuple
+from typing import Callable, List, Tuple, Any

 from mypy.myunit import TestCase, SkipTestCaseException


 def parse_test_cases(
         path: str,
-        perform: Callable[['DataDrivenTestCase'], None],
+        perform: Callable[..., None],
         base_path: str = '.',
         optional_out: bool = False,
         include_path: str = None,
@@ -89,29 +89,35 @@ def parse_test_cases(


 class DataDrivenTestCase(TestCase):
     name = None  # type: str
     input = None  # type: List[str]
     output = None  # type: List[str]

     file = ''
     line = 0

-    perform = None  # type: Callable[['DataDrivenTestCase'], None]
+    # NOTE: For info on the ..., see `run`.
+    perform = None  # type: Callable[..., None]

     # (file path, file content) tuples
     files = None  # type: List[Tuple[str, str]]

     clean_up = None  # type: List[Tuple[bool, str]]

+    marked_skip = False

     def __init__(self, name, input, output, file, line, lastline,
                  perform, files):
         super().__init__(name)
         self.name = name
         self.input = input
         self.output = output
         self.lastline = lastline
         self.file = file
         self.line = line
         self.perform = perform
         self.files = files
+        self.marked_skip = self.name.endswith('-skip')

     def set_up(self) -> None:
         super().set_up()

Review comment (on `perform = None  # type: Callable[..., None]`): Is this meant to be the same signature as the …
@@ -137,19 +143,34 @@ def add_dirs(self, dir: str) -> List[str]:
             os.mkdir(dir)
         return dirs

-    def run(self):
-        if self.name.endswith('-skip'):
-            raise SkipTestCaseException()
+    def run(self, obj: Any = None):
+        if obj is None:
+            # XXX: The unit tests are being converted over to pytest. Due to
+            # modifications required to make BOTH run at the moment, this
+            # branch is necessary. It should be removed once all the tests
+            # relying on DataDrivenTestCase are converted to pytest.
+            if self.marked_skip:
+                raise SkipTestCaseException()
+            else:
+                self.perform(self)
         else:
-            self.perform(self)
+            assert not self.marked_skip
+            # Because perform is an unbound method, it needs to be passed its
+            # own self, which is obj. In the future, after all tests are moved
+            # over to pytest, this whole class should probably be generic, to
+            # allow annotating obj. In the meantime, there isn't a cleaner way
+            # to handle this...
+            self.perform(obj, self)

Review comment (on `if obj is None:`): What is this … Perhaps even a type. :-)
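The "unbound method" comment is the crux here: a function fetched off a class needs its instance passed explicitly, while a bound method does not. A minimal stand-alone sketch (all names hypothetical) of the two calling conventions `run` has to support:

```python
class DataCase:
    """Made-up miniature of DataDrivenTestCase's dual calling convention."""
    def __init__(self, name, perform):
        self.name = name
        self.perform = perform  # a plain function or a bound method

    def run(self, obj=None):
        if obj is None:
            # myunit path: perform is already bound to a suite instance
            self.perform(self)
        else:
            # pytest path: perform was taken off the class, so the suite
            # instance has to be passed explicitly as its first argument
            self.perform(obj, self)


class FakeSuite:
    log = []

    def run_test(self, case):
        FakeSuite.log.append(case.name)


suite = FakeSuite()
DataCase('testBound', suite.run_test).run()             # one argument
DataCase('testUnbound', FakeSuite.run_test).run(suite)  # explicit self
print(FakeSuite.log)  # ['testBound', 'testUnbound']
```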
     def tear_down(self) -> None:
         # First remove files.
         for is_dir, path in reversed(self.clean_up):
-            if not is_dir:
+            if not is_dir and os.path.exists(path):
                 remove(path)
         # Then remove directories.
         for is_dir, path in reversed(self.clean_up):
             if not os.path.exists(path):
                 continue
             if is_dir:
                 pycache = os.path.join(path, '__pycache__')
                 if os.path.isdir(pycache):

Review comment: Huh. Why does this condition become necessary where it wasn't before?

Review comment: That isn't actually related to pytest. It's just that, if the test case were killed before it finished, the files wouldn't always exist, which ended up throwing an exception, which was annoying (to say the least).

Review comment: I see. Sounds like a good bug to fix! Would you send this as a separate, tiny PR? That way it can have its own commit message that explains it's fixing this bug, and also it doesn't sit here in this unrelated PR (and ultimately this commit in the final history) causing the reader to wonder what purpose it serves and what mysterious interaction with pytest brings it into this change. I expect that separate PR can be merged very quickly, too. :-)
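The bug under discussion is cleanup code that assumes every registered file was actually created. A small sketch (hypothetical names, using stdlib tempfile) of why the `os.path.exists` guard matters:

```python
import os
import tempfile


def tear_down(clean_up):
    # Mirrors the patched loop: remove registered files, skipping any
    # that were never created (e.g. the test was killed early).
    for is_dir, path in reversed(clean_up):
        if not is_dir and os.path.exists(path):
            os.remove(path)


workdir = tempfile.mkdtemp()
created = os.path.join(workdir, 'made.txt')
open(created, 'w').close()
never_created = os.path.join(workdir, 'never-made.txt')

# Without the exists() check, the second entry would raise FileNotFoundError.
tear_down([(False, created), (False, never_created)])
print(os.path.exists(created))  # False
os.rmdir(workdir)
```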
@@ -1,6 +1,7 @@
 import sys
 import re
 import os
+import pytest  # type: ignore

 from typing import List, Dict, Tuple

Review comment (on `import pytest  # type: ignore`): Why the … Ah, we don't have pytest in typeshed. This could be OK for now -- particularly as pytest is awfully metaprogramming-heavy and the stubs may end up with a fair number of … And any time we write a …
@@ -263,18 +264,18 @@ def num_skipped_suffix_lines(a1: List[str], a2: List[str]) -> int:
     return max(0, num_eq - 4)


-def testfile_pyversion(path: str) -> Tuple[int, int]:
+def pyversion_testfile(path: str) -> Tuple[int, int]:
     if path.endswith('python2.test'):
         return defaults.PYTHON2_VERSION
     else:
         return defaults.PYTHON3_VERSION


-def testcase_pyversion(path: str, testcase_name: str) -> Tuple[int, int]:
+def pyversion_testcase(path: str, testcase_name: str) -> Tuple[int, int]:
     if testcase_name.endswith('python2'):
         return defaults.PYTHON2_VERSION
     else:
-        return testfile_pyversion(path)
+        return pyversion_testfile(path)


 def normalize_error_messages(messages: List[str]) -> List[str]:
@@ -284,3 +285,42 @@ def normalize_error_messages(messages: List[str]) -> List[str]:
     for m in messages:
         a.append(m.replace(os.sep, '/'))
     return a
+
+
+@pytest.fixture(scope='function')
+def test(request):
+    test = request.function.test
+    test.set_up()
+    request.addfinalizer(test.tear_down)
+    return test
+
+
+class PytestSuite:
+    """Assists in setting up data-driven test cases for pytest."""
+    @classmethod
+    def collect_tests(cls):
+        """
+        Sets up the child class's test cases. The child must have a method
+        `cases` that returns a list of `DataDrivenTestCase`s. This method will
+        load the data-driven test cases and use setattr to assign them to the
+        class, which will allow pytest to recognize the tests.
+
+        This will be called during test collection (see conftest.py in the
+        root of the repository).
+        """
+        c = cls.cases()  # type: List[DataDrivenTestCase]
+        for test in c:
+            def func(self, test):
+                test.run(self)
+            if test.marked_skip:
+                func = pytest.mark.skip(reason='Test ends with -skip')(func)
+            if 'FastParse' in test.name and not test.marked_skip:
+                try:
+                    import mypy.fastparse
+                except SystemExit:
+                    func = pytest.mark.skip(
+                        reason='You must install the typed_ast package in '
+                               'order to run this test')(func)
+            func.test = test
+            setattr(cls, test.name, func)
+            # setattr(cls, test.name.replace('test', 'test_', 1), func)

Review comment (on `c = cls.cases()`): Let's add an implementation of …
@@ -4,18 +4,18 @@
 import re
 import shutil
 import sys
-import mypy
+import pytest  # type: ignore

 from typing import Tuple, List, Dict, Set

 from mypy import build
 import mypy.myunit  # for mutable globals (ick!)
 from mypy.build import BuildSource, find_module_clear_caches
 from mypy.myunit import Suite, AssertionFailure
 from mypy.test.config import test_temp_dir, test_data_prefix
 from mypy.test.data import parse_test_cases, DataDrivenTestCase
 from mypy.test.helpers import (
-    assert_string_arrays_equal, normalize_error_messages,
-    testcase_pyversion, update_testcase_output,
+    normalize_error_messages, pyversion_testcase, update_testcase_output,
+    PytestSuite, test
 )
 from mypy.errors import CompileError
@@ -61,13 +61,13 @@
 ]


-class TypeCheckSuite(Suite):
-
-    def cases(self) -> List[DataDrivenTestCase]:
+class TestTypeCheck(PytestSuite):
+    @classmethod
+    def cases(cls):
         c = []  # type: List[DataDrivenTestCase]
         for f in files:
             c += parse_test_cases(os.path.join(test_data_prefix, f),
-                                  self.run_test, test_temp_dir, True)
+                                  cls.run_test, test_temp_dir, True)
         return c

     def run_test(self, testcase: DataDrivenTestCase) -> None:
@@ -88,7 +88,7 @@ def clear_cache(self) -> None:

     def run_test_once(self, testcase: DataDrivenTestCase, incremental=0) -> None:
         find_module_clear_caches()
-        pyversion = testcase_pyversion(testcase.file, testcase.name)
+        pyversion = pyversion_testcase(testcase.file, testcase.name)
         program_text = '\n'.join(testcase.input)
         module_name, program_name, program_text = self.parse_options(program_text)
         flags = self.parse_flags(program_text)
@@ -122,13 +122,12 @@ def run_test_once(self, testcase: DataDrivenTestCase, incremental=0) -> None:
         a = e.messages
         a = normalize_error_messages(a)

-        if output != a and mypy.myunit.UPDATE_TESTCASES:
+        if output != a and pytest.config.getoption('UPDATE_TESTCASES'):
             update_testcase_output(testcase, a, mypy.myunit.APPEND_TESTCASES)

-        assert_string_arrays_equal(
-            output, a,
+        assert output == a, \
             'Invalid type checker output ({}, line {})'.format(
-                testcase.file, testcase.line))
+                testcase.file, testcase.line)
Review comment (on the new plain assert): How does the output from this turn out when there's a mismatch? The …

Review comment: @gnprice Given the following toy test case:

    def test_xyz():
        l1 = [
            'abc',
            'def',
            'ghi'
        ]
        l2 = [
            'abc',
            'dfg',
            'ghi',
            'jkl'
        ]
        assert l1 == l2

Pytest prints: …

Review comment: Yeah, I've used pytest in other projects, and I appreciate the fancy output on asserts. The thing is that … For example, say I mess up one of the … Our existing myunit framework will print this output: … It prints both the actual and expected arrays, with a full line for each line -- so that they look nearly just like the actual way these error messages would come out in normal operation. (They're slightly indented, which is probably helpful, and also abbreviated, which maybe we actually shouldn't do.) That's extremely useful when debugging a failure. Then it also highlights the mismatched lines with the suffix " (diff)", and it even drills into the first pair of different lines and highlights the column where they first differ. Your current version will print this: … It's a lot harder here to tell what's going on. Partly that's because there's a lot of goo which pytest is printing out that's from the test framework and from the … More critically, there just isn't some of the useful information that's there in the myunit version. The differing pair of lines themselves are printed in full, but stuck together on one giant line with extra stuff around them, because the format is intended for shorter strings or other values. And the context lines aren't shown at all. I can add … That's better but still I think a good deal harder to read than the myunit version. And it's an extra step. So I'd like to see this PR leave …

Review comment: Honestly I wish assert_string_array would print the output in the form of a context diff -- I can read those in my sleep. The column info is not useful to me. BTW Has anyone managed to get the myunit -i/-u flags to work?

Review comment: Yeah, a unified context diff (like …) … I've actually never tried the … (and from a vague recollection of hearing about that kind of functionality in myunit.)

Review comment: To clarify when it comes to the scope of this PR: I'd leave any improvements to the diff output (like switching to a unified diff) to a separate PR -- which could even come sooner! It should be easy to do, and is independent of this change -- and in this one just keep invoking the same …

Review comment: @gnprice @gvanrossum If I change the assertion to … Even if …

Review comment: With …
         if incremental and res:
             self.verify_cache(module_name, program_name, a, res.manager)
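The unified diff the reviewers converge on is a few lines of stdlib `difflib`. This is only a sketch of the idea, not code from the PR, and the error strings are invented:

```python
import difflib

expected = ['main.py:1: error: Name "x" is not defined',
            'main.py:2: error: Unsupported left operand type for +']
actual = ['main.py:2: error: Unsupported left operand type for -']

# Build a readable unified diff of expected vs. actual messages,
# suitable for embedding in an assert message.
diff = '\n'.join(difflib.unified_diff(
    expected, actual, fromfile='expected', tofile='actual', lineterm=''))

try:
    assert expected == actual, 'Invalid type checker output:\n' + diff
except AssertionError as e:
    print(e)
```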
@@ -145,9 +144,7 @@ def verify_cache(self, module_name: str, program_name: str, a: List[str],
         modules = self.find_module_files()
         modules.update({module_name: program_name})
         missing_paths = self.find_missing_cache_files(modules, manager)
-        if missing_paths != error_paths:
-            raise AssertionFailure("cache data discrepancy %s != %s" %
-                                   (missing_paths, error_paths))
+        assert missing_paths == error_paths, 'cache data discrepancy'

     def find_error_paths(self, a: List[str]) -> Set[str]:
         hits = set()
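One thing the new one-line assert loses relative to the old `AssertionFailure` call is the mismatched values themselves. Keeping them costs a single format expression; a hypothetical sketch (the `verify` helper and the path values are made up):

```python
def verify(missing_paths, error_paths):
    # Keep the mismatched sets in the message, as the old
    # AssertionFailure version did (sorted() for a stable repr).
    assert missing_paths == error_paths, (
        'cache data discrepancy %s != %s'
        % (sorted(missing_paths), sorted(error_paths)))


try:
    verify({'a.meta'}, {'a.meta', 'b.meta'})
except AssertionError as e:
    print(e)  # cache data discrepancy ['a.meta'] != ['a.meta', 'b.meta']
```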
Review comment: "To run these, you must use pytest instead" sounds like these tests might not run if you just say `./runtests.py`. They do, right? Maybe a clearer description here would be something like "To run these unit tests individually, you can run `py.test` directly, or pass inferior arguments with `-t`." (I think you mean `-t` rather than `-k` -- is that right or am I misreading the example?)

Review comment: 👍