
Adding --validate option __main__ and run new validation #240

Merged: 4 commits into numpy:master on Nov 13, 2019

Conversation

datapythonista
Contributor

Closes #213

python -m numpydoc keeps its existing functionality, but when called with --validate it runs the new validation:

(sklearn-dev) [mgarcia@xps numpydoc]$ PYTHONPATH=. python -m numpydoc --validate sklearn.linear_model.LinearRegression
sklearn.linear_model.LinearRegression:GL02:Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
sklearn.linear_model.LinearRegression:GL03:Double line break found; please use only one blank line to separate sections or paragraphs, and do not leave blank lines at the end of docstrings
sklearn.linear_model.LinearRegression:ES01:No extended summary found
sklearn.linear_model.LinearRegression:PR06:Parameter "fit_intercept" type should use "bool" instead of "boolean"
sklearn.linear_model.LinearRegression:PR08:Parameter "fit_intercept" description should start with a capital letter
sklearn.linear_model.LinearRegression:PR06:Parameter "normalize" type should use "bool" instead of "boolean"
sklearn.linear_model.LinearRegression:PR06:Parameter "copy_X" type should use "bool" instead of "boolean"
sklearn.linear_model.LinearRegression:SA01:See Also section not found
sklearn.linear_model.LinearRegression:EX01:No examples section found
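
For context, here is a minimal sketch of how a --validate flag could be wired into __main__ and turned into the per-error output and exit status shown above. The validate() helper and its return shape are placeholders for illustration, not the exact API introduced by this PR:

import argparse
import sys


def validate(import_path):
    """Placeholder: assumed to return a list of (code, message) tuples."""
    return []


def main(import_path, run_validation):
    if run_validation:
        errors = validate(import_path)
        for code, message in errors:
            # One line per error, in the "<object>:<code>:<message>" format shown above.
            print("{}:{}:{}".format(import_path, code, message))
        # Non-zero exit status when errors were found, so CI can fail on it.
        return int(bool(errors))
    # ... existing (non-validation) behaviour would go here ...
    return 0


if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog="numpydoc")
    parser.add_argument("import_path",
                        help="e.g. sklearn.linear_model.LinearRegression")
    parser.add_argument("--validate", action="store_true",
                        help="run docstring validation on the given object")
    args = parser.parse_args()
    sys.exit(main(args.import_path, args.validate))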

@larsoner
Collaborator

Sounds good to me. This broke the test_main.py test. It would also be good to add a test for this to .travis.yml, maybe by calling numpydoc --validate numpydoc.<something> and asserting that it works for something good (zero exit code and no output) and for something bad (non-zero exit code and some output).
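
A sketch of the kind of check being suggested, written here as a pytest-style test rather than a .travis.yml entry. The two import paths are hypothetical stand-ins (one object assumed to validate cleanly, one assumed to have docstring errors), and the exit-code behaviour is what the new --validate mode is expected to provide:

import subprocess
import sys


def run_validate(import_path):
    # Run "python -m numpydoc --validate <import_path>" in a subprocess.
    return subprocess.run(
        [sys.executable, "-m", "numpydoc", "--validate", import_path],
        capture_output=True, text=True,
    )


def test_validate_clean_docstring():
    result = run_validate("numpydoc.good_example")  # hypothetical target
    assert result.returncode == 0
    assert result.stdout == ""


def test_validate_bad_docstring():
    result = run_validate("numpydoc.bad_example")  # hypothetical target
    assert result.returncode != 0
    assert result.stdout != ""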

@datapythonista
Contributor Author

Tests are passing now. I'm not sure exactly what the goal of mocking modules in the original tests was, but I reimplemented all the tests in a way that seems simpler to me and still covers everything. I also added calls to python -m numpydoc directly in Travis to check the status codes.

I don't think the coverage warning is relevant; it seems to be caused by the lack of tests for the argparse code under if __name__ == '__main__':, which I'm sure we don't want to test.

@datapythonista
Contributor Author

@larsoner did you have a chance to look at this? I think this should be ready; let me know if it's not.

@larsoner (Collaborator) left a comment

Looks good to me.

Deal with --ignore in a separate PR?

@datapythonista
Contributor Author

Yes, I think ignoring is complex enough to go into a separate PR. We'll probably want to be able to specify it in setup.cfg too, and whether it's better to whitelist or blacklist error codes may not be obvious.

@larsoner
Collaborator

Okay, let's get this in and keep iterating. Thanks @datapythonista!

@larsoner merged commit ed7f72d into numpy:master on Nov 13, 2019
Successfully merging this pull request may close these issues:
Validation is not strict enough (#213)