The Benchmarks section of README.rst suggests using this command to run the benchmarks manually (i.e. without tox):
$ python -m pyperf jsonschema/benchmarks/test_suite.py --hist --output results.json
However, this command fails with version 2.0.0 of pyperf:
$ python -m pyperf jsonschema/benchmarks/test_suite.py --hist --output results.json
usage: -m pyperf [-h] {show,hist,compare_to,stats,metadata,check,collect_metadata,timeit,system,convert,dump,slowest,command} ...
-m pyperf: error: argument action: invalid choice: 'jsonschema/benchmarks/test_suite.py' (choose from 'show', 'hist', 'compare_to', 'stats', 'metadata', 'check', 'collect_metadata', 'timeit', 'system', 'convert', 'dump', 'slowest', 'command')
Additionally, there is no such file jsonschema/benchmarks/test_suite.py.
Having a look at tox.ini, I see that the commands actually used to run pyperf are:
perf: mkdir {envtmpdir}/benchmarks/
perf: {envpython} {toxinidir}/jsonschema/benchmarks/issue232.py --inherit-environ JSON_SCHEMA_TEST_SUITE --output {envtmpdir}/benchmarks/issue232.json
perf: {envpython} {toxinidir}/jsonschema/benchmarks/json_schema_test_suite.py --inherit-environ JSON_SCHEMA_TEST_SUITE --output {envtmpdir}/benchmarks/json_schema_test_suite.json
which suggests the correct commands for running the benchmarks manually would be:
$ python jsonschema/benchmarks/issue232.py --inherit-environ JSON_SCHEMA_TEST_SUITE --output issue232_results.json
$ python jsonschema/benchmarks/json_schema_test_suite.py --inherit-environ JSON_SCHEMA_TEST_SUITE --output json_schema_test_suite_results.json