Benchmark performance of PyGMT functions #2910

@weiji14

Description of the desired feature

We are attempting some big refactoring steps in PyGMT to avoid the use of temporary intermediate files (#2730), and will also be updating to GMT 6.5.0 soon, so it would be good to track any performance improvements or regressions in terms of execution speed. Since #835, we have measured the execution time of the slowest tests (>0.2 s) on CI for #584, but those execution times are not tracked over time and are only buried in the CI log files.

So, to better track performance over time, we are setting up a Continuous Benchmarking workflow in #2908. It uses pytest-codspeed, with the results logged to https://codspeed.io/GenericMappingTools/pygmt. The benchmarking is selective, however, and will only run on unit tests marked with @pytest.mark.benchmark.
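
For reference, adding a test to the benchmark suite only requires the standard pytest marker. A minimal sketch (the test name and body are illustrative, not taken from the PyGMT test suite):

    import pytest

    # Only tests carrying this marker are picked up by pytest-codspeed for
    # benchmarking; unmarked tests keep running as ordinary tests.
    @pytest.mark.benchmark
    def test_example_workload():  # hypothetical name, for illustration only
        assert sum(range(1_000)) == 499_500

Locally, pytest -m benchmark collects just the marked tests; on CI, the CodSpeed action presumably runs them with the pytest-codspeed instrumentation enabled and uploads the timings to the dashboard linked above.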

Main question: Which tests do we decide to benchmark?

So which tests should we benchmark (and should we mark all those tests with @pytest.mark.benchmark in this PR or a follow-up one)?

I think we should focus on benchmarking the low-level functions rather than the module wrappers, since the low-level functions are heavily used everywhere and most wrappers have very simple and similar code structures.

For example, most plotting modules have very simple code, like this excerpt from Figure.basemap:

    kwargs = self._preprocess(**kwargs)  # e.g. activate the current figure
    with Session() as lib:
        # Translate the keyword arguments into a GMT argument string and
        # hand them straight to the GMT "basemap" module.
        lib.call_module(module="basemap", args=build_arg_string(kwargs))

so we just need to benchmark one basemap test (e.g., test_basemap()) and don't need to benchmark the other basemap tests or the other plotting methods (e.g., Figure.coast). Of course, there are a few exceptions: Figure.meca, Figure.plot, Figure.plot3d, and Figure.text are among the more complicated wrappers and should be benchmarked.
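
Concretely, opting an existing plotting test into the benchmark suite would just mean stacking the marker on top of it. A hedged sketch of what that could look like for test_basemap (the body below is simplified for illustration and is not the actual test):

    import pytest
    from pygmt import Figure

    @pytest.mark.benchmark
    def test_basemap():
        """Benchmark a simple basemap plot."""
        fig = Figure()
        fig.basemap(region=[10, 70, -3, 8], projection="X8c/6c", frame=True)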

Similarly, for table-processing and grid-processing functions, benchmarking pygmt.select and pygmt.grdfill should be enough.
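
The same pattern applies to the processing functions; a hedged sketch of a benchmarked pygmt.select test (the table, region, and expected row count are made up for illustration):

    import pandas as pd
    import pytest
    import pygmt

    @pytest.mark.benchmark
    def test_select():
        """Benchmark selecting the rows of a table that fall inside a region."""
        table = pd.DataFrame({"x": [1.0, 5.0, 9.0], "y": [2.0, 6.0, 10.0]})
        result = pygmt.select(data=table, region=[0, 6.5, 0, 7])
        assert len(result) == 2  # only the first two points lie inside the region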

Originally posted by @seisman in #2908 (comment)

Other questions: When do we want to run the benchmarks? Should this be enabled on every Pull Request, or only on Pull Requests that modify certain files? Edit: Answered in #2908 (comment)

Are you willing to help implement and maintain this feature?

Yes
