This repository was archived by the owner on Sep 21, 2019. It is now read-only.

Conversation

Contributor

@ches ches commented Jul 29, 2016

I've been doing a bit of work on a project to extract a core ENSIME client for Python that could potentially be shared with ensime-sublime or others (more of a rewrite of the core than extraction, but I'll save this whole discussion until it's really ready to talk about).

I set up unit test infrastructure for that project from the start and considered adding it to ensime-vim too, but wasn't sure there was a need yet. Now that I've started some work on #321, the need is finally here: we're getting to the point where some substantial logic will be decoupled enough from complex dependencies to feasibly be unit tested. I've mentioned before that I think Lettuce is ideal for integration/acceptance/end-to-end tests, but unit tests are a better fit for other cases.

An example is the config file parsing. We had Lettuce tests for it, but they felt heavy: there's no need for integration there, and it's extra work to write steps in code plus a prose feature for something so low-level. So I used that as a small, contained example case for the initial unit test framework setup, converting those tests to unit tests and taking the opportunity to abstract the parsing into a cute little mapping class that I've used in my spinoff project.
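To make the shape of that concrete, here's a minimal sketch of the idea with purely illustrative names (the real mapping class and the keys it exposes in this PR may well differ):

```python
# Hypothetical sketch of a mapping-style wrapper over parsed config data,
# plus a plain pytest-style unit test for it. Names are illustrative only.
class ProjectConfig:
    """Read-only mapping over parsed .ensime config data."""

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def get(self, key, default=None):
        return self._data.get(key, default)


def test_lookup_returns_parsed_value():
    config = ProjectConfig({"root-dir": "/tmp/project", "name": "demo"})
    assert config["name"] == "demo"
    assert config.get("missing", "fallback") == "fallback"
```

The point is just that a plain function with a bare `assert` covers this, with no feature file or step definitions needed.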

Briefly, a rundown of the popular unit testing choices in the Python ecosystem. There are three main contenders:

  1. The standard library unittest module.
  2. Nose.
  3. pytest.

unittest has the advantage of being built in, but it's very much JUnit-inspired and ceremonious: everything is a class, and worst of all (to me), you need to learn a bunch of different assert* methods.

Nose and pytest, by contrast, have a lot in common: both aim to be lightweight, plugin-driven frameworks that let you just write simple functions, scaling up to more complicated structures if you feel the need. I chose pytest for a few reasons:

  • Nose 1 development has basically stopped; they're working on Nose 2, which seems mostly ready, but the ecosystem is still catching up.
  • With Nose you either use a plugin to add assertion functions, use unittest for assertions with Nose acting as a test runner, or use the built-in assert statement with not-very-good failure output, kind of like Lettuce, where more of the burden is on you to write string explanations for assertions. pytest has badass output with simple, standard assert statements.
  • I know there was concern about Lettuce not being actively maintained and, AFAIK, not being ready for Python 3. If we eventually need to leave it behind, pytest_bdd looks like a really nice Gherkin feature runner based on pytest, maybe already more featureful and better-documented than Lettuce, and it works on Python 2/3.
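To illustrate the difference in ceremony, here's the same trivial check in both styles (a generic sketch, not code from this PR):

```python
import unittest


# unittest style: a TestCase subclass and a dedicated assert* method
# for each kind of comparison.
class TestSquares(unittest.TestCase):
    def test_square(self):
        self.assertEqual(3 ** 2, 9)


# pytest style: a plain function with the built-in assert statement.
# On failure, pytest's assertion rewriting still shows both operands.
def test_square():
    assert 3 ** 2 == 9
```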

A couple of things might count as negatives for pytest. It is somewhat "magical": its fantastic output for the standard assert statement is achieved through bytecode manipulation. That said, it's been around for many years and is regarded as solid. It also has a novel approach to setup/teardown and test fixtures that works like dependency injection. It's actually quite nice and scales up and down very flexibly, but it takes some getting used to since it is unusual. As the docs mention, classic setup/teardown functions are still supported at different scopes, though.
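A rough sketch of what that fixture injection looks like (hypothetical names and data, not code from this PR):

```python
import pytest


def make_sample_config():
    """Build the test data; a plain helper so it's usable outside pytest too."""
    return {"scala-version": "2.11.8"}


# A test requests a fixture simply by naming it as a parameter; pytest
# constructs it, and anything after the yield runs as teardown at the
# fixture's configured scope.
@pytest.fixture
def sample_config():
    yield make_sample_config()


def test_reads_scala_version(sample_config):
    assert sample_config["scala-version"].startswith("2.11")
```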

Let me know what you think. I'd ideally like to base my work for #321 atop this test setup as well as the logging in #323.

With an extraction of .ensime parsing logic into a mapping class
abstraction as a proof-of-concept.
@ktonga
Contributor

ktonga commented Jul 29, 2016

I think this is great; I totally agree with your analysis of when Lettuce and when unit tests are the better fit.
When I had to write the tests for the symbol_format module, it felt like a lot of work: maintaining two separate files, and so many extra steps to transform the data from what Gherkin supports into what the logic actually needs.
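For comparison, a table-driven case like that collapses to a single decorator in pytest (a toy stand-in, not the real symbol_format code):

```python
import pytest


def format_symbol(kind):
    """Toy stand-in for the real symbol formatting logic."""
    return kind if kind else "unknown"


# Each row is one test case; no separate feature file or step code needed.
@pytest.mark.parametrize("raw,expected", [
    ("method", "method"),
    ("trait", "trait"),
    ("", "unknown"),
])
def test_format_symbol(raw, expected):
    assert format_symbol(raw) == expected
```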

I'm more used to libraries like specs2, where you have a BDD-like DSL to organize your cases and all the power of the programming language to do your testing. I think that having the case descriptions in separate files is a better fit for projects where the acceptance stage is managed by a separate team, most of the time non-developers, but it's a big overhead for a project like this with just a group of maintainers.

Having a path to eventually move to Python 3 is a huge win, too.

If this change is accepted I'd be happy to migrate the Suggestion Format scenario to unit tests.
And let me tell you, all the work you're doing is fantastic; it's much more enjoyable to contribute to a well-structured project with the right set of tools.

It's a big LGTM

@ches
Contributor Author

ches commented Jul 29, 2016

I'm more used to libraries like specs2, where you have a BDD-like DSL to organize your cases and all the power of the programming language to do your testing.

I hear you. Coming from a bunch of Ruby and Scala development over the past many years, those kinds of test frameworks have spoiled me; they're one of the things I miss most dearly when doing Python. so_many_underscores_to_write_tests. The language just isn't optimized to support DSLs that way. Attempts like NoseOfYeti (what a name…) and Konira never catch on because they're crazy preprocessor-like hacks…

And let me tell you, all the work you're doing is fantastic; it's much more enjoyable to contribute to a well-structured project with the right set of tools.

Recently I've had some time to devote to the project; it probably won't last forever, so for my own sake too I feel a rush to whip the foundations into shape while I can, so the project is more comfortable to build upon… Thanks for the kind words!

It’s already in gitignore from earlier Coveralls setup.
@ches
Contributor Author

ches commented Jul 30, 2016

Bonus: added coverage.py, with the coverage report combining the pytest and Lettuce suites.
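For the record, one plausible way to produce a combined report with coverage.py (the flags are real coverage.py options, but the actual Makefile recipe in this PR may differ):

```sh
coverage run -m pytest                         # run the unit suite under coverage
coverage run --append "$(command -v lettuce)"  # Lettuce suite, appended to the same data file
coverage report                                # single report spanning both suites
```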

@ches ches mentioned this pull request Jul 31, 2016
These don't conflict with pytest runner's test detection at all since
the file names don't match the standard convention of `test_*.py`, so
I thought this consolidation of the repository layout would be easier
for contributors to understand.
@ches
Contributor Author

ches commented Aug 1, 2016

A little trivial update: I moved the Lettuce tests into test/features, figuring the consolidated layout would be easier for contributors to understand. That was basically just a git mv and a path change in the Makefile. It also fits the conventions many tools expect; e.g., I no longer needed to tell coverage.py to explicitly ignore the *steps.py files as not being production code.

If we port the suggestion formatting tests to pytest also, there won't be any more Lettuce tests remaining for now, but #312 could change that…

@ches ches merged commit 056eea6 into master Aug 1, 2016
@ches ches deleted the unit-testing branch August 1, 2016 11:32
@ches ches removed the In progress label Aug 1, 2016
@ches ches mentioned this pull request Aug 1, 2016