This would be used in most places that `Skip` is used today to skip a failing test. The difference is that it would actually run the test, and fail if the test passes.
The number of known-failing tests would be reported in the test output, just like the number of skipped tests.
This is useful for alerting authors that a test is now passing and can be marked as such, unlike skipped tests, which tend to remain skipped long after the underlying issue is actually fixed.
This would be supported everywhere regular metadata is supported: as an `@Failing` annotation (likely with a `String reason` field), as well as at the group, test, and expect levels.
The main risk is that people would use this as a form of negative test, which should generally be avoided.
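For concreteness, here is a hypothetical sketch of what the proposed usage might look like, assuming a package:test-style Dart API and mirroring the existing `skip` support. The `@Failing` annotation and `failing:` parameter below are placeholders for the proposal, not an existing API:

```dart
// Hypothetical syntax only: `@Failing` and `failing:` are proposed names, not an existing API.
@Failing('entire file blocked on an upstream bug') // proposed library-level annotation
library;

import 'package:test/test.dart';

void main() {
  group('known-broken parser cases', () {
    test('runs as usual, but only fails if it unexpectedly passes', () {
      expect(1 + 1, equals(3)); // currently fails, so it is reported as a known failure
    });
  }, failing: 'tracked by the parser rewrite'); // proposed group-level argument

  test('single known failure at the expect level', () {
    // Proposed expect-level form, analogous to the existing `skip:` parameter on expect().
    expect(2 + 2, equals(5), failing: 'off-by-one in the adder');
  });
}
```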
What does it mean for this to be used above the level of a test - does it mean that all tests in a group must fail, or that some test in the group must fail?
I think it's unlikely that the top-level annotation brings enough value, so I'd probably argue for only adding an argument at the test and group levels.
> What does it mean for this to be used above the level of a test - does it mean that all tests in a group must fail, or that some test in the group must fail?
All must fail.
I don't feel strongly about supporting this at the annotation level, but I believe the effort to support it is close to zero. I agree it wouldn't be used all that often; the use case is a test suite that doesn't have a top-level group but where all the tests are failing. You could always add a new top-level group instead, though (see the sketch below).
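To illustrate that workaround, a minimal hypothetical sketch that wraps the whole file in a single top-level group and marks that group with the proposed argument (again, `failing:` is a placeholder name):

```dart
// Hypothetical alternative to a library-level @Failing annotation: wrap the whole
// file in one top-level group and mark that group instead.
import 'package:test/test.dart';

void main() {
  group('everything in this file is known to fail', () {
    test('case A', () => expect(1, equals(2)));
    test('case B', () => expect(true, isFalse));
  }, failing: 'suite-wide breakage'); // all tests must fail; any that passes is reported
}
```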