feat: add functional center crop on mask #5961
Conversation
@pmeier, @vfdev-5, thoughts about testing the correctness of the function by mocking the function that really does the work? It's a proposal related to what we were saying in #5866. IMO, there is no logic in the function itself to test.
@federicopozzi33 I agree; I had a similar reflection about that while working on a related PR.
We have to think about how to test composed ops in general: (1) ensure that we call the base ops in the right way, (2) check that the output is what we expect (the expected result could be hard-coded, computed using the base ops, or recomputed in a different way), or (3) something else?
You could make the visualization for the segmentation mask and then add bounding boxes when they become available?
@federicopozzi33 FYI, center crop for bbox PR: #5972
Force-pushed from 51d0361 to e93cd31
I added a test with random input too, but IMO it's not needed here: I don't see any corner cases.
We can do both 1) and 2). I'd achieve the former by patching the base ops, and the latter with fixed input.
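A minimal sketch of option (1) — patching the base op and only asserting the delegation — using standalone stand-ins for the prototype kernels (the real ones live under `torchvision.prototype.transforms.functional`; names and signatures here are illustrative):

```python
from unittest import mock

import torch


def center_crop_image_tensor(img, output_size):
    # stand-in for the base op: plain center crop via slicing
    h, w = img.shape[-2:]
    ch, cw = output_size
    top = int(round((h - ch) / 2.0))
    left = int(round((w - cw) / 2.0))
    return img[..., top:top + ch, left:left + cw]


def center_crop_segmentation_mask(mask, output_size):
    # the composite op under test: a one-liner delegating to the base op
    return center_crop_image_tensor(mask, output_size)


def test_delegates_to_base_op():
    mask = torch.randint(0, 2, (1, 8, 8))
    sentinel = torch.zeros(1, 4, 4)
    base_op = mock.Mock(return_value=sentinel)
    # swap the base op out so the test never executes the real crop
    with mock.patch.dict(globals(), {"center_crop_image_tensor": base_op}):
        out = center_crop_segmentation_mask(mask, [4, 4])
    base_op.assert_called_once_with(mask, [4, 4])
    assert out is sentinel
```

Note that this test only asserts that the base op is called with the right arguments; the correctness of the crop itself is left to the base op's own tests.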
I've updated the description.
@federicopozzi33 CI failures are real: https://github.com/pytorch/vision/runs/6376638110?check_suite_focus=true#step:9:127
@federicopozzi33 thanks for the update! Code and tests look good to me (I left a minor suggestion). Once you fix the problem with CI (probably due to resolving conflicts), we can merge it. @pmeier any thoughts from your side about testing, particularly the mocking part?
One minor nit about mocking in pytest. Otherwise, testing looks good. Still, I want to hear @NicolasHug's opinion on mocking in general for this use case.
I've discussed this with @NicolasHug offline and we both agree the mock test seems a little overkill here. The function is a one-liner without any additional logic, while the test is orders of magnitude more complex. Thus, in the future it is more likely that the test breaks rather than the kernel.
Plus, we already have tests that check the numerical correctness. Thus, the mock test does not bring any new information.
What was the reason to go for the complex mocking here?
It is the first time we opted to use mocking. I agree that in the case of masks the function is a one-liner and the mock does not bring any useful checks, but it can serve as an example for other composite ops, IMO.
As I said in a previous comment, this was just a proposal, one that can be useful in some cases, such as in #5866 (BTW, do you think the missing test cases there can be covered using mocking? @vfdev-5). I agree with you that the function is very simple, but the function called inside is not straightforward to test. The purpose of mocking here is simple.
Anyway, I'm not sure whether removing the test with mocking was the right call.
@federicopozzi33 I agree with the general sentiment, but I would go a different route.
If the only thing my function does is call another, already-tested function, IMO mocking is the way to go. It's the same as calling the inner function, but without actually executing it (because I'm not really interested in the result, only in the call). Just two different ways to approach the same problem. You choose :) So, should I replace my naive test?
That is a dangerous conclusion. Mocking something always means that we switch from black-box testing to white-box testing. Meaning, we now need to know the internals of the function we want to test. That isn't a bad thing in general, but it brings other problems with it. For example, if the internals change but the interface stays the same, the test might break although it shouldn't. Imagine we simply rename the base op. In this situation here, IMO it would be best to just use the expected-result computation for images and segmentation masks and stick to black-box testing.
Force-pushed from cb3161f to b6953b2
I agree with you. I'd just like to clarify one point: IMO, mocking a public function is very different from mocking a private one, especially in cases like ours, considering that modifying a public function means making a "major change". Anyway, I added some extra test cases (output size bigger than the image and odd output dimensions), and I'm computing the expected result with the `center_crop_image_tensor` kernel.
That is true in general, but not so much during the prototype phase.
make_segmentation_masks(),
[[4, 3], [42, 70], [4]],  # crop sizes < image sizes, crop sizes > image sizes, single crop size
Should we maybe not rely on the default values here to make sure we actually get the cases detailed in the comment?
I still use the default values, but I have made them explicit.
@pytest.mark.parametrize("output_size", [[4, 3], [4], [7, 7]])
def test_correctness_center_crop_segmentation_mask(device, output_size):
    def _compute_expected_segmentation_mask(mask, output_size):
        return F.center_crop_image_tensor(mask, output_size)
I'm not really happy about this, given that we don't seem to have a reference test that makes sure `F.center_crop_image_tensor` is correct. Thus, if it had a bug, we would propagate it to this test.
Plus, we now use exactly the same implementation for the test and kernel and thus are unable to detect any bug with this test.
Maybe we can have something like
class TestCorrectnessCenterCrop:
    def _compute_expected_image_or_segmentation_mask(self, input, output_size):
        # manual implementation that does not use our tensor kernels
        # IMO, it would be fine to use `center_crop_image_pil` here.
        pass

    def test_image(self):
        pass

    def test_segmentation_mask(self):
        pass
I think the problem here is with the `_compute_expected_segmentation_mask` implementation, which is exactly the same as `F.center_crop_segmentation_mask`. IMO, this way of testing does not bring any value.
As for restructuring with `TestCorrectnessCenterCrop`, @pmeier it can maybe be done in your functional tests refactor PR, not here.
Let's move forward with this. Since we have abandoned the idea of mocking the `F.center_crop_image_tensor` op and just checking the args, let's now recode `_compute_expected_segmentation_mask` such that we compute the center crop in a different way and check the output against the expected results.
Sorry, I'm pretty busy these days; I'll fix the PR in the next few days (probably during the weekend).
Force-pushed from c939a5f to 802b9b4
LGTM, thanks @federicopozzi33
Thanks @federicopozzi33!
left = round((image_width - crop_width) * 0.5)
top = round((image_height - crop_height) * 0.5)
The kernel has an additional `int` call:
vision/torchvision/prototype/transforms/functional/_geometry.py, lines 585 to 586 in b969cca:

crop_top = int(round((image_height - crop_height) / 2.0))
crop_left = int(round((image_width - crop_width) / 2.0))
@vfdev-5 I recall we had issues with `round` before. Should we just switch to `int` in general?
@pmeier I do not quite remember what the issue with `round` was (maybe JIT behaves differently from eager mode). For me, the code you mention is more like a definition of `crop_top` and `crop_left`. For example, for bboxes we could also keep these values as floats, but let's define that `crop_top`/`crop_left` are rounded integers.
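For context, a quick illustration of Python's `round` behavior: it uses banker's rounding (halves go to the nearest even integer) and, without `ndigits`, already returns an `int` in eager Python. The extra `int()` therefore mainly matters where `round` may be typed as `float`, e.g. under TorchScript — an assumption worth checking:

```python
# halves round to the nearest even integer ("banker's rounding")
assert round((10 - 7) / 2.0) == 2  # 1.5 -> 2 (even)
assert round((10 - 9) / 2.0) == 0  # 0.5 -> 0 (even)

# without ndigits, round on a float already returns an int in eager Python
assert isinstance(round(1.5), int)

# so int(round(x)) is a no-op in eager mode and only guards contexts
# where round is typed as float (e.g. under scripting)
crop_top = int(round((10 - 7) / 2.0))
assert crop_top == 2
```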
Co-authored-by: Philip Meier <[email protected]>
Summary:
* feat: add functional center crop on mask
* test: add correctness center crop with random segmentation mask
* test: improvements
* test: improvements
* Apply suggestions from code review

Reviewed By: NicolasHug
Differential Revision: D36760924
fbshipit-source-id: 2e033926b18a34676a367331a03e45ca2ffd9cdc
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Federico Pozzi <[email protected]>
Co-authored-by: vfdev <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Related to #5782