Refactor transforms #861
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #861      +/-   ##
==========================================
- Coverage   54.49%   54.37%   -0.12%
==========================================
  Files          36       37       +1
  Lines        3307     3292      -15
  Branches      542      539       -3
==========================================
- Hits         1802     1790      -12
+ Misses       1372     1369       -3
  Partials      133      133
```

Continue to review the full report at Codecov.
Hi, thanks a lot for the PR! While I definitely agree that adding some more structure is a good thing, I'm still not convinced that the current way the transforms are implemented is well adapted to dealing with more types of data. In particular, I'm more and more inclined to think that for more complex use-cases the user should just write their composite transform themselves using the functional interface, in a similar way that we don't provide

That being said, I have no objections to merging this PR as is, but I expect that a few things will change in the future, which might make some of the changes in this PR obsolete. Also, while the new classes do add some more structure, we don't significantly save much typing, precisely because the scope of what should be in the

Thoughts?
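The pattern suggested above — users writing their own composite transform against the functional interface instead of relying on class-based building blocks — might look like the following sketch. The class name and the use of plain Python lists as stand-in images are illustrative; real code would call e.g. `torchvision.transforms.functional.hflip` on PIL images or tensors.

```python
import random


class SegmentationFlip:
    """Hypothetical user-written composite transform: apply the *same*
    random horizontal flip to an image and its segmentation mask.
    Plain Python lists stand in for images so the sketch stays
    self-contained; a real version would use
    torchvision.transforms.functional.hflip instead of list reversal."""

    def __init__(self, flip_prob=0.5):
        self.flip_prob = flip_prob

    def __call__(self, image, target):
        # Sample the random parameter once, then apply it to both
        # inputs -- the coupling that stock single-input transforms
        # cannot express without extra machinery.
        if random.random() < self.flip_prob:
            image = image[::-1]
            target = target[::-1]
        return image, target
```

The point is that the joint randomness lives in the user's own `__call__`, so no library-provided container has to anticipate every combination of inputs.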
I most certainly don't have the whole picture, so please correct me if I make some wrong assumptions.
I don't mind if it gets replaced by something more fitting in the future.
I was wondering if I should create a separate module for all
I agree with you here. In this case I think having a superclass from which the user can inherit is IMHO even more needed. By mimicking the structure of
I'm not sure if I got your point here. Can you elaborate?
What would be the restrictions / requirements that the

Also, before the first release of PyTorch, we used to have a

I'd love to hear your thoughts on this.
None, other than providing more structure. On second thought this makes little sense, since
We could mimic the structure of
Internally the

If that is something we want to pursue, I will open a discussion as an issue summarising all the ideas I can find within the issues and PRs about this topic.
> If that is something we want to pursue, I will open a discussion as an issue summarising all the ideas I can find within the issues and PRs about this topic.

@pmeier I think this is something we want to do. But there are some complications. For example, in this case we might have different image sizes coming out of the dataset, so we would need a way to efficiently represent a list of tensors of different sizes, something like a

Here are my thoughts:
We can nowadays sample random transformation parameters efficiently for each image. But performing the geometric transformations on the batch in one go is not easy to implement efficiently as of now. This is a blocker for implementing this idea. I'm not yet 100% clear on what kind of API we would need for that. And of course, we are not tied to using

I'm ccing @cpuhrsch, as he is investigating
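The first point above — per-image parameter sampling is already cheap, while batched application is the hard part — can be sketched as follows. This is a pure-Python illustration with a hypothetical helper name; a real version would sample with `torch` and the sizes would come from a NestedTensor-like container.

```python
import random


def sample_crop_params(sizes, crop_size):
    """Hypothetical helper: sample an independent top-left crop corner
    for every image in a batch. `sizes` is a list of (height, width)
    pairs -- the images may all differ in size, which is why a plain
    stacked tensor does not suffice for the batch itself."""
    params = []
    for height, width in sizes:
        top = random.randint(0, height - crop_size)
        left = random.randint(0, width - crop_size)
        params.append((top, left))
    return params
```

Sampling a list of parameters like this is trivial; the open problem described above is applying the geometric transform to all differently-sized images in one batched call.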
In that case I will get to it. I will CC you.
This would make a lot of things easier. AFAIK we could use
? If that is the case, we could simply let
Yes, but I'd rather not rush things just yet. There is not yet any real benefit to making the transforms a
Edit: I only now became aware of #855 and especially a comment from @fmassa about the future of `torchvision.transform`: if this PR does not fit into the grand scheme of the upcoming release, feel free to close it.

TL;DR

This PR does not add any new functionality, but streamlines the `__repr__` methods. In order to do so it introduces the superclasses `Transform` and `TransformContainer`.

Details
The `__repr__` methods of the `Transform` and `TransformContainer` objects are based on the idea of `torch.nn.Module`: each subclass has the possibility to add information by overriding the `extra_repr` method.
method.Compose
,RandomTransforms
,RandomApply
,RandomOrder
,RandomChoice
are moved tocontainer.py
`RandomTransforms` is superseded by `TransformContainer`, since its only tasks were to handle the `__repr__` method and make sure the subclasses implement the `__call__` method.
method.test_transforms.py
fail, but the same tests also fail on themaster
branch