📚 The doc issue

I tried to run the end-to-end example in this recent blog post, but found that `torchvision.prototype.features` is now gone. What's the current way to run this? I attempted to simply pass the images, bboxes, and labels with the following types: `torchvision.prototype.datasets.utils._encoded.EncodedImage`, `torchvision.prototype.datapoints._bounding_box.BoundingBox`, and `torchvision.prototype.datapoints._label.Label`. However, this didn't seem to apply the transforms, as everything remained the same shape.

edit: I've found that `features` seems to have been renamed to `datapoints`. I tried applying this, but the `EncodedImage` in a COCO `sample['image']` seems to be 1D, and `prototype.transforms` requires 2D images. What's the proper way to get this as 2D so I can apply transforms? Is there a decode method I'm missing?

Suggest a potential alternative/fix

No response

cc @vfdev-5 @bjuncek @pmeier
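To make the reported behavior concrete, a minimal sketch, assuming the prototype COCO dataset is loaded through `torchvision.prototype.datasets.load` (the printed shape is illustrative):

```python
from torchvision.prototype import datasets

# the prototype COCO dataset yields sample dicts from a datapipe
dataset = datasets.load("coco", split="train")
sample = next(iter(dataset))

# sample["image"] is an EncodedImage: the raw file bytes as a 1D uint8
# tensor, not a decoded (C, H, W) image, so spatial transforms have no
# height/width to act on and the sample appears unchanged
print(sample["image"].shape)  # e.g. torch.Size([158462]) -- 1D
```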
> I tried applying this, but the `EncodedImage` in a COCO `sample['image']` seems to be 1D and `prototype.transforms` requires 2D images.
All development on `torchvision.prototype.datasets` is on hold and thus there might be some incompatibilities. You can find our proposal on how to use the datasets v1 with the transforms v2 in #6662. We have a PoC implementation in #6663 that I'm actively working on. Happy to get your feedback there regarding this link between the two.
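For reference, a minimal sketch of that direction, using the wrapper that eventually shipped as `torchvision.datasets.wrap_dataset_for_transforms_v2` (the COCO paths are placeholders):

```python
from torchvision import datasets
from torchvision.datasets import wrap_dataset_for_transforms_v2
from torchvision.transforms import v2

# a stable (v1) dataset, wrapped so its samples come back as datapoint
# subclasses that the v2 transforms can dispatch on
dataset = datasets.CocoDetection("path/to/images", "path/to/annotations.json")
dataset = wrap_dataset_for_transforms_v2(dataset)

transform = v2.Compose([v2.RandomHorizontalFlip(p=0.5)])

image, target = dataset[0]
image, target = transform(image, target)  # bounding boxes flip with the image
```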
> What's the proper way to get this as 2D so I can apply transforms? Is there a decode method I'm missing?
Our idea was for the prototype datasets to just return the raw bytes so decoding can happen however the user likes. In #6944 we made a cut and separated datasets from transforms to focus on the latter. In that PR we also removed the decoding transforms that linked the two. Here is the relevant part from the state right before the PR was merged:
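That snippet aside, decoding the raw bytes can be done with the stable `torchvision.io` API. A minimal sketch (the file path is a placeholder; for the COCO sample above, the analogous call would be `decode_image(sample["image"])`):

```python
from torchvision.io import read_file, decode_image

# read the raw, encoded bytes: a 1D uint8 tensor, just like an EncodedImage
raw_bytes = read_file("path/to/image.jpg")

# decode into a (C, H, W) uint8 tensor that spatial transforms can work on
image = decode_image(raw_bytes)
print(image.shape)  # e.g. torch.Size([3, 480, 640])
```

From there, the decoded tensor can be wrapped in a `datapoints.Image` before the transforms are applied.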
Substituting `datapoints` for `features` in your code and removing the `color_space` parameter from the `datapoints.Image` instantiation (this happened in #7120) should be sufficient to get the example working again. As such, I'm closing this. If you have general questions or feedback, the thread in #6753 might also be of interest.
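A sketch of the two changes (the exact `color_space` spelling in the before-snippet is an assumption based on the pre-#7120 prototype API):

```python
import torch

data = torch.rand(3, 224, 224)

# before (as in the blog post):
# from torchvision.prototype import features
# image = features.Image(data, color_space=features.ColorSpace.RGB)

# after: features is renamed to datapoints and color_space is gone (#7120)
from torchvision.prototype import datapoints

image = datapoints.Image(data)
```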