Evaluate models' accuracy when using inference transforms on Tensors (instead of PIL images) #6506

@NicolasHug

Description

The accuracy that we currently report for our trained models comes from evaluations run on PIL images. However, our inference-time transforms also support Tensors.

In the wild, our users might be passing Tensors to the pre-trained models (instead of PIL images), so it's worth figuring out whether the accuracy is consistent between Tensors and PIL.

Note: we do check that all the transforms are consistent between PIL and Tensors, so hopefully the differences are minimal. But models are known to learn interpolation tweaks, in particular the use of anti-aliasing. PIL uses anti-aliasing by default and this is what our models were trained on, but we don't pass antialias=True to the Resize transform, so it might be a source of discrepancy.

As discussed internally with @datumbox, figuring that out is part of the transforms rework plan (although it's relevant outside of the rework as well).

cc @vfdev-5 @datumbox
