Closed
Labels: good first issue, module: models.quantization, module: tests
🚀 The feature
Unlike our test_classification_model tests, test_quantized_classification_model doesn't check the model output against an expected value. This means that if we break a quantized model, we won't be able to detect it:
test/test_models.py, lines 676 to 717 in 2e0949e:
```python
@pytest.mark.skipif(not ('fbgemm' in torch.backends.quantized.supported_engines and
                         'qnnpack' in torch.backends.quantized.supported_engines),
                    reason="This Pytorch Build has not been built with fbgemm and qnnpack")
@pytest.mark.parametrize('model_name', get_available_quantizable_models())
def test_quantized_classification_model(model_name):
    defaults = {
        'input_shape': (1, 3, 224, 224),
        'pretrained': False,
        'quantize': True,
    }
    kwargs = {**defaults, **_model_params.get(model_name, {})}
    input_shape = kwargs.pop('input_shape')

    # First check if quantize=True provides models that can run with input data
    model = torchvision.models.quantization.__dict__[model_name](**kwargs)
    x = torch.rand(input_shape)
    model(x)

    kwargs['quantize'] = False
    for eval_mode in [True, False]:
        model = torchvision.models.quantization.__dict__[model_name](**kwargs)
        if eval_mode:
            model.eval()
            model.qconfig = torch.quantization.default_qconfig
        else:
            model.train()
            model.qconfig = torch.quantization.default_qat_qconfig

        model.fuse_model()
        if eval_mode:
            torch.quantization.prepare(model, inplace=True)
        else:
            torch.quantization.prepare_qat(model, inplace=True)
            model.eval()

        torch.quantization.convert(model, inplace=True)

    try:
        torch.jit.script(model)
    except Exception as e:
        tb = traceback.format_exc()
        raise AssertionError(f"model cannot be scripted. Traceback = {str(tb)}") from e
```
We should adapt the tests (add new ones, or modify/reuse existing ones) to cover this case; a rough sketch of one option follows.
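One way to do this is to mirror test_classification_model: run the quantized model on a seeded random input and compare the output against a checked-in reference. A minimal sketch of the extra lines for test_quantized_classification_model, assuming the _assert_expected helper used by the float classification tests in test/test_models.py is available, and reusing model_name/kwargs/input_shape from the test body; the reference name is hypothetical:

```python
# Sketch only: extra check inside test_quantized_classification_model,
# placed where kwargs still has quantize=True (i.e. around the first
# forward pass).
torch.manual_seed(0)  # seed the RNG so the random input is reproducible
model = torchvision.models.quantization.__dict__[model_name](**kwargs)
model.eval()
x = torch.rand(input_shape)
out = model(x)
# Loose tolerance: quantization adds numerical noise on top of float error.
# '_quantized' suffix is a hypothetical key; any unique name per model works.
_assert_expected(out, model_name + '_quantized', prec=0.1)
```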
Motivation, pitch
Switch the following activation, taken from mobilenet_v3_large, from Hardsigmoid to Hardswish and run the tests:
kwargs["scale_activation"] = nn.Hardsigmoid |
None of the tests will fail, but the model will be completely broken. This shows we have a massive hole in our quantization tests.
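As a complementary check that needs no stored reference files, the quantized output could also be compared against the float model's output on the same input: a gross breakage like the Hardsigmoid to Hardswish swap pushes the outputs far apart, while ordinary quantization noise stays small. A hedged sketch (the pretrained quantized weights and the tolerances are illustrative, not a proposed final test):

```python
import torch
import torchvision

# Sketch only: detect gross quantization breakage by comparing the
# quantized model against its float counterpart on a seeded input.
torch.manual_seed(0)
x = torch.rand(1, 3, 224, 224)
float_model = torchvision.models.quantization.mobilenet_v3_large(
    pretrained=True, quantize=False).eval()
quant_model = torchvision.models.quantization.mobilenet_v3_large(
    pretrained=True, quantize=True).eval()
with torch.no_grad():
    # Tolerances only need to be tight enough to catch a completely
    # broken model, not to enforce exact numerical agreement.
    torch.testing.assert_close(quant_model(x), float_model(x),
                               rtol=0.1, atol=0.1)
```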
cc @pmeier