
Commit f6c6909

Relax XNN quantized conv1d test tolerances (#15666)
### Summary

The test_qs8_conv1d_batchnorm_seq test is periodically failing in CI because its outputs fall outside the comparison tolerance. Looking at the numbers, I believe the cause is an overly tight tolerance for the quantized path. The error reported in https://github.com/pytorch/executorch/actions/runs/19147331216/job/54728231147#step:9:19945, for example, is 0.03, which seems large, but given that the output range is 6, that's roughly 1/200th of the range. With 8-bit quantization, that's not unreasonable. I've updated the tolerance in the test to reflect this.
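A quick back-of-the-envelope check of that reasoning (a sketch; the range and error values are taken from the linked CI failure):

```python
# Sanity check: is a 0.03 error plausible for 8-bit quantization over
# an output range of ~6? (Numbers from the CI failure linked above.)
output_range = 6.0               # approximate range of the conv1d outputs
num_steps = 2**8 - 1             # 255 representable steps with 8-bit quant
step = output_range / num_steps  # size of one quantization step

print(f"one quantization step ~= {step:.4f}")         # ~0.0235
print(f"observed error in steps: {0.03 / step:.2f}")  # ~1.28 steps
```

The observed 0.03 error is only about 1.3 quantization steps, so the new atol of 0.04 (just under two steps) is a reasonable bound for the quantized path.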
1 parent 6014129 · commit f6c6909

File tree

1 file changed: +3 −1 lines changed


backends/xnnpack/test/ops/test_conv1d.py

Lines changed: 3 additions & 1 deletion
@@ -126,7 +126,9 @@ def _test_conv1d(
         # quantized operators to be loaded and we don't want to do that in the test.
         if not skip_to_executorch:
             tester.to_executorch().serialize().run_method_and_compare_outputs(
-                num_runs=10, atol=0.02, rtol=0.02
+                num_runs=10,
+                atol=0.04 if quantized else 1e-03,
+                rtol=0.02 if quantized else 1e-03,
             )

     def test_fp16_conv1d(self):
