Conversation

@GregoryComer
Member

Summary

The test_qs8_conv1d_batchnorm_seq test is periodically failing in CI due to being out of tolerance. Looking at the numbers, I believe the cause is an overly tight tolerance on the quantized output. The error reported in https://github.com/pytorch/executorch/actions/runs/19147331216/job/54728231147#step:9:19945, for example, is 0.03. That seems large, but the output range is 6, so it is roughly 1/200th of the range, and an 8-bit quantization step over that range is about 6/255 ≈ 0.024, meaning the error is on the order of a single quant step. That's not unreasonable for 8-bit quant, so I've updated the tolerance in the test to reflect this.
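
For reference, a minimal sketch of the tolerance arithmetic (the output range and observed error come from the linked CI failure; the 255-level step assumes standard 8-bit affine quantization):

```python
# Sketch of the reasoning above, not test code. The output range and
# observed error are taken from the linked CI failure; the 255-level
# step assumes standard 8-bit affine quantization.
output_range = 6.0      # approximate range of the test's output values
observed_error = 0.03   # max absolute error reported in the failing run

quant_step = output_range / 255  # ~0.0235 per 8-bit quantization level

print(f"error as fraction of range: {observed_error / output_range:.4f}")  # ~0.005, i.e. ~1/200
print(f"error in quant steps: {observed_error / quant_step:.2f}")          # ~1.28
```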

@pytorch-bot

pytorch-bot bot commented Nov 6, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15666

Note: Links to docs will display an error until the docs builds have been completed.

⏳ 5 Pending, 1 Unrelated Failure

As of commit cca4dd3 with merge base 3405317:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Nov 6, 2025
@GregoryComer GregoryComer added the release notes: none label Nov 6, 2025
@GregoryComer GregoryComer changed the title from "Allow conv1d test to vary by 1 quant step from reference" to "Relax XNN quantized conv1d test tolerances" Nov 7, 2025
@GregoryComer GregoryComer marked this pull request as ready for review November 8, 2025 00:06
@GregoryComer GregoryComer merged commit f6c6909 into pytorch:main Nov 8, 2025
141 of 142 checks passed