[Backend Tester] Clean up a few test issues #13258

Open
wants to merge 118 commits into main

Conversation

@GregoryComer (Member) commented Aug 9, 2025:

There are a few broken tests that need cleaning up. Some are failing due to missing portable kernels. These tests are now skipped if any unsupported portable ops remain post-delegation. I also fixed a few other small issues and bumped the element-wise tolerance to reduce false positives. SNR should hopefully catch most blatant correctness issues. The fp16 and quantized tests can generate occasional high element-wise error but still have decent SNR (~60+).
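Roughly, the skip mechanism looks like the sketch below. This is illustrative only: maybe_skip_for_unsupported_ops, the undelegated_op_names argument, and the placeholder op entry are assumed names rather than the actual tester code; the real UNSUPPORTED_PORTABLE_OPS set is the one added in the diff further down.

import unittest

# Placeholder contents; the real entries live in the tester module (see the diff below).
UNSUPPORTED_PORTABLE_OPS: set[str] = {
    "aten::example_missing_op.out",  # hypothetical placeholder, not a real entry
}

def maybe_skip_for_unsupported_ops(undelegated_op_names: set[str]) -> None:
    """Skip the current test if any op left to run on portable kernels is unsupported."""
    remaining = undelegated_op_names & UNSUPPORTED_PORTABLE_OPS
    if remaining:
        raise unittest.SkipTest(
            f"Unsupported portable ops remain after delegation: {sorted(remaining)}"
        )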

@@ -9,6 +9,14 @@

import torch

# Set of unsupported ops that should cause tests to be skipped
UNSUPPORTED_PORTABLE_OPS = {
Contributor commented:
If we are adding a Portable flow, how would these show up there? Some PTE_FAIL?

GregoryComer (Member, Author) replied:

Fair question. They won't be reported as failures there. I think that's mostly fine, since the main goal is to validate backends. That said, we could skip these tests only for flows that are expected to delegate, though backend op libraries might make that trickier. I'll make this change.
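One way that change could look, as a minimal sketch under assumed names (TestFlow, is_delegated, and should_skip are not the actual tester API; UNSUPPORTED_PORTABLE_OPS stands in for the set added in the diff above):

from dataclasses import dataclass

UNSUPPORTED_PORTABLE_OPS: set[str] = set()  # stand-in; populated as in the diff above

@dataclass
class TestFlow:
    name: str
    is_delegated: bool  # True for backend flows, False for a portable-only flow

def should_skip(flow: TestFlow, undelegated_op_names: set[str]) -> bool:
    # In a portable-only flow, a missing kernel is a genuine failure, so never skip.
    if not flow.is_delegated:
        return False
    # In delegating flows, skip only when the leftover (undelegated) ops hit known gaps.
    return bool(undelegated_op_names & UNSUPPORTED_PORTABLE_OPS)

Backend op libraries complicate this because an op can stay undelegated yet still be covered by a backend-provided kernel, which is presumably the wrinkle mentioned above.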

@@ -142,12 +159,15 @@ def build_result(
tester.run_method_and_compare_outputs(
inputs=None if generate_random_test_inputs else inputs,
statistics_callback=lambda stats: error_statistics.append(stats),
atol=1e-1,
@digantdesai (Contributor) commented Aug 12, 2025:
atol seems pretty high for a general default, no?

Quoting the description: "The fp16 and quantized tests can generate occasional high element-wise error but still have decent SNR (~60+)."

Do you know how this is tested on the PyTorch/PyTorch side? An SNR above 60 is good, but outliers are not great, especially for individual ops. If they are expected, I would prefer setting tolerances on a per-test basis, which would let us reason about the math being done in that specific test and whether it warrants a high atol/rtol.

@GregoryComer (Member, Author) commented Aug 12, 2025:

Discussed offline. We will re-evaluate this in the future. I've created #13347 as a backlog task to track.
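For context on the numbers in this thread, below is a minimal sketch of an SNR check in dB; snr_db is an illustrative helper, not the tester's actual statistics code. A model output can sit at roughly 60 dB SNR overall while a handful of elements still exceed a tight element-wise atol, which is the trade-off being weighed here.

import torch

def snr_db(reference: torch.Tensor, actual: torch.Tensor) -> float:
    """Signal-to-noise ratio of `actual` against `reference`, in dB."""
    ref = reference.float()
    noise = ref - actual.float()
    signal_power = torch.mean(ref ** 2)
    noise_power = torch.mean(noise ** 2)
    if noise_power == 0:
        return float("inf")
    return float(10.0 * torch.log10(signal_power / noise_power))

Per-test tolerance overrides, as suggested above, would let fp16 and quantized tests opt into a looser atol explicitly instead of relaxing the global default; re-evaluating the default is tracked in #13347, as noted above.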

Base automatically changed from gh/GregoryComer/113/head to main August 12, 2025 22:47

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Labels: CLA Signed
2 participants