
Timeout 24h for PyTorch inductor tests workflow #4358


Closed
pbchekin wants to merge 3 commits

Conversation

pbchekin (Contributor):

Flex Attention tests run for more than 16 hours.

Signed-off-by: Pavel Chekin <[email protected]>
```diff
@@ -36,7 +36,7 @@ jobs:
     runs-on:
       - linux
       - ${{ inputs.runner_label || 'rolling' }}
-    timeout-minutes: 960
+    timeout-minutes: 1440
```
Contributor:

Is it possible to have a special timeout for Flex Attention, without increasing the timeout for the other workloads?

pbchekin (Contributor, Author) on May 29, 2025:

Yes, changed it to a parameter; a caller workflow will set it to 24h. Going to test it shortly.
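
For reference, a minimal sketch of that approach, assuming the timeout is exposed as a `workflow_call` input. The input name `timeout_minutes`, the job name, and the step body are illustrative assumptions, not the exact names used in this repository:

```yaml
# Hypothetical sketch: expose the job timeout as a reusable-workflow input
# so a caller can raise it for Flex Attention runs only.
on:
  workflow_call:
    inputs:
      runner_label:
        type: string
        default: ''
      timeout_minutes:          # assumed input name
        type: number
        default: 960            # keep the existing 16h default for other workloads

jobs:
  inductor-tests:               # hypothetical job name
    runs-on:
      - linux
      - ${{ inputs.runner_label || 'rolling' }}
    timeout-minutes: ${{ inputs.timeout_minutes }}
    steps:
      - run: echo "Timeout set to ${{ inputs.timeout_minutes }} minutes"
```

A caller workflow could then pass `timeout_minutes: 1440` only for the Flex Attention suite, leaving other workloads at the 16h default.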


pbchekin added 2 commits May 29, 2025 08:08
Signed-off-by: Pavel Chekin <[email protected]>
whitneywhtsang (Contributor):

Closing this PR; we no longer need to increase the timeout: https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/15373802137
Thanks, Pavel!
