Improve workflows for CUDA availability #28
Yup, this …
As for option 2, the self-hosted runner is desired for covering the tests of the GPU backends. Actually, we are providing a GPU server (most likely an RTX 3090) as the self-hosted runner. Then, we can use the preceding …
Concerning the …: every job should have a …. So, you can now add a …. If you're worried instead about the actual …
I see. So now, the key is filtering tests at the Pytest level, instead of in a preceding …
Indeed, previously I was a bit worried about the …
Here I have two alternatives:
1. Since not all tests require CUDA, just mark those for which it is necessary with `@pytest.mark.cuda`, and filter them in `conftest.py` according to the value of the environment variable, within `pytest_runtest_setup`.
2. If you want to run this very same workflow file on a self-hosted runner, then you would have to do something more complex: I don't believe you can access the underlying env vars by default, since for the GitHub runners there are the Variables for that, and it is written explicitly in the docs for the `env` context. However, you could run a first preliminary job, read the environment variable within bash code, and set it in the `$GITHUB_ENV` file; this should make it available in the `env` context (I'm not completely sure myself, I should test it).

Option 1. is much more reasonable to me, because it is more flexible (you could still run some tests everywhere), it's defined in Python, so you're already familiar with it (instead of looking around in GitHub docs), and you can test it offline on various systems, without having to fiddle with GitHub Actions.
Originally posted by @alecandido in #24 (review)
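For concreteness, a minimal sketch of what option 1 could look like. The `CUDA_AVAILABLE` variable name and the example test are hypothetical, chosen only for illustration; the actual variable would be whatever the workflow (or the self-hosted runner) exposes:

```python
# conftest.py
import os

import pytest


def pytest_configure(config):
    # Register the custom marker so pytest does not warn about an unknown marker.
    config.addinivalue_line("markers", "cuda: test requires a CUDA-capable GPU")


def pytest_runtest_setup(item):
    # Skip CUDA-marked tests unless the environment says a GPU is present.
    # "CUDA_AVAILABLE" is a placeholder for whatever variable the runner sets.
    if any(marker.name == "cuda" for marker in item.iter_markers()):
        if os.environ.get("CUDA_AVAILABLE", "0") != "1":
            pytest.skip("CUDA not available on this runner")
```

and in a test module:

```python
import pytest


@pytest.mark.cuda
def test_runs_only_on_gpu_runners():
    # Hypothetical test body: anything that actually needs the GPU backend.
    ...
```

On the GitHub-hosted runners the variable would simply be left unset, so the `cuda` tests get skipped, while the self-hosted GPU runner could export it and run the full suite.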