
pytest.xfail and pytest.skip as context managers #13001

Open
stevapple opened this issue Nov 27, 2024 · 4 comments

Comments

@stevapple

What's the problem this feature will solve?

Sometimes we expect a specific part of a test to fail or to be skipped (perhaps conditionally), usually because there is a blocking issue yet to be resolved.

Describe the solution you'd like

We propose exposing pytest.xfail and pytest.skip (or their equivalents) as context managers, just like pytest.raises, so we could write:

def test_match():
    assert match("hello", "hello")
    with pytest.xfail("external/matcher#123: match doesn't support case-insensitive matching yet"):
        assert match("World", "WORLD")

pytest.skip would serve a similar purpose, for cases where the broken block should not be run at all.
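
To illustrate, the proposed skip form might look like this (a hypothetical API, not something pytest supports today; the block's body would never be executed):

def test_match():
    assert match("hello", "hello")
    # Hypothetical context-manager form of pytest.skip from this proposal:
    # the block below would be reported as skipped and never run.
    with pytest.skip("external/matcher#123: match doesn't support case-insensitive matching yet"):
        assert match("World", "WORLD")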

Alternative Solutions

We can, of course, break a single test into multiple tests and mark them with @pytest.mark.xfail and @pytest.mark.skip, as sketched below. This is less favorable because related assertions may belong together and share setup code (which may not be exposed as a fixture). Most importantly, we may want the test to fail early, before it ever reaches the with pytest.xfail block, and that is not achievable without the help of plugins like pytest-dependency.
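
For comparison, a minimal sketch of that mark-based alternative (test names are made up here; match() is the function from the example above, and any shared setup has to be duplicated or moved into a fixture):

import pytest

def test_match_exact():
    assert match("hello", "hello")

@pytest.mark.xfail(reason="external/matcher#123: match doesn't support case-insensitive matching yet")
def test_match_case_insensitive():
    # Runs as a separate test, even if test_match_exact already failed.
    assert match("World", "WORLD")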

Additional context

We would also like to see a strict= argument on the current pytest.xfail, matching the one on @pytest.mark.xfail.
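
That is, something like this hypothetical call, mirroring the strict= parameter that @pytest.mark.xfail already has:

with pytest.xfail("external/matcher#123: match doesn't support case-insensitive matching yet", strict=True):
    # With a hypothetical strict=True, an unexpectedly passing block
    # (XPASS) would fail the test, like the mark's strict behavior.
    assert match("World", "WORLD")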

@RonnyPfannschmidt
Member

That's something to put into subtests
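
A minimal sketch of that approach, assuming the pytest-subtests plugin (it provides the subtests fixture, whose subtests.test() context manager reports a failure inside the block separately, without aborting the rest of the test):

def test_match(subtests):
    assert match("hello", "hello")
    with subtests.test(msg="case-insensitive"):
        # A failure here is reported for this subtest only; the test
        # function continues past the block.
        assert match("World", "WORLD")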

@stevapple
Author

That's something to put into subtests

Learned something new today! It looks like subtests can solve most of the problem, but we're still limited by the functionality of pytest.xfail, which is almost identical to pytest.skip and completely different from @pytest.mark.xfail: it does not allow strictly checking that the block actually fails, and it does not even run the rest of the test.
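
To illustrate the limitation: in current pytest, the imperative pytest.xfail() raises immediately, so nothing after it runs, and there is no strict-style check that the code actually fails:

def test_match():
    assert match("hello", "hello")
    pytest.xfail("external/matcher#123: match doesn't support case-insensitive matching yet")
    # Never reached: pytest.xfail() raises and marks the whole test as
    # xfailed, regardless of whether this assertion would pass or fail.
    assert match("World", "WORLD")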

@The-Compiler
Member

but we're still limited by the functionality of pytest.xfail, which is almost identical to pytest.skip and completely different from @pytest.mark.xfail: it does not allow strictly checking that the block actually fails, and it does not even run the rest of the test.

...huh, that seems backwards to me? Your original proposal is limited by that, but e.g. using parametrize isn't:

@pytest.mark.parametrize("inp, out", [
    ("hello", "hello"),
    pytest.param(
        "World", "WORLD",
        marks=[pytest.mark.xfail(reason="external/matcher#123: match doesn't support case-insensitive matching yet")],
    ),
])
def test_match(inp, out):
    assert match(inp, out)

@RonnyPfannschmidt
Member

@The-Compiler I believe the problem mentioned is that we cannot mark a subtest as xfail.

A parametrized test can get marks via pytest.param; a subtest can't quite handle them, and tbh we don't quite have a mechanism for that yet.

cc @nicoddemus for pytest-subtests
