
Handle grouped test cases #49


Merged · 4 commits into Kattis:master on Apr 9, 2025

Conversation

@RussellDash332 (Contributor) commented on Oct 12, 2024

What's the problem?

Sometimes, when you're working on a question with partial scores, it is very likely that the test cases are split into groups. The current implementation makes it hard to tell whether we are failing a test case group or not.

Here's an example I encountered while working on this question and this question. Assume that ktcli <problem_id> is my own command alias that runs the submission script with <problem_id>.py attached somewhere.

Normally, if you fail a testcase, the judging will stop, as shown below.

[screenshot: judging stops after the first failed testcase]

For problems with testcase groups, failing a testcase does not necessarily end the entire judgement; there can be multiple behaviors. Usually, judging simply moves on to the next testcase group until there is none left. On other occasions, it keeps judging the remaining test cases anyway.

Therefore, we can't tell whether we're failing a particular test case until judging stops. In fact, we can still get partially accepted with these mistakes present, as shown below (I did not get 100 points for this).

[screenshot: partially accepted verdict despite failing test cases]

So what does this PR do?

Introduce colours (and scores too! #44)

With colours, we can finally tell these testcases apart. Here are two scenarios: when the judging is in progress, and when it's done.

Judging in progress

Yellow question mark for a nice touch 😄

Single group (default)
[screenshot]

Grouped testcases
[screenshot]

AC verdict

Single group (default)
[screenshot]

Grouped testcases
[screenshot]

Non-AC verdict

[screenshot]
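
For a sense of how the colouring works, here is a minimal Python sketch of the idea. It is not the code from this PR; the per-testcase symbols ('.', 'x', '?') and the group layout are assumptions made for the example.

```python
# Minimal sketch of the colouring idea, not the actual kattis-cli code.
# ANSI escape codes distinguish per-testcase states: green for accepted,
# red for failed, yellow for "still being judged".
GREEN = "\033[92m"
RED = "\033[91m"
YELLOW = "\033[93m"
RESET = "\033[0m"

def colour_status(symbol: str) -> str:
    """Map a single testcase symbol to its coloured form (symbols are illustrative)."""
    if symbol == ".":
        return f"{GREEN}.{RESET}"   # accepted testcase
    if symbol == "x":
        return f"{RED}x{RESET}"     # failed testcase
    return f"{YELLOW}?{RESET}"      # not judged yet

def render_groups(groups: list[str]) -> str:
    """Render each testcase group separately, e.g. ['..x', '??'] -> '[..x] [??]'."""
    return " ".join("[" + "".join(colour_status(c) for c in group) + "]" for group in groups)

if __name__ == "__main__":
    print(render_groups(["....", "..x", "????"]))
```

With output like this, a red group stands out immediately even when judging continues into the next group.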

Small addition: auto-retry for out-of-tokens scenario

I personally find myself too lazy to keep re-running the script whenever I run out of tokens, so I made sure that the script auto-retries until it has a token again.

  • Pro: less manual work involved
  • Con: more requests sent as an attempt to submit

Here's a reference to what originally happens when you're out of tokens.

[screenshot: the original out-of-tokens message]

This PR ensures that when the same thing happens, the script simply waits until a token is available again and then proceeds with the usual behavior.
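
As a rough sketch of the auto-retry behavior (the names OutOfTokens and submit, and the 30-second wait, are illustrative assumptions rather than the actual CLI internals):

```python
import time

class OutOfTokens(Exception):
    """Raised when the judge reports no submission tokens are left (illustrative)."""

def submit_with_retry(submit, wait_seconds: int = 30):
    """Keep retrying the submission until a token becomes available again."""
    while True:
        try:
            return submit()  # attempt the submission
        except OutOfTokens:
            # Same as re-running the script by hand, just automated.
            print(f"Out of tokens, retrying in {wait_seconds}s...")
            time.sleep(wait_seconds)
```

This trades some extra request traffic for not having to babysit the script, which matches the pro/con list above.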

@JoelNiemela (Member) left a comment

I added a few comments. Once those are fixed, I think this could be approved.

@niemela linked an issue on Mar 2, 2025 that may be closed by this pull request
@JoelNiemela (Member) left a comment

Looks good! 👍

@JoelNiemela changed the title from "Handle grouped test cases + auto-retry if out of tokens" to "Handle grouped test cases" on Apr 9, 2025
@JoelNiemela merged commit e748f51 into Kattis:master on Apr 9, 2025

Merging this pull request may close the following issue: Provide score in Accepted result