✨ Handle arbitrary metrics order and add testing #60
Conversation
Force-pushed from 9b33056 to 95141c9
Force-pushed from 3e5a912 to 9e8fc13
✅ Test class-based module
✅ Test longer vectors
🔥 Remove file
♻️ Use better test data
🔥 Remove old test data files
🔥 Remove old logs
Force-pushed from 70478c7 to f2b5d28
Certain (valid?) CVSS v4 vector strings were failing validation because their metrics appeared in a different order (see the sketch below).
💬 Fix language
Defines a GitHub Action to run the Jest test suite.
Removes random selection of test cases.
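To make the ordering issue above concrete, here is a minimal sketch in Jest-style JavaScript. It is illustration only, not this repository's code: `CANONICAL_ORDER` and `hasCanonicalOrder()` are hypothetical names, and the check simply asks whether the mandatory CVSS v4.0 base metrics appear in the order the specification defines.

```js
// Illustrative sketch only: CANONICAL_ORDER lists the mandatory CVSS v4.0 base
// metrics in the order given by the specification; hasCanonicalOrder() is a
// hypothetical helper, not part of this repository.
const CANONICAL_ORDER = ["AV", "AC", "AT", "PR", "UI", "VC", "VI", "VA", "SC", "SI", "SA"];

function hasCanonicalOrder(vector) {
  // Drop the "CVSS:4.0/" prefix and keep only the metric keys.
  const keys = vector
    .replace(/^CVSS:4\.0\//, "")
    .split("/")
    .map((part) => part.split(":")[0]);
  const base = keys.filter((k) => CANONICAL_ORDER.includes(k));
  // The base metrics must appear exactly in the specified order.
  return base.every((k, i) => k === CANONICAL_ORDER[i]);
}

test("accepts the spec-defined metric order, rejects a reordered vector", () => {
  const ordered = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N";
  const reordered = "CVSS:4.0/AC:L/AV:N/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N";
  expect(hasCanonicalOrder(ordered)).toBe(true);
  expect(hasCanonicalOrder(reordered)).toBe(false);
});
```

Whether an implementation should accept and normalize a reordered vector, or reject it because the spec fixes the ordering, is exactly the point debated below.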
Force-pushed from 9ff4383 to 10c24fe
Arbitrary metric order is invalid according to the spec... Furthermore, the SIG's current approach to testing is to develop an assessment framework so developers can easily handle this part. This PR works against that effort by bypassing it.
…d-testing" This reverts commit bea2884.
@pandatix OK, I understand that arbitrary metric order is not correct; that makes sense. However, I do not understand how adding tests is a problem or how it "bypasses" any other effort. If this is a reference implementation, why is it wrong to have tests that pass and that can be reused by others? Please elaborate. Also, it is not 500k lines of code, but of test vectors. I am completely fine with removing them from this project, just please give a more detailed rationale. Thanks!
Hey, sorry for the late answer everyone. Adding tests is not directly "bypassing" the SIG efforts; it is actually a good thing. From an external point of view (that of an implementation maintainer or developer), the RedHat implementation is the de facto source of trust: NIST used it for their own calculator, as did many others according to our analysis.

This PR runs in parallel with that effort. One of the requirements we drew up is to provide the test suite in an efficient manner (e.g., a file served by a CDN, a versioned file on GitHub, a GH release asset) and with a limited set of test cases (under 10k if possible) that covers most cases without needing to be exhaustive. Another is to provide both valid and invalid test cases.

To avoid parallel efforts, I think we should wait for the official FIRST.ORG SIG CVSS test data rather than creating our own, propagating it to the community, and then having to reach out to maintainers to update their work one more time (update the source of trust, the consumption strategy, the data model). We should opt for coordination rather than individual efforts that bring noise to the effort of CVSS v4 adoption.
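As a rough illustration of the consumption model described above, the sketch below shows how an implementation's Jest suite might run against such a shared file of valid and invalid vectors once FIRST.ORG publishes it. The data shape, the inline `cases`, and the `isValidVector()` stand-in are all hypothetical, since no official format exists yet.

```js
// Hypothetical consumption of a future shared CVSS v4.0 test-data file
// (e.g. a versioned JSON file on GitHub or a release asset). The schema and
// the isValidVector() stand-in below are illustrative, not an official format.
const cases = [
  { vector: "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N", valid: true },
  // Same metrics with AV and AC swapped: invalid per the specification.
  { vector: "CVSS:4.0/AC:L/AV:N/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N", valid: false },
];

// Placeholder: a real suite would call the implementation under test instead.
const isValidVector = (v) => v.startsWith("CVSS:4.0/AV:");

describe("shared CVSS v4.0 test vectors (hypothetical format)", () => {
  test.each(cases)("$vector", ({ vector, valid }) => {
    expect(isValidVector(vector)).toBe(valid);
  });
});
```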
@pandatix I see. So if I understand correctly, the best course of action is to wait for the official FIRST.ORG SIG CVSS test vectors, right?
Yes, I think so 😄
Changes: