TooTallToby tutorials: unbreak and test. #912
Conversation
FYI @jdegenstein. LMK if you'd rather I not replace your PPP0110 solution (and if you have a fix in mind already).
(force-pushed from 7b9c87f to 3e8bb44)
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff            @@
##              dev     #912   +/-   ##
=======================================
  Coverage   96.93%   96.93%
=======================================
  Files          32       32
  Lines        9460     9471    +11
=======================================
+ Hits         9170     9181    +11
  Misses        290      290

☔ View full report in Codecov by Sentry.
@gumyr WDYT, worth testing TTT examples as part of automated testing?

@fischman FYI TTT examples are (sort of) tested through the benchmarking workflow (in a separate Python script)

Thanks for the heads-up, I was unaware of that before. Do you prefer to keep the structure at HEAD, or are you suggesting I keep this PR moving forward and delete the test_benchmarks copy of the examples, or something else?
@fischman yes, I agree that having two versions is suboptimal; I was just trying to get something up and running for benchmarking purposes while some changes were happening on the OCCT/OCP and build123d side. Yes, I would like that. Can we simply import the models from docs/ into test_benchmarks.py? That would probably be the "cleanest" solution, requiring less work to maintain long-term.

RE: PPP0110 -- I am fine with your solution, I am not married to what I created. Hopefully OCCT 7.9 will make creating this model easier.
@jdegenstein dropped the copies in the test_benchmarks.py file and am now pulling from the docs copy. PTAL.
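For illustration, "pulling from the docs copy" could look roughly like the sketch below; the directory layout, file naming, and helper name are assumptions, not the PR's exact code.

```python
# test_benchmarks.py sketch: execute a TTT script from docs/ rather than keeping a
# duplicated copy here. Paths and names are illustrative assumptions.
import runpy
from pathlib import Path

DOCS_TTT = Path(__file__).parent.parent / "docs" / "assets" / "ttt"

def run_ttt_example(stem: str) -> dict:
    """Run one documentation TTT script and return the globals it produced."""
    return runpy.run_path(str(DOCS_TTT / f"{stem}.py"))
```

Re-using the docs scripts this way keeps the benchmarks and the documentation from drifting apart.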
@fischman looking at this, the way the [benchmarks] category was eliminated results in benchmarks trying to run and then failing because of multiple threads in the tests workflow.
If this is the way it is set up, I would prefer that an appropriate flag be passed to pytest to explicitly disable benchmarking for the tests workflow. The current behavior seems fragile to me.
@jdegenstein good catch! Fixed to exclude benchmarks from the tests workflow.
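One explicit way to do that, rather than relying on marker selection alone, is an opt-in flag in conftest.py. This is a minimal sketch, assuming the benchmark tests carry a `benchmark` marker; the flag name is made up for illustration.

```python
# conftest.py sketch: skip benchmark-marked tests unless they are explicitly requested.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--run-benchmarks",
        action="store_true",
        default=False,
        help="also run tests marked as benchmarks",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-benchmarks"):
        return  # benchmarks explicitly requested; leave collection untouched
    skip = pytest.mark.skip(reason="benchmarks disabled; pass --run-benchmarks to enable")
    for item in items:
        if "benchmark" in item.keywords:
            item.add_marker(skip)
```

With something like this, the tests workflow runs plain `pytest` and only the benchmarking workflow opts in with `--run-benchmarks`.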
(force-pushed from 4677ce1 to 934625b)
- Unbreak the three broken tutorials (fixes gumyr#848).
  - This involved a rewrite of PPP-01-10 because I already had my own solution to that one and I couldn't easily tell what was going wrong with the previous solution.
- Add assertions to all the tutorials so that non-raising means success.
- Add the TTT examples to `test_examples.py` added recently for gumyr#909.
- Also added sympy to development dependencies since one of the TTT examples uses it.
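The added assertions are of this general shape; the geometry, density, and target mass below are placeholders for illustration, not values from any actual TTT solution.

```python
# Sketch of a tutorial-ending check: build the part, then assert its mass so that
# running the script without an exception means the model is (still) correct.
from math import isclose
from build123d import BuildPart, Box

with BuildPart() as example:
    Box(10, 10, 10)  # stand-in for the real TTT geometry

STEEL = 7800e-6  # g/mm^3 (7800 kg/m^3), placeholder density
mass = example.part.volume * STEEL
assert isclose(mass, 7.8, rel_tol=1e-3), f"unexpected mass: {mass:.3f} g"
```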
… instead use the versions from the docs/assets/ttt directory.
(force-pushed from 5157ddb to 789ff73)
@gumyr @jdegenstein any idea about the CI failure? Seems unrelated to this PR.

@fischman I am going to take one more quick look; I don't believe the CI failure is related to anything you did -- just a random problem with the runner, I suspect.

OK, I am merging despite what appears to be a different random CI failure. EDIT: thanks for your contribution @fischman
- Add the TTT examples to `test_examples.py`, added recently for "Examples should be kept working by being run in a test" (#909).
- dos2unix'd the TTT .py files to match the rest of the codebase.
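For context, running the TTT scripts from `test_examples.py` could look like the parametrized sketch below; the discovery path and test name are assumptions.

```python
# test_examples.py sketch: execute each docs TTT script and rely on the assertions
# inside the script itself, so a non-raising run means the example still works.
import runpy
from pathlib import Path
import pytest

TTT_DIR = Path(__file__).parent.parent / "docs" / "assets" / "ttt"
SCRIPTS = sorted(TTT_DIR.glob("*.py"))

@pytest.mark.parametrize("script", SCRIPTS, ids=lambda p: p.stem)
def test_ttt_example_runs(script):
    runpy.run_path(str(script))
```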