micd.py: optimize FFT and A-weighting calculations #33057
Conversation
Thanks for contributing to openpilot! In order for us to review your PR as quickly as possible, check the following:
Do you have before/after on some benchmark? We need that for any optimization PR.
I have not tested the reduced CPU usage of this PR on the device separately; I only measured the time saved, which is approximately 1.5 ms per update on the device. As for CPU usage, I tested it together with another PR (not yet submitted) that replaces np.concatenate with a circular buffer to manage incoming audio samples efficiently, avoiding costly array resizing. Together, the two PRs cut CPU usage roughly in half (a reduction of about 4% CPU usage). A sketch of the circular-buffer idea is below.
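For illustration, a minimal sketch of that circular-buffer approach. The class name, buffer size, and dtype here are assumptions, not the unsubmitted PR's actual code: new frames overwrite the oldest samples in a preallocated array, so no per-callback reallocation or copy-growth happens as it would with np.concatenate.

```python
import numpy as np

BUFFER_SIZE = 16384  # assumed capacity, for illustration only

class AudioRingBuffer:
  def __init__(self, size: int = BUFFER_SIZE):
    self.buf = np.zeros(size, dtype=np.float32)
    self.idx = 0  # next write position; also the oldest sample

  def append(self, samples: np.ndarray) -> None:
    # Assumes each callback delivers fewer samples than the buffer holds.
    n = len(samples)
    end = self.idx + n
    if end <= len(self.buf):
      self.buf[self.idx:end] = samples
    else:
      # Wrap around: fill to the end, then continue from the start.
      split = len(self.buf) - self.idx
      self.buf[self.idx:] = samples[:split]
      self.buf[:n - split] = samples[split:]
    self.idx = end % len(self.buf)

  def ordered(self) -> np.ndarray:
    # One copy per read, oldest to newest; no growth, no reallocation.
    return np.roll(self.buf, -self.idx)
```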
openpilot/selfdrive/test/test_onroad.py (line 59 in 0fa6745)
trigger-jenkins
Not sure what the issue is. The calculation result is cached in the global variable A_WEIGHTING, which is updated once. The A_WEIGHTING array is 32 KB in memory; is this causing the issue?
jenkins passed on a subsequent run w/ no changes, flaky test? @adeebshihadeh
I haven't seen that test fail randomly before. We can run a few more times to confirm though |
Passed 5 subsequent runs across the day. @deanlee can you make it use a cache?
Force-pushed from d47e4e1 to b7516a4
Completed. The function now uses a cache.
Force-pushed from 0104073 to 487328f
trigger-jenkins
* Precomputing weighting
* add comments back
* use cache
* spacing
* spacing
* clean up
* lower by diff
---------
Co-authored-by: Shane Smiskol <[email protected]>
old-commit-hash: 313a282
Precompute A-weighting filter coefficients for reuse in each callback to enhance performance and reduce computational overhead.
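A minimal sketch of the precompute-and-cache approach, assuming functools.cache and illustrative constants (SAMPLE_RATE, FFT_SAMPLES) rather than the PR's exact code: the coefficients are derived once from the standard A-weighting magnitude response at the FFT bin frequencies, and every later callback reuses the cached array.

```python
from functools import cache
import numpy as np

SAMPLE_RATE = 44100  # assumed for illustration
FFT_SAMPLES = 4096   # assumed for illustration

@cache
def get_a_weighting_filter() -> np.ndarray:
  # Standard A-weighting magnitude response evaluated at the FFT bin
  # frequencies; computed on first call, then served from the cache.
  freqs = np.fft.rfftfreq(FFT_SAMPLES, d=1 / SAMPLE_RATE)
  f2 = freqs ** 2
  ra = (12194 ** 2 * f2 ** 2) / ((f2 + 20.6 ** 2)
        * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194 ** 2))
  return ra / np.max(ra)

def a_weighted_spectrum(samples: np.ndarray) -> np.ndarray:
  # samples is expected to hold FFT_SAMPLES values; the per-callback
  # work is now just one FFT and an elementwise multiply.
  return np.abs(np.fft.rfft(samples)) * get_a_weighting_filter()
```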