Replies: 1 comment 1 reply
-
hi @ngocdh Thank you for your idea! You could probably save a bit of CPU usage with this. But apparently the mixing is quite cheap anyway; most of the CPU consumption comes from the OPUS encoding, as stated here: #339 (comment). You could use profiling tools to get actual numbers for the CPU usage of the different functions and determine whether such an optimization makes sense for a very large number of participants. But the added complexity may not be worth the possible savings.
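As a rough illustration of how such a measurement could be done (this is not Jamulus code; `MixForClient` and `EncodeOpus` are hypothetical stand-ins for the real mixing and encoding functions), a standalone C++ timing sketch might look like this:

```cpp
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

// hypothetical stand-ins for the real per-client mixing and encoding functions
void MixForClient ( int /* iClientIdx */ ) { /* mixing work would happen here */ }
void EncodeOpus   ( int /* iClientIdx */ ) { /* OPUS encoding would happen here */ }

int main()
{
    const int iNumClients = 50;
    long long llMixNs = 0, llEncodeNs = 0;

    for ( int i = 0; i < iNumClients; i++ )
    {
        const auto t0 = Clock::now();
        MixForClient ( i );
        const auto t1 = Clock::now();
        EncodeOpus ( i );
        const auto t2 = Clock::now();

        llMixNs    += std::chrono::duration_cast<std::chrono::nanoseconds> ( t1 - t0 ).count();
        llEncodeNs += std::chrono::duration_cast<std::chrono::nanoseconds> ( t2 - t1 ).count();
    }

    std::printf ( "mixing: %lld ns, OPUS encoding: %lld ns\n", llMixNs, llEncodeNs );
    return 0;
}
```

In practice a sampling profiler such as perf or callgrind would give the same breakdown without touching the code.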
-
If I'm not mistaken, CServer mixes all channels for each client one by one before sending the audio data back to the clients.
Maybe if we did a check first to find duplicates (clients that end up with the same mix, e.g. all faders at 100), we could avoid repeating the same mixing work?
To increase the chance of duplicates, we could define larger steps for the faders: instead of volume (and pan) values running from 0, 1, 2, 3 up to 100, we could use 0, 5, 10, 15, etc.
Listeners probably wouldn't notice much difference anyway?
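For illustration only, here is a rough C++ sketch of that idea; it is not the actual CServer code, and all names (`QuantizeGains`, `MixChannels`, etc.) are hypothetical. It quantizes each client's gain vector to steps of 5, uses the quantized vector as a cache key, and computes the mix only once per unique key:

```cpp
#include <cstdint>
#include <map>
#include <vector>

using GainVector = std::vector<uint8_t>; // one quantized gain (0..100) per channel
using MixBuffer  = std::vector<int16_t>; // stands in for one mixed audio block

// quantize 0..100 fader values to steps of 5 so more clients share the same vector
GainVector QuantizeGains ( const std::vector<uint8_t>& vecRawGains )
{
    GainVector vecQuant ( vecRawGains.size() );
    for ( std::size_t i = 0; i < vecRawGains.size(); i++ )
    {
        vecQuant[i] = static_cast<uint8_t> ( ( ( vecRawGains[i] + 2 ) / 5 ) * 5 );
    }
    return vecQuant;
}

// hypothetical: combine all channel blocks weighted by the given gains
MixBuffer MixChannels ( const GainVector& vecGains )
{
    (void) vecGains;
    return MixBuffer ( 128, 0 ); // dummy mixed frame
}

void SendMixesToClients ( const std::vector<std::vector<uint8_t>>& vecClientGains )
{
    std::map<GainVector, MixBuffer> mapMixCache; // one mix per unique gain vector

    for ( const auto& vecRaw : vecClientGains )
    {
        const GainVector vecKey = QuantizeGains ( vecRaw );

        auto it = mapMixCache.find ( vecKey );
        if ( it == mapMixCache.end() )
        {
            // first client with this gain vector: mix once and cache the result
            it = mapMixCache.emplace ( vecKey, MixChannels ( vecKey ) ).first;
        }

        // it->second is the shared mix for this client; the per-client OPUS
        // encode and network send would still happen here
    }
}
```

Pan values would have to be quantized and included in the key in the same way, and whether the per-client quantization and map lookup is actually cheaper than just mixing again would need to be measured.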