We've been using an AudioContext object to analyze a media stream. Chrome plans to change its audio policies in December 2018 and will require a user gesture for an AudioContext to work:

Key Point: If an AudioContext is created prior to the document receiving a user gesture, it will be created in the "suspended" state, and you will need to call resume() after a user gesture is received.

https://developers.google.com/web/updates/2017/09/autoplay-policy-changes#webaudio

This means that even if the user has granted mic access for a given website, audio analysis won't work if, for example, the user has just loaded the page while the mic is being captured.

Do you have an alternative way to get a current mic-level indication?
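For context, a minimal sketch of the AudioContext-based level meter under discussion might look like the following. All names here are illustrative, not hark's actual internals; the level calculation is a plain RMS over the time-domain samples an AnalyserNode exposes.

```javascript
// Pure helper: RMS level of a buffer of unsigned 8-bit samples,
// as returned by AnalyserNode.getByteTimeDomainData (silence = 128).
function rmsLevel(samples) {
  let sumSquares = 0;
  for (const s of samples) {
    const centered = (s - 128) / 128; // map [0, 255] -> [-1, 1]
    sumSquares += centered * centered;
  }
  return Math.sqrt(sumSquares / samples.length);
}

// Browser-only wiring (skipped when mic/Web Audio APIs are unavailable).
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const ctx = new AudioContext(); // may start "suspended" pre-gesture
    const analyser = ctx.createAnalyser();
    ctx.createMediaStreamSource(stream).connect(analyser);
    const buf = new Uint8Array(analyser.fftSize);
    setInterval(() => {
      analyser.getByteTimeDomainData(buf);
      console.log("mic level:", rmsLevel(buf));
    }, 100);
  });
}
```

Under the new policy, the `setInterval` callback keeps firing but the samples stay flat at 128 (silence) until the context is resumed, which is exactly the failure mode described above.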
Yeah, I think that is mostly a usage thing, since hark doesn't instantiate the AudioContext before the constructor is called. It will suck when that is done before a button click, though, but that is Chrome...
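One common mitigation, sketched here under the assumption that the app creates its AudioContext eagerly on page load (the `ctx` and handler names are hypothetical), is to call resume() from a one-shot gesture handler, per the quoted Chrome guidance:

```javascript
// Pure helper: does this context state require a resume() call?
// Chrome's autoplay policy creates pre-gesture contexts as "suspended".
function needsResume(state) {
  return state === "suspended";
}

// Browser-only wiring: resume the context on the first user click.
if (typeof window !== "undefined" && typeof AudioContext !== "undefined") {
  const ctx = new AudioContext(); // hypothetical eagerly-created context
  const unlock = () => {
    if (needsResume(ctx.state)) {
      ctx.resume(); // permitted here: we're inside a gesture handler
    }
    window.removeEventListener("click", unlock);
  };
  window.addEventListener("click", unlock);
}
```

This doesn't help the link-join scenario described below, where no click ever happens; it only unblocks analysis as soon as the first gesture arrives.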
It might not be the only case. Our app uses an identical approach (an AudioContext to calculate the user's mic volume level), and we allow users to join a conversation just by visiting a link. If they already have mic permission (Chrome remembers it, so a user click isn't always required), a user can be actively speaking without ever having made a gesture on the page.