Problem with Utterance-level Prosody extractor of DelightfulTTS #7
Comments
Hi @vietvq-vbee, thanks for sharing, and sorry for the late response. I just updated the repo (v0.2.0) with some improvements, but the prosody modeling, including DelightfulTTS, is still not resolved (WIP). I'll take a look with your insight and update the repo if I can make it work!
@keonlee9420 I think I've found the source of the problem mentioned above. My colleague and I suspect it is because the Conformer layers use ReLU as the activation function (range [0, inf)) while UtteranceLevelProsodyEncoder uses tanh (range [-1, 1]), meaning the maximum value of the UtteranceLevelProsodyEncoder output is still very small compared to the average value of the Conformer layers. We haven't yet run experiments replacing tanh with ReLU or LeakyReLU, since currently we're concatenating them, but I'll report back via this discussion ASAP :)
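The mismatch described above is easy to see numerically. Here is a minimal sketch (using numpy rather than the repo's actual PyTorch modules, and with a hypothetical hidden size of 384) of why a tanh-bounded embedding gets dwarfed when concatenated with ReLU activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Conformer hidden states after ReLU: range [0, inf),
# so activations can grow arbitrarily large.
conformer_out = np.maximum(rng.normal(0.0, 2.0, size=384), 0.0)

# Hypothetical utterance-level prosody embedding after tanh:
# every element is bounded to [-1, 1].
prosody_emb = np.tanh(rng.normal(0.0, 2.0, size=384))

# When the two branches are concatenated (or summed), the tanh branch
# contributes comparatively little signal.
print("max |conformer|:", np.abs(conformer_out).max())  # typically well above 1
print("max |prosody|:  ", np.abs(prosody_emb).max())    # <= 1 by construction
```

The tensor shapes and scales here are illustrative only; the point is that one branch is bounded while the other is not, so the downstream layers can learn to ignore the bounded one.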
Great! Looking forward to seeing how the results turn out :)
I know I'm a bit late to the party, but I'm curious whether either of you was able to resolve the issues with utterance-level prosody extraction. When I train a model, it appears to ignore the utterance prosody embedding altogether.
I've recently been experimenting with your implementation of DelightfulTTS, and the voice quality is awesome. However, I found that the embedding vector output of the Utterance-level Prosody extractor is very small, making that of the Utterance-level Prosody predictor small as well (the L2 norm is roughly 12, and each element of the vector is roughly 0.2 to 0.3). A vector whose elements are close to zero means this layer mostly doesn't add any information at all. Have you found any solution to this?
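For anyone wanting to reproduce this check, here is a minimal sketch of the kind of diagnostic involved: computing the L2 norm and mean absolute element value of an embedding vector. The dimensionality and values below are hypothetical stand-ins, not taken from the repo:

```python
import numpy as np

def embedding_stats(emb):
    """Return (L2 norm, mean absolute element value) for a 1-D embedding."""
    emb = np.asarray(emb, dtype=np.float64)
    return float(np.linalg.norm(emb)), float(np.abs(emb).mean())

# Hypothetical 256-dim prosody embedding with every element at 0.25,
# mirroring the per-element magnitudes reported above.
emb = np.full(256, 0.25)
l2, mean_abs = embedding_stats(emb)
print(l2, mean_abs)  # L2 = 0.25 * sqrt(256) = 4.0, mean |elem| = 0.25
```

In practice you would run the extractor on a batch and log these statistics per step; if the norm stays near zero (or orders of magnitude below the branch it is concatenated with), the embedding is likely being ignored downstream.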