Problem with Utterance-level Prosody extractor of DelightfulTTS #7

Open

vietvq-vbee opened this issue Jan 31, 2022 · 4 comments

@vietvq-vbee
vietvq-vbee commented Jan 31, 2022

I've recently been experimenting with your implementation of DelightfulTTS and the voice quality is awesome. However, I found that the embedding vector output by the Utterance-level Prosody extractor is very small, which makes the output of the Utterance-level Prosody predictor small as well (the L2 norm is roughly 12 and each element is roughly 0.2 to 0.3). A vector whose elements are all close to zero means this layer mostly doesn't add any information at all. Have you found any solution to this?
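For anyone hitting the same thing, one quick way to spot this kind of collapse is to compare the embedding's norm against the norm of the hidden states it is combined with. A minimal NumPy sketch with made-up magnitudes (not the actual model tensors; `hidden` is a hypothetical stand-in for the encoder output):

```python
import numpy as np

# Made-up stand-ins for the real tensors: elements of the utterance
# prosody embedding sit around 0.2-0.3, as reported above, while the
# (hypothetical) encoder hidden states are noticeably larger.
prosody_emb = np.random.default_rng(0).uniform(0.2, 0.3, size=256)
hidden = np.random.default_rng(1).normal(loc=1.5, scale=1.0, size=256)

print("prosody L2:", np.linalg.norm(prosody_emb))
print("hidden  L2:", np.linalg.norm(hidden))
print("ratio:     ", np.linalg.norm(prosody_emb) / np.linalg.norm(hidden))
```

If the ratio stays small throughout training, the downstream layers can largely ignore the prosody branch.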

@keonlee9420
Owner

Hi @vietvq-vbee, thanks for sharing and sorry for the late response. I just updated the repo (v0.2.0) with some improvements, but the prosody modeling, including DelightfulTTS's, is not yet resolved (WIP). I'll take a look with your insight and update the repo if I can make it work!

@vietvq-vbee
Author

@keonlee9420 I think I've found the source of the problem mentioned above. My colleague and I suspect it's because the Conformer layers use ReLU as the activation function (range [0, inf)) while UtteranceLevelProsodyEncoder uses tanh (range [-1, 1]), meaning the maximum value UtteranceLevelProsodyEncoder can output is still very small compared to the average value of the Conformer layers' outputs.

We haven't yet run experiments replacing tanh -> ReLU or LeakyReLU, since we're currently concatenating the two branches, but I'll report back via this discussion ASAP :)
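The activation-range argument above can be sketched numerically. Purely illustrative magnitudes, not the real layer outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pre-activations shared by both branches for comparison.
pre_act = rng.normal(loc=1.0, scale=2.0, size=(4, 256))

relu_out = np.maximum(pre_act, 0.0)   # Conformer branch: range [0, inf)
tanh_out = np.tanh(pre_act)           # prosody-encoder branch: range [-1, 1]

# The tanh branch is hard-bounded, so however large the pre-activations
# grow, its contribution to a concatenation stays capped at 1 per element,
# while the ReLU branch scales freely with the pre-activation magnitude.
print("mean |relu|:", np.abs(relu_out).mean())
print("mean |tanh|:", np.abs(tanh_out).mean())
```

With a bounded branch concatenated to an unbounded one, the network can learn to rely almost entirely on the unbounded side, which matches the tiny embeddings observed here.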

@keonlee9420
Owner

Great! Looking forward to seeing how the results turn out :)

@devangsrammohan

I know I'm a bit late to the party, but I'm curious whether either of you was able to resolve the issues with utterance-level prosody extraction. When I train a model, it appears to ignore the utterance prosody embedding altogether.
