New TTS Model request #3

Recently two papers regarding Transformer TTS have popped up, and I think both are suitable for this repo:
I think both are easy to implement and well suited for this repo.

Comments
Hi @rishikksh20, thanks for the requests! I can see that they fit well with this project. I will look into them and hope that I can merge them into this repo :)
Hi @keonlee9420, DelightfulTTS is similar to the Phone-Level Mixture Density Network, but here, instead of using a complicated GMM-based model, the authors directly use a latent representation for the prosody predictor and prosody encoder. The phoneme-level prosody encoder and utterance-level encoder are similar to that work. I think they simply use a Global Style Token (GST) module as the utterance-level encoder.
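For concreteness, a GST-style utterance-level encoder is roughly the following. This is a minimal PyTorch sketch; the module name, dimensions, and the simple GRU reference encoder are assumptions for illustration, not the DelightfulTTS paper's or this repo's actual implementation.

```python
import torch
import torch.nn as nn

class GSTUtteranceEncoder(nn.Module):
    def __init__(self, n_mels=80, d_ref=128, n_tokens=10, d_style=256, n_heads=4):
        super().__init__()
        # Reference encoder: summarize the whole utterance mel into a single vector.
        self.ref_rnn = nn.GRU(n_mels, d_ref, batch_first=True)
        # Bank of learnable global style tokens.
        self.style_tokens = nn.Parameter(torch.randn(n_tokens, d_style))
        self.query_proj = nn.Linear(d_ref, d_style)
        self.attn = nn.MultiheadAttention(d_style, n_heads, batch_first=True)

    def forward(self, mel):                                   # mel: (B, T, n_mels)
        _, h = self.ref_rnn(mel)                              # h: (1, B, d_ref)
        query = self.query_proj(h[-1]).unsqueeze(1)           # (B, 1, d_style)
        tokens = torch.tanh(self.style_tokens)
        tokens = tokens.unsqueeze(0).expand(mel.size(0), -1, -1)
        style, _ = self.attn(query, tokens, tokens)           # attend over style tokens
        return style.squeeze(1)                               # (B, d_style) utterance embedding
```

At synthesis time the reference mel can come from any utterance, which is why this path can be used without a separate utterance-level predictor.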
DelightfulTTS learns phoneme-level prosody implicitly, whereas ...
I think DelightfulTTS is an all-in-one solution: it uses a non-autoregressive architecture with conformer blocks, and it has both utterance-level and phoneme-level predictors as well.
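Roughly, the way the two prosody embeddings would feed the non-autoregressive stack might look like this. It's a hedged sketch with placeholder module names and dimensions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class ProsodyInjection(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.utt_proj = nn.Linear(d_model, d_model)
        self.phn_proj = nn.Linear(d_model, d_model)

    def forward(self, text_hidden, utt_emb, phn_prosody):
        # text_hidden:  (B, N, d_model)  phoneme/text encoder output
        # utt_emb:      (B, d_model)     utterance-level prosody (e.g. from a GST encoder)
        # phn_prosody:  (B, N, d_model)  extracted (training) or predicted (inference)
        h = text_hidden + self.utt_proj(utt_emb).unsqueeze(1)  # broadcast over phonemes
        h = h + self.phn_proj(phn_prosody)                     # per-phoneme prosody
        return h  # then duration/pitch predictors, length regulation, and the decoder
```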
Thank you for the summary. The DelightfulTTS model seems worth a try, as you described. I will try it and share the results through an update soon!
Hi @keonlee9420, were you able to train DelightfulTTS successfully?
Yes, but it shows an overfitting issue. I guess this issue originates from the limited capacity of the prosody predictor, since I can confirm that the prosody embedding extracted by the prosody extractor does improve expressiveness, including the validation loss.
Did you train the predictor and extractor simultaneously, or did you train the extractor for 100k steps first, then pause it and start predictor training with teacher forcing, as mentioned in the AdaSpeech paper?
Because in my case I made some modifications to the architecture: I used the same extractors as mentioned in the DelightfulTTS paper, but I am not using any predictor at the utterance level, because I want to use it similarly to GST-Tacotron by passing an external reference mel. For the phoneme-level predictor I used a predictor architecture similar to the original AdaSpeech's, which is similar to the duration and pitch predictors. And I train the phoneme-level extractor for 100k steps, then stop it, and then start predictor training.
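In outline, that two-stage schedule would look something like the following. The model attributes and batch keys are made-up names; this is a sketch of the idea, not actual code from either repo.

```python
import torch.nn.functional as F

EXTRACTOR_ONLY_STEPS = 100_000

def training_step(model, batch, step):
    text_hidden = model.text_encoder(batch["phonemes"])
    # Per-phoneme prosody targets extracted from the ground-truth mel (teacher signal).
    prosody_gt = model.prosody_extractor(batch["mel"], batch["durations"])

    if step < EXTRACTOR_ONLY_STEPS:
        # Stage 1: learn the extractor jointly with the decoder; no predictor yet.
        mel_pred = model.decode(text_hidden, prosody_gt)
        return F.l1_loss(mel_pred, batch["mel"])

    # Stage 2: stop updating the extractor, train the predictor to match its
    # output, and keep teacher-forcing the decoder with the extracted prosody.
    model.prosody_extractor.requires_grad_(False)
    prosody_pred = model.prosody_predictor(text_hidden)
    predictor_loss = F.mse_loss(prosody_pred, prosody_gt.detach())
    mel_pred = model.decode(text_hidden, prosody_gt.detach())
    return F.l1_loss(mel_pred, batch["mel"]) + predictor_loss
```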
Ah, thanks for sharing. I trained jointly, without any detach or schedule, from the first step. So what you mean is ...
I suggest 1.
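In case it helps later readers, the practical difference between fully joint training and a detached target comes down to where the stop-gradient sits. A toy illustration with stand-in tensors (illustrative only, not code from this repo):

```python
import torch
import torch.nn.functional as F

prosody_from_extractor = torch.randn(2, 30, 256, requires_grad=True)  # "extractor" output
prosody_from_predictor = torch.randn(2, 30, 256, requires_grad=True)  # "predictor" output

# (a) predictor chases a stopped-gradient target: the extractor is shaped only
#     by the decoder's reconstruction loss, not by the predictor's error.
loss_detached = F.mse_loss(prosody_from_predictor, prosody_from_extractor.detach())

# (b) fully joint: the predictor's error also back-propagates into the extractor,
#     which changes what the extractor ends up learning.
loss_joint = F.mse_loss(prosody_from_predictor, prosody_from_extractor)
```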
@keonlee9420 In your experience, which performs better when you have only 20 hours of speech data: a normal Transformer encoder or a Conformer?
As per this article, the Microsoft TTS API is built on DelightfulTTS.
Can you share your code?
@rishikksh20 Does this refer to the text encoder output?
yes |
@rishikksh20 After 100k steps, do the params of the prosody extractor keep updating, or are they just frozen?
Is there any confirmation on the quality of the Transformer encoder vs. the Conformer? I found that the conformer in DelightfulTTS is a bit different from the ASR one.
@v-nhandt21 Yes, the conformer in TTS is a modified version of the ASR one.
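For reference, a standard ASR-style Macaron conformer block looks roughly like this. The relative positional self-attention from the original Conformer is omitted for brevity, and the DelightfulTTS-specific modifications are not reproduced here.

```python
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, kernel_size=7, ff_mult=4):
        super().__init__()
        self.ff1 = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, ff_mult * d_model), nn.SiLU(),
            nn.Linear(ff_mult * d_model, d_model),
        )
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d_model)
        self.conv = nn.Sequential(
            nn.Conv1d(d_model, 2 * d_model, 1), nn.GLU(dim=1),           # pointwise + GLU
            nn.Conv1d(d_model, d_model, kernel_size,
                      padding=kernel_size // 2, groups=d_model),          # depthwise
            nn.BatchNorm1d(d_model), nn.SiLU(),
            nn.Conv1d(d_model, d_model, 1),                               # pointwise
        )
        self.ff2 = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, ff_mult * d_model), nn.SiLU(),
            nn.Linear(ff_mult * d_model, d_model),
        )
        self.out_norm = nn.LayerNorm(d_model)

    def forward(self, x):                          # x: (B, T, d_model)
        x = x + 0.5 * self.ff1(x)                  # Macaron half-step FFN
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a)[0]              # self-attention
        c = self.conv_norm(x).transpose(1, 2)      # (B, d_model, T) for Conv1d
        x = x + self.conv(c).transpose(1, 2)       # convolution module
        x = x + 0.5 * self.ff2(x)                  # second half-step FFN
        return self.out_norm(x)
```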