Replies: 3 comments 10 replies
-
Yeah, that makes sense. However, I'm not sure it would make a huge difference in performance. Any experience with that?
-
Yes, it does. For now, as a workaround, I just add the same small fine-tuning dataset N times; the more copies I add, the quicker training reaches a high SECS.
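The duplication workaround can be sketched as follows. The names and config dicts are illustrative placeholders, not the actual Coqui TTS dataset config objects; the point is just that repeating the small dataset's entry N times makes its samples N times more likely to be drawn when all datasets are concatenated:

```python
# Illustrative sketch of the workaround (dicts stand in for dataset configs).
main_dataset = {"name": "main", "path": "data/main"}        # large corpus
finetune_dataset = {"name": "ft", "path": "data/finetune"}  # small corpus

N = 5  # oversampling factor; higher N weights the fine-tuning set more

# Repeat the fine-tuning config N times before passing the list to training.
datasets = [main_dataset] + [finetune_dataset] * N
```

The downside is that the effective weight is baked into the config list by hand, which is what motivates a proper `weight` field.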
-
It would require some changes because right now the language sampler and the speaker sampler are mutually exclusive; see TTS/TTS/tts/models/base_tts.py Lines 357 to 363 in c63bb48
-
Hello,
For fine-tuning, it seems relevant to overweight a smaller dataset with respect to the main data used to train the model.
My suggestion would be to add a weight field to the dataset config list and take it into account in the load_tts_samples() function.
It's simple and should do the job.
@erogol Any insights?
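A minimal sketch of what the proposal could look like. `load_tts_samples()` is the real Coqui TTS entry point, but the `weight` field and the `load_weighted_samples` helper below are hypothetical, with integer weights and a pluggable loader to keep the example self-contained:

```python
def load_weighted_samples(dataset_configs, loader):
    """Build a flat training list, repeating each dataset's samples
    cfg["weight"] times (default 1). Integer weights keep the sketch simple;
    a real implementation could instead feed weights to a sampler."""
    samples = []
    for cfg in dataset_configs:
        weight = cfg.get("weight", 1)
        samples.extend(loader(cfg) * weight)
    return samples


# Usage with a stand-in loader that returns fake sample IDs per dataset.
configs = [
    {"name": "main", "weight": 1},      # large pre-training corpus
    {"name": "finetune", "weight": 4},  # small set, counted 4x
]
fake_loader = lambda cfg: [f"{cfg['name']}_{i}" for i in range(2)]
train_samples = load_weighted_samples(configs, fake_loader)
```

Each fine-tuning sample then appears four times as often as a main-corpus sample in the training list, without duplicating config entries by hand.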