Replies: 1 comment
-
Hi, sorry for the late response, I missed this. If still relevant, which training set did you use? Did you monitor your losses? Your bitrates are very weird.
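To make "monitor your losses" concrete: the quantity to watch is the rate-distortion objective, where both the bpp term and the MSE term should decrease over training. Here is a minimal sketch of that loss, assuming the standard setup from the CompressAI example training script (with `lmbda` set to the 0.01 value mentioned below):

```python
import math
import torch
import torch.nn as nn

class RateDistortionLoss(nn.Module):
    """Rate-distortion objective: bpp + lmbda * 255^2 * MSE,
    following the CompressAI example training script."""

    def __init__(self, lmbda=0.01):
        super().__init__()
        self.mse = nn.MSELoss()
        self.lmbda = lmbda

    def forward(self, output, target):
        N, _, H, W = target.size()
        num_pixels = N * H * W
        # Estimated bitrate of the quantized latents, computed from
        # the entropy model's likelihoods.
        bpp_loss = sum(
            torch.log(likelihoods).sum() / (-math.log(2) * num_pixels)
            for likelihoods in output["likelihoods"].values()
        )
        mse_loss = self.mse(output["x_hat"], target)
        loss = bpp_loss + self.lmbda * 255 ** 2 * mse_loss
        return {"loss": loss, "bpp_loss": bpp_loss, "mse_loss": mse_loss}
```

If either term diverges or plateaus early, the final bitrates and PSNR will be off regardless of the number of epochs.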
-
Hi, @fracape
I trained the factorized model for 100 epochs, but the inference results on the Kodak dataset are not consistent.
"kodak-{number}" is just a model number: the former is the 3.0M-parameter model and the latter is the 7.0M-parameter model. Lambda is 0.01, targeting quality 3.
Is this kind of variance in results usual? Do I have to train several models and keep the best one, as was done for the pretrained models?
Or did the models just not converge?
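For reference, my inference on Kodak roughly follows the pattern below (a sketch; the checkpoint filename and image path are placeholders, and I assume the checkpoint was saved by the example training script with a "state_dict" key):

```python
import torch
from PIL import Image
from torchvision import transforms
from compressai.zoo import bmshj2018_factorized

# Placeholder checkpoint path; assumed to be saved by the example
# training script with the model weights under a "state_dict" key.
checkpoint = torch.load("checkpoint_best_loss.pth.tar", map_location="cpu")

net = bmshj2018_factorized(quality=3, pretrained=False)
net.load_state_dict(checkpoint["state_dict"])
net.update(force=True)  # rebuild the entropy coder's CDF tables before compress()
net.eval()

# Placeholder path to one Kodak image.
img = Image.open("kodak/kodim01.png").convert("RGB")
x = transforms.ToTensor()(img).unsqueeze(0)

with torch.no_grad():
    out_enc = net.compress(x)
    out_dec = net.decompress(out_enc["strings"], out_enc["shape"])

num_pixels = x.size(0) * x.size(2) * x.size(3)
bpp = sum(len(s[0]) for s in out_enc["strings"]) * 8.0 / num_pixels
mse = torch.mean((x - out_dec["x_hat"].clamp(0, 1)) ** 2)
psnr = -10 * torch.log10(mse)
print(f"bpp: {bpp:.4f}, PSNR: {psnr:.2f} dB")
```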
Thanks for your great work.