## [2.3.8]
### Fixed
- Fixed the problem identified in issue #925 that caused learning rate warmup to fail in some instances when doing continued training.
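The entry above concerns learning rate warmup interacting badly with resumed training. As an illustration of the failure mode (not Sockeye's actual code), the sketch below shows a linear warmup schedule that must be advanced to the checkpointed update count on resume; if the step counter restarted at zero, warmup would incorrectly re-run.

```python
def warmup_lr(base_lr: float, step: int, warmup_steps: int) -> float:
    """Linear warmup: scale the learning rate from 0 to base_lr
    over the first warmup_steps updates, then hold it constant."""
    if warmup_steps <= 0 or step >= warmup_steps:
        return base_lr
    return base_lr * step / warmup_steps

# When continuing training, the step count must be restored from the
# checkpoint; the values below are hypothetical.
resumed_step = 500
lr = warmup_lr(base_lr=0.0002, step=resumed_step, warmup_steps=1000)
```

With the restored step count, the schedule continues mid-warmup instead of dropping the learning rate back toward zero.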
## [2.3.7]
### Changed
- Use the dataclass module to simplify Config classes. No functional change.
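For readers unfamiliar with the refactoring pattern referenced above: `dataclasses` (standard library since Python 3.7) auto-generates `__init__`, `__repr__`, and `__eq__`, removing hand-written boilerplate from config classes. The class names and fields below are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class OptimizerConfig:
    # Field defaults replace a hand-written __init__ with keyword arguments.
    name: str = "adam"
    lr: float = 0.0002

@dataclass
class TrainingConfig:
    batch_size: int = 4096
    # Mutable defaults must go through default_factory.
    optimizer: OptimizerConfig = field(default_factory=OptimizerConfig)

config = TrainingConfig(batch_size=2048)
```

`asdict(config)` also gives a plain-dict view for free, which is convenient for serializing configs to disk.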
## [2.3.6]
### Fixed
- Fixed the problem identified in issue #890, where the lr_scheduler did not behave as expected when continuing training. The lr_scheduler is kept as part of the optimizer, but the optimizer is not saved when saving state, so every time training was restarted a new lr_scheduler was created with initial parameter settings. Fixed by saving and restoring the lr_scheduler separately.
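The general technique behind this fix can be sketched as follows. This is a toy scheduler, not Sockeye's implementation: the point is that a scheduler carries mutable state (here, the update counter) which must be checkpointed and restored explicitly if it is not persisted with the optimizer.

```python
import json

class InverseSqrtScheduler:
    """Toy inverse-sqrt schedule with warmup. Its step counter is state
    that would be silently reset if only model/optimizer weights were
    checkpointed."""

    def __init__(self, base_lr: float, warmup_steps: int):
        self.base_lr = base_lr
        self.warmup_steps = warmup_steps
        self.step_count = 0

    def step(self) -> float:
        self.step_count += 1
        scale = min(self.step_count ** -0.5,
                    self.step_count * self.warmup_steps ** -1.5)
        return self.base_lr * scale

    def state_dict(self) -> dict:
        return {"step_count": self.step_count}

    def load_state_dict(self, state: dict) -> None:
        self.step_count = state["step_count"]

# Save the scheduler state separately from the optimizer ...
sched = InverseSqrtScheduler(base_lr=0.001, warmup_steps=100)
for _ in range(50):
    sched.step()
saved = json.dumps(sched.state_dict())

# ... and restore it when training continues, so the schedule resumes
# from update 50 instead of re-running from initial settings.
resumed = InverseSqrtScheduler(base_lr=0.001, warmup_steps=100)
resumed.load_state_dict(json.loads(saved))
```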
## [2.3.5]
### Fixed
- Fixed issue with LearningRateSchedulerPlateauReduce.__repr__ printing out num_not_improved instead of reduce_num_not_improved.
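The bug class here is a `__repr__` that formats the wrong attribute: a runtime counter (`num_not_improved`) instead of the configured threshold (`reduce_num_not_improved`). The sketch below is an illustrative reconstruction, not Sockeye's actual class.

```python
class LearningRateSchedulerPlateauReduce:
    """Illustrative sketch: reduce the learning rate after the metric
    fails to improve for reduce_num_not_improved checkpoints."""

    def __init__(self, reduce_factor: float, reduce_num_not_improved: int):
        self.reduce_factor = reduce_factor
        self.reduce_num_not_improved = reduce_num_not_improved
        self.num_not_improved = 0  # runtime counter, not the configured limit

    def __repr__(self) -> str:
        # Before the fix, a repr like this referenced self.num_not_improved
        # (the counter, usually 0) instead of the configured threshold.
        return ("LearningRateSchedulerPlateauReduce(reduce_factor=%.2f, "
                "reduce_num_not_improved=%d)"
                % (self.reduce_factor, self.reduce_num_not_improved))
```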
## [2.3.4]
### Fixed
- Fixed a dtype mismatch in beam search when translating with `--dtype float16`.
## [2.3.3]
### Changed
- Upgraded the SacreBLEU dependency of Sockeye to a newer version (1.4.14).