## [1.18.20]

### Changed
- Transformer parametrization flags (model size, number of attention heads, feed-forward layer size) can now optionally be defined separately for the encoder and decoder. For example, to use a different transformer model size for the encoder, pass `--transformer-model-size 1024:512`.
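As a minimal sketch of how these colon-separated `encoder:decoder` values might be combined in a training command (the data and output paths are placeholders, and the surrounding arguments are illustrative rather than a complete recipe):

```bash
# Hypothetical invocation: the encoder uses a model size of 1024 with 16
# attention heads, the decoder 512 with 8 heads; the feed-forward flag
# follows the same encoder:decoder pattern. Paths are placeholders.
python -m sockeye.train \
    --source train.de --target train.en \
    --validation-source dev.de --validation-target dev.en \
    --encoder transformer --decoder transformer \
    --transformer-model-size 1024:512 \
    --transformer-attention-heads 16:8 \
    --transformer-feed-forward-num-hidden 4096:2048 \
    --output model_dir
```

Note that each head count must evenly divide the corresponding model size (16 into 1024, 8 into 512 above).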
## [1.18.19]

### Added

- LHUC is now supported in transformer models.
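LHUC (Learning Hidden Unit Contribution, Vilar 2018) learns a per-unit scaling of hidden activations, typically 2·sigmoid(a), to adapt a trained model to new data. Assuming the `--lhuc` training flag introduced for RNN models in an earlier release now also applies to transformer components, a fine-tuning run might look like the following; the flag value and all paths are assumptions:

```bash
# Hypothetical fine-tuning invocation: the "--lhuc all" component selector
# and continuing from a base model's --params are assumptions here.
python -m sockeye.train \
    --source tune.de --target tune.en \
    --validation-source dev.de --validation-target dev.en \
    --encoder transformer --decoder transformer \
    --lhuc all \
    --params base_model/params.best \
    --output tuned_model_dir
```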
## [1.18.18]

### Added
- [Experimental] Introducing the image captioning module. Supported model type: ConvNet encoder with Sockeye NMT decoders. The module also includes a feature extraction script, an image-text iterator that loads features, training and inference pipelines, and a visualization script that loads images and captions. See this tutorial for usage. This module is experimental; its maintenance is therefore not fully guaranteed.
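As a rough sketch of the pipeline described above (the entry-point names, flags, and paths here are assumptions inferred from the description, not confirmed commands; the tutorial is the authoritative reference):

```bash
# Hypothetical end-to-end flow; entry points and flags are assumptions.
# 1. Extract ConvNet image features once, up front.
python -m sockeye.image_captioning.extract_features \
    --image-root images/ --input image_list.txt --output features/

# 2. Train a captioning model on (image feature, caption) pairs.
python -m sockeye.image_captioning.train \
    --source image_list.txt --target captions.txt \
    --validation-source dev_images.txt --validation-target dev_captions.txt \
    --output caption_model

# 3. Generate captions for new images.
python -m sockeye.image_captioning.captioner \
    --models caption_model < test_images.txt > test_captions.txt
```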