1.18.20

@fhieber fhieber released this 27 May 19:01
163dec8

[1.18.20]

Changed

  • Transformer parametrization flags (model size, number of attention heads, feed-forward layer size) can now
    optionally be defined separately for the encoder and the decoder. For example, to use a different transformer
    model size for the encoder, pass --transformer-model-size 1024:512.
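The colon-separated flag value above splits into an encoder setting and a decoder setting, with a single value applying to both sides. A minimal sketch of that parsing logic (`parse_encoder_decoder` is a hypothetical helper for illustration, not Sockeye's actual implementation):

```python
def parse_encoder_decoder(value: str) -> tuple:
    """Parse a 'encoder:decoder' flag value into per-side integers.

    A single value (no colon) is applied to both encoder and decoder.
    Hypothetical helper mirroring the colon-separated flag syntax.
    """
    parts = [int(p) for p in value.split(":")]
    if len(parts) == 1:
        # One value given: use it for both encoder and decoder.
        parts = parts * 2
    if len(parts) != 2:
        raise ValueError("expected 'size' or 'encoder_size:decoder_size'")
    return tuple(parts)

# '1024:512' -> encoder size 1024, decoder size 512
enc_dec = parse_encoder_decoder("1024:512")
# '512' -> 512 for both sides
both = parse_encoder_decoder("512")
```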

[1.18.19]

Added

  • LHUC is now supported in transformer models
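LHUC (Learning Hidden Unit Contribution) adapts a trained model by re-scaling each hidden unit with a learned per-unit amplitude, conventionally 2·sigmoid(a) so that a = 0 leaves the unit unchanged. A minimal NumPy sketch of that scaling (an illustration of the technique, not Sockeye's code):

```python
import numpy as np

def lhuc(hidden: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Scale each hidden unit by 2 * sigmoid(a).

    `a` is a learned per-unit parameter; the scale ranges over (0, 2),
    and a = 0 yields a scale of exactly 1 (identity).
    """
    scale = 2.0 / (1.0 + np.exp(-a))
    return hidden * scale

h = np.array([1.0, -2.0, 0.5, 3.0])
identity_out = lhuc(h, np.zeros_like(h))   # a = 0: unchanged
damped_out = lhuc(h, np.full_like(h, -5))  # large negative a: units suppressed
```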

[1.18.18]

Added

  • [Experimental] Introducing the image captioning module. Supported model type: ConvNet encoder with Sockeye NMT decoders.
    The module also includes a feature extraction script, an image-text iterator that loads features, training and inference
    pipelines, and a visualization script that loads images and captions.
    See this tutorial for usage. This module is experimental; its maintenance is therefore not fully guaranteed.