Master's Thesis Project: Text generation with GANs and Autoencoders

postnubilaphoebus/Lost-in-Latent-Space

This repository contains the code for my master's thesis at the University of Groningen:

Lost in Latent Space: Exploring GAN Text Generation by Leveraging Autoencoder Representations
I systematically investigated LaTextGAN (Donahue and Rumshisky, https://arxiv.org/abs/1810.06640).
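LaTextGAN's key idea is to run the adversarial game on continuous latent vectors produced by a sentence autoencoder rather than on discrete tokens, so gradients flow from critic to generator by ordinary backpropagation. Below is a minimal PyTorch sketch of that setup; the module names, layer sizes, and dimensions are illustrative assumptions, not the architecture used in this repository:

```python
import torch
import torch.nn as nn

# Illustrative sizes; the thesis models may use different dimensions.
LATENT_DIM, NOISE_DIM, HIDDEN = 128, 64, 256

class Generator(nn.Module):
    """Maps Gaussian noise to fake latent codes living in the
    same space as the autoencoder's sentence encodings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, LATENT_DIM),
        )

    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    """Scores latent codes; trained to separate real (encoded) from
    fake (generated) codes. No sigmoid: a WGAN critic outputs an
    unbounded score rather than a probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, code):
        return self.net(code)

# Because both networks operate on continuous vectors rather than
# discrete tokens, the generator receives gradients from the critic
# directly; generated codes are later decoded back into sentences
# by the autoencoder's decoder.
```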

Abstract: Generative adversarial networks (GANs) are powerful tools for data generation, but applying them to language is challenging because backpropagating through discrete sequences is difficult. Previous solutions based on the REINFORCE algorithm yield results comparable to maximum-likelihood estimation (MLE), yet are difficult to implement and unstable to train. Donahue and Rumshisky propose an alternative method called LaTextGAN, which trains a GAN on latent representations learned by an autoencoder, enabling backpropagation between the generator and discriminator. In this study, we investigate this technique by exploring algorithmic and architectural modifications on both the autoencoder and the GAN side. For autoencoders, we examine convolutional and recurrent architectures, as well as denoising and regularization techniques, to promote generalization. We compare two variants of the Wasserstein GAN (WGAN), WGAN with spectral normalization and WGAN with gradient penalty, along with techniques to prevent mode collapse. We find that input-dependent denoising harms autoencoder reconstruction, while input-independent regularization improves it. Furthermore, we propose that convolutional decoders may be ill-suited for this task, as they lack the local sequence-modeling ability required to reconstruct meaningful sentences from artificially generated latent codes. Regarding GANs, methods that prevent mode collapse bring artificial codes closer to real codes in the latent space but adversely affect decoding performance. The best-performing GAN models sometimes produce syntactically correct sentences, albeit ones lacking real-world meaning or containing uncommon words. Lastly, WGAN with gradient penalty appears more suitable for this task than spectral normalization, which becomes unstable for deeper models. We conclude that achieving true generalization in the latent space is challenging, and similar latent codes do not necessarily decode into similar sentences. To improve LaTextGAN, we advise that future research focus on learning more meaningful latent representations for textual autoencoders.
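As a point of reference for the WGAN comparison above: WGAN with gradient penalty (WGAN-GP, Gulrajani et al., 2017) enforces the critic's Lipschitz constraint by penalizing deviations of its gradient norm from 1 on random interpolations between real and generated samples. The following is a generic PyTorch sketch of that penalty, not the thesis implementation; `critic`, `real_codes`, and `fake_codes` are placeholder names:

```python
import torch

def gradient_penalty(critic, real_codes, fake_codes, lambda_gp=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1
    on random interpolations of real and fake latent codes."""
    batch_size = real_codes.size(0)
    # One interpolation coefficient per sample, broadcast over features.
    eps = torch.rand(batch_size, 1, device=real_codes.device)
    interp = (eps * real_codes + (1.0 - eps) * fake_codes).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,   # keep the graph so the penalty is differentiable
        retain_graph=True,
    )[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

The penalty term is added to the critic's loss each iteration; unlike spectral normalization, it constrains the critic only along the data manifold rather than reparameterizing every weight matrix.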
