Autoencoders belong to the family of deep generative models. Variational autoencoders (VAEs) and generative adversarial networks (GANs) are deep generative models that have been applied in NLP to discover language structure through the process of generating realistic sentences.
For example, an RNN-based VAE was proposed to generate more diverse and well-formed sentences than standard sentence autoencoders. Other models have also allowed structured variables to be incorporated into the latent code. My understanding is that once the VAE is trained to generate realistic sentences, we can feed a sentiment label as input and it will produce sentences that reflect that particular sentiment.
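To make the idea concrete, here is a minimal sketch of a VAE whose decoder is conditioned on a structured sentiment code. This is a toy illustration only: the `ToyVAE` class, the linear encoder/decoder, and the use of vectors in place of token sequences are my own assumptions, whereas the real models use RNN encoders and decoders over words.

```python
# Toy VAE sketch (assumption: linear layers on dense vectors stand in for the
# RNN encoder/decoder over tokens used by the actual models).
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    def __init__(self, input_dim=32, latent_dim=8, num_sentiments=2):
        super().__init__()
        # Encoder outputs the mean and log-variance of the latent distribution.
        self.encoder = nn.Linear(input_dim, 2 * latent_dim)
        # Decoder sees the latent code plus a one-hot sentiment label,
        # so the sentiment is a structured variable in the latent code.
        self.decoder = nn.Linear(latent_dim + num_sentiments, input_dim)

    def forward(self, x, sentiment_onehot):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.decoder(torch.cat([z, sentiment_onehot], dim=-1))
        # Loss = reconstruction error + KL divergence to the standard normal prior.
        recon_loss = ((recon - x) ** 2).sum(dim=-1).mean()
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1).mean()
        return recon, recon_loss + kl

vae = ToyVAE()
x = torch.randn(4, 32)                                 # pretend sentence embeddings
sentiment = torch.eye(2)[torch.tensor([0, 1, 0, 1])]   # one-hot sentiment labels
recon, loss = vae(x, sentiment)
```

After training, you would fix the sentiment input and sample `z` from the prior to generate sentences with that sentiment.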
A generative adversarial network (GAN) is composed of two competing networks: a generator and a discriminator. GANs have also been used to generate realistic text data. My understanding is that the generator's objective is to produce sample texts that fool the discriminator, while the discriminator's role is to distinguish real data from generated samples. As you can imagine, at the beginning of training the generator is terrible at producing realistic samples and is constantly rejected by the discriminator. However, as training continues, the generator eventually learns to generate samples that are hard for the discriminator to tell apart from real data.
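The back-and-forth described above can be sketched as one training step of a tiny GAN. Note this is a toy on continuous 1-D data of my own choosing; text GANs additionally need tricks (e.g. policy gradients or Gumbel-softmax) to handle discrete token sampling.

```python
# One GAN training step on toy 1-D data (assumption: continuous data stands
# in for text, sidestepping the discrete-sampling problem real text GANs face).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real = torch.randn(32, 1) * 0.5 + 2.0   # "real" samples drawn from N(2, 0.5)
noise = torch.randn(32, 4)

# Discriminator step: label real data 1 and generated samples 0.
fake = generator(noise).detach()        # detach so only the discriminator updates
d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
          bce(discriminator(fake), torch.zeros(32, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: try to make the discriminator output 1 on fakes.
g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Repeating these two steps is the adversarial training loop: early on the discriminator easily rejects the fakes, and over many iterations the generator's samples drift toward the real distribution.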
I will be learning and writing more about VAEs and GANs in future blog posts!