The variational autoencoder (VAE) can learn the manifold of natural images on certain datasets, as evidenced by meaningful interpolation or extrapolation in the continuous latent space. However, on discrete data such as text, it is unclear whether unsupervised learning can discover a similar latent space that allows controllable manipulation. In this work, we find that sequence VAEs trained on text fail to decode properly when the latent codes are manipulated, because the modified codes often land in holes or vacant regions of the aggregated posterior latent space, where the decoding network fails to generalize. Both to validate this explanation and to fix the problem, we propose constraining the posterior mean to a learned probability simplex and performing manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method outperforms unsupervised baselines and strong supervised approaches on text style transfer. On the automatic evaluation metrics used for text style transfer, it achieves results comparable to state-of-the-art supervised approaches that leverage large-scale pre-trained models for generation, even though our decoding network is trained from scratch. Furthermore, it enables more flexible, fine-grained control over text generation than existing methods.
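To make the core idea concrete, below is a minimal PyTorch-style sketch of constraining a posterior mean to a learned probability simplex. All names here (SimplexConstrainedPosterior, num_bases, to_logits) are illustrative assumptions rather than the paper's actual code, and the full method described in the paper involves additional components (e.g., splitting the latent code and further regularization) that are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimplexConstrainedPosterior(nn.Module):
    """Sketch: posterior whose mean is a convex combination of K learned
    basis vectors, i.e., it always lies on a learned probability simplex."""

    def __init__(self, hidden_dim: int, latent_dim: int, num_bases: int = 4):
        super().__init__()
        # K learned basis vectors; their convex hull is the simplex
        # on which the posterior mean is constrained to lie.
        self.bases = nn.Parameter(torch.randn(num_bases, latent_dim))
        self.to_logits = nn.Linear(hidden_dim, num_bases)   # mixture logits
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # diagonal log-variance

    def forward(self, h: torch.Tensor):
        # Convex weights over the basis vectors -> mean stays inside the simplex.
        p = F.softmax(self.to_logits(h), dim=-1)             # (batch, K)
        mu = p @ self.bases                                   # (batch, latent_dim)
        logvar = self.to_logvar(h)
        # Standard reparameterization trick around the constrained mean.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar, p
```

Under this sketch, controllable manipulation amounts to moving the mixture weights p toward a vertex of the simplex (for example, a one-hot vector) instead of editing the latent code directly, so the manipulated mean stays within the region covered by the aggregated posterior rather than falling into a vacant hole where the decoder fails to generalize.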
Bibtex
@inproceedings{xu2020variational,
  title={On Variational Learning of Controllable Representations for Text without Supervision},
  author={Xu, Peng and Cheung, Jackie Chi Kit and Cao, Yanshuai},
  booktitle={ICML},
  url={https://arxiv.org/abs/1905.11975},
  year={2020}
}
Related Research
Our NeurIPS 2021 Reading List
Y. Cao, K. Y. C. Lui, T. Durand, J. He, P. Xu, N. Mehrasa, A. Radovic, A. Lehrmann, R. Deng, A. Abdi, M. Schlegel, and S. Liu.
Computer Vision; Data Visualization; Graph Representation Learning; Learning And Generalization; Natural Language Processing; Optimization; Reinforcement Learning; Time series Modelling; Unsupervised Learning
TURING: an Accurate and Interpretable Multi-Hypothesis Cross-Domain Natural Language Database Interface
*P. Xu, *W. Zi, H. Shahidi, A. Kádár, K. Tang, W. Yang, J. Ateeq, H. Barot, M. Alon, and Y. Cao. Association for Computational Linguistics (ACL) & International Joint Conference on Natural Language Processing (IJCNLP)
Optimizing Deeper Transformers on Small Datasets
P. Xu, D. Kumar, W. Yang, W. Zi, K. Tang, C. Huang, J. Chi Kit Cheung, S. Prince, and Y. Cao. Association for Computational Linguistics (ACL) & International Joint Conference on Natural Language Processing (IJCNLP)