We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals it gives to the generator G are more informative and diverse, which helps G explore better and discover the real data manifold while avoiding the large, unstable jumps caused by erroneous extrapolation in D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity by encouraging the binary activation patterns on selected internal layers of D to have high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in the stability and convergence speed of GAN training, as well as higher sample quality. The approach also yields higher classification accuracy in semi-supervised learning.
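The abstract describes the regularizer only at a high level; the sketch below is an illustrative reading of it, not the authors' released code. It soft-binarizes the pre-activations of one rectifier layer of D over a minibatch and penalizes (i) units that are always on or always off and (ii) pairs of examples with highly correlated activation patterns, which pushes the joint entropy of the binary patterns up. The function name bre_regularizer, the softsign smoothing constant eps, and the equal weighting of the two terms are assumptions.

```python
import torch

def bre_regularizer(h: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Sketch of a BRE-style penalty on pre-activations h (batch x d).

    Soft-binarizes each unit's pre-activation, then penalizes units
    that are always on/off across the minibatch and pairs of examples
    whose binary activation patterns are highly correlated.
    """
    s = h / (h.abs() + eps)            # soft sign, values in (-1, 1)
    n, d = s.shape
    # Marginal term: each unit should fire on roughly half the minibatch,
    # so its minibatch-average soft sign should be near zero.
    me = (s.mean(dim=0) ** 2).mean()
    # Pairwise term: activation patterns of different examples should be
    # close to decorrelated (near-orthogonal sign vectors).
    gram = s @ s.t() / d               # n x n pattern correlations
    off_diag = gram - torch.diag(torch.diag(gram))
    ac = off_diag.abs().sum() / (n * (n - 1))
    return me + ac
```

In use, such a penalty would be added to the discriminator loss with some weight, e.g. `d_loss = gan_loss + lam * bre_regularizer(h)`, where `lam` and the choice of layers to regularize are hyperparameters.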
BibTeX
@inproceedings{cao2018improving,
  title     = {Improving GAN Training via Binarized Representation Entropy (BRE) Regularization},
  author    = {Yanshuai Cao and Gavin Weiguang Ding and Kry Yik-Chau Lui and Ruitong Huang},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2018},
  url       = {https://openreview.net/forum?id=BkLhaGZRW},
  note      = {accepted as poster}
}
Related Research
Interpretation for Variational Autoencoder Used to Generate Financial Synthetic Tabular Data
J. Wu, K. N. Plataniotis, *L. Z. Liu, *E. Amjadian, and Y. A. Lawryshyn. Special Issue on Interpretability, Accountability and Robustness in Machine Learning (Algorithms)
ATOM: Attention Mixer for Efficient Dataset Distillation
*S. Khaki, *A. Sajedi, K. Wang, L. Z. Liu, Y. A. Lawryshyn, and K. N. Plataniotis. Oral presentation at the IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR)