This paper raises an implicit manifold learning perspective on Generative Adversarial Networks (GANs) by studying how the support of the learned distribution, modelled as a submanifold Mθ, can perfectly match Mr, the support of the real data distribution. We show that optimizing the Jensen-Shannon divergence forces Mθ to perfectly match Mr, while optimizing the Wasserstein distance does not. On the other hand, by comparing the gradients of the Jensen-Shannon divergence and of the Wasserstein distances (W₁ and W₂²) in their primal forms, we conjecture that the squared Wasserstein distance W₂² may enjoy desirable properties such as reduced mode collapse. It is therefore interesting to design new distances that inherit the best of both.
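
For reference, the two training objectives compared above are the Jensen-Shannon divergence and the Wasserstein distance in its primal (optimal-transport) form. A minimal sketch of the standard definitions, written for generic real and model distributions P_r and P_θ (this notation is assumed here, not taken from the paper itself):

\[
\mathrm{JS}(P_r, P_\theta) \;=\; \tfrac{1}{2}\,\mathrm{KL}\!\Big(P_r \,\Big\|\, \tfrac{P_r + P_\theta}{2}\Big) \;+\; \tfrac{1}{2}\,\mathrm{KL}\!\Big(P_\theta \,\Big\|\, \tfrac{P_r + P_\theta}{2}\Big),
\]
\[
W_p^p(P_r, P_\theta) \;=\; \inf_{\gamma \in \Pi(P_r, P_\theta)} \mathbb{E}_{(x,y)\sim\gamma}\big[\lVert x - y \rVert^p\big],
\]

where Π(P_r, P_θ) denotes the set of couplings of P_r and P_θ; the cases p = 1 and p = 2 give the W₁ and squared W₂² distances referenced in the abstract.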

BibTeX

@Conference{LuiIML,
Title = {Implicit Manifold Learning on Generative Adversarial Networks},
Author = {Kry Yik Chau Lui and Yanshuai Cao and Maxime Gazeau and Kelvin Shuangjian Zhang},
Year = {2017},
Abstract = {This paper raises an implicit manifold learning perspective on Generative Adversarial Networks (GANs) by studying how the support of the learned distribution, modelled as a submanifold Mθ, can perfectly match Mr, the support of the real data distribution. We show that optimizing the Jensen-Shannon divergence forces Mθ to perfectly match Mr, while optimizing the Wasserstein distance does not. On the other hand, by comparing the gradients of the Jensen-Shannon divergence and of the Wasserstein distances (W₁ and W₂²) in their primal forms, we conjecture that the squared Wasserstein distance W₂² may enjoy desirable properties such as reduced mode collapse. It is therefore interesting to design new distances that inherit the best of both.},
Booktitle = {International Conference on Machine Learning (Workshop on Implicit Models)},
Url = {https://arxiv.org/abs/1710.11260}
}
