RBC Borealis is proud to have two workshop papers at this year’s International Conference on Machine Learning (ICML) in Sydney, Australia. Representatives from our team will be downloading at least a dozen Netflix movies on their respective iPads in preparation for the flight to present their research next month. Below, a quick roundup of our work.
Implicit Manifold Learning in Generative Adversarial Networks
Our first spotlight paper, on Implicit Manifold Learning in Generative Adversarial Networks, was authored by Kry Lui, Yanshuai Cao, Maxime Gazeau, and Kelvin Zhang. Maxime and Kelvin are currently deciding which one of them will attend based on the empirically driven criterion of who can handle jet lag better [update: Kelvin won].
The idea behind Generative Adversarial Networks (GANs) is to learn how to create synthetic pictures that look like real ones. Starting from an input dataset – whether that’s images of cats, dogs, or people – the goal is to get the algorithm to generate similar data that weren’t in that initial dataset.
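For readers new to the setup, here is a minimal GAN training step in PyTorch. This is an illustrative sketch rather than the paper’s code, and the names (`G`, `D`, `train_step`) and dimensions are ours; the binary cross-entropy objective below is the standard GAN formulation, whose optimal value relates to the Jensen-Shannon divergence discussed next.

```python
# Minimal GAN training step (illustrative sketch, not the paper's code).
# G maps random noise to fake samples; D scores how "real" a sample looks.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # arbitrary toy sizes

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()  # standard (Jensen-Shannon style) GAN objective

def train_step(real_batch):
    batch = real_batch.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator update: push real samples toward 1, fakes toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real_batch), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```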
But we also want the generated pictures to be both realistic and diverse, and it turns out that the choice of cost function is crucial to achieving these two properties. We studied two cost functions, the Jensen-Shannon divergence and the Wasserstein distance, from the perspectives of sharpness (how realistic the pictures are) and mode dropping (whether the generated samples cover the full diversity of the data rather than collapsing to a few typical examples).
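For reference, the two objectives can be written as follows, with $p_r$ the data distribution and $p_g$ the generator’s distribution (these are the standard textbook definitions; the notation is ours, not necessarily the paper’s):

```latex
\mathrm{JS}(p_r \,\|\, p_g)
  = \tfrac{1}{2}\,\mathrm{KL}\!\Big(p_r \,\Big\|\, \tfrac{p_r+p_g}{2}\Big)
  + \tfrac{1}{2}\,\mathrm{KL}\!\Big(p_g \,\Big\|\, \tfrac{p_r+p_g}{2}\Big)

W_1(p_r, p_g)
  = \inf_{\gamma \in \Pi(p_r,\,p_g)} \mathbb{E}_{(x,y)\sim\gamma}\big[\lVert x - y \rVert\big]
```

Here $\Pi(p_r, p_g)$ is the set of joint distributions with marginals $p_r$ and $p_g$; intuitively, the Wasserstein distance measures how much probability mass must be moved, and how far, to turn one distribution into the other.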
We showed that the Jensen-Shannon divergence is a sensible objective to optimize when it comes to learning how to generate realistic samples, while the Wasserstein distance can give better sample diversity. We concluded that it’s worthwhile to look for ways to combine them or to seek new distances that inherit the best of both.
Automatic Selection of t-SNE Perplexity
Our second paper, authored by Yanshuai Cao and Luyu Wang, sought to reduce the time-consuming trial and error involved in tuning the t-SNE algorithm for data visualization. t-SNE is a very powerful visualization tool, but until now there has been no principled way to automatically select the perplexity parameter that leads to the best result.
We proposed a new decision function for selecting the t-SNE perplexity hyperparameter. By eliciting preferences from human experts and inferring their hidden utility with a Gaussian process model, we found that our algorithm matches human judgment. In effect, our solution automatically sets the trade-off between preserving local structure and preserving global structure, and identifies the balance that works best for a given visualization. This work has been implemented as a feature in Kaleidoscope, a data visualization software tool that we have already shipped to RBC business units.
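As a rough illustration of how such a selection loop might look in practice, the sketch below scores each candidate perplexity by the KL divergence remaining after t-SNE optimization plus a complexity penalty that grows with perplexity, in the spirit of the criterion described in the paper. The function name `select_perplexity`, the candidate grid, and the exact form of the penalty are our assumptions for illustration, not the paper’s published method.

```python
# Illustrative perplexity selection loop (assumed penalty form, not the
# paper's exact criterion): smaller score = better perplexity.
import numpy as np
from sklearn.manifold import TSNE

def select_perplexity(X, candidates=(5, 10, 30, 50, 100)):
    n = X.shape[0]
    best_perp, best_score = None, np.inf
    for perp in candidates:
        if perp >= n:  # t-SNE requires perplexity < number of samples
            continue
        tsne = TSNE(perplexity=perp, random_state=0).fit(X)
        # Final KL(P || Q) after optimization, penalized as perplexity grows.
        score = 2.0 * tsne.kl_divergence_ + np.log(n) * perp / n
        if score < best_score:
            best_perp, best_score = perp, score
    return best_perp
```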
Congratulations to our researchers. Stay tuned to this page for dispatches from the Outback and please stop by the office after August 12 for some Vegemite brownies.