The RBC Borealis team is proud to present some of our research at NeurIPS 2018. Read about our work below and don’t forget to come by Booth #104 to talk shop, meet the team or simply say hello (like these guys did).

Paper

Dimensionality Reduction has Quantifiable Imperfections: Two Geometric Bounds

Authors: Kry Lui, Gavin Weiguang Ding, Ruitong Huang, Robert McCann
Poster: December 5, 10:45 am – 12:45 pm @ Room 210 & 230 AB #103
Dimensionality reduction is ubiquitous in machine learning. It is widely believed that reducing to fewer dimensions incurs a greater loss of information, yet this intuition has remained a conceptual mystery without rigorous theoretical treatment. In this work, we quantify the phenomenon in an information retrieval setting using geometric techniques. To the best of our knowledge, these are the first provable rates of information loss due to dimensionality reduction.
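To make the retrieval setting concrete, here is a minimal sketch (ours, not the paper's method) of one empirical face of this information loss: how much of each point's nearest-neighbour set survives a PCA projection to d dimensions.

```python
# A minimal sketch (not the paper's method): measure how much of each point's
# k-nearest-neighbour set survives a PCA projection to d dimensions.
import numpy as np

def knn_overlap(X, X_red, k=10):
    """Average fraction of each point's k nearest neighbours preserved."""
    def knn(Z):
        d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude the point itself
        return np.argsort(d, axis=1)[:, :k]
    a, b = knn(X), knn(X_red)
    return np.mean([len(set(a[i]) & set(b[i])) / k for i in range(len(X))])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA directions
for d in (40, 10, 2):
    print(d, round(knn_overlap(X, X @ Vt[:d].T), 3))
```

The overlap decays as d shrinks; the paper's geometric bounds make that kind of degradation precise.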

Workshops

On Learning Wire-Length Efficient Neural Networks

Authors: Christopher Blake, Luyu Wang, Giuseppe Castiglione, Christopher Srinivasa, Marcus Brubaker
Workshop: Compact Deep Neural Network Representation (Spotlight Paper); December 7, 2:50 pm
When seeking energy-efficient neural networks, we argue that wire-length is an important metric to consider. Based on this view, we develop and test new techniques for training neural networks that are both accurate and wire-length-efficient. In contrast to previous approaches, which minimize the number of weights in the network, our techniques may be useful for creating specialized neural network circuits that consume less energy.
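As a hedged illustration of why wire-length differs from weight count, the toy proxy below charges each connection of a dense layer by the distance between its endpoints in a 1-D layout; the layout, penalty, and names are our assumptions, not the paper's definitions.

```python
# Toy wire-length proxy (our assumption, not the paper's definition):
# charge each connection |w_ij| times the distance |i - j| between unit
# positions in a 1-D layout. Local connectivity is far cheaper than dense.
import numpy as np

def wire_length(W):
    """Sum of |w_ij| * |i - j| for a dense layer laid out on a line."""
    n_out, n_in = W.shape
    dist = np.abs(np.arange(n_out)[:, None] - np.arange(n_in)[None, :])
    return float(np.sum(np.abs(W) * dist))

rng = np.random.default_rng(0)
W_dense = rng.normal(size=(64, 64))
local_mask = np.abs(np.arange(64)[:, None] - np.arange(64)[None, :]) < 4
W_local = W_dense * local_mask
print(wire_length(W_dense), wire_length(W_local))
```

Both layers can have the same number of nonzero weights per unit of accuracy, yet very different wire cost, which is the distinction the paper argues matters for energy.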

Few-Shot Self Reminder to Overcome Catastrophic Forgetting

Authors: Junfeng Wen, Yanshuai Cao, Ruitong Huang
Workshop: Continual Learning; December 7
We present a simple, yet surprisingly effective, way to prevent catastrophic forgetting. Our method regularizes the neural network against changing its learned behaviour by matching logits on selected samples kept in an episodic memory of previous tasks. As little as one data point per class proves effective. With similar storage, our algorithm outperforms previous state-of-the-art methods.
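The core regularizer is simple enough to sketch directly. Below is a minimal PyTorch version in the spirit of the description above; the stand-in network, the loss weight lam, and the memory construction are placeholder assumptions.

```python
# Minimal sketch of logit matching on an episodic memory; the stand-in
# network, loss weight, and memory selection are placeholder assumptions.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)  # stand-in network

def self_reminder_loss(x_new, y_new, mem_x, mem_logits, lam=1.0):
    """Cross-entropy on the new task plus an MSE penalty keeping the
    model's logits on stored samples close to their previous values."""
    task_loss = F.cross_entropy(model(x_new), y_new)
    remind_loss = F.mse_loss(model(mem_x), mem_logits)
    return task_loss + lam * remind_loss

# Before leaving a task, keep a few samples per class and the logits the
# model currently assigns to them:
mem_x = torch.randn(10, 784)            # e.g. one stored point per class
mem_logits = model(mem_x).detach()
loss = self_reminder_loss(torch.randn(32, 784),
                          torch.randint(0, 10, (32,)),
                          mem_x, mem_logits)
loss.backward()
```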

Compositional Hard Negatives for Visual Semantic Embeddings via an Adversary

Authors: *A.J. Bose, *Huan Ling, Yanshuai Cao
Workshop: ViGIL; December 7, 8 am – 6:30 pm
We present a new technique for hard negative mining for learning visual-semantic embeddings. The technique uses an adversary that is learned in a min-max game with the cross-modal embedding model. The adversary exploits compositionality of images and texts and is able to compose harder negatives through a novel combination of objects and regions across different images for a given caption. We show new state-of-the-art results on MS-COCO. 
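For context, here is the standard hinge-based triplet loss with in-batch hard negative mining that such visual-semantic embedding models commonly build on (a sketch under our assumptions). The paper's contribution is to replace the mined in-batch negative with one composed by a learned adversary from objects and regions of other images, which is not reproduced here.

```python
# Background sketch (our assumptions): hinge triplet loss with in-batch
# hard negative mining. The paper swaps the mined negative for one composed
# by a learned adversary (not shown here).
import torch
import torch.nn.functional as F

def hard_negative_triplet_loss(img_emb, cap_emb, margin=0.2):
    """img_emb, cap_emb: L2-normalized (batch, dim) embeddings of matched pairs."""
    scores = img_emb @ cap_emb.t()                    # cosine similarities
    pos = scores.diag()                               # matched-pair scores
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    neg = scores.masked_fill(mask, -1.0).max(dim=1).values  # hardest negative
    return (margin + neg - pos).clamp(min=0).mean()

img = F.normalize(torch.randn(32, 256), dim=1)
cap = F.normalize(torch.randn(32, 256), dim=1)
print(hard_negative_triplet_loss(img, cap))
```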

On the Sensitivity of Adversarial Robustness to Input Data Distributions

Authors: Gavin Weiguang Ding, Yik Chau (Kry) Lui, Xiaomeng Jin, Luyu Wang, Ruitong Huang
Workshop: Security in Machine Learning; December 7, 8:45 am – 5:30 pm
We demonstrate an intriguing phenomenon in adversarial training: adversarial robustness, unlike clean accuracy, is highly sensitive to the input data distribution. In theory, we show this by analyzing the robustness of the Bayes classifier. In experiments, we further show that transformed variants of MNIST and CIFAR10 achieve comparable clean accuracies under standard training but significantly different robust accuracies under adversarial training.
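To illustrate what a "transformed variant" of a dataset can look like, the sketch below applies two simple label-preserving transforms that shift the input distribution of MNIST-like images; the paper's exact transforms may differ, so treat these as assumptions.

```python
# Hedged sketch: two simple label-preserving transforms that shift an
# MNIST-like input distribution (the paper's exact transforms may differ).
import numpy as np

def binarize(x, thresh=0.5):
    """Push pixel values toward the extremes; x is a 2-D array in [0, 1]."""
    return (x > thresh).astype(x.dtype)

def smooth(x, k=3):
    """Blur with a k x k box filter, pulling pixel values toward each other."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

x = np.random.default_rng(0).random((28, 28))
print(binarize(x).mean(), smooth(x).std())  # same labels, shifted statistics
```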

Skill Reuse in Partially Observable Multiagent Environments

Authors: Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor
Workshop: Latinx in AI Coalition; December 2, 8 am – 6:30 pm
Our goal is to tackle partially observable multiagent scenarios with a framework based on learning robust best responses (i.e., skills) and Bayesian inference for opponent detection. To reduce long training times, we intelligently reuse policies (skills) by quickly identifying which opponent we are facing.
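The opponent-detection half of this pipeline can be sketched as a simple Bayesian belief update over known opponent types; the toy action distributions below are purely illustrative.

```python
# Toy sketch of Bayesian opponent detection: maintain a belief over known
# opponent types, update it from observed actions, then deploy the skill
# (best response) for the most likely type. Models here are illustrative.
import numpy as np

opponent_models = {                 # P(action | opponent type)
    "aggressive": np.array([0.7, 0.2, 0.1]),
    "defensive":  np.array([0.1, 0.2, 0.7]),
}
belief = {k: 0.5 for k in opponent_models}   # uniform prior

def update_belief(action):
    for k in belief:
        belief[k] *= opponent_models[k][action]
    total = sum(belief.values())
    for k in belief:
        belief[k] /= total

for a in [0, 0, 1, 0]:              # observed opponent actions
    update_belief(a)
best_response = max(belief, key=belief.get)   # pick the matching skill
print(belief, "->", best_response)
```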

Competition

Adversarial Vision Challenge

Authors: Yash Sharma, Gavin Weiguang Ding
Competition: NeurIPS 2018 Competition Track Day 1; 8 am – 6:30 pm
This challenge pitted submitted adversarial attacks against submitted defenses. It was unique in several ways: attacks were allowed only a limited number of queries, each returning just the defense's decision; it rewarded minimizing the L2 distortion instead of enforcing an Linf distortion constraint; and it used TinyImageNet instead of the full ImageNet dataset, making it tractable for competitors to train their own models. Our attack placed in the top 10 overall, and in particular placed 5th in the targeted attack track, a more difficult setting. Our solution performed a binary search for the minimal successful distortion, then optimized the procedure to meet the computational constraints while still running the necessary number of iterations.
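The binary-search core of that solution looks roughly like the sketch below; defense_decision is a stand-in for the challenge's decision-only API, and the toy linear defense exists only to make the example runnable.

```python
# Sketch of the binary-search core: given a decision-only defense and a
# starting point already classified as the target, bisect the interpolation
# coefficient to find the smallest L2 distortion that still succeeds.
import numpy as np

def min_distortion(x, x_adv, defense_decision, target, steps=20):
    """x_adv must already be classified as `target`; returns a minimal-ish
    adversarial example on the segment between x and x_adv."""
    lo, hi = 0.0, 1.0                      # fraction of the way toward x_adv
    for _ in range(steps):
        mid = (lo + hi) / 2
        cand = (1 - mid) * x + mid * x_adv
        if defense_decision(cand) == target:
            hi = mid                       # success: try smaller distortion
        else:
            lo = mid
    return (1 - hi) * x + hi * x_adv

# Toy stand-in for the challenge's decision-only API:
w = np.array([1.0, -1.0])
defense_decision = lambda z: int(z @ w > 0)
x, x_adv = np.array([0.0, 1.0]), np.array([1.0, 0.0])   # label 0 -> label 1
x_min = min_distortion(x, x_adv, defense_decision, target=1)
print(x_min, np.linalg.norm(x_min - x))
```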