Imbalanced distributions are ubiquitous in real-world data. They constrain the ability of deep neural networks to represent minority labels and to avoid bias towards majority labels. The extensive body of work on imbalanced learning addresses categorical label spaces but fails to extend effectively to regression problems, where the label space is continuous. Local and global correlations among continuous labels provide valuable insight for effectively modelling relationships in feature space. In this work, we propose ConR, a contrastive regularizer that models global and local label similarities in feature space and prevents the features of minority samples from collapsing into those of their majority neighbours. ConR discerns disagreements between the label space and the feature space and imposes a penalty on these disagreements. It addresses the continuous nature of the label space with two main strategies, applied in a contrastive manner: incorrect proximities are penalized in proportion to label similarity, and correct ones are encouraged in order to model local similarities. ConR consolidates these essential considerations into a generic, easy-to-integrate, and efficient method that effectively addresses deep imbalanced regression. Moreover, ConR is orthogonal to existing approaches and extends smoothly to uni- and multi-dimensional label spaces. Our comprehensive experiments show that ConR significantly boosts the performance of all the state-of-the-art methods on four large-scale deep imbalanced regression benchmarks. Our code is publicly available at this https URL.
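The core idea above can be sketched in code. The following is a minimal, illustrative contrastive regularizer for regression, not the official ConR implementation: the function name, the label-distance threshold for choosing positive pairs, and the distance-proportional weighting of negatives are all assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F


def conr_style_loss(features, labels, label_thresh=1.0, temperature=0.1):
    """Illustrative contrastive regularizer for regression (not the official ConR code).

    Pairs whose labels differ by less than `label_thresh` are treated as positives;
    all other pairs are negatives, pushed apart with a weight that grows with their
    label distance, so disagreements between feature and label space are penalized
    in proportion to label dissimilarity.
    """
    n = features.size(0)
    z = F.normalize(features, dim=1)                 # unit-norm feature embeddings
    sim = z @ z.t() / temperature                    # pairwise feature similarities
    label_dist = torch.cdist(labels, labels, p=1)    # pairwise label distances

    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (label_dist < label_thresh).float().masked_fill(eye, 0.0)
    neg_mask = (1.0 - pos_mask).masked_fill(eye, 0.0)

    # Negatives with more dissimilar labels are pushed apart more strongly.
    weights = 1.0 + label_dist * neg_mask
    exp_sim = (torch.exp(sim) * weights).masked_fill(eye, 0.0)

    # InfoNCE-style objective: pull label-similar pairs together against
    # the label-distance-weighted negatives.
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-8)
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()
```

In use, this term would be added to the usual regression loss (e.g. L1) with a mixing coefficient, leaving the backbone and task head untouched, which is what makes this style of regularizer easy to combine with existing imbalanced-regression methods.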
Bibtex
@misc{keramati2023conr,
  title={ConR: Contrastive Regularizer for Deep Imbalanced Regression},
  author={Mahsa Keramati and Lili Meng and R. David Evans},
  year={2023},
  eprint={2309.06651},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Related Research
-
Identifying and Addressing Delusions for Target-Directed Decision-Making
M. Zhao, T. Sylvain, D. Precup, and Y. Bengio. Workshop at Conference on Neural Information Processing Systems (NeurIPS)
-
Leveraging Environment Interaction for Automated PDDL Generation and Planning with Large Language Models
S. Mahdavi, R. Aoki, K. Tang, and Y. Cao. Conference on Neural Information Processing Systems (NeurIPS)
-
Jump Starting Bandits with LLM-Generated Prior Knowledge
P. A. Alamdari, Y. Cao, and K. Wilson. Conference on Empirical Methods in Natural Language Processing