Although neural networks are conventionally optimized toward zero training loss, it has recently been shown that targeting a non-zero training loss threshold, referred to as a flood level, often improves test-time generalization. Current approaches, however, apply the same constant flood level to every training sample, which implicitly assumes all samples are equally difficult. We present AdaFlood, a novel flood regularization method that adapts the flood level of each training sample according to that sample's difficulty. Intuitively, since training samples are not equally difficult, the target training loss should be conditioned on the instance. Experiments on datasets covering four diverse input modalities – text, images, asynchronous event sequences, and tabular data – demonstrate the versatility of AdaFlood across data domains and noise levels.
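The idea above can be sketched in a few lines. The original flooding objective replaces the training loss L with |L − b| + b for a constant flood level b, so gradient descent pushes the loss toward b rather than zero; AdaFlood replaces the constant b with a per-sample level b_i. The sketch below is a minimal, hedged illustration in NumPy, not the paper's implementation: how the per-sample levels b_i are obtained (the paper derives them from an estimate of each sample's difficulty) is abstracted away into an input argument.

```python
import numpy as np

def flooded_loss(per_sample_losses, flood_levels):
    """Flooding objective with per-sample flood levels (AdaFlood-style sketch).

    per_sample_losses: array of training losses L_i, one per sample.
    flood_levels: array of target levels b_i, one per sample. With a scalar
    (or identical entries) this reduces to the original constant-level
    flooding objective |L - b| + b.
    """
    L = np.asarray(per_sample_losses, dtype=float)
    b = np.asarray(flood_levels, dtype=float)
    # When L_i > b_i the gradient matches the ordinary loss; when L_i < b_i
    # the sign flips, so optimization pushes the loss back up toward b_i
    # instead of driving it to zero.
    return np.mean(np.abs(L - b) + b)
```

For example, a sample already below its flood level (L = 0.1, b = 0.2) contributes 0.3 rather than 0.1, and its gradient ascends back toward b.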
BibTeX

@article{bae2024adaflood,
  title={AdaFlood: Adaptive Flood Regularization},
  author={Bae, Wonho and Ren, Yi and Ahmed, Mohamad Osama and Tung, Frederick and Sutherland, Danica J and Oliveira, Gabriel L},
  journal={Transactions on Machine Learning Research (TMLR)},
  year={2024}
}
Related Research
- Designing Scalable Multi-Tenant Data Pipelines with Dagster’s Declarative Orchestration. B. Zhang.
- Training foundation models up to 10x more efficiently with Memory-Mapped Datasets. T. Badamdorj and M. Anand.
- DeepRRTime: Robust Time-series Forecasting with a Regularized INR Basis. C.S. Sastry, M. Gilany, K. Y. C. Lui, M. Magill, and A. Pashevich. Transactions on Machine Learning Research (TMLR).