In this work, we propose a practical scheme to enforce monotonicity in neural networks with respect to a given subset of the dimensions of the input space. The proposed approach focuses on the setting where point-wise gradient penalties are used as a soft constraint alongside the empirical risk during training. Our results indicate that the choice of the points employed for computing such a penalty defines the regions of the input space where the desired property is satisfied. As such, previous methods result in models that are monotonic either only at the boundaries of the input space or in the small volume where the training data lies. Given this, we propose an alternative approach that uses pairs of training instances and random points to create mixtures of points that lie inside and outside of the convex hull of the training sample. Empirical evaluation carried out using different datasets shows that the proposed approach yields predictors that are monotonic in a larger volume of the space compared to previous methods. Our approach does not introduce significant computational overhead, leading to an efficient procedure that consistently achieves the best performance amongst all alternatives.
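The penalty scheme described in the abstract can be sketched in code. The following is a minimal illustration only, not the paper's implementation: the function names (`mixture_points`, `monotonicity_penalty`), the finite-difference gradient estimate, and all parameters are assumptions made for the sketch. The key idea it demonstrates is sampling penalty points as convex mixtures of training instances and uniform random points, then penalizing negative partial derivatives along the dimensions that should be monotonically increasing.

```python
import numpy as np

def mixture_points(x_train, n_points, low, high, rng):
    """Sample penalty locations as convex mixtures of randomly chosen
    training instances and uniform random points, so that the points
    land both inside and outside the convex hull of the training sample
    (hypothetical sampling scheme for illustration)."""
    idx = rng.integers(0, len(x_train), size=n_points)
    x_rand = rng.uniform(low, high, size=(n_points, x_train.shape[1]))
    t = rng.uniform(0.0, 1.0, size=(n_points, 1))
    return t * x_train[idx] + (1.0 - t) * x_rand

def monotonicity_penalty(f, x, mono_dims, eps=1e-4):
    """Soft monotonicity penalty: mean squared magnitude of the
    *negative* part of the partial derivative of f along each monotone
    dimension, estimated here by forward finite differences (a
    gradient-based autodiff penalty would be used in practice)."""
    penalty = 0.0
    for d in mono_dims:
        x_plus = x.copy()
        x_plus[:, d] += eps
        grad = (f(x_plus) - f(x)) / eps      # approx. df/dx_d at each point
        penalty += np.mean(np.minimum(grad, 0.0) ** 2)
    return penalty
```

In training, this penalty would be added to the empirical risk with a weighting coefficient; the example uses a fixed analytic function in place of a neural network only to keep it self-contained.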
BibTeX
@inproceedings{
anonymous2021not,
title={Not Too Close and Not Too Far: Enforcing Monotonicity Requires Penalizing The Right Points},
author={Anonymous},
booktitle={eXplainable AI approaches for debugging and diagnosis.},
year={2021},
url={https://openreview.net/forum?id=xdFqKVlDHnY}
}
Related Research
Interpretation for Variational Autoencoder Used to Generate Financial Synthetic Tabular Data
J. Wu, K. N. Plataniotis, *L. Z. Liu, *E. Amjadian, and Y. A. Lawryshyn. Special Issue on Interpretability, Accountability and Robustness in Machine Learning (Algorithms)
Publications
ATOM: Attention Mixer for Efficient Dataset Distillation
*S. Khaki, *A. Sajedi, K. Wang, L. Z. Liu, Y. A. Lawryshyn, and K. N. Plataniotis. Oral presentation at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DataDAM: Efficient Dataset Distillation with Attention Matching
*A. Sajedi, *S. Khaki, E. Amjadian, L. Z. Liu, Y. A. Lawryshyn, and K. N. Plataniotis. International Conference on Computer Vision (ICCV)