Understanding financial decisions made by predictive models requires clear explanations. One intuitive approach is through counterfactual (CF) examples, which illustrate how changes in input features can lead to different predictions. Sparse CF examples are particularly desirable, since modifying fewer features makes the explanation more actionable. In this work, we propose a sparse CF explanation method that concentrates changes on discriminative features. By incorporating feature-importance-based weights into the optimization objective, our method emphasizes the most relevant features while leaving the others largely unchanged. We evaluate our approach on a credit risk assessment task, demonstrating that it yields sparser counterfactual examples whose changes focus on the most discriminative features.
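To make the idea concrete, the sketch below shows one way a feature-importance-weighted counterfactual search could look: the counterfactual is optimized to flip the model's prediction while paying a per-feature change penalty that is larger for low-importance features, so edits concentrate on discriminative ones. This is a minimal illustrative sketch, not the paper's released implementation; the model interface, the `importance` vector, and the `lambda_dist` trade-off parameter are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def weighted_counterfactual(model, x, target_class, importance,
                            lambda_dist=1.0, steps=500, lr=0.05):
    """Search for a counterfactual of `x` whose edits favor important features.

    Illustrative sketch only (not the paper's code):
      model       -- classifier mapping a (1, d) tensor to class logits
      x           -- original input, shape (d,)
      importance  -- per-feature relevance scores, assumed normalized to [0, 1]
      lambda_dist -- trade-off between prediction loss and change penalty
    """
    x = x.detach()
    x_cf = x.clone().requires_grad_(True)

    # Penalty weights: changing a low-importance feature costs more,
    # so the optimizer prefers to edit discriminative features.
    penalty = 1.0 - importance

    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))
        # Push the prediction toward the desired (counterfactual) class.
        pred_loss = F.cross_entropy(logits, target)
        # Importance-weighted L1 distance encourages sparse, targeted edits.
        dist_loss = (penalty * (x_cf - x).abs()).sum()
        loss = pred_loss + lambda_dist * dist_loss
        loss.backward()
        optimizer.step()

    return x_cf.detach()
```

In this sketch the weighted L1 term plays the role of the importance-based weighting described above: with an unweighted penalty the search spreads small changes across many features, whereas the weighting makes edits to irrelevant features expensive and keeps the counterfactual sparse.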
Related Research
- Scalable Temporal Domain Generalization via Prompting. S. Hosseini, M. Zhai, H. Hajimirsadeghi, and F. Tung. Workshop at International Conference on Machine Learning (ICML).
- Accurate Parameter-Efficient Test-Time Adaptation for Time Series Forecasting. H. R. Medeiros, H. Sharifi, G. Oliveira, and S. Irandoust. Workshop at International Conference on Machine Learning (ICML).
- TabReason: A Reinforcement Learning-Enhanced LLM for Accurate and Explainable Tabular Data Prediction. *T. Xu, *Z. Zhang, *X. Sun, *L. K. Zung, *H. Hajimirsadeghi, and G. Mori. Workshop at International Conference on Machine Learning (ICML).