Understanding financial decisions made by predictive models requires clear explanations. One intuitive approach is through counterfactual (CF) examples, which illustrate how changes in input features can lead to different predictions. Sparse CF examples are particularly desirable, as modifying fewer features makes them more actionable. In this work, we propose a sparse CF explanation method that primarily alters discriminative features. By incorporating feature importance-based weights into the optimization process, our method emphasizes the most relevant features while leaving the rest unchanged. We evaluate our approach on a credit risk assessment task, demonstrating that it produces sparser, more actionable counterfactual examples.
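The idea of weighting a counterfactual search by feature importance can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a simple logistic model, takes the absolute coefficients as a stand-in for feature importance, and uses a proximal (soft-threshold) step so that changes to low-importance features are penalized more heavily and snap back to their original values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def find_counterfactual(x, w, b, importance, target=1.0,
                        lam=0.5, lr=0.1, steps=500):
    """Gradient search for a counterfactual x_cf that pushes a logistic
    model sigmoid(w @ x + b) toward `target`, with an importance-weighted
    sparsity penalty: changing a low-importance feature costs more, so
    edits concentrate on the most discriminative features."""
    pen = lam / (importance + 1e-8)   # per-feature change penalty
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # gradient of the squared prediction loss (p - target)^2
        grad = 2.0 * (p - target) * p * (1.0 - p) * w
        x_cf = x_cf - lr * grad
        # proximal (soft-threshold) step for the weighted L1 distance:
        # small changes shrink back to the original value exactly
        d = x_cf - x
        x_cf = x + np.sign(d) * np.maximum(np.abs(d) - lr * pen, 0.0)
    return x_cf

# Toy example: feature 0 is far more discriminative than feature 1.
w = np.array([2.0, 0.1])
b = -1.0
x = np.array([0.0, 0.0])          # original input, predicted class 0
x_cf = find_counterfactual(x, w, b, importance=np.abs(w))
# The prediction flips past 0.5 by moving only the important feature;
# the unimportant feature stays at its original value.
```

In a real setting the importance weights would come from an external attribution method rather than the model coefficients, and the prediction loss would be taken against the actual classifier; the proximal step is one common way to obtain exact sparsity in the resulting counterfactual.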
