Personal banking is a major aspect of our daily lives, and finances are deeply personal. It is important for our bank to understand and honour our individual needs and adjust its services accordingly. An advisor at our local branch knows us, our family, and our circumstances; their knowledge of our past activity and financial decisions, combined with their banking expertise, is valuable when we need support in managing our money.

Access to our finances has evolved from branches to online to mobile, and many of us use our digital personal banking and financial tools almost daily. We pay for groceries, gas, and consumer goods with our debit or credit cards; we shop online, pay bills through online banking, and transfer money across accounts to progress towards our personal financial goals.

The data generated by our digital banking activity is multi-layered. Every transaction and money movement carries a time stamp, a dollar amount, and, sometimes, location information. The combination and history of these data points provides context, and therefore carries meaning. Selecting the right model, adapting it to this domain, and customizing it for financial modelling presents a rich and interesting problem for researchers and engineers.
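
To make that layering concrete, a single event in such a history might look like the sketch below; the field names and values are purely hypothetical, not RBC’s actual schema:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Transaction:
        """One banking event (illustrative fields, not a real schema)."""
        timestamp: datetime              # when the event occurred
        amount: float                    # dollar amount (negative for debits)
        category: str                    # e.g. "groceries", "bill payment"
        location: Optional[str] = None   # only some channels supply this

    # A short slice of one client's history; note the irregular spacing.
    history = [
        Transaction(datetime(2021, 3, 1, 9, 15), -54.20, "groceries", "Toronto"),
        Transaction(datetime(2021, 3, 1, 9, 40), -31.00, "gas", "Toronto"),
        Transaction(datetime(2021, 3, 6, 20, 5), -120.75, "online shopping"),
    ]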

Image - AIPB_3.png

The machine learning challenge

We’re exploring a few key areas of machine learning research to build predictive solutions for personal banking.

Most banking data has a timestamp attached to it, denoting when an event such as a purchase took place. But those times are often irregular: a client may go several days without a transaction, then have a cluster of purchases on a single weekend shopping trip. So we need models that can handle irregular temporal sampling. We’re also working with data that goes back years, so we need models that can draw on information from far in the past. Modelling long-range, irregular, time-based sequences is an interesting research challenge.
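
One common way to handle this, sketched below with made-up timestamps, is to feed the model the gap between consecutive events as an explicit input feature rather than assuming a fixed sampling rate:

    import numpy as np

    # Event times in days: a quiet stretch, then a weekend burst.
    event_times = np.array([0.0, 0.3, 4.0, 4.1, 4.2, 11.5])

    # Inter-arrival gaps become an extra feature alongside each event,
    # letting a sequence model see how long it has been between events.
    delta_t = np.diff(event_times, prepend=event_times[0])
    print(delta_t)   # [0.  0.3  3.7  0.1  0.1  7.3]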

A rich history exists in machine learning models for temporal data. Standard approaches are built upon recurrent neural networks such as the LSTM (long short-term memory) network. These can be augmented with techniques for including long-range connections (residual networks) and for deciding when to make predictions (selective networks). LSTMs have been used successfully in a wide range of applications, from language to transportation to healthcare to business process management.
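
As a minimal sketch in PyTorch, with illustrative dimensions and a made-up single-value prediction head rather than any production architecture, an LSTM over sequences of embedded banking events looks like this:

    import torch
    import torch.nn as nn

    class EventLSTM(nn.Module):
        """Minimal LSTM over event sequences; sizes are illustrative."""
        def __init__(self, n_features: int = 8, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)   # e.g. next-event amount

        def forward(self, x):            # x: (batch, seq_len, n_features)
            out, _ = self.lstm(x)        # hidden state at every time step
            return self.head(out[:, -1])   # predict from the final step

    model = EventLSTM()
    x = torch.randn(32, 50, 8)           # 32 clients, 50 events each
    pred = model(x)                      # shape (32, 1)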

Because LSTMs are especially suited to incorporating prior context when predicting future outcomes, they have proven successful in other long-range sequence modelling tasks, like speech recognition, translation, sentiment analysis, and transcription. For example, Google Translate has used LSTMs. Like banking, with its specific yet irregular time-based contexts, language is a complex “context” challenge in machine learning. LSTMs have also been applied to various parts of autonomous vehicle development.

Our team at RBC Borealis has made novel research contributions to the literature on temporal data analysis that support the work we are doing in personal banking. We developed methods for modelling uncertainty in irregular time series data using point processes, which capture when activities are likely to occur. These methods are built in a variational auto-encoder framework and use latent representations and non-linear functions to parametrize distributions over which event is likely to occur next in a sequence, and at what time (Mehrasa et al., CVPR 2019).
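
The sketch below illustrates only the core idea; the actual architecture in Mehrasa et al. (CVPR 2019) is richer. A latent variable, sampled via the usual reparametrization trick, parametrizes a distribution over the next event’s type and a rate governing its timing:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NextEventVAE(nn.Module):
        """Toy variational model for the next event's type and timing."""
        def __init__(self, hist_dim=64, z_dim=16, n_types=10):
            super().__init__()
            self.enc = nn.Linear(hist_dim, 2 * z_dim)   # -> mean, log-var
            self.dec_type = nn.Linear(z_dim, n_types)   # event-type logits
            self.dec_time = nn.Linear(z_dim, 1)         # timing parameter

        def forward(self, h):             # h: encoding of history so far
            mu, logvar = self.enc(h).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            type_logits = self.dec_type(z)       # which event comes next?
            rate = F.softplus(self.dec_time(z))  # when is it likely?
            return type_logits, rate, mu, logvar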

The temporal events we are modelling can be quite complex: different people can have different patterns of financial behaviour. The probability distributions that we use to model these uncertainties need to be able to capture this variety. We have built state-of-the-art methods for capturing this variability based on normalizing flows that deform a base stochastic process for time series (Deng et al., NeurIPS 2020).
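
The snippet below shows only the basic mechanics of a normalizing flow: one invertible affine layer deforming samples from a base distribution while tracking the change-of-variables term. The time-indexed, dynamic flows of Deng et al. (NeurIPS 2020) are considerably richer:

    import torch
    import torch.nn as nn

    class AffineFlow(nn.Module):
        """Single invertible affine layer: y = exp(s) * x + b."""
        def __init__(self, dim=1):
            super().__init__()
            self.log_scale = nn.Parameter(torch.zeros(dim))
            self.shift = nn.Parameter(torch.zeros(dim))

        def forward(self, x):               # deform base samples x -> y
            y = x * self.log_scale.exp() + self.shift
            log_det = self.log_scale.sum()  # change-of-variables correction
            return y, log_det

    base = torch.randn(128, 1)              # samples from a base process
    y, log_det = AffineFlow()(base)         # deformed samples + log-det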

Finally, each client has their own pattern of behaviour, and adapting to it can allow our models to provide personalized advice and service. Our work on learning user representations (Durand, CVPR 2020) contributes a model that extracts a representation of a user from their historical data. Our model allows us to incrementally improve a user representation from new data without retraining the model, an important benefit for scalability.
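
The toy update below illustrates why this scales: new events fold into one stored vector, so no model weights change. A simple running mean stands in here for the learned update in Durand (CVPR 2020):

    import numpy as np

    def update_user_vector(user_vec, n_events, event_embedding):
        """Fold one new event into a user's representation (running mean)."""
        new_vec = (user_vec * n_events + event_embedding) / (n_events + 1)
        return new_vec, n_events + 1

    user_vec, n = np.zeros(64), 0
    for e in np.random.randn(5, 64):   # five new events arrive over time
        user_vec, n = update_user_vector(user_vec, n, e)   # no retraining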

We’re using methods such as these because they allow us to perform time series modelling over the vast banking data available at RBC and make good predictions: we use the model to predict future patterns and, from those, to suggest future actions.

Meanwhile, machine learning research doesn’t stand still. Transformers, another family of machine learning models, have shown strong accuracy on long-range sequence modelling tasks of this type. Transformers are another way we could solve this problem, and we’re actively exploring how well they apply here to help us reach our goals.
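
A minimal sketch of that alternative, again with illustrative sizes: in a Transformer encoder, every event can attend directly to every other event rather than passing information step by step as an LSTM does, which is one reason Transformers handle long-range dependencies well:

    import torch
    import torch.nn as nn

    # Self-attention encoder over a sequence of embedded banking events.
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)

    x = torch.randn(32, 200, 64)   # 32 clients, 200 embedded events each
    h = encoder(x)                 # contextualized event representations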

Image - AIPB_2.png

Accounting for bias in model development

Like most machine learning tasks, this work must account for potential biases. For example, we consider how variables like our clients’ gender may impact a prediction or suggestion. We also consider judgements we might make about what counts as discretionary versus non-discretionary spending. These are tied to values, and machines shouldn’t be put in a position of judgement.

At RBC we’re mitigating risks such as bias by using appropriate features in our models, gathering feedback from our users, deploying automated analysis to test for unintended behaviour, and employing thorough validation processes in line with regulatory requirements. You can learn more about our validation approach at Model Validation: a vital tool for building trust in AI.

(Deng et al., NeurIPS 2020) R. Deng, B. Chang, M. Brubaker, G. Mori, A. Lehrmann. Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows. Neural Information Processing Systems (NeurIPS), 2020

(Durand, CVPR 2020) T. Durand. Learning User Representations for Open Vocabulary Image Hashtag Prediction. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

(Mehrasa et al., CVPR 2019) N. Mehrasa, A. Jyothi, T. Durand, J. He, L. Sigal, G. Mori. A Variational Auto-Encoder Model for Stochastic Point Processes. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019