In this post, we explore the concept of ‘model validation’ with one of the world’s leading academics on the topic – Dr. Sander Klous, Professor of Big Data Ecosystems at the University of Amsterdam and Partner in Charge of D&A at KPMG in the Netherlands.
The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or RBC Borealis.
Why should the AI community be focused on model validation?
Sander Klous (SK):
It’s very tempting to run off and create all sorts of futuristic solutions using AI and Machine Learning. But without proper model validation and governance processes, creativity can very quickly turn into risk.
For example, I have seen healthcare organizations launch algorithms that mysteriously stripped people of their healthcare allowances; suddenly, patients had to jump through hoops to demonstrate they were eligible, effectively reversing the burden of proof. There are also examples of fraud detection algorithms at banks that generated too many false positives; fraud departments quickly became overwhelmed, without the means to address the problem, and it created frustration for customers.
In both cases, these unexpected outcomes should have been uncovered during the model validation process. Especially with new technological developments like these, where trustworthiness is still a fragile concept, positive experiences are the key to success.
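To see why too many false positives can overwhelm a fraud department so quickly, it helps to run the numbers. The short Python sketch below uses purely illustrative figures (none come from the interview): even a model with a seemingly modest 2 percent false-positive rate buries investigators in false alarms when genuine fraud is rare – exactly the kind of outcome a validation process should surface before launch.

```python
# Purely illustrative numbers: why a seemingly accurate fraud model can
# overwhelm a fraud department when fraud itself is rare.

n_transactions = 1_000_000   # assumed daily transaction volume
fraud_rate = 0.001           # assumed: 0.1% of transactions are fraudulent
recall = 0.90                # assumed: model catches 90% of true fraud
false_positive_rate = 0.02   # assumed: model flags 2% of legitimate transactions

frauds = n_transactions * fraud_rate                # 1,000 real fraud cases
legitimate = n_transactions - frauds                # 999,000 legitimate transactions

true_positives = recall * frauds                    # 900 frauds caught
false_positives = false_positive_rate * legitimate  # 19,980 false alarms

precision = true_positives / (true_positives + false_positives)

print(f"Alerts per day:   {true_positives + false_positives:,.0f}")  # 20,880
print(f"Real fraud cases: {true_positives:,.0f}")                    # 900
print(f"Precision:        {precision:.1%}")                          # 4.3%
```

With these assumed numbers, only about one alert in twenty-three is genuine fraud, even though the model catches 90 percent of it. Validating the model's headline accuracy alone would miss this; validating the operational consequences would not.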
Are you worried about what a fully autonomous AI might do if left unchecked?
SK:
Actually, quite the opposite. I’m not worried about AI becoming too smart. I’m worried about AI being too stupid right now. We tend to think AI can do a lot, but often it’s not as smart as we give it credit for. It does not magically solve any issues for you. It requires thorough processes, robust validation, governance, controls and risk frameworks – amongst other things – to ensure we remove the ‘stupidity’ from these models. That’s not a future risk; it is something that needs to be addressed right now.
Who should be responsible for model validation?
SK:
There are actually three lines of defense that normally come into play when we talk about model validation and risk management. The first line is the developers and designers themselves; they are the ones who need to follow the controls and consider the implications at the design level. The second line of defense is the risk function, which needs to develop those controls and drive adherence to them. There is often a third line of defense as well, served by an independent validator; these validators may be internal to the company or external advisors, depending on the circumstances. All three lines of defense need to work together to ensure proper model validation.
Why is model validation so difficult with AI?
SK:
There are two big challenges. The first is a general lack of global standards around AI model validation. We see lots of different standards bodies working to come up with practical frameworks. But nothing is really mature yet. So it is very difficult for organizations to assess what ‘good’ looks like and then have that validated in the same way they would a financial statement, for example.
The other challenge comes down to process. Traditionally, the three lines of defense work in a waterfall approach – design, followed by risk validation, followed by periodic independent auditing. But AI isn’t developed using a waterfall approach, and that makes it increasingly difficult to maintain separation of duties across the three lines of defense.
Let’s unpack the standards issue for a few minutes. How are organizations staying on top of model validation given the lack of mature global standards?
SK:
That is certainly an ongoing problem and one that will take some time to resolve. As we saw with similar regulation – GDPR in Europe, for example – it takes a lot of case law and a lot of collaboration to arrive at a set of global standards. That can take years.
In the meantime, most organizations are creating their own set of validation standards and controls, largely based on industry good practices and evolving current standards. The problem is that the environment is continuously evolving and – until we have a set of global standards that can be audited – ‘good’ will continue to be a moving target.
Some organizations are creating their own ecosystems by collaborating with third parties and industry peers in common areas where they can all benefit. For example, manufacturing companies may take a combined approach to validating specific parts of the process that other manufacturers also use. This means there could be standard validation for key aspects, but not one overall industry approach that everyone adheres to.
And what about the process issue? How are the second and third lines of defense evolving to meet the challenge?
SK:
To be successful at rapidly adopting AI solutions, the second and third lines of defense need to reinvent themselves. Risk managers and oversight professionals are starting to rethink their approach to model validation in an agile environment. Unfortunately for them, this may result in a reduction in efficiency as validation processes are run and re-run as the models evolve. But this is not a bad thing; risk managers tell me they understand the trade-off between their own efficiency and that of the business. Some would be willing to see their efficiency cut in half just to deliver a 10 percent efficiency boost to their data scientists.
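To make that run-and-re-run idea concrete, here is a minimal Python sketch of validation as a repeatable gate rather than a one-off waterfall step: the same battery of checks re-executes for every candidate model, and the recorded results double as audit evidence for the third line of defense. All names, checks and thresholds are illustrative assumptions, not a reference to any particular framework.

```python
# A hypothetical sketch of validation as a repeatable gate: the same checks
# re-run every time the model changes, with results kept as audit evidence.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def run_validation(model, X_holdout, y_holdout,
                   min_auc=0.80, max_flag_rate=0.20) -> list[CheckResult]:
    """Second-line checks, re-run for every candidate model version."""
    auc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
    flag_rate = model.predict(X_holdout).mean()  # fraction of cases flagged
    return [
        CheckResult("holdout AUC", auc >= min_auc, f"auc={auc:.3f}"),
        CheckResult("alert volume", flag_rate <= max_flag_rate,
                    f"flag_rate={flag_rate:.1%}"),
    ]

# Synthetic stand-in for a real training pipeline.
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for check in run_validation(model, X_holdout, y_holdout):
    status = "PASS" if check.passed else "FAIL"
    print(f"[{status}] {check.name}: {check.detail}")
```

A failing check would block deployment. In practice, a gate like this runs automatically on every retrain, which is exactly why the risk function’s workload scales with the pace of model iteration rather than with a periodic audit calendar.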
I think we are also starting to see an interesting evolution in the accounting and auditing professions around this issue. KPMG firms have been working with a range of clients to help develop their own internal standards and controls. The experience gained in this work forms the foundation of our ‘AI in Control’ framework, which helps organizations build and evaluate sound AI models, driving better adoption, confidence and compliance. I believe that eventually – once there is a set of global standards – the auditing profession will play an essential role in providing the same type of independent validation it already delivers on financial statements.
What can business leaders and AI developers do to help drive this issue forward?
SK:
I think we all really need to keep challenging each other. You can’t just accept models at face value; you need to stay sharp and have rock-solid processes and frameworks for model validation. This is all new territory, and we don’t really know what the ultimate standards and frameworks will look like. That means it requires more thought and more caution than areas where the roadmap has already been drawn.
I would argue that the greatest challenge is doing all of that while still encouraging the type of creativity, innovation and problem solving that drew you to consider an AI solution in the first place. Balancing that need for creativity against the controls of model validation can be extremely difficult.
About Sander Klous
Sander is a Professor of Big Data Ecosystems for Business and Society at the University of Amsterdam and D&A Leader for KPMG in the Netherlands. He has a PhD in high-energy physics and worked for over a decade on a number of projects for CERN in Geneva, the world’s largest particle physics laboratory. His best-selling book, ‘We are Big Data’, was runner-up for the management book of the year award in 2015. His new book, ‘Trust in a Smart Society’, is a top-selling management book in the Netherlands.