The rapid democratization of AI has left many organizations seeking guidance on how to move ahead responsibly. In this article, we talk with Abhishek Gupta, Founder and Principal Researcher at the Montreal AI Ethics Institute, about how organizations can start to realign their governance models to manage risk and unlock growth with responsible AI.
The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or RBC Borealis.
Are you surprised by the pace at which AI is being developed and adopted?
I don’t think we’ve ever seen as much excitement around a piece of technology as we have with LLMs and ChatGPT in particular. And I think some of the hype is certainly warranted. In many ways, they have democratized AI. In the past, if you wanted to build an AI-enabled system you really needed access to a team of technical people, data and computing power. Now all you need is a browser and a creative prompt.
Just consider the rate of adoption of ChatGPT – a million users in five days and 100 million in just two months. Now consider what ChatGPT is: a single-player experience where users type questions into a text box. It’s not some amazingly immersive visual experience. There is no ‘network effect’ like social media. Two years ago, when everyone was talking about the Metaverse, single-player text activities seemed archaic. Now they are the hottest thing.
What are some of the AI risks you worry about?
There are so many risks that one could worry about with AI. Like others in the field, I’m concerned about key risks like bias, privacy, explainability, hallucinations and cyber security. And new nuances are emerging all the time. Research out of the University of Toronto suggests that very large language models have a strong capacity for memorization, which adds a completely new lens to the idea of privacy, for example.
Should we be shunning generative AI until we understand it better?
Not at all. I think, as individuals, we should embrace it, experiment with it and learn about it. The more you interact with it, the more you understand it. It almost requires us as humans to do our own reinforcement learning as we try to figure out what works and what doesn’t.
One of the amazing things we have seen with the explosion of LLMs is that people are self-educating and learning from each other to understand how these models work. And they are doing it at an amazing speed. As we think about how the workplace of the future is evolving and how roles and capabilities are changing, I think this experience shows how this can happen in a very short span of time.
Are traditional governance models robust enough to control the risks?
I don’t think so. The challenge is that these new large language models are incredibly accessible. You can imagine an HR employee experimenting with an LLM to review and prioritize job applications, or someone in accounting using one to generate a finance report. Anyone with a web browser can use them. What that means is that top-down governance models are no longer effective. You simply don’t have line of sight into what is happening across the organization anymore.
What we are advising people to do is to flip the model and adopt a more bottom-up approach. But that requires a cultural transformation, in the sense that governance becomes a shared responsibility as individuals interact with and use these systems. Policies and guidelines are certainly important in helping the business understand what is expected (and what enforcement activities back those expectations up). But they must be well understood at every level and in every part of the organization, not just in the risk and AI functions.
Are businesses receptive to that approach?
I think they are. In the past, people tended to think of Responsible AI professionals as party poopers. They thought we wanted to control everything and stamp down innovation. But I think businesses increasingly see Responsible AI as being a growth and value enabler.
Business leaders recognize that a responsible approach to AI gives people the confidence to experiment safely within clear and appropriate guardrails. It gives them the freedom to try out a bunch of things without constantly looking over their shoulder to see if they did anything wrong. It generates a faster pace of innovation. And ultimately, it builds customer and employee trust, which is incredibly important these days.
Canada has often been in the lead on the Responsible AI debate. Can we continue to lead?
I think we have a great approach here in Canada. There is certainly a lot of debate about whether the Artificial Intelligence and Data Act, part of Bill C-27, is the right approach. Everyone has a point of view.
Personally, I believe that guidelines and regulation must operate at the industry-domain level to be meaningful. Principles like ‘do no harm’ are great, but what do they actually mean operationally? I think we need to very quickly get to the point where we can start articulating guidelines around specific industry use cases. That’s where guidelines and regulations become truly applicable and actionable.
About Abhishek Gupta
Abhishek Gupta is the Founder & Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. He is an alumnus of the US State Department International Visitors Leadership Program, representing Canada, and received The Gradient Writing Prize 2021 for his work “The Imperative for Sustainable AI Systems”.
Advancing Responsible AI
Responsible AI is key to the future of AI. We have launched the RESPECT AI hub to share knowledge, algorithms, and tooling that help advance responsible AI.
Visit RESPECT AI hub