In this post, we hear Gillian Hadfield’s views on the need for new regulation and new regulatory approaches for machine learning. Gillian is the director of the Schwartz Reisman Institute for Technology and Society; professor of law and of strategic management at the University of Toronto; a faculty affiliate at the Vector Institute for Artificial Intelligence; and a senior policy advisor at OpenAI.

The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or RBC Borealis. 

Why is machine learning difficult to regulate?

Gillian Hadfield (GH)

I believe many of the governance challenges related to machine learning fit under the umbrella of the ‘alignment problem’ – simply put, is the machine doing what humans want it to do? When you program a computer, a human tells the machine exactly what actions to take. But when you train a computer, it’s the machine itself that figures out how to achieve the objective. And sometimes we won’t know exactly how it does that.
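To make that programming-versus-training distinction concrete, here is a minimal sketch in Python; the loan scenario, the numbers and the use of scikit-learn are illustrative assumptions on our part, not anything Gillian specifies.

```python
from sklearn.tree import DecisionTreeClassifier

# Programming: a human writes the decision rule explicitly, so its
# behaviour is fully inspectable.
def approve_loan(income: float, debt: float) -> bool:
    return income > 3 * debt

# Training: the machine derives its own rule from example outcomes.
X = [[60_000, 10_000], [30_000, 25_000], [90_000, 5_000], [20_000, 15_000]]
y = [1, 0, 1, 0]  # hypothetical past approve/deny decisions

model = DecisionTreeClassifier().fit(X, y)

# The learned rule was chosen by the training procedure, not written down
# by a person, so we may not know exactly how it reaches a given answer.
print(model.predict([[45_000, 12_000]]))
```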

The other challenge at play here is that most machine learning technologies and models are being developed by private companies for commercial purposes. That means much of the transparency and data that could help regulators manage machine learning ends up locked away in a box labelled ‘trade secrets’. Add to that the fact that government often doesn’t have the technical capacity or the compute to replicate what industry has built, and, all told, it can be really challenging for regulators to understand and manage these systems.

Can current regulatory approaches keep up with the pace of technological change?

GH

No. I do not believe they can. Our existing approach of getting a bunch of smart people to write down a set of rules and then using regulatory agencies or courts to police them worked well when we were regulating factories, railroads and mines. But this is a very different world: massive globalisation, increased levels of complexity, tremendous use of technology and, critically, much faster rates of innovation have all accelerated the pace of change. Our traditional approaches are much too slow and nowhere near agile enough to meet that challenge.

The introduction of AI widens that gap tremendously. What I believe we need, therefore, isn’t new regulation but rather a new approach to regulating.

What might a new approach look like? 

GH

I think we need to harness markets to drive the pace of innovation and investment in regulatory approaches. The reality is that it’s going to take a lot of innovation to build and update the approaches and systems needed to validate AI models. And that requires a lot of investment and agility, something private markets are particularly good at generating. What I think we need to do, therefore, is connect that to political accountability in a way that creates and encourages regulatory markets.

I would argue that we need a system of licensed private regulators who are held to an established set of expected outcomes that have been defined with political oversight. Basically, we need to get our governments into the business of regulating regulators instead of regulating technologies.

Why should it be market-driven?

GH

Let’s face it: a more agile approach is required. But if we want the market to come up with better systems and drive continuous innovation, we need to incentivise it to do that. And allowing competition and the opportunity for profitability will attract the investment and the brains needed to build the underlying technologies and systems.

We need competition between regulators to ensure they stay at the leading edge of innovation. We need start-ups coming up with new ideas, seed money to help them develop, and commercial markets eager to purchase those solutions. We need a regulatory approach that can keep pace with the rate of change in the private markets. 

Yet all of that needs to happen in a way that is politically responsive and legitimate. Agility paired with accountability. That’s the challenge.

What can companies be doing to advocate for a new approach to regulation?

GH

Right now, most companies are thinking about this in terms of their traditional policy strategy – they are looking for opportunities to help draft or influence new regulation and legislation on the governance of AI. Instead, we need to change the conversation to focus on what types of regulatory approaches will meet the needs of politicians, citizens, consumers and businesses in a world characterised by rapid innovation and change.

The way things stand currently, it’s up to individual businesses to decide how they go about governing their AI. That’s a lot of unnecessary risk for businesses to take on. I’m actually surprised that more of them aren’t lobbying their governments for public law frameworks that would help eliminate that risk.

What should governments be doing? Who is going to drive this fundamental change in regulatory approaches?

GH

It really starts with being willing to think differently about the challenge. In my role with the University of Toronto’s Schwartz Reisman Institute for Technology and Society, I have been working with the Rockefeller Foundation on a program to drive innovation in AI governance. Ultimately, we hope to identify, design and set the stage for the launch of a handful of concrete projects that prototype effective global governance frameworks for AI. And we’re looking for bold governments interested in working with us to design and implement those projects.

I believe we are going to be seeing a lot more innovation in regulatory markets very soon. 


About Gillian K. Hadfield

Gillian Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.