March 26, 2025

Regulating AI in the UK

Jonathan Berry helps Parliament find balance between innovation and safety

By Kelly Rembold

Jonathan Berry was astonished the day he received a call from the chief whip in the United Kingdom’s House of Lords. The prime minister needed a minister for the newly formed Department for Science, Innovation and Technology, and the whip was calling to offer Jonathan the job.

He accepted, and was appointed minister for artificial intelligence and intellectual property in March 2023.

“I was very surprised, but obviously very honored and delighted to do it,” Jonathan says.

Jonathan is known as the Fifth Viscount Camrose in the U.K., a title of nobility inherited from his father. The title allowed him to join the House of Lords — the upper chamber of the U.K.’s Parliament — in 2022, where he and his fellow peers make laws, investigate public policy and question government action.

“It's very detailed work. It can be very emotive,” he says. “People in the House of Lords tend to focus on a given number of areas. I've always taken an interest in the impact of technology, so I’ve focused very much on tech since I've been in there.”

Jonathan is well qualified to examine technology-related legislation. He earned a master’s degree in industrial administration in 2000 from the Tepper School of Business, where he learned how to use analytical techniques to solve complex problems.

“There was a lot of focus at the time on operations research and, which I think was very powerful for me, seeing how algorithms and data could produce optimal answers,” he says. “I became very comfortable with understanding, in general, the application of logic to data that those classes taught. That's always stayed with me completely and I'm very grateful.”

Jonathan also has more than 20 years of experience in organizational change management and tech transfer and adoption, including leadership roles at Pfizer, BP, Shell and Expressworks International.

“I was always interested in the problem of tech transfer. That is, taking the new possibilities inherent in a particular kind of technology and getting them to a point where somebody could gain value from them,” he says. “I'm much more interested in that than I am in the workings of the technology itself. It's the ‘So what? What are we supposed to do with this?’ That is, to me, a much more interesting question than ‘How does this thing work?’”

In his ministerial role, he was tasked with answering a different question: How do we regulate this? Specifically, how do we regulate AI?

To start, Jonathan and his team launched the world’s first governmental AI safety institute in November 2023. It brought together specialists from around the world to facilitate collaboration and improve global understanding of the capabilities, safeguards and societal impact of advanced AI systems.

They also examined how other governments were regulating AI.

“We looked at other areas, other jurisdictions like Europe, that we felt were developing very prescriptive approaches to regulating AI that would not work because of the speed of tech development,” Jonathan says. “So we were very keen to develop a flexible, adaptive approach in the U.K.”

They created a principles-based framework that existing regulators can use to drive safe, responsible AI innovation in their respective sectors. Like Jonathan’s earlier work, it focuses on how new AI technologies will be used, rather than the specifics of how those technologies work.

“The trick with AI is, of course, that it works in finance, it works in healthcare, it works in the military and in all of these totally different areas,” he says. “You need whoever regulates those initial areas to remain in charge of regulating those things. And you need maybe a little bit of coordination at the center. So that was our approach to the whole thing.”

When his political party lost the general election in July 2024, Jonathan became a shadow minister. Shadow ministers take the lead on specific policy areas for their party and question and challenge their counterpart in the prime minister’s cabinet.

“We are called the official opposition,” he says. “I have a portfolio for AI and other aspects of science and tech, and I question the minister in Parliament. I used to have to answer the questions, now I get to ask them.”

Although he is no longer in charge of regulatory activities, he believes the U.K.’s new administration supports the framework that he and his team developed.

“I think it is standing the test of time,” Jonathan says. “We'll see where the current government wants to take it now. But I have the impression they are very much on the same page.”

Either way, he’s happy he had a chance to tackle the tough questions.

“Unlike other areas that governments have to work on, the spread between the best outcome and the worst outcome — as we used to describe it, the utopia-dystopia spread — is enormous,” Jonathan says. “AI, for instance, can lead to more peaceful, longer, happier, wiser lives for all humans, or it can lead to absolute chaos. And the same is true for almost all aspects of science and tech. So it's a fascinating area to work on and move between all of these competing outcomes.”