Strong UK regulation of AI will protect the public, MPs told

Representatives from BT, Google and Microsoft have appeared before the Science and Technology Select Committee looking into AI regulation.

Martyn Landi
Wednesday 22 February 2023 07:44 EST
Representatives from BT, Google and Microsoft appeared before the Science and Technology Select Committee to discuss AI regulation (Dominic Lipinski/PA)


Strong regulation of artificial intelligence use in the UK would help stop malicious uses of the technology from reaching or harming the public, MPs have been told.

Representatives from BT, Google and Microsoft told the Science and Technology Select Committee that AI regulation should focus on how the technology is used and not the tech itself in order to be most effective.

The appearance of the tech giants before the committee, which is investigating the governance of AI, comes as generative AI tools – chatbots able to understand questions and hold human-like conversations – developed by Google and Microsoft have grabbed the public’s attention and raised concerns about the technology’s future.

Microsoft is integrating OpenAI’s ChatGPT software into its Bing search engine, while Google has announced the development of its own chatbot – Bard – which will use Google Search to answer queries and respond to requests.

As we regulate AI, we need to make sure that we are thinking hard about the regulation of uses of AI rather than the AI itself

Hugh Milward, Microsoft UK

When asked by MPs if the tech giants would be concerned if AI products were developed with malicious intent by countries such as China, Hugh Milward, general manager for corporate, external and legal affairs at Microsoft UK, admitted the company would be “worried” by such a development, but said strong regulation in the UK would help protect the public here.

“We can’t hold back the development of AI that has been developed in other countries,” he said.

“As we regulate AI, we need to make sure that we are thinking hard about the regulation of uses of AI rather than the AI itself.

“Because if we do that, then irrespective of where it is developed – if it’s developed in a regime that we don’t necessarily agree with – if we regulate its use, then that AI has to, when used in the UK, abide by a set of principles.

“We can regulate how it’s used in the UK.

“So it allows us then to worry less about where it’s developed, and worry more about how it’s being used.”

When discussing uses of the technology, Conservative MP Aaron Bell, who chaired the session, said that he believes civil servants have been asking if they are allowed to use software such as ChatGPT to “help produce briefing materials”.

He said he believed staff at the new Department for Science, Innovation and Technology had made “a number of requests” as to whether they could use the program in their work.

In response, Mr Milward said he thought AI could “without a doubt” be helpful in Government because it can be programmed to spot anomalies or patterns in huge collections of data more quickly.

On the ongoing public testing of the ChatGPT-powered Bing and Google’s Bard, the two companies said they understood some of the concerns raised around the rise of the technology, but that allowing people to use the software was a key way for them to learn as they continue to build and develop it.

Both companies noted that they also had internal policies around testing in order to reduce bias and other issues in the development of AI products.

Concerns have been raised about the possible impact of the technology on a range of writing-based professions, as well as its possible use by students to produce essays.

Concerns have also been raised about chatbots spreading misinformation if they are unable to judge the authenticity of sources.

Asked by MPs if they were worried about the future and the potential power of AI to move beyond the control of humans, Mr Milward said he was not, as AI is a “co-pilot, not an autopilot” and humans had control over how it was used.

Jen Gennai, Google’s director of responsible innovation, said that while it was “my job to be worried”, she was “excited about the technology as long as there are guardrails” such as strong regulation.
