First international guideline on AI safety published by UK standards body

The British Standards Institution has produced advice on how organisations can develop safe and responsible artificial intelligence products.

Martyn Landi
Tuesday 16 January 2024 06:06 EST
The British Standards Institution has drawn up guidance on how to safely handle AI (Yui Mok/PA)


A first-of-its-kind international standard on how to safely manage artificial intelligence (AI) has been published by the UK’s national standards body.

The guidance sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards.

It has been published by the British Standards Institution (BSI) and offers direction on how businesses can responsibly develop and deploy AI tools both internally and externally.

It comes amid ongoing debate about the need to regulate the fast-moving technology, which has become increasingly prominent over the last year thanks to the public release of generative AI tools such as ChatGPT.


The UK held the first global AI Safety Summit last November, where world leaders and major tech firms from around the world met to discuss the safe and responsible development of AI, as well as the potential long-term threats the technology could pose.

Those threats included AI being used to create malware for cyber attacks and even being a potentially existential threat to humanity, if humans were to lose control of the technology.

Susan Taylor Martin, chief executive of BSI, said of the new international standard: “AI is a transformational technology. For it to be a powerful force for good, trust is critical.

“The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology which, in turn, offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world.

“BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

The guidance includes requirements to create context-based risk assessments, as well as additional controls for both internal and external AI products and services.

Scott Steedman, director general for standards at BSI, said: “AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework.

“While government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them.

“In this fast-moving space, BSI is pleased to announce publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.

“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI.

“Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy.

“The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”
