
Cambridge Dictionary reveals word of the year for 2023

Cambridge Dictionary’s word of the year has a different meaning thanks to AI

Sam Russell
Wednesday 15 November 2023 03:26 EST


Cambridge Dictionary has revealed that its word of the year for 2023 is ‘hallucinate’, after the term gained an additional definition relating to artificial intelligence (AI) producing false information.

AI hallucinations, also known as confabulations, sometimes appear nonsensical but can also seem entirely plausible, even while being factually inaccurate or ultimately illogical.

The traditional definition of ‘hallucinate’ is to seem to see, hear, feel or smell something that does not exist, usually because of a health condition or because you have taken a drug.

The new additional definition in the Cambridge Dictionary is: “When an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human) hallucinates, it produces false information.”

It follows a year-long surge in interest in generative AI tools such as ChatGPT, with public attention shifting towards the limitations of AI and whether they can be overcome.

AI tools, especially those using large language models (LLMs), have proven capable of generating plausible prose, but often do so using false, misleading or made-up ‘facts’.

They ‘hallucinate’ in a confident and sometimes believable manner.

Wendalyn Nichols, Cambridge Dictionary’s publishing manager, said: “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools.

“AIs are fantastic at churning through huge amounts of data to extract specific information and consolidate it.

“But the more original you ask them to be, the likelier they are to go astray.

“At their best, large language models can only be as reliable as their training data.

“Human expertise is arguably more important – and sought after – than ever, to create the authoritative and up-to-date information that LLMs can be trained on.”

AI hallucinations have already had real-world impacts.

A US law firm used ChatGPT for legal research, which led to fictitious cases being cited in court.

And in Google’s own promotional video for its chatbot Bard, the AI tool made a factual error about the James Webb Space Telescope.

The new definition illustrates a growing tendency to anthropomorphise AI technology, using human-like metaphors as we speak, write and think about machines.

Dr Henry Shevlin, an AI ethicist at Cambridge University, said: “The widespread use of the term ‘hallucinate’ to refer to mistakes by systems like ChatGPT provides a fascinating snapshot of how we’re thinking about and anthropomorphising AI.

“Inaccurate or misleading information has long been with us, of course, whether in the form of rumours, propaganda or ‘fake news’.

“Whereas these are normally thought of as human products, ‘hallucinate’ is an evocative verb implying an agent experiencing a disconnect from reality.

“This linguistic choice reflects a subtle yet profound shift in perception: the AI, not the user, is the one ‘hallucinating.’

“While this doesn’t suggest a widespread belief in AI sentience, it underscores our readiness to ascribe human-like attributes to AI.

“As this decade progresses, I expect our psychological vocabulary will be further extended to encompass the strange abilities of the new intelligences we’re creating.”
