ChatGPT is being used to disrupt elections around the world, OpenAI warns

Hacking groups affiliated with regimes in China, Iran and Russia named as suspects in a 54-page report

Anthony Cuthbertson
Thursday 10 October 2024 06:40 EDT
The logo of OpenAI’s AI chatbot ChatGPT shown on a smartphone in Mulhouse, eastern France, on 30 October 2023 (Getty Images)

OpenAI has warned that foreign hacking groups are using its AI tools ChatGPT and DALL-E in an attempt to interfere with elections.

A 54-page report revealed that the company has detected more than 20 such campaigns around the world since the start of the year, with more expected in the build-up to the US presidential election next month.

Manipulative activities involving ChatGPT ranged from writing articles for websites to generating fake personas and posting content on social media, and included “multi-stage efforts to analyse and reply to social media posts”.

This year has been touted as the biggest ever demonstration of democracy, with more than 50 countries heading to the polls. The recent emergence of generative artificial intelligence has raised concerns that the technology could be misused to influence elections, prompting several major firms to take special measures to prevent interference.

Last year, OpenAI chief executive Sam Altman said he was “nervous” about the threat generative AI poses to election integrity, testifying before Congress that it could be used to spread disinformation in ways never before possible.

“In this year of global elections, we know it is particularly important to build robust, multi-layered defences against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns on social media and other internet platforms,” OpenAI’s latest report stated.

“Since the beginning of the year, we’ve disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models.”

OpenAI named hacking groups affiliated with regimes in China, Iran and Russia as suspects in some of the interference operations.

Several case studies were detailed in the report, with examples including a “Russia-origin threat actor” generating English- and French-language content targeting West Africa and the UK.

“This operation used our models to generate short comments, long-form articles and images. The long-form articles in English and French were then posted on a cluster of websites that posed as news outlets in Africa and the UK,” the report stated.

“This operation represented an unusual combination of efforts to build an audience... The UK-focused ‘news’ brands appear to have established ‘information partnerships’ with a number of local organisations, including a church in Yorkshire, a school in Wales, and an association of chambers of commerce in California.”

The specific organisations were not named in the report – The Independent has reached out to OpenAI for further information.
