Hackers use ChatGPT to target feminists, researchers reveal
Iranian hackers used artificial intelligence tools to ‘lure prominent feminists’
Hackers in Iran have used the popular AI tool ChatGPT to launch cyber attacks against feminists, researchers have revealed.
It is one of several incidents of state-backed actors using the technology in hacking campaigns, with the app’s creator OpenAI also naming groups linked to China, North Korea and Russia.
A report published on Wednesday said hackers were honing their skills and tricking their targets by using generative artificial intelligence tools like ChatGPT, which draw on massive amounts of text to generate human-sounding responses.
The Iranian hacking group Crimson Sandstorm used the technology in an attempt to “lure prominent feminists” to an attacker-built website, according to the report published by researchers at Microsoft, which is one of OpenAI’s biggest backers.
Microsoft and OpenAI said they were implementing a blanket ban on state-backed hacking groups using their AI products.
“Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – we don’t want them to have access to this technology,” Microsoft Vice President for Customer Security Tom Burt told Reuters in an interview ahead of the report’s release.
Russian, North Korean and Iranian diplomatic officials didn’t immediately return messages seeking comment on the allegations.
China’s US embassy spokesperson Liu Pengyu said it opposed “groundless smears and accusations against China” and advocated for the “safe, reliable and controllable” deployment of AI technology to “enhance the common well-being of all mankind.”
The allegation that state-backed hackers have been caught using AI tools to help boost their spying capabilities is likely to underline concerns about the rapid proliferation of the technology and its potential for abuse. Senior cybersecurity officials in the West have been warning since last year that rogue actors were abusing such tools, although specifics have, until now, been thin on the ground.
“This is one of the first, if not the first, instances of an AI company coming out and discussing publicly how cybersecurity threat actors use AI technologies,” said Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI.
OpenAI and Microsoft described the hackers’ use of their AI tools as “early-stage” and “incremental.” Mr Burt said neither had seen cyber spies make any breakthroughs.
“We really saw them just using this technology like any other user,” he said.
The report described the hacking groups as using the large language models in different ways.
Hackers alleged to be working on behalf of Russia’s military spy agency, widely known as the GRU, used the models to research “various satellite and radar technologies that may pertain to conventional military operations in Ukraine,” Microsoft said.
Microsoft said North Korean hackers used the models to generate content “that would likely be for use in spear-phishing campaigns” against regional experts. Iranian hackers also leaned on the models to write more convincing emails, Microsoft said, at one point using them to draft a message attempting to lure “prominent feminists” to a booby-trapped website.
The software giant said Chinese state-backed hackers were also experimenting with large language models, for example to ask questions about rival intelligence agencies, cybersecurity issues, and “notable individuals.”
OpenAI said it would continue to work to improve its safety measures, though it conceded that hackers would still likely find ways to use its tools.
“As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits,” the company said.
“Although we work to minimize potential misuse by such actors, we will not be able to stop every instance.”
Additional reporting from agencies.