
Generative AI ‘helping criminals create more sophisticated cyber attacks’

The UK’s National Cyber Security Centre has also highlighted the use of AI to create and spread disinformation as a key threat.

Martyn Landi
Wednesday 29 November 2023 19:01 EST
ChatGPT marks the first anniversary of its launch to the public (John Walton/PA)


The rise of generative AI tools such as ChatGPT is helping cybercriminals create more convincing and sophisticated scams, cybersecurity experts have warned.

As ChatGPT marks the first anniversary of its launch to the public, a number of industry experts have said the technology is being leveraged by bad actors online.

They warn that generative AI tools for text and image creation are making it easier for criminals to create convincing scams, but also note that AI is being used to boost cyber defences by helping to identify evolving threats as they appear.

At the UK’s AI Safety Summit earlier this month, the threat of more sophisticated cyber attacks powered by AI was highlighted as a key risk going forward, with world leaders agreeing to work together on the issue.

The UK’s National Cyber Security Centre (NCSC) has also highlighted the use of AI to create and spread disinformation as a key threat in years to come, especially around elections.

James McQuiggan, security awareness advocate at cyber security firm KnowBe4, said the impact of generative AI tools, and the large language models (LLMs) which power them, was already being felt.

“ChatGPT has revolutionised the threat landscape, open source investigations, and cybersecurity in general,” he told the PA news agency.

“Cybercriminals leverage LLMs to generate well-written documents with proper grammar and no spelling mistakes to level up their attacks and circumvent one of the biggest red flags taught in security awareness programmes – the notion that poor grammar and spelling mistakes are indicative of social engineering email or phishing attacks.

“Unsurprisingly, there have been increased sophistication and volume of phishing attacks in various styles, creating challenges for businesses and consumers alike.

“With generative AI also lowering the technical barrier to creating convincing profile pictures, impeccable text and even malware, AI and LLMs like ChatGPT are increasingly being used to create more convincing phishing messages at scale.”

The next generation of generative AI models is expected to start appearing in 2024, with experts predicting it will be significantly more capable than the current generation.

Looking ahead to potential future uses of generative AI by bad actors, Borja Rodriguez, manager of threat intelligence operations at cyber security firm Outpost24, said hackers could develop AI tools to write malicious code for them.

“Currently, tools like Copilot from GitHub help developers generate code automatically,” he said.

“Not far from that, someone could create a similar tool specifically to assist in creating malicious code, scripts, backdoors and more, aiding script kiddies (novice hackers) with low levels of technical knowledge to achieve things they weren’t capable of in the past.

“These tools will assist underground communities in executing complex attacks without much expertise, lowering the skill requirements for those executing them.”

The rapid advancement of generative AI, and the technology's largely unknown potential in the years to come, have created uncertainty around it, the experts say.

Many governments and world leaders have begun discussions on how to regulate AI, but without knowing more about the technology's possibilities, piecing together successful regulation will be difficult.

Etay Maor, senior director of security strategy at Cato Networks, said the issue of trust remained key in regard to LLMs, which are trained on large amounts of text data, and how they are programmed.

“As the excitement surrounding LLMs settles into a more balanced perspective, it becomes imperative to acknowledge both their strengths and limitations,” he said.

“Users must verify critical information from reliable sources, recognising that, despite their prowess, LLMs are not immune to errors.

“LLMs such as ChatGPT and Bard have already reshaped the landscape.

“However, a lingering uncertainty persists as the industry grapples with understanding where these tools source their information and whether they can be fully trusted.”
