Staying cyber-secure in the age of misdirection

THE ARTICLES ON THESE PAGES ARE PRODUCED BY BUSINESS REPORTER, WHICH TAKES SOLE RESPONSIBILITY FOR THE CONTENTS

Wednesday 17 January 2024 05:30 EST
Harder to spot: As AI becomes more sophisticated, we’ll no longer even be able to rely on voices we know (iStock)

Yubico is a Business Reporter client.

Business Reporter: Yubico

The use of artificial intelligence (AI) and machine learning (ML) is infiltrating many aspects of our lives. While much of this is done with the best of intentions, AI is raising concerns about jobs and the use of data, and it has significantly added to the armoury of tools that cyber-criminals can deploy.

Today, as businesses operate with geographically distributed workforces, AI tools give attackers advanced new techniques for social engineering attacks such as phishing. Authentication methods that were previously difficult to circumvent, such as voice verification to prove identity when resetting a password, will become obsolete.

There are a number of ways in which AI and ML can be used to facilitate social engineering attacks, including:

    • The use of convincing audio or video to fool people into thinking they are dealing with humans they know and trust. An example is voices of family members being replicated from genuine clips sourced online and used to fool targets into giving up account details. “The challenge with AI- or ML-based attacks is that they bring technology closer to the edge of what our normal human senses can detect,” says David Treece, VP of solutions architecture at Yubico, a leading provider of hardware multi-factor authentication. “Convincing video and audio plays on our most intuitively trusted senses of sight and hearing.”
    • More compelling and authentic-looking phishing emails. As the use of AI continues to rise, we’ll also see a rise in phishing emails generated by large language models (LLMs) such as ChatGPT. LLMs give cyber-criminals the power to produce flawless, genuine-seeming messages, making it even harder to spot fraudulent emails by their content alone (one defensive response is sketched after this list).
    • More successful spear phishing attacks. Spear phishing attacks, which use a more targeted and personal approach to persuade people to give up important information, can be made much more effective, or can even be automated. Vast amounts of open-source data on a user’s online interests and interactions can be harnessed to create personalised messages. The use of ML means attacks can be optimised over time to increase their effectiveness.
    • More personalised AI. AI can also be used to generate more personalised, but fraudulent, adverts. Those attempting to buy retail items or theatre tickets could find themselves inadvertently providing payment details to scammers, thanks to AI’s ability to match adverts to a target’s interests.
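
Because LLM-generated phishing can read flawlessly, a message’s wording is no longer a reliable tell on its own. One complementary, automatable check, not mentioned in Yubico’s own guidance but widely used, is to inspect a message’s email-authentication results (SPF, DKIM and DMARC) rather than its content. The TypeScript sketch below parses an Authentication-Results header and flags anything short of a pass; the sample header value and the flagging policy are illustrative assumptions, not a complete defence.

```typescript
// A minimal sketch: parse an RFC 8601 Authentication-Results header for the
// SPF, DKIM and DMARC verdicts, and flag messages that fail. The header value
// and the pass/fail policy below are illustrative assumptions.

type AuthVerdicts = { spf?: string; dkim?: string; dmarc?: string };

function parseAuthenticationResults(header: string): AuthVerdicts {
  const verdicts: AuthVerdicts = {};
  for (const mechanism of ["spf", "dkim", "dmarc"] as const) {
    // Matches e.g. "spf=pass" or "dmarc=fail" anywhere in the header value.
    const match = header.match(new RegExp(`\\b${mechanism}=(\\w+)`));
    if (match) verdicts[mechanism] = match[1];
  }
  return verdicts;
}

function looksSuspicious(header: string): boolean {
  const { spf, dkim, dmarc } = parseAuthenticationResults(header);
  // Treat anything short of an explicit DMARC pass as worth flagging.
  return dmarc !== "pass" || (spf !== "pass" && dkim !== "pass");
}

// Example header, as a receiving mail server might have added it:
const sample =
  "mx.example.com; spf=pass smtp.mailfrom=sender.example; " +
  "dkim=pass header.d=sender.example; dmarc=pass header.from=sender.example";

console.log(parseAuthenticationResults(sample)); // { spf: "pass", dkim: "pass", dmarc: "pass" }
console.log(looksSuspicious(sample));             // false
```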

Staying secure from AI-based cyber-threats

As adoption of AI- and ML-based tools such as ChatGPT continues to grow, it will be important to focus on how we can mitigate the risks associated with them. There are various ways individuals can protect themselves against these threats, and businesses should also conduct employee training to ensure their valuable data is not compromised. Keeping social media profiles private can help restrict the information readily available online, although with the potential for posts to be shared, this is far from foolproof. Agreeing a secret codeword among family members could also help to determine whether a voice is genuine or an AI-generated impersonation.

As it becomes harder to rely on the cues we have traditionally, subconsciously used to verify someone’s identity, such as knowing their voice or recognising their face, we need new ways to guarantee authentic interactions. Machine learning itself can be part of the solution here, too, learning to identify AI-based attacks and flag them as suspicious before any damage is done.
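
To make that idea concrete, here is a deliberately tiny sketch of a learned classifier: a naive-Bayes text model trained on a handful of invented, hand-labelled messages. Everything here, from the sample messages to the two labels, is an illustrative assumption; a production system would train on large real datasets and use far richer signals than word counts.

```typescript
// A toy naive-Bayes classifier that learns to flag suspicious messages.
// All training examples are invented for illustration.

type Label = "phish" | "legit";

const training: Array<[string, Label]> = [
  ["urgent verify your account password now", "phish"],
  ["your account is locked click this link immediately", "phish"],
  ["confirm your payment details to avoid suspension", "phish"],
  ["meeting notes attached from yesterday", "legit"],
  ["lunch on thursday works for me", "legit"],
  ["the quarterly report is ready for review", "legit"],
];

const tokenize = (text: string) => text.toLowerCase().split(/\W+/).filter(Boolean);

// Per-class word counts, word totals and document counts.
const counts: Record<Label, Map<string, number>> = { phish: new Map(), legit: new Map() };
const totals: Record<Label, number> = { phish: 0, legit: 0 };
const docs: Record<Label, number> = { phish: 0, legit: 0 };
const vocab = new Set<string>();

for (const [text, label] of training) {
  docs[label]++;
  for (const word of tokenize(text)) {
    counts[label].set(word, (counts[label].get(word) ?? 0) + 1);
    totals[label]++;
    vocab.add(word);
  }
}

function classify(text: string): Label {
  const scores = {} as Record<Label, number>;
  for (const label of ["phish", "legit"] as Label[]) {
    // Log prior plus log likelihood with Laplace smoothing.
    let score = Math.log(docs[label] / training.length);
    for (const word of tokenize(text)) {
      const count = counts[label].get(word) ?? 0;
      score += Math.log((count + 1) / (totals[label] + vocab.size));
    }
    scores[label] = score;
  }
  return scores.phish > scores.legit ? "phish" : "legit";
}

console.log(classify("please verify your password immediately")); // likely "phish"
console.log(classify("notes from the quarterly meeting"));        // likely "legit"
```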

To protect our online lives, the use of phishing-resistant multi-factor authentication (MFA), such as physical security keys, becomes essential. The YubiKey, for instance, requires a user to prove their physical presence by touching the key, guarding against remote attacks. An individual’s credentials are tied to a particular site, which prevents people from inadvertently handing them to a hacker’s look-alike site rather than the genuine one. “It prevents attackers from preying on our human inability to spot nefarious websites that look exactly like the real ones,” says Treece. “When the efficacy of identity measures that companies have trusted for decades such as voice verification and video verification erodes, strongly linked authentication mechanisms are even more important.”
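
That site binding is enforced by the WebAuthn standard, which FIDO2 keys such as the YubiKey implement. The sketch below shows a browser-side sign-in request in TypeScript; the relying-party ID, challenge and credential ID are placeholders that a real server would supply, so treat this as an illustration of the mechanism rather than a drop-in implementation.

```typescript
// A browser-side sketch of the origin binding described above, using the
// standard WebAuthn API. The challenge and credential ID would come from
// your server; the values here are placeholders.

async function signInWithSecurityKey(): Promise<Credential | null> {
  const options: PublicKeyCredentialRequestOptions = {
    // Server-generated random challenge, fetched fresh for each sign-in.
    challenge: Uint8Array.from("server-random-challenge", c => c.charCodeAt(0)),
    // The relying-party ID the credential was registered under. The browser
    // refuses to use the credential on any other origin, so a look-alike
    // phishing site simply cannot ask the key to sign in.
    rpId: "example.com",
    allowCredentials: [{
      type: "public-key",
      id: Uint8Array.from("credential-id-from-server", c => c.charCodeAt(0)),
    }],
    // Require the user to touch (and optionally PIN-verify on) the key,
    // proving physical presence against purely remote attackers.
    userVerification: "preferred",
    timeout: 60_000,
  };
  return navigator.credentials.get({ publicKey: options });
}
```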

With the YubiKey, credentials are securely stored in hardware, which prevents those credentials from being transferred to another system without the user’s knowledge. “The use of FIDO2 authenticators, such as the YubiKey, greatly reduces the efficacy of social engineering through phishing as users cannot be tricked into sending a one-time-password to an attacker, or have SMS authentication codes stolen directly through a SIM swapping attack,” he adds.
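
For completeness, here is how such a hardware-bound credential is created in the first place, again via the standard WebAuthn browser API. As before, the identifiers and challenge are placeholders; a real deployment would generate them server-side and verify the returned attestation there.

```typescript
// A sketch of registering a hardware-bound FIDO2 credential. All identifiers
// and the challenge are placeholders a real server would supply.

async function registerSecurityKey(): Promise<Credential | null> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: Uint8Array.from("server-random-challenge", c => c.charCodeAt(0)),
    rp: { id: "example.com", name: "Example Corp" },
    user: {
      id: Uint8Array.from("user-id-from-server", c => c.charCodeAt(0)),
      name: "alice@example.com",
      displayName: "Alice",
    },
    // ES256: the key pair is generated and stored inside the authenticator,
    // so the private key never leaves the hardware.
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],
    authenticatorSelection: {
      // Ask for a roaming authenticator such as a USB/NFC security key.
      authenticatorAttachment: "cross-platform",
      userVerification: "preferred",
    },
    timeout: 60_000,
  };
  return navigator.credentials.create({ publicKey: options });
}
```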

With around 90 per cent of all cyber-attacks originating from phishing, according to the US government’s Cybersecurity and Infrastructure Security Agency (CISA), it’s important that individuals and businesses act now to ensure they are able to stay secure. With AI and ML attacks growing in capability and number every day, being prepared is far better than being wise after the event.


To find out more about Yubico and how the YubiKey can protect your online accounts, please visit yubico.com.
