
Will generative AI create a phishing spike?



In 2022, criminals in the US stole over $2.6 billion through impersonation or imposter scams, according to the Federal Trade Commission. What's staggering is that virtually none of those scams used generative artificial intelligence, a technology that only broke into the mainstream late that year. Yet generative AI is an incredibly powerful and accessible way to create convincing correspondence at scale.

 

This raises a crucial question: will generative AI lead to more impersonation scams, particularly the most widely used of them all, phishing?

 

Phishing is an attack in which criminals mimic a person or organisation, trying to fool the recipient into taking an action that gives the criminals access. The best-known examples are phishing emails that attempt to steal login credentials or install malware. Phishing also happens across text messages, social media channels, and instant messaging services such as WhatsApp, Discord, and iMessage.

 

Generative AI is a fashionable new technology that can produce convincingly human text. One can request an essay on World War Two battles or a selection of 'cold call' emails for new sales prospects, and the AI will deliver 'human-written' prose.

 

The risk of crime and generative AI

 

Generative AI is an enormously potent and helpful tool. But it works for both sides of the fence, including criminal impersonators, raising the concern that generative AI will amplify cybercrime.

 

It's more than a concern: this is already happening in several startling ways. At the very least, researchers report that scam, phishing, and business email compromise (BEC) messages are generally better written; BEC is a middleman attack that doesn't require stealing credentials or installing malware. Criminals also use generative AI to create translations on the fly and target broader demographics.

 

More sophisticated attacks combine phishing with other impersonation tactics, such as deepfaked voices that mimic security staff and support desks. Researchers also highlight greater personalisation, with scam messages often tailored to specific individuals.

 

Many of these tactics are not new, but generative AI enables criminal gangs to scale and experiment to astounding levels. Moreover, malicious generative AI services are starting to appear on the dark web, the part of the internet hidden from formal scrutiny and regulation. For example, criminals can access tools such as WormGPT, DarkBard, and FraudGPT, while unethical coders continually produce scripts that 'jailbreak' the content controls on legitimate services such as ChatGPT.

 

How to remain safe in a generative AI world

 

If you read the above and think, "None of this is good news," you're not being cynical. Generative AI in criminals' hands poses a serious problem for individuals and organisations.

Yet this is not a new problem. Phishing and BEC attacks were commonplace long before generative AI: in 2022, 83% of cyberattacks on UK businesses involved phishing, and over the past decade, BEC scams have netted over US$50 billion. These statistics may be staggering, but they also point to the silver lining: we already know what to do about such attacks.

 

Fundamentally, little has changed in general cybersecurity practices. The basic principles of reducing security risks still apply:

 

  • Encourage and train good security habits in your organisation. Generative AI is useful here, as it can personalise training.

  • Don't just train: create a positive and supportive security culture. Assigning blame instead of giving support is the fastest way to erode that culture.

  • Identify which individuals and departments are most at risk, tailor training and support to their risks, and give them the skills to spot the scams aimed at them.

  • Ensure security is a part of the business, a concept we at Performanta call 'cybersafe'. Provide executives with introductory security courses, place a security expert on the board, and always include security at the start of projects, not at the end.

  • Support security processes, such as strong passwords and multi-factor authentication. Nix bad habits such as password sharing, and frame security as an enabler, not a barrier. Encourage all department heads to have a stake in security; it's not solely an IT problem or responsibility.

  • Invest in proper security measures, such as threat detection, incident response and data loss prevention. It's wise to partner with a managed security service provider for complementary or comprehensive integrated services, such as a Security Operations Centre.

Beyond such basics, you can also add some new considerations:

 

  • Expose decision-makers to generative AI and train them in its use. The more they understand its potential, the more they can appreciate its risks.

  • Set policies for using generative AI tools in your organisation. This trend is not a fad or a niche concern. It's globally transformative, so endeavour to own it. Be careful of blanket-banning AI tools, or you will end up with unsupported shadow AI adoption among staff.

  • Cultivate a sceptical mindset: read twice, reply once. If an email contains changes to payment details, contact that client on an independent channel to confirm. If an email or text message is full of urgency and consequence, question its authenticity. Criminals depend on us reacting without thinking.

  • Practise mindfulness. Far from being an esoteric hippy concept, mindfulness is one of the most effective tools against cybercrime (and other problems such as fake news) because it encourages active focus and awareness. Offering mindfulness development for staff will also help reduce burnout and increase loyalty.

 

When wielded by cybercriminals, generative AI is a massive threat. But it's not a new threat, merely an escalation of what is already out there. We can respond with proven tactics and a few new methods.

 

The impact of generative AI is a reminder to take cybersecurity seriously. The threat has always been there—we simply need to be more proactive about it.
