
About the author

Roland Thomas

Associate Director | Corporate Development

Roland is an Associate Director in Thomas Murray’s Corporate Development team. He joined Thomas Murray in 2018 with responsibility for group strategy, partnerships and corporate finance. More recently, Roland’s role has focused on establishing Thomas Murray’s cyber risk business, starting in 2021 with the launch of our Orbit Security platform, and the development of our expert cyber risk consultancy. Roland has a BA in English Language and Literature from Oxford University.

Advances in artificial intelligence (AI) create as many opportunities as they do dangers, though it pays to keep a sense of perspective. While it’s important to be aware of how threat actors could use AI to harm your organisation, it’s also important to think strategically about how to manage these risks.

The evolution of business email compromise (BEC) scams is a case in point.

Phishing scams and their variants have been around for a long time. By now, most people with an email account will know it’s very unlikely that an exiled prince needs the help of a total stranger to launder millions of dollars. What they may not be ready for, however, are the increasingly sophisticated descendants of this type of scam.

WormGPT: Irresistible bait or scammers getting scammed?

Described as the ‘evil’ cousins of ChatGPT, WormGPT and FraudGPT are versions of generative AI unfettered by the ethical constraints built into their mainstream counterparts. Just how dangerous this development is seems to depend on your point of view.

Some cyber security experts have found the BEC emails generated by WormGPT to be ‘unsettling’, ‘remarkably persuasive’ and ‘cunning’. Others have been far less impressed, describing WormGPT’s efforts as ‘not especially convincing’, ‘rudimentary’ and ‘generic in a way that should ring alarm bells’.

One aspiring BEC scammer complained on a Dark Web forum that WormGPT’s code is “broken most of the time” and cannot generate “simple stuff.” In fact, suspicions have already emerged that FraudGPT and WormGPT are, in and of themselves, scams.

Asia becomes new focal point for whaling attacks

Unlike cruder forms of phishing, a BEC scam will often claim to be from someone known to the recipient. This more targeted approach is called ‘spear phishing’. The scale of its impact is hard to gauge – many organisations may not even know when they’ve been targeted, as one survey found that an astonishing 98% of employees admit to deleting suspect emails without reporting them to their IT security teams.

Multinational operators and those reliant on international supply chains need to be aware that Asia is an emerging hotspot for BEC attacks that target the accounts of high-level executives (also known as ‘whaling attacks’). In Singapore, successful BEC scams of this type defrauded 93 victims of US$41.3m in the first three months of 2022 alone.

In Japan, recent targets have included:

  • a large airline;
  • the US subsidiary of a media conglomerate; and
  • the EU subsidiary of a leading manufacturer of automotive parts.

All three suffered multi-billion yen losses as a result. 

In a Singaporean case, IBI Group Hellas Single Member Société Anonyme v Saber Holdings Pte Ltd [2023] SGDC 95, the threat actor posed as Saber’s CEO on WhatsApp and successfully duped an employee into making a large cash transfer (which the employee was told was for an acquisition). The threat actor even compromised the CEO’s email address to send further instructions.

The employee transferred €700,000 to the bank account of a Hong Kong company. Saber was eventually able to recoup the money, though presumably it was missing a chunk thanks to the legal fees required to get it back.

The personal touch: moving beyond the written word

WormGPT purports to have a high success rate in bypassing email filters and anti-spam engines through careful targeting and personalisation aimed at the would-be victim organisation. BEC attacks ultimately rely on convincing the recipient to interact with the message, and this level of sophistication and personalisation certainly helps adversaries in that regard.

However, a new and perhaps more worrying trend has emerged – deep fake voice cloning. While the whaling scam is not necessarily new, AI allows attackers to take it to the next level by cloning the voices of senior executives. Microsoft’s VALL-E AI engine reportedly needs just three seconds of audio to clone a voice.

Consider for a second how much material – conferences, webinars and other media – organisations put out to the world featuring their CEOs, COOs and other high-profile executives. This is a clear opportunity for adversaries to elevate their scams, as evidenced by the case of the executive in Hong Kong who was caught up in a deep fake voice scam.

Sounds familiar...

Following an increase in the number of BEC complaints involving the use of virtual meeting platforms, the FBI issued an alert that identifies multiple ways in which threat actors use virtual meetings, webinars and video conferencing tools:

  • You’re on mute: The compromised email account of an employer or high-level executive, let’s say a CEO, is used to send invites to a virtual meeting. The criminal displays a still picture of the CEO without audio, claiming to be having problems with their setup. They then use either the in-platform chat or a follow-up email to ask attendees to transfer funds.
  • You’ve frozen up: A variation of ‘You’re on mute’. In this version of the scam the criminal also uses a still picture of the CEO (perhaps an AI-generated screenshot of the CEO caught mid-sentence with their eyes closed, for added verisimilitude), but uses a clone of the CEO’s voice to ask those on the call to go ahead with a funds transfer.
  • The fly on the wall: The threat actor inserts themselves into virtual meetings through a compromised email, so they can collect information on an organisation’s day-to-day operations or sensitive projects.
  • The delegated task: Spoofed emails purporting to be from the CEO will say that they are extremely busy/in a meeting/stuck in traffic/taken ill. Could the recipient please, as a matter of urgency, move funds to the specified account on their behalf?

Regular training will help to nip many BEC attempts in the bud. It’s also essential to enforce robust procedures for authorising large financial transactions.
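By way of illustration – and purely as a hypothetical sketch, with invented names and thresholds – one such procedure is a dual-approval rule: no single employee, however convincing the request they receive, can release a large transfer on their own, and the requester can never approve their own request.

```python
from dataclasses import dataclass, field

# Hypothetical threshold: transfers above this need a second approver.
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The person who raised the request can never sign it off themselves.
        if approver == self.requested_by:
            raise ValueError("Requester cannot approve their own transfer")
        self.approvals.add(approver)

    def is_authorised(self) -> bool:
        # Small transfers need one approval; large ones need two distinct
        # approvers, forcing an out-of-band check with a second person.
        required = 2 if self.amount > APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required
```

In a case like Saber’s, a rule of this kind would have required a second authoriser to confirm the €700,000 transfer out of band before any funds left the account.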

But the key takeaway is that your organisation needs to prepare its people to combat this new generation of multichannel and AI-driven BEC tactics.

Orbit Security

Security ratings for enhanced attack surface management and third-party risk. Continuous monitoring of your risk environment for breaches and vulnerabilities that could be exploited by threat actors.
