About the author

Roland Thomas

Associate Director | Corporate Development

Roland is an Associate Director in Thomas Murray’s Corporate Development team. He joined Thomas Murray in 2018 with responsibility for group strategy, partnerships and corporate finance. More recently, Roland’s role has focused on establishing Thomas Murray’s cyber risk business, starting in 2021 with the launch of our Orbit Security platform, and the development of our expert cyber risk consultancy. Roland has a BA in English Language and Literature from Oxford University.

Rapid advances in artificial intelligence (AI), especially in deep fake technology, will require a complete rethink of your organisation’s cybersecurity training.

Most of us like to think we are savvy enough to avoid online dangers but, by most estimates, one in three employees will still fall for a standard email phishing scam. A Stanford University study found that one in four people admitted to having clicked on a link in a phishing email.

This should ring alarm bells for security teams as the stuff of science fiction quickly becomes part of our daily reality. How well prepared are your people to deal with the humble phishing scam as it becomes ever-more sophisticated? We look at three scenarios, from the kind you may be ready for to the kind that you may not be – even though they’re already here.

Level One: Old-school phishing email

From: Pete Uberboss
To: Mika Underling
Subject: URGENT
Dearest Mika
As a priority send me the payment details for our suppliers. I need them right away. You have five minutes to respond to this email.
Many thanks
Pete

This is not really from Mika’s department head Pete. Mika has just posted on LinkedIn about their new role as a junior accounts clerk at a firm where Pete is the CFO, and the threat actor is simply trying to exploit this publicly available information.

When you consider that the average employee has access to about 10.8 million files, it’s easy to see why a threat actor would be unconcerned with job titles and seniority when picking a target.

Luckily, Mika paid attention during IT induction and flags this email to the security team.
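Vigilance like Mika’s can also be backed up by simple automation. As a minimal sketch (illustrative only – the executive list, company domain and addresses below are hypothetical), a security team might flag any email whose display name matches a senior executive but whose sending address doesn’t belong to the company:

```python
# Minimal sketch (illustrative only): flag emails whose display name matches
# a known executive but whose sending address is not the company's own.
# The executive list, domain and addresses are hypothetical examples.
from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"  # assumed internal domain
EXECUTIVES = {"pete uberboss": "pete.uberboss@example.com"}

def looks_like_exec_impersonation(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    name = display_name.strip().lower()
    if name in EXECUTIVES:
        # Display name claims to be an executive: check the actual address.
        domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
        return domain != COMPANY_DOMAIN or address.lower() != EXECUTIVES[name]
    return False

# The scam email claims to be from Pete but comes from an external account.
print(looks_like_exec_impersonation('"Pete Uberboss" <pete.u.ceo@freemail.example>'))  # True
print(looks_like_exec_impersonation('"Pete Uberboss" <pete.uberboss@example.com>'))    # False
```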

Level Two: The sound-alike

Mika gets an email from Pete with a link to a Zoom call. Mika is immediately nervous. They have never spoken, but Mika knows who Pete is because of his seniority in their organisation.

Mika joins the call and Pete apologises for having his camera off (“network issues”) but the avatar photo is of him. Pete asks Mika to send him some sensitive documents, and gets quite angry when Mika suggests this request be routed through a manager.

Something feels wrong about the exchange – Mika is not the right person to contact for this information, for a start – so Mika politely ends the call and reports it to IT security.

Mika has narrowly avoided falling for a very recent innovation in phishing scams, one that uses “deep voice” technology. Investigators in Dubai are still trying to recover US$400,000 after a bank manager in Hong Kong was duped in 2020 by the cloned voice of a company director into making US$35m in transfers.

Level Three: The digital twin

Pete was the keynote speaker at an industry event, and his firm posted high-quality video of his speech to its YouTube channel. There are several other videos featuring Pete uploaded there, some filmed in his office. He’s not a big social media user, but Pete does have a comprehensive LinkedIn profile.

By combining the information from these sources with machine learning and AI, including a sophisticated chat program, a threat actor creates “Fake Pete”, the real Pete’s deep fake twin.

Fake Pete starts contacting all the real Pete’s LinkedIn connections and everyone employed at Pete’s company, with the aim of getting them to join a video call with it. Many ignore the invitation because they don’t know Pete personally and don’t recognise his name.

But Mika, a junior employee at Pete’s company, does accept the invite. Mika is in no way suspicious. This Fake Pete not only looks and sounds like Pete, it also seems to be sitting in Pete’s office.

When Fake Pete asks if Mika can screen-share so it can better understand some new products that are under development, Mika doesn’t hesitate to do just that.

This scenario may sound the most far-fetched, but the future could already be here. The chief communications officer at a cryptocurrency exchange has claimed that hackers created an AI hologram of him that they used to try to scam his firm’s potential business partners over Zoom.

People are the first and last line of defence

It’s unclear just how convincing this hologram was, given that most machine learning models are not yet trained on profile views of faces. However, this is a limitation that threat actors are surely working to overcome.

Many of us now work remotely, often with colleagues and clients in offshore locations. While this is very convenient, fewer face-to-face interactions may make us more comfortable – and therefore less cautious – when dealing with people online.

Establishing a rigorous protocol for the transfer of funds and sensitive data – one that requires a short chain of sign-off approvals, even between co-workers – may be the first effective step towards protecting your organisation, your people and your clients.
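To make that concrete, here is a minimal sketch of such a protocol (the role names and two-approver chain are assumptions for illustration, not a description of any particular organisation’s controls): a transfer request releases funds only once every role in the sign-off chain has approved it.

```python
# Minimal sketch (illustrative only): a transfer request must collect
# sign-offs from a short, fixed chain of approvers before funds move.
# The role names and two-approver rule are assumptions for the example.
from dataclasses import dataclass, field

APPROVAL_CHAIN = ("line_manager", "finance_controller")  # assumed sign-off chain

@dataclass
class TransferRequest:
    requester: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, role: str) -> None:
        if role not in APPROVAL_CHAIN:
            raise ValueError(f"{role} is not an authorised approver")
        self.approvals.add(role)

    def can_execute(self) -> bool:
        # Funds move only once every role in the chain has signed off.
        return all(role in self.approvals for role in APPROVAL_CHAIN)

request = TransferRequest(requester="mika", amount=35_000_000)
request.approve("line_manager")
print(request.can_execute())        # False: still awaiting finance sign-off
request.approve("finance_controller")
print(request.can_execute())        # True: chain complete
```

Even a chain this short would have stopped every scenario above: neither a spoofed email, a cloned voice nor a deep fake twin can supply the second sign-off.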

How we can help

At Thomas Murray, we have 30 years’ experience working with firms in the world’s most complex industries. We combine that knowledge with our award-winning Cyber Security Technology to offer scalable, comprehensive protection to organisations of all sizes, in all sectors.

Talk to us to find out more about what we can do for you.

 

Contact an expert

Robert Smith
Head of SaaS Sales and Customer Success

Roland Thomas
Associate Director | Cyber Risk