Deep fake, real threat. Can anything stop the rapid rise of face-swapping fraud?
AI deepfakes are everywhere. Criminals are using the tech to steal identities and take over accounts. How should enterprises respond?
In 2024, a finance officer at a global engineering firm took part in a video call with colleagues. On the instruction of his CFO, the employee wired $26.6 million to multiple bank accounts across 15 transfers.
Regrettably, the finance officer was the only ‘real’ person on the call. The others were all AI-generated deepfakes, face-swapped to look and sound exactly like his actual colleagues. The $26.6 million? All sent to fraudsters.
The attack is just one high-profile example of the sophisticated new social engineering threat facing organisations all over the world. Fraudsters are moving on from the phishing email and the fake invoice. They’re starting to embrace full-motion, real-time, high-resolution deepfakes.
The new threat poses a fundamental question for cybersecurity leaders: “How can my systems verify what’s real?”
From fun and filters to financial fraud
Face swapping has its roots in generative adversarial networks (GANs), in which two neural networks are trained against each other: one generates synthetic data, while the other learns to tell it apart from real examples drawn from the training dataset. GANs opened the door to high-quality, real-time deepfakes.
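To make that mechanism concrete, here is a minimal, illustrative sketch of a single GAN training step in Python (PyTorch). The tiny fully connected networks, layer sizes, learning rates and data dimensions are placeholder choices for illustration, not a real face-swapping model.

```python
# Minimal GAN training step (illustrative only): a generator learns to produce
# synthetic samples that a discriminator can no longer tell apart from real ones.
# The layer sizes, learning rates and data dimensions are placeholder choices.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 128  # toy sizes, not a real face model

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into labelling fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

Repeated over millions of images, this adversarial loop is what pushes the generated output towards something the discriminator, and eventually the human eye, cannot distinguish from the real thing.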
Like so many society-changing technologies, AI face swapping emerged first as entertainment. Hollywood was quick to embrace the tech – especially for de-aging. A high-profile breakthrough came in 2019 with The Irishman, in which flashback scenes digitally recreated a younger face for the then 76-year-old Robert De Niro.
At around the same time, social media users were starting to experiment with a version of the tech too. Snapchat introduced filters to let people change themselves into cats or fairies. The innovation was hugely popular.
But the fraudsters were watching. They saw the potential of face swapping to take social engineering to new levels. They were helped by the rapid improvement of the technology: three seconds of audio is now enough to produce a voice clone with an 85 percent match to the original. And thanks to social media, the images and videos needed as source material are everywhere.
As a consequence, deepfake-related losses are expected to grow from $12.3 billion in 2023 to $40 billion by 2027.
The AI deepfake arms race
The emergence of AI deepfakes has changed the rules of ‘know your customer’ (KYC) for large enterprises. After years of relying on weak identification methods (uploading personal documents, verifying email addresses and so on), the real-time facial scan seemed like a vast improvement. Face swapping quickly undermined that.
Speaking on the Thales Security Sessions podcast, Jason Keenaghan, Director of Product Management at Thales Identity and Access Management, described how a US bank found this out the hard way.
He said: “About a year ago, the bank started seeing an uptick in new account opening fraud. It wasn’t sure what was going on. The fraud detection technology was not able to detect the attacks. Only manual checks revealed they were deepfakes: it was obvious to the human eye that the faces were not real.
“However today, a year or so later, that’s all changed. Now, the human eye or ear can no longer detect a deepfake. The videos are too good. My understanding is that we need AI tools to detect unnatural eye blinking, mismatched lighting, skin texture inconsistencies and ‘boundary artefacts’ around faces.
“We have to accept this is a cat and mouse game. The arms race has reversed — we now need AI to detect AI.”
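As a rough illustration of what “AI to detect AI” can look like in practice, the sketch below shows a generic frame-level scoring skeleton: a small classifier scores each aligned face crop as real or synthetic, and the clip is flagged when the average score crosses a threshold. The architecture, threshold and function names are illustrative assumptions, not any vendor’s actual detector.

```python
# Illustrative skeleton of frame-level deepfake scoring (not any vendor's detector):
# a small CNN assigns each aligned face crop a probability of being synthetic,
# and the clip is flagged when the average score exceeds a threshold.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Toy CNN producing a per-frame 'probability of being synthetic'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def flag_clip(face_crops: torch.Tensor, model: FrameClassifier,
              threshold: float = 0.5) -> bool:
    """face_crops: (num_frames, 3, H, W) aligned face crops from the video.
    Returns True when the clip looks synthetic on average."""
    model.eval()
    with torch.no_grad():
        per_frame_scores = model(face_crops).squeeze(1)
    return per_frame_scores.mean().item() > threshold
```

Real detection products train far larger models on cues such as blinking, lighting, skin texture and boundary artefacts, and retrain them constantly as the generation side of the arms race improves.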
The three layers of defence
So how should organisations defend themselves against these attacks? A good start is to identify the processes fraudsters target: onboarding, account recovery, the help desk and so on. This applies to customer interactions as well as those with business partners and employees. Enterprises should also screen vendors carefully, since supply-chain deepfake attacks are rising.
Multi-factor authentication (MFA) is one option here: staff must obtain secondary confirmation, over a separate channel, before acting on any instruction received in a video call. However, this does imply working with third parties, which introduces new risks.
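As a hedged sketch of what such secondary confirmation could look like, the example below uses the pyotp library to check a time-based one-time code (TOTP) derived from a secret shared in advance over a trusted channel, before a payment instruction received on a video call is acted on. The workflow and function name are hypothetical; only the pyotp calls are real.

```python
# Illustrative out-of-band confirmation: before acting on a request made over
# video, the requester must supply a time-based one-time code (TOTP) derived
# from a secret that was shared in advance over a trusted, separate channel.
# The workflow and function name are hypothetical; the pyotp calls are real.
import pyotp

# One-time enrolment, done in person or via a verified channel.
shared_secret = pyotp.random_base32()

def confirm_video_call_request(code_from_requester: str, secret: str) -> bool:
    """Return True only if the code matches the current TOTP time window."""
    return pyotp.TOTP(secret).verify(code_from_requester, valid_window=1)

# Usage: the 'CFO' on the call reads out the code from their authenticator app;
# the finance officer checks it before approving any transfer, e.g.
# confirm_video_call_request("492881", shared_secret)
```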
Next is technology. There isn’t a single universal deepfake detector, so enterprises should look for verification tools with specific capabilities. Perhaps the most important of these is liveness detection, since liveness is what distinguishes an actual person from a synthetic video or an AI-generated face. Enterprises should seek guidance from organisations such as NIST, which evaluate and rank different commercial offerings.
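One classic, and on its own insufficient, liveness signal is natural blinking, measured via the eye aspect ratio (EAR) across a sequence of facial landmarks. The sketch below assumes the six standard eye landmarks per frame have already been extracted by a face-landmark detector; the EAR threshold and minimum blink count are arbitrary placeholders.

```python
# Illustrative blink-based liveness heuristic using the eye aspect ratio (EAR).
# Assumes a face-landmark detector has already supplied, per video frame, the
# six standard landmarks (p1..p6) of one eye. Thresholds are placeholders, and
# blinking alone is not sufficient proof of liveness.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks p1..p6; EAR drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def looks_live(eye_landmarks_per_frame: list, ear_threshold: float = 0.2,
               min_blinks: int = 1) -> bool:
    """Count blink events (EAR dipping below the threshold and recovering).
    A video stream that never blinks at all is treated as suspicious."""
    blinks, eye_closed = 0, False
    for eye in eye_landmarks_per_frame:
        ear = eye_aspect_ratio(eye)
        if ear < ear_threshold and not eye_closed:
            eye_closed = True
        elif ear >= ear_threshold and eye_closed:
            blinks += 1
            eye_closed = False
    return blinks >= min_blinks
```

Commercial liveness checks combine many such signals, both passive (texture, depth, reflections) and active (challenge-response prompts), which is why independent benchmarking matters when choosing a tool.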
The third defence is training. Humans can be the weakest link against deepfake attacks: they are the target of the social engineering. Security teams should therefore instil a degree of scepticism in staff and make training ongoing rather than a one-off exercise. Red-team testing can help to identify weaknesses in security protocols.
Beyond deepfakes: non-human agents
Fraudsters are clearly deploying deepfake tech with great success. But they are already looking ahead to new scams. Experts believe agentic AI will be their next attack vector. Products such as ChatGPT Agent work on behalf of human users: they can plan, execute and learn without continuous prompts. It’s easy to see how criminals might deploy rogue agents to trick their way inside organisations.
As agent tech goes mainstream, the big challenge will be to distinguish legitimate non-human agents from those controlled by scammers.
Of course, agents can be used for defence too. Jason Keenaghan believes enterprises could deploy agents as deepfake-spotting co-pilots. He says: “You could inject an agent inside a Zoom or Teams meeting to watch the attendees and detect deepfakes in real time.”
Takeaway
It used to be OK to trust the evidence of your own eyes. No longer. Today, AI-generated fakes are everywhere. And they’re too realistic for humans to detect.
So we can’t merely trust. We have to verify. And we must use AI tools – in combination with staff training and robust processes – to combat AI scams.
To find out more about the state of AI deepfakes and how to repel them, listen to the full discussion with Jason Keenaghan on the Thales Security Sessions podcast.