How to Spot a Deepfake

Deepfake videos are making the news as a developing means of financial fraud. Laurent Sarrat, Co-Founder and CEO of payments security fintech Sis ID, assists TMI in its investigation of this digital menace.

There’s a new threat to corporate financial security, and it is very difficult to detect. Serving as a wake-up call for all is the news in early February 2024 that a finance worker in Hong Kong was tricked into making a US$25m payment to fraudsters who used deepfake video technology to pose as his company’s London-based CFO.

The unfortunate employee was first conned into joining the video call, attended by several deepfake recreations of overseas colleagues, on the pretext of discussing a secret transaction. Other employees had already dismissed the invitation as a scam.

However, the apparent presence of senior company executives reassured the employee that all was legitimate, and that there was no issue with the request to make 15 transactions to five local bank accounts, totalling HK$200m (US$25m). A subsequent call by the duped individual to head office a week later revealed the error of judgement.

Deepfakes are highly realistic videos or images created using AI to depict speech or events that didn’t happen. The Hong Kong deepfakes were most likely created from genuine online conference footage of the executives, using AI software to manipulate the imagery and add fake voices for the video conference.

It’s an alarming development, but the victim of this scam is not alone in falling for it. A survey conducted by iProov in 2022, covering 16,000 respondents across the US, Canada, Mexico, Germany, Italy, Spain, the UK, and Australia, found that 43% of respondents admitted they would not be able to tell the difference between a real video and a deepfake.

Another report, published in June 2023 in the Journal of Cybersecurity, noted that findings from the eight academic studies available at the time revealed “a surprisingly large range in human accuracy in the labelling of deepfake video across studies from 23 to 87%”.

Whether or not people can tell the difference, the volume of deepfake fraud attempts nonetheless rose by 3,000% between 2022 and 2023, according to Onfido’s Identity Fraud Report 2024. The report attributed this rise to the increasing availability and falling cost of user-friendly generative AI tools.

Upping the ante

The current level of use of deepfake video suggests that criminals are moving up a gear – or are way ahead of the game – in their efforts to defraud businesses. For Sarrat, there is still a significant amount of work to do to mitigate the risk.

“We can clearly see an evolution in the preparation and execution of fraud,” he says. “Whereas the preparation of a fraud used to be carried out by a person with technical skills, and the execution by a person with more human skills, today preparation and execution are so simplified that they can be carried out by the same person and also by a machine.”

He continues: “Fraudsters are able to adapt their techniques to the evolution of new technologies so as to always be at the cutting edge. This is their core business, while fraud victims often lag behind. Fraud is taking on a new dimension, with technology arriving much faster on the side of the fraudsters than on that of the defrauded.”

Of course, there has always been a cat-and-mouse game between fraudsters and companies. And the fraudsters tend to stay one step ahead: while companies put protective measures in place, a new, more technically advanced and harder-to-detect type of fraud emerges, notes Sarrat. “Deepfake is still in its infancy and is only rarely used. It is not yet considered a real threat, but it’s a safe bet that as companies equip themselves against previous frauds, deepfake is improving.”

Technological defence

The evolution of AI also offers a means of exposing basic deception. For example, a student assignment written with ChatGPT can be checked using algorithms specially designed to flag machine-generated text and deter cheating.

But Sora, a generative AI tool announced by OpenAI in February 2024, can create short (up to 60 seconds) high-resolution videos (up to 1920 by 1080 pixels) from text prompts, with output that is almost indistinguishable from real content. While not yet available for public use, Sora is clearly a powerful creative tool; it is also another potential threat to be countered.

The cat-and-mouse game continues: “The means of protection are being developed, but by the time they are complete, it will already be too late,” warns Sarrat. Indeed, while deepfake detection algorithms are now in use, deepfake quality is improving all the time, pushing the boundaries of what they can catch.

AI-based detection tools include Sensity, Operation Minerva, and Deepfake-O-Meter.

The challenge is that AI algorithms generally require training data to learn what a deepfake looks like. This means the diversity of the training set dictates the scope of the algorithm’s efficacy: it has to know what to look for, but by the time it has learnt, the fraudsters have moved on.
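To make that dependence concrete, here is a minimal sketch of how such a detector is typically trained: a standard image classifier fine-tuned on labelled real and fake frames. The folder layout, model choice, and hyperparameters are illustrative assumptions, not a description of any specific commercial tool.

```python
# Minimal sketch: fine-tune a binary "real vs fake" frame classifier.
# Assumes a hypothetical folder layout of frames/real/*.jpg and
# frames/fake/*.jpg; any labelled deepfake dataset would do.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# The detector can only learn artefacts present in this training set --
# its blind spots are exactly the manipulations the set lacks.
dataset = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Any manipulation technique missing from the fake folder is, by construction, one the model has never learnt to recognise – which is precisely the lag described above.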

What’s more, it is highly likely that the development of deepfake detection algorithms is being shadowed by the development of ‘adversarial ML’ techniques that simply counter their effectiveness.

While it is currently possible to detect a deepfake using deep learning and computer vision technologies that analyse certain properties, such as how light reflects off real skin versus synthetic or rendered skin, adversarial tools can be deployed to trick ML models by feeding them deceptive input.
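To illustrate what ‘deceptive input’ means in practice, the sketch below applies the fast gradient sign method (FGSM), a well-known adversarial ML technique, to a classifier such as the one sketched earlier: it adds a gradient-derived perturbation, typically imperceptible to a human viewer, that pushes the model away from its correct ‘fake’ verdict. The epsilon value is an illustrative assumption.

```python
# Minimal FGSM sketch: nudge a fake frame so a detector mislabels it "real".
import torch
import torch.nn as nn

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Perturb `image` (a 1x3xHxW tensor in [0, 1]) against `model`.

    `true_label` is the correct class (e.g. torch.tensor([1]) for "fake");
    epsilon is an illustrative perturbation budget.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), true_label)
    model.zero_grad()
    loss.backward()
    # Step in the direction that *increases* the loss on the true label,
    # pushing the classifier away from its correct "fake" verdict.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Epsilon controls the trade-off between how invisible the perturbation is and how strongly it shifts the model’s output – small enough to pass human eyes, large enough to flip the classifier.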

Human eyes

In view of this ever-changing landscape, what else can and should companies and their employees be doing or looking for to verify video calls? Faced with the general threat of fraud, it’s essential to establish basic good practices, urges Sarrat.

“Segregation of duties, double-checking and raising team awareness are ways of confirming the reliability of a request,” he says. “However, with deepfake, this type of verification is rapidly becoming obsolete. What we can imagine in the future is the verification of digital identity before each action, such as a video call. To counter this scourge, digital identity certification seems to be the next step.” Firms such as Sis ID and iProov offer tools of this nature.

If the face doesn’t fit…

For now at least, perceptible imperfections remain in deepfake creation algorithms, so human scrutiny is worth calling upon when a suspect video must be judged in the moment. Ideally, companies will advise and train all staff on what to look for.

In the meantime, online security firm Norton has suggested some useful easy-to-spot red flags to hone individual detection skills:

  • Unnatural or absent eye movement, such as a lack of blinking
  • Unnatural facial expressions: often occur when one image has been superimposed on another
  • Awkward facial-feature positioning: a nose out of place with the rest of the face, for example
  • Lack of emotion: expressions not matching the words spoken
  • Awkward-looking body or posture: deepfakes usually focus on facial features rather than the whole body
  • Unnatural/disjointed body movement: especially when turning to the side or moving the head
  • Unnatural image colouring: including abnormal skin tone, discolouration, unusual lighting and shadows
  • Hair/teeth that don’t look real: lower resolution tools cannot replicate these well
  • Blurring or misalignment at the edges, especially where the face and neck meet the body (a simple automated check for this is sketched after the list)
  • Inconsistent audio and noise: including poor lip-syncing, robotic-sounding voices, and strange word pronunciations
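Some of these cues can also be checked mechanically. As a simple illustration of the edge-blurring point above, the sketch below compares sharpness (variance of the Laplacian, a standard blur measure in OpenCV) between a face region and the jaw/neck boundary of a saved video-call frame. The file name, crop coordinates, and threshold are all hypothetical and would need tuning for real footage.

```python
# Minimal sketch: compare sharpness between two regions of a saved frame
# using the variance-of-Laplacian blur measure (OpenCV).
import cv2

def region_sharpness(frame_bgr, x, y, w, h):
    """Sharpness score for one rectangular crop: higher = crisper detail."""
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frame = cv2.imread("call_frame.jpg")                 # hypothetical screenshot
face = region_sharpness(frame, 300, 100, 200, 200)   # assumed face crop
edge = region_sharpness(frame, 300, 300, 200, 100)   # assumed jaw/neck band

# A boundary markedly softer than the face is one of the red flags above;
# the 0.5 ratio is an arbitrary illustrative threshold.
if edge < 0.5 * face:
    print("Edge blur detected - verify the caller through another channel.")
```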