September 06, 2023

Virtual Deception: The Growing Rise of Deepfake Video Call Scams

AI
Deepfakes
Video Calls
Virtual Deception

3 min read

Virtual deception is continuously rising with the use of deepfakes in video calls. Among the numerous articles and opinion pieces on generative AI deepfakes, the common and unsurprising conclusion is that this is technology waiting for a fraudulent use-case, or more to the point use-cases, rather than harmless usage by pranksters. In fact, deepfakes have already been used in a number of scams and frauds, using both synthetic audio and video techniques. Anecdotal evidence also suggests the number of attempted scams may be far higher than the examples reported in various news articles.

The Rising Menace: How Deepfakes are Already Being Exploited

A recent Bloomberg article quoted CyberArk Software, who predict the next wave of scams will be based on deepfake video calls, using what are now well-established employee and business communication channels such as Zoom and Teams. There is already an example of this from 2022, involving the Chief Communications Officer of cryptocurrency exchange Binance. Hackers used previous TV appearances and news interviews to create a deepfake of the executive, which they then used in Zoom meetings to fool crypto developers.

The Next-Level Risks in Video Conferencing

This takes Zoom bombing to a new and far more nefarious level. Where previously the aim was to disrupt meetings and force their closure for no reason other than as a prank, the impersonation of known colleagues or senior employees over a Zoom meeting or one-on-one call could be used for covert information gathering, industrial espionage, blackmail, or the obvious fraudulent requests for funds transfers.

So, what are the solutions to this predicted exploitation of deepfakes and our reliance on video conferencing? Well, firstly, not everyone participates with their camera active, so video conferencing in many cases is reduced to audio conferencing. Regardless of whether a deepfake is video and audio based or audio alone, the scammer must speak using real-time deepfake voice cloning in order to be convincing to the target or target audience. And herein lies the key to deepfake detection.

The Silver Lining: Advanced Deepfake Detection

Advanced AI-based voice biometric and audio analysis engines such as ValidSoft’s include techniques that can detect anomalous artefacts contained in deepfake audio. These techniques, however, are not biometric in nature, meaning that unlike biometric authentication solutions, which require enrolment, voice matching and consent, they simply process a snippet of audio in real time to determine whether the speaker is human or machine. What the human ear can’t detect, these deepfake detection algorithms can. And because these algorithms can be provided as standalone solutions, requiring no voice biometric platform if authentication is not required, a deepfake detection solution that is anonymous, requires no enrolment or consent, and can detect deepfakes in real time can be integrated into video conferencing and video calling platforms. ValidSoft’s deepfake detection platform, Voice Verity™, can be deployed as a standalone solution or comes fully integrated with our world-leading voice biometric authentication platform, VoiceID™.
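To make the integration pattern concrete, the flow described above — buffering a live call’s audio into short snippets and scoring each one as human or machine, with no enrolment and no identity matching — can be sketched in Python. Everything here is illustrative: the snippet length, the `Verdict` type, and especially `score_snippet`, which is a trivial stand-in and not ValidSoft’s actual algorithm; a real engine would analyse spectral and generative artefacts in the audio.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

# Illustrative parameters, not taken from any real product spec.
SNIPPET_SECONDS = 3
SAMPLE_RATE = 16_000


@dataclass
class Verdict:
    """Per-snippet result: a binary call plus a confidence score."""
    is_synthetic: bool
    score: float


def score_snippet(samples: List[float]) -> Verdict:
    """Placeholder classifier. A production engine would inspect the
    audio for generative-model artefacts; this stub merely flags an
    unnaturally uniform signal (every sample identical) as synthetic."""
    synthetic = len(set(samples)) <= 1
    return Verdict(is_synthetic=synthetic, score=1.0 if synthetic else 0.0)


def monitor_stream(frames: Iterable[List[float]]) -> Iterator[Verdict]:
    """Buffer incoming audio frames from a call into fixed-length
    snippets and score each completed snippet in turn. No enrolment,
    no consent flow, no voice matching -- only human-vs-machine."""
    buffer: List[float] = []
    target = SNIPPET_SECONDS * SAMPLE_RATE
    for frame in frames:
        buffer.extend(frame)
        while len(buffer) >= target:
            snippet, buffer = buffer[:target], buffer[target:]
            yield score_snippet(snippet)
```

In a conferencing integration, `frames` would come from the platform’s raw-audio callback for each participant, and any `Verdict` with `is_synthetic=True` could raise an in-meeting alert. The generator shape keeps the detector independent of any one platform’s API.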

As deepfake technology evolves from harmless pranks to fraudulent exploits, it’s clear that this is no longer a fringe issue but a looming cybersecurity crisis. The escalation from disrupting Zoom meetings to highly sophisticated scams involving deepfake audio and video in business settings demonstrates the pressing need for robust countermeasures. With continued investment in deepfake detection, emerging AI-based solutions like ValidSoft’s Voice Verity™ offer a promising line of defence by providing real-time detection of deepfake anomalies. These solutions, which can seamlessly integrate into existing video conferencing platforms, signal a positive step toward safeguarding the authenticity of digital interactions. The road ahead may be fraught with challenges, but with proactive action and technological innovation, there’s reason for cautious optimism.