February 12, 2025

Remote IDV Fraud Driven by Deepfakes

Tags: Deepfakes, Identity, Remote IDV, Synthetic identities, Verification, Voice Verity

Remote IDV (Identity Verification) fraud is on the rise, driven in part by AI and, more specifically, by deepfakes. The fraud can take the form of deepfaked video, audio, and identity documents. It is also fueled by the now ubiquitous use of digital channels and fully remote account onboarding. Banking is especially exposed, as cost-cutting measures inevitably lead to branch closures and fully automated digital processes.

How Deepfake Technology is Fueling Remote IDV Fraud

Advances in generative AI in producing deepfake ID documents, the very documents remote IDV relies on, prompted the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) to issue an alert in 2024 specifically encouraging closer review of identification documents. Deepfaked documents had become so convincing that they were increasingly bypassing automated ID-checking software.

Synthetic Identities: A Long-Term Threat

The threats in remote IDV cases include AI-modified genuine documentation, false identities, and fully or partially synthetic identities. This is no longer simply a case of impersonating a living person for traditional account takeover, which typically results in the immediate loss of funds through fraudulent account transfers.

With a synthetic identity, fraudsters can play a long game lasting months or even years, building a transaction history that makes the identity appear ever more trustworthy, until the end-game fraud, such as a large loan or credit card bust-out.

A 2023 Deloitte report put the average payout at between US$81,000 and US$98,000, though individual attacks can reap millions. The same report estimated that synthetic identity fraud losses would reach US$23 billion by 2030.

Why AI is Key to Detecting Synthetic Identities

So the question is: how can these synthetic identities be better detected? As is now well understood, AI does a far better job of detecting deepfakes than humans do. Greater human scrutiny of documents, images, video, and audio is not only time-consuming but also potentially ineffective.

ValidSoft’s Voice Verity® solution, a non-biometric AI solution based on large-scale deep neural network techniques, is highly effective at detecting deepfake audio regardless of which generation software was used to create it. It is also omni-channel, capable of taking audio snippets from any channel in real time and returning results for immediate action.

Role of Omni-Channel Solutions in Remote IDV

The omni-channel capability gives organizations performing remote IDV protection regardless of the point of attack. Where audio is captured alongside ID documentation, and potentially video, during remote onboarding, the probability that the audio is deepfaked is calculated in real time and returned to the IDV platform. If escalation is required in the form of a live video chat with the applicant, that audio can also be captured and analyzed.
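As a rough illustration only (the endpoint URL, request fields, response schema, and threshold below are hypothetical assumptions, not ValidSoft’s published API), a remote-IDV platform might score onboarding audio in real time along these lines:

```python
# Hypothetical sketch only: the endpoint URL, request fields, response schema,
# and threshold are illustrative assumptions, not ValidSoft's published API.
import requests

SCORING_URL = "https://api.example.com/v1/deepfake-score"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                   # placeholder credential

def score_onboarding_audio(audio_path: str, session_id: str) -> dict:
    """Send an audio snippet captured during remote onboarding and return a
    deepfake-probability result for the IDV platform to act on."""
    with open(audio_path, "rb") as f:
        response = requests.post(
            SCORING_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
            data={"session_id": session_id},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"deepfake_probability": 0.97}

def decide(result: dict, threshold: float = 0.9) -> str:
    """Map the score to an onboarding decision; the threshold is arbitrary."""
    if result.get("deepfake_probability", 0.0) >= threshold:
        return "reject_or_escalate"
    return "continue_onboarding"
```

In practice the decision threshold would be tuned to the organization’s risk appetite, with rejected or escalated sessions routed to manual review.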

Voice Verity® can also run as a background task, processing historical onboarding audio to find synthetic identities an organization may already unwittingly hold and eliminating them before any monetary loss occurs.
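A minimal sketch of that background mode, reusing the hypothetical score_onboarding_audio helper from the previous example (file layout and threshold are again assumptions), could sweep stored recordings and flag sessions for fraud review:

```python
# Hypothetical batch sweep: file layout, scoring call, and threshold are
# illustrative assumptions. Reuses score_onboarding_audio() from the sketch above.
from pathlib import Path

def scan_historical_audio(audio_dir: str, threshold: float = 0.9) -> list[str]:
    """Score every stored onboarding recording and return the session IDs
    whose audio looks synthetically generated."""
    flagged = []
    for audio_file in sorted(Path(audio_dir).glob("*.wav")):
        result = score_onboarding_audio(str(audio_file), session_id=audio_file.stem)
        if result.get("deepfake_probability", 0.0) >= threshold:
            flagged.append(audio_file.stem)  # candidate synthetic identity
    return flagged

# flagged = scan_historical_audio("/data/onboarding_audio")
# Flagged accounts would go to fraud review before any further credit decisions.
```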

ValidSoft’s leading-edge voice biometric solution, Voice ID™, can also integrate with remote IDV solutions, not only to detect deepfakes and block fraudulent onboarding, but also to enroll legitimately onboarded customers into the biometric solution as part of the IDV process.
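Purely as an illustration (the enrollment endpoint and fields are assumptions, not Voice ID™’s published interface), enrollment could follow directly from a passed deepfake check within the same flow:

```python
# Hypothetical enrollment sketch: endpoint and fields are illustrative assumptions,
# not ValidSoft's published Voice ID interface. Reuses requests/API_KEY from above.
ENROLL_URL = "https://api.example.com/v1/voiceprint/enroll"  # placeholder endpoint

def enroll_verified_applicant(audio_path: str, customer_id: str) -> None:
    """Enroll a legitimately onboarded customer's voice for future verification."""
    with open(audio_path, "rb") as f:
        response = requests.post(
            ENROLL_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
            data={"customer_id": customer_id},
            timeout=10,
        )
    response.raise_for_status()

# Typical flow: score the onboarding audio, continue only if it passes the
# deepfake check, complete the document checks, then enroll the voiceprint.
```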

Remote IDV is here to stay, deepfakes or not. To effectively detect synthetic identities generated by AI, organizations should use detection techniques that are themselves built on AI.