Deepfakes and Phishing: Dual Threats of GenAI
The recently published Entrust Cybersecurity Institute 2025 Identity Fraud Report found that a fraudulent deepfake attempt occurred every five minutes throughout 2024. This reinforces the alert recently issued by the US Treasury Department’s Financial Crimes Enforcement Network (FinCEN) on the increased use of deepfakes created with GenAI.
A 2024 Deloitte survey issued a further warning, particularly to financial institutions: deepfake financial fraud is expected to surge, with more than 50% of C-suite executives anticipating a rise in deepfake attacks in both scale and frequency over the next 12 months.
Entrust Report: Deepfake and Phishing Threats
However, there is another emerging Generative AI trend that organizations, particularly financial services, need to be aware of. Whilst the Entrust report ranked deepfakes as the highest threat level, phishing attacks remained the largest attack vector by volume.
Couple this with The State of Phishing 2024 report published by SlashNext, which found a 4,151% increase in malicious phishing messages since the launch of ChatGPT in November 2022, and we see that two very different applications of GenAI are being used to target institutions and their customers.
Deepfakes are being used to target biometric authentication solutions, primarily facial, as well as for direct social engineering attacks against institutions, such as impersonating staff or customers on calls, video conferencing, or messaging applications.
GenAI-based phishing attacks, on the other hand, target customers directly, aiming to obtain credentials such as login IDs, passwords, PINs, and OTPs that give the fraudster access to bank accounts and other services. Whilst phishing has existed for years, the nefarious use of AI tools such as ChatGPT and ElevenLabs enables fraudsters to make their attempts far more frequent, sophisticated, convincing, and ultimately successful.
All organizations are at risk and therefore need a strategy to counter the sophistication of deepfake attack vectors on both biometric and non-biometric channels, as well as the growing volume and sophistication of phishing attacks, which are becoming ever more plausible and harder to identify as fake.
Rendering Stolen Credentials Useless
Only biometric authentication can prevent phishing and other credential-gathering attacks from succeeding, because stolen credentials are useless without the genuine customer present. And now, any biometric solution requires sophisticated AI-based deepfake detection to prevent attacks on the authentication process itself. Such a solution also needs to be fully omnichannel, covering all customer interfaces: the contact centre, mobile applications, online portals, IVAs, and in-person.
Moreover, for channels where biometric authentication is either absent or optional, but where deepfakes pose a potential risk, such as contact centres, online bots, help desks, and video conferencing, deepfake detection is now a mandatory capability.
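The layered approach described above, where deepfake detection gates the biometric decision, can be sketched as follows. This is a minimal illustration only: the score functions, thresholds, and return values are hypothetical assumptions, not any vendor's actual API.

```python
def authenticate(voice_match_score: float,
                 synthetic_score: float,
                 match_threshold: float = 0.8,
                 synthetic_threshold: float = 0.5) -> str:
    """Combine a biometric match score with a deepfake-detection score.

    Both scores are assumed to be in [0, 1]; thresholds are illustrative.
    A caller passes only if the voice matches the enrolled customer AND
    the audio is judged genuine (not machine-generated).
    """
    if synthetic_score >= synthetic_threshold:
        # Audio appears synthetic: reject regardless of biometric match.
        return "reject: suspected deepfake"
    if voice_match_score >= match_threshold:
        return "accept"
    return "reject: voice mismatch"
```

The key design point is the ordering: the deepfake check runs first, so a cloned voice that would otherwise score as a strong biometric match is still rejected.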
Omnichannel Active and Passive Solution
ValidSoft solves both problems with its world-leading voice biometric solution VoiceID™ as well as its standalone deepfake detection solution Voice Verity®. VoiceID™, a fully omnichannel active and passive authentication solution, comes integrated with the most sophisticated audio deepfake detection available, providing protection against synthetic audio authentication attacks.
Voice Verity® provides the same audio deepfake detection capability in a completely standalone, non-biometric, non-PII solution that requires no enrolment or consent and can be integrated on any channel that supports audio. Requiring just a snippet of audio, whether within a video or standalone, Voice Verity® provides a low-latency, real-time audio streaming solution that can protect every interaction on every channel.
Deepfakes are here to stay as a threat vector, and their usage is already increasing. Just as multi-factor authentication has become non-negotiable for online security and reputational risk, so too must deepfake detection become obligatory for organizations taking the threat of GenAI seriously.