AI Scams Are Surging, and Consumers Expect Their Banks to Respond with AI Voice Security
Rising Threat of AI Scams
A new State of Scams report from Alloy, based on a Harris Poll of 2,000 U.S. consumers, confirms what many in cybersecurity have long anticipated: we have entered the era of weaponized, AI-driven fraud.
Eighty-five percent of consumers now believe that artificial intelligence is making scams harder to detect, and nearly two-thirds have either been directly affected or know someone who has. Some of the fastest-growing attack vectors are AI-powered bank impersonations and voice-cloning phone scams, attacks that can convincingly replicate the voices of trusted individuals, from customer service agents to family members. One in five victims of these scams has lost more than $5,000.
These are not isolated incidents. They represent a systemic shift in how financial crime operates. As Sara Seguin, Principal Advisor on Fraud and Identity Risk at Alloy, notes, “AI hasn’t made fraudsters more sophisticated, it’s made them more efficient.” A single criminal can now generate thousands of hyper-personalized voice scams in minutes. The barrier to entry for fraud has never been lower, while the potential impact has never been greater.
The New Trust Crisis in Banking
What’s especially striking about Alloy’s findings is not only the scale of consumer concern but its direction. Ninety-seven percent of respondents say fraud prevention is the number-one factor when choosing a bank. Two-thirds say they are more likely to select a financial institution that actively uses AI to prevent fraud.
In other words, consumers recognize that AI is part of the problem, but they also believe it must be part of the solution. This dual expectation creates a new trust mandate for the financial sector. The voice channel, in particular, has become fraud’s easiest point of entry, yet it remains one of the least protected. When anyone can sound like anyone else, verifying the authenticity of the voice itself becomes mission-critical.
Responding with AI Voice Security™
At ValidSoft, we are leading the shift from voice vulnerability to AI Voice Security™, a new standard for protecting the most human form of communication. It combines real-time deepfake detection with voice identity assurance, creating a seamless layer of protection that detects synthetic voices and confirms genuine ones in real time. Our strategic alliance with Reality Defender unites the world’s most advanced voice and deepfake detection technologies into a single solution, so clients no longer have to choose between them.
Securing a Future of Trust
AI has fundamentally changed the economics of fraud. But it has also changed what customers expect from the organizations they trust with their money and identity. The next generation of secure banking will be defined by those who can protect not only data and transactions, but voice itself.
Banks and enterprises that secure their voice channels today will earn the trust that defines tomorrow.
At ValidSoft, we believe that in an era of synthetic voices and AI deception, authenticity must be audible.
Because when trust is on the line, your voice should still speak for itself.