Martin Lewis: The Money Saving Expert: Two Sides to the AI and Deepfake Arms Race
4 min read
The importance of being able to distinguish deepfakes – realistic AI-generated video and audio – from actual humans was highlighted in a speech this week by UK Financial Conduct Authority (FCA) chief Nikhil Rathi. The speech was prompted by the recent case of a deepfake ad on Facebook purportedly showing UK TV personality Martin Lewis – the Money Saving Expert – promoting a scam investment scheme.
Rathi stated “As AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously. We will take a robust line on this – full support for beneficial innovation alongside proportionate protections.”
However, the Martin Lewis example has also elicited at least one response demonstrating a misunderstanding of how AI-based technology not only creates deepfakes but can also detect them.
AI deepfakes are on the rise
Digital certificate provider Sectigo’s CTO of SSL, Nick France, was quoted in an Infosecurity Magazine article as saying “People don’t realize how far along AI deep fake technology has come and how democratized the technology is. AI is being increasingly used by bad actors to produce convincing deep fakes to bypass voice recognition.” He also states “As passwords are used less and less, biometrics have risen as a trusted form of identity validation. It makes sense. But as deepfakes become more common, some biometric authentication methods may be rendered useless.”
The opposite is in fact the case. Advanced voice biometric solutions such as ValidSoft’s, built on large-scale DNN models, do more than securely authenticate or identify speakers. They can also identify anomalous artifacts and other characteristics in audio that betray the presence of synthesized audio, i.e., audio created by a machine rather than a person. What might sound natural to the human ear is not natural to the machine. In this case, the “machine” is a machine learning (ML)-based detector trained on millions of fake and real recordings.
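To make the idea concrete, here is a deliberately simplified sketch of spoof detection framed as a binary classification problem. The feature values, the four-dimensional feature space, and the nearest-centroid “model” are all hypothetical stand-ins for the large-scale DNNs and real audio features described above; this is an illustration of the approach, not ValidSoft’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-dimensional "artifact" features extracted from audio clips.
# In this toy setup, genuine speech and synthesized speech occupy
# different regions of the feature space.
genuine = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
spoofed = rng.normal(loc=1.5, scale=1.0, size=(200, 4))

# "Training": a nearest-centroid classifier stands in for the
# large-scale DNN models the article describes.
genuine_centroid = genuine.mean(axis=0)
spoofed_centroid = spoofed.mean(axis=0)

def is_spoof(features):
    """True if the feature vector lies closer to the synthetic-speech centroid."""
    f = np.asarray(features, dtype=float)
    return bool(np.linalg.norm(f - spoofed_centroid)
                < np.linalg.norm(f - genuine_centroid))
```

A clip whose artifact features land near the synthetic cluster is flagged even if it sounds natural to a human listener, which is the core point: the detector operates on machine-measurable characteristics, not perceived naturalness.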
ValidSoft has been developing this so-called spoof detection solution for over a decade, starting years before the term “deepfake” was first coined in 2017. Fast forward a few years and ValidSoft stands ready to face the wave of so-called generative AI, of which the Martin Lewis ad is the latest example hitting the wire. Notable examples this year included 4chan users creating fake voices of celebrities including Emma Watson, as well as the US Senate example of Senator Blumenthal. In all these cases our ability to detect these deepfakes has been proven – so much so that ValidSoft has created a standalone version of our deepfake detection engine, Voice Verity™, for non-biometric deployments.
The standalone solution
This standalone, non-biometric deepfake detection solution also exposes the second misconception in the article: that deepfakes are created for the purpose of bypassing voice recognition solutions. Apart from the fact, explained above, that advanced voice biometric solutions include AI technology capable of detecting deepfakes, the only two published successful fraud cases involving deepfakes were based on social engineering: fooling employees into believing they were speaking with their CEO in order to elicit a funds transfer.
This is where voice biometrics can also come into play as part of the solution. With deepfake-based fraud expected to thrive, the first easy step is to deploy deepfake detection. It is also worth noting that voice imitation is not the same as cloning: because deepfake creation transfers only part of a speaker’s characteristics, some of these fake voices are quite poor at fooling modern voice biometric systems.
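A toy illustration of why a partial transfer of speaker characteristics can fail verification: voice biometric systems typically compare an embedding of the incoming audio against an enrolled voiceprint, and a clone that captures only some of the speaker’s characteristics can score below the acceptance threshold. The vectors and threshold below are invented for illustration; production embeddings have hundreds of dimensions and empirically tuned thresholds.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical speaker embeddings (toy 4-dimensional values).
enrolled = np.array([0.9, 0.1, 0.4, 0.8])           # enrolled voiceprint
same_speaker = np.array([0.85, 0.15, 0.38, 0.82])   # new genuine sample
cloned = np.array([0.6, 0.5, 0.1, 0.4])             # clone: partial match only

THRESHOLD = 0.98  # illustrative acceptance threshold

def accept(sample):
    """True if the sample's embedding is close enough to the enrolled voiceprint."""
    return cosine_similarity(enrolled, sample) >= THRESHOLD
```

In this sketch the genuine sample clears the threshold while the partial clone falls well short, mirroring the article’s point that an imperfect transfer of speaker characteristics can fool the human ear yet still fail a biometric comparison.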
That is not to say that all voice biometric solutions have deepfake detection capabilities, as shown in several cases of journalists creating deepfakes of their own voices and appearing to pass our competitors’ authentication checks.
What is clear though, is that to detect deepfakes on channels such as the contact center or a company help desk, where a social engineering attack could occur, organizations cannot rely on the human ear, but require advanced AI solutions, whether an integral part of a voice biometric solution or a standalone non-biometric solution such as Voice Verity™. Fighting AI with AI is the key to detecting deepfakes and protecting against this new threat model. If it’s top of mind for the FCA, it should also be top of mind for the financial services organizations they oversee.