Covid-19 and the Deepfake Threat: The Need for Systematic Detection
Covid-19 will provide opportunities for Deepfake attacks against organisations, whose procedures and controls can be targeted while social distancing and home working disrupt them.
Deepfakes are the nefarious computer-generated audio and video that are a by-product of advancements in Artificial Intelligence. Designed to sound and look exactly like the humans they mimic, the technology is now sufficiently advanced to fool the unsuspecting, and perhaps even the suspecting.
With the availability of tools such as Lyrebird and Adobe Voco, the technology has been brought within the reach of fraudsters and cybercriminals, and the current global disruption to business and everyday life provides the perfect environment for the technology to be exploited.
With so many people working remotely from home, and the speed at which this occurred, business processes and controls designed for the office may not apply or may have been relaxed for reasons of pragmatism. Controls such as segregation of duties requiring two or more employees to be involved in a process such as a payment or a purchase may no longer be possible using traditional methods.
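To make the segregation-of-duties control concrete, the check it enforces can be sketched in a few lines. This is a hypothetical illustration, not any organisation's actual system: the function name, the two-approver threshold and the employee names are all assumptions for the example.

```python
# Hypothetical sketch of a dual-control (segregation-of-duties) check:
# a payment is only authorised once enough *distinct* employees, none
# of whom initiated it, have approved it. Names and threshold are
# illustrative assumptions.

REQUIRED_APPROVERS = 2  # dual control: two independent sign-offs

def payment_authorised(initiator: str, approvers: set) -> bool:
    """True only if the approvers include at least REQUIRED_APPROVERS
    distinct employees other than the person who raised the payment."""
    independent = approvers - {initiator}
    return len(independent) >= REQUIRED_APPROVERS

# The initiator approving their own payment does not count:
# payment_authorised("alice", {"alice", "bob"}) is False, while
# payment_authorised("alice", {"bob", "carol"}) is True.
```

The point of the control is the set subtraction: it is precisely this requirement for independent human sign-off that becomes hard to enforce when everyone is working from home, and that a convincing Deepfake voice can be used to bypass.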
Added to this is the fact that normal business patterns, in payments, purchases and even the organisations we deal with, are subject to change. A long-term supplier may no longer be in business, meaning the addition of a brand-new supplier may not raise any suspicion. This is where Deepfakes can be exploited, by leveraging changes in the normal way of doing things, together with social isolation.
Many senior executives will have their faces and voices online somewhere, whether on social media or just as likely on their own company’s web site. This provides the input to train the Deepfake engine into producing either a static recording or even a “voice skin”, allowing the perpetrator to conduct an actual conversation in the target’s voice.
Armed with the Deepfake, the next step is simply social engineering. In the first reported case of Deepfake fraud, covered by the WSJ in 2019, fraudsters used a Deepfake of a CEO's voice to convince another executive to make an urgent payment to a new supplier, which was of course the fraudster. In the current environment of physical separation and upheaval, requests such as this would not necessarily raise suspicion if a voicemail, or even a remote conversation request, from a colleague sounded genuine and familiar.
"I don't have access to my computer", "I've forgotten my password and the helpdesk isn't answering", "we urgently need to pay this new provider for provisions": all sound vaguely reasonable in the circumstances.
So how do we protect against Deepfakes? Only advanced voice biometric engines that can discriminate between a human voice and a synthetic, machine-learning-generated voice, i.e. a Deepfake, can detect these types of fraud. These advanced synthetic-speech detection algorithms can detect what the human ear (and eye) cannot, due to the inherent digital anomalies present in generated audio.
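To give a flavour of what "inherent digital anomalies" means, here is a toy sketch, and emphatically not ValidSoft's (or any production engine's) algorithm. Real detectors rely on learned features over many signal statistics; this example uses just one such statistic, spectral flatness, to separate an artificially tonal signal from broadband noise. The signals, the threshold and the function names are all assumptions for illustration.

```python
# Toy illustration of signal-statistic-based synthetic-audio screening.
# NOT a production Deepfake detector: real engines use many learned
# features. Spectral flatness (0..1) measures how noise-like a spectrum
# is; a pure machine-generated tone scores near 0, broadband speech-like
# noise scores much higher.

import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def looks_synthetic(signal: np.ndarray, threshold: float = 0.1) -> bool:
    # Very low flatness = energy concentrated in a few bins: tonal,
    # artificial-sounding structure a human ear might not notice.
    return spectral_flatness(signal) < threshold

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440.0 * t)   # stand-in for a tonal artefact
noisy = rng.normal(size=8000)          # stand-in for broadband speech
```

Here `looks_synthetic(tone)` is `True` and `looks_synthetic(noisy)` is `False`: the tone's energy sits in one frequency bin while the noise spreads across the whole spectrum. Production engines apply far richer analyses of exactly this kind, at a level of subtlety the human ear cannot match.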
The ValidSoft technology to detect Deepfake audio can be integrated with any authorisation system, whether payment, purchasing or anything else that would normally rely on person-present segregation controls. Just as voice biometric solutions can be used to absolutely authenticate the various parties in a secure authorisation system, whether they are working remotely or not, so too can ValidSoft’s voice biometric technology be used to identify those parties as human and not Deepfakes.