Understanding AI Legislation: The Global Challenge of Regulating Deepfake Technology

Governments and policymakers globally are looking to define high- and low-risk usage of Artificial Intelligence (AI), with the aim of legislating against, and restricting, high-risk applications of the technology. Yet AI is far-reaching in its potential applications, many of which have not even been thought of yet.

Within the EU’s draft AI Act, unacceptable and high-risk uses of AI range from applications in children’s toys, aviation, and medical devices through to cognitive behavioral manipulation and social scoring, amongst numerous others.

AI Legislation in Action: The ELVIS Act of Tennessee

In the US State of Tennessee, however, a piece of newly proposed AI legislation is very specific in its intended target and application. Known as the Ensuring Likeness, Voice, and Image Security (ELVIS) Act, it is intended to strengthen the existing Personal Rights Protection Act, which recognizes a commercial Right of Publicity. The intention is to add protection for sound and voice to the existing protections that apply to name and likeness, i.e., an image or photograph.

One target of the ELVIS Act is therefore the unauthorized production of Deepfake audio (whether or not combined with video). The existing Personal Rights Protection Act was intended for celebrities, public figures, and others who have a degree of fame, as they were the targets of most Deepfake videos. A recent example is the unauthorized production of a song, purportedly by the artist known as Drake, which went viral and topped the music charts last year. This is the sort of activity that the ELVIS Act seeks to address.

Deepfake Technology: A Catalyst for AI Legislation

The usage of Deepfakes has therefore clearly progressed beyond celebrities portrayed in pornography or politicians purportedly making controversial announcements. Whilst such uses against publicly recognizable figures remain common, Deepfakes are now also used for fraud, and they are not restricted to public figures: they can target anyone. Nor is their nefarious usage intended to be public or published; rather, it is furtive and “under the radar”.

This, therefore, raises a number of questions about the intended legislation. Will the protection of voice against Deepfakes apply only to public figures and entertainers such as recording artists, or will it apply to all Tennessee citizens? And if the output of Deepfake software is synthetic audio of any person that could be used to their detriment, e.g., in payment or banking fraud, rather than a synthetically generated song, will the use of such software, or making it publicly available, be deemed unlawful, given it arguably has only one purpose? Apart from making a Deepfake of yourself, generating audio based on anyone else’s voice would potentially be unlawful.

Enforcement Challenges in AI Legislation Against Deepfakes

And then there is the question of policing the legislation. Suppose Deepfake voice protection is to be afforded to all Tennessee citizens rather than just famous musicians. How is nefarious use to be detected, given that the fraudulent use-cases of Deepfakes are, by design, not obvious even to those they target? This is no longer just a right of publicity, as the new, fraudulent use of Deepfake audio is anything but public.

Whilst these questions remain unanswered, legislating the use of something may be effective for the law-abiding majority, but it will never actually stop the non-law-abiding minority. It would seem, therefore, that technical detection of Deepfake audio on the channels where it would be targeted may be the most effective way of achieving the Act’s stated aims. The ability to detect Deepfake audio will also become increasingly important on an evidentiary basis as the use, and misuse, of Deepfake audio becomes increasingly litigated in the courts.
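To make the idea of technical detection concrete, here is a deliberately simplified toy sketch, not ValidSoft's method or any production detector, of one classical signal cue sometimes discussed in synthetic-audio analysis: spectral flatness, which distinguishes tonal, harmonic signals from noise-like ones. The function names and the threshold-free scoring are illustrative assumptions; real detectors combine many learned features.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Ratio of the geometric mean to the arithmetic mean of the
    frame's power spectrum: near 0 for tonal signals, near 1 for noise."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # epsilon avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def audio_flatness_score(audio: np.ndarray, frame_len: int = 512) -> float:
    """Average spectral flatness over fixed-length frames.
    Purely illustrative: a single spectral cue, not a real Deepfake verdict."""
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    return float(np.mean([spectral_flatness(f) for f in frames]))

# Toy comparison: one second of a pure 220 Hz tone vs. white noise at 16 kHz.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 220 * t)       # harmonic, speech-like peaky spectrum
noise = rng.standard_normal(16000)       # flat, noise-like spectrum

print(audio_flatness_score(tone))   # low value (tonal)
print(audio_flatness_score(noise))  # much higher value (noise-like)
```

A real system would feed many such features, or raw audio, into a trained classifier; the point here is only that synthetic and natural signals can differ in measurable spectral statistics, which is what makes automated detection on live channels plausible.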

The Interplay of AI Legislation and Technology

As legislative efforts like the ELVIS Act aim to address the burgeoning concerns around Deepfake technology, particularly in the realm of synthetic audio, the need for sophisticated, real-world solutions becomes paramount. This is where ValidSoft’s Voice Verity™ steps in, bridging the gap between legislative intent and practical enforcement. With its advanced AI-driven speech analysis, Voice Verity™ offers a proactive defense mechanism, capable of identifying and flagging Deepfake audio with remarkable precision. This not only aligns with the objectives of new legislation but also enhances security and trust in digital communication, extending protection beyond public figures to every individual susceptible to Deepfake audio exploitation.

In conclusion, while laws like the ELVIS Act mark a significant step in governing AI applications, their ultimate effectiveness in this digital era hinges on innovative and effective defensive and preventative technologies. ValidSoft’s Deepfake detection solution, Voice Verity™, is one example of AI being used to detect audio generated by AI. Voice Verity™ detects Deepfake speech on any channel using just a few seconds of audio and, as a non-biometric speech processing solution, it requires no PII, no enrolment, and indeed no consent. By deploying solutions such as Voice Verity™, this proposed AI legislation can take on an enforceable element against the Deepfakes that operate undetected across any organization’s customer-facing channels.