POTUS Executive Order on AI: Navigating a Multifaceted Challenge
5 min read
With his recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, President Biden is taking the initiative to ensure that America leads the way in seizing the opportunities and managing the risks of artificial intelligence (AI). The Executive Order sets out to establish new standards for AI safety and security and aims to protect the privacy and rights of the American people whilst promoting innovation and competition, amongst other goals.
Whilst AI offers immense opportunities for innovation and commercial exploitation, a key part of the Executive Order is to put in place a framework to deal with the darker side of AI: abuse and nefarious applications. The rise of artificial intelligence has birthed advancements such as deepfakes, which have many legitimate applications but can equally be used for cybercrime, deception, identity theft, and fraud. The Executive Order highlights some of the methods that may be used to mitigate the risk of AI deepfake misuse.
Executive Order Points to Watermarking: A Potential Solution?
One such proposed remedy is watermarking, on which considerable hope has been pinned as a means of discerning AI-generated deepfakes. However, a deeper dive into its efficacy paints a more nuanced picture.
Recent research from the University of Maryland unveils a concerning vulnerability in relying solely on watermarking. Their findings demonstrate how straightforward it can be to remove these watermarks or, worse, to embed deceptive ones that mislead verification. When experts like Soheil Feizi, a respected name in computer science, express reservations about the reliability of watermarking as a shield against AI disinformation, it’s evident that the challenge is more complex than it appears on the surface.
A particularly worrying frontier of the deepfake evolution is audio. The leap from manipulated images to synthetic audio brings with it a host of challenges and potential malicious uses, from spreading misinformation and committing fraud to tarnishing reputations. To believe that the answer to these multifaceted problems lies solely in watermarking is to grossly oversimplify a layered challenge.
Delving into the Inherent Challenges of Watermarking
Robustness vs. Imperceptibility: An ideal watermark should be resilient, capable of withstanding alterations like compression, equalization, or speed shifts. Simultaneously, it should be imperceptible, ensuring listeners remain oblivious to its presence. Striking this delicate balance is far from trivial (a toy sketch after this list illustrates the trade-off).
The Threat of Intentional Attacks: In the world of AI, adversaries are sophisticated and are not bound by geography or jurisdiction. Many, including rogue nations, are well-equipped with tools and knowledge that can deliberately remove or distort watermarks. Tactics such as re-encoding, noise injection, or advanced filtering can be used to this end.
Unplanned Alterations: The real world is unpredictable. Common events, like a phone call transmitted over a subpar line, ambient background noise, or even the artefacts of re-recording replayed audio, can inadvertently degrade a watermark, rendering it useless.
Consistency in Implementation: The premise of watermarking hinges on the assumption that all synthetic audio will carry this protective mark. Ensuring that every AI tool or deepfake creator complies is a gargantuan task. Moreover, malicious creators have every incentive to sidestep the system, making it ineffective against such determined threats.
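To make these trade-offs concrete, here is a minimal, hypothetical sketch in plain NumPy (not any production watermarking scheme; all names and parameter values are illustrative assumptions). A toy spread-spectrum mark is embedded as a keyed noise pattern and detected by correlation: it survives additive noise, yet a mere 1% speed shift misaligns the pattern and defeats detection, and anyone holding the key can forge the mark onto a deepfake.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Add a keyed pseudo-random pattern to the signal (spread-spectrum style)."""
    pattern = np.random.default_rng(key).standard_normal(len(audio))
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, key: int) -> float:
    """Correlate the signal with the keyed pattern; well above zero means 'marked'."""
    pattern = np.random.default_rng(key).standard_normal(len(audio))
    return float(np.dot(audio, pattern) / len(audio))

def speed_shift(audio: np.ndarray, factor: float = 1.01) -> np.ndarray:
    """Resample by ~1% to mimic a playback-speed change or sloppy re-encode."""
    positions = np.arange(int(len(audio) / factor)) * factor
    return np.interp(positions, np.arange(len(audio)), audio)

# Three seconds of a stand-in "voice" signal at 16 kHz (just noise here).
sample_rate = 16_000
voice = 0.1 * rng.standard_normal(3 * sample_rate)
marked = embed_watermark(voice, key=1234)
print(f"marked:      {detect_watermark(marked, key=1234):+.4f}")   # ~ +0.005

# The mark survives additive noise, the benign case watermarking handles well...
noisy = marked + 0.02 * rng.standard_normal(len(marked))
print(f"noisy:       {detect_watermark(noisy, key=1234):+.4f}")    # still ~ +0.005

# ...but a 1% speed shift misaligns the pattern and the statistic collapses.
shifted = speed_shift(marked)
print(f"speed-shift: {detect_watermark(shifted, key=1234):+.4f}")  # ~ 0.000

# Worse still, anyone who obtains the key can forge the mark onto arbitrary
# audio, making a deepfake "verify" as marked (the forged-watermark risk above).
deepfake = 0.1 * rng.standard_normal(3 * sample_rate)
forged = embed_watermark(deepfake, key=1234)
print(f"forged:      {detect_watermark(forged, key=1234):+.4f}")   # ~ +0.005
```

Production schemes add synchronization and perceptual shaping to resist exactly these attacks, but the underlying tension remains: the more robust the mark, the more audible it tends to become, and a motivated adversary only needs one transformation the detector cannot undo.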
Voice Verity™: A Reliable Solution That’s Already Here
Amidst these challenges emerges a pressing need for robust, reliable detection that operates independently of the creators of the AI deepfake. Such a tool must work across all AI deepfake generators and, crucially, detect a deepfake irrespective of whether a watermark is present. Indeed, it should effectively ignore the watermark, since its presence is not an absolute indicator of integrity and trust.
Enter ValidSoft’s Voice Verity™, purpose-built to detect deepfake audio in real time. As our digital personas become integral to our everyday lives, the urgency for instantaneous, foolproof security measures escalates.
The Need for Comprehensive Defense Strategies
While watermarking is undoubtedly a valuable tool in the arsenal against tampered or AI-generated content, its limitations cannot be overlooked. Voice Verity™ isn’t just a patchwork fix; it represents a comprehensive, forward-thinking approach to security. It’s designed to integrate seamlessly with any audio-supporting channel, providing an unwavering line of defense. As businesses and consumers grapple with the challenges of deepfakes, it’s crucial to understand that preliminary defenses like watermarking must be underpinned by stalwart solutions like Voice Verity™. This ensures a well-rounded protective strategy, shielding us from the insidious threat of synthetic audio fraud.
In summation, watermarking, while one tool in the fight against deepfakes and the wider threat of AI misuse, isn’t sufficient on its own. The challenges it faces, both inherent and practical, advocate for a combined strategy, in which cutting-edge AI-based detection offers the best prospect against the ever-evolving menace of deepfakes. Whilst nations, international and national organizations, and private enterprises proceed in an unharmonious gallop towards standards for controlling the use of AI, abuse and deceit will continue unabated, and at a far faster pace than the world of regulation and control, which bad actors and rogue nations will ignore in any event. The world needs a standalone generative AI audio deepfake detection and prevention capability. Voice Verity™ is available, today!