AI firm ElevenLabs is in the news following a recent event that shook the political establishment: its technology was allegedly misused to produce deepfake audio of President Joe Biden. The deepfake, which mimicked the president's voice, was spread through a robocall to voters in New Hampshire, advising them not to take part in the state's primary. ElevenLabs has reportedly taken prompt action, banning the individual who created the fake audio.
ElevenLabs Addresses the Deepfake Incident:
ElevenLabs, a company well known for its AI-powered voice cloning services, swiftly banned the user connected to the deepfake Biden audio, according to a Bloomberg report. The security firm Pindrop notified the company that its tools had apparently been used to produce the deepfake, and ElevenLabs has since opened an investigation into the matter. According to Pindrop's analysis, the likelihood that ElevenLabs' technology was involved was "well north of 99 percent." ElevenLabs says it remains committed to addressing the issue and preventing misuse of its AI tools.
Pindrop’s Analysis:
Pindrop's analysis involved stripping background noise from the robocall and carefully comparing the cleaned deepfake audio against samples from more than 120 speech synthesis engines. Vijay Balasubramaniyan, Pindrop's CEO, told Wired that the analysis pointed unequivocally to ElevenLabs as the source of the technology behind the deepfake. ElevenLabs has already suspended the account that generated the fake audio, according to Bloomberg.
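Pindrop has not published its exact methodology, but the general approach it describes, matching a suspect clip against reference samples from many synthesis engines, can be illustrated with a minimal sketch. The snippet below assumes a hypothetical embed_voice() helper that turns audio into a speaker-embedding vector (such a helper is not part of Pindrop's or ElevenLabs' tooling); the ranking logic itself is just cosine similarity over those vectors.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def rank_candidate_engines(suspect_embedding: np.ndarray,
                           engine_embeddings: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Rank candidate speech-synthesis engines by how closely their
    reference samples match the suspect clip's embedding."""
    scores = {name: cosine_similarity(suspect_embedding, emb)
              for name, emb in engine_embeddings.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical usage (embed_voice and denoise stand in for a real
# speaker-embedding model and a noise-reduction step; they are assumptions,
# not confirmed details of Pindrop's pipeline):
#
# suspect = embed_voice(denoise("robocall.wav"))
# references = {engine: embed_voice(sample) for engine, sample in samples.items()}
# print(rank_candidate_engines(suspect, references)[:5])  # five most likely engines
```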
Election Security Consequences:
The deepfake of President Biden's voice underscores the risks posed by technologies that can replicate a person's voice and likeness. Experts worry that such deepfake capabilities could be used to influence voters and elections. Professor Kathleen Carley of Carnegie Mellon University cautions that this may not be the end of the story, foreseeing increasingly sophisticated efforts to suppress voting and attacks on election workers ahead of the United States' next presidential election.
ElevenLabs’ Platform and Public Response:
ElevenLabs had released the platform's beta version only days before the incident. The platform lets users generate audio that mimics well-known voices for creative and political commentary. On its safety page, the company states that voices may not be cloned for malicious purposes and that fraud, discrimination, hate speech, and other forms of online abuse are prohibited. Even though ElevenLabs says it is committed to stopping improper use of its AI tools, the incident highlights the need for stronger safeguards to keep bad actors from influencing elections around the world.
The deepfake Biden robocall serves as a sobering reminder of the challenges that rapidly advancing AI technology poses for political discourse and elections. Even as ElevenLabs investigates and addresses the misuse of its tools, the episode fuels a larger debate about the strong protections needed to prevent audio AI from being manipulated for malicious ends. It also underscores how critical it is to prioritize election security and protect the integrity of democratic processes as the 2024 presidential election approaches.