A video manipulated with artificial intelligence (AI) to mimic the voice of US Vice President Kamala Harris has raised serious concerns about the technology's potential to mislead voters during elections. Elon Musk, the CEO of SpaceX and Tesla, shared the video on his social media platform X, formerly known as Twitter, on Friday night, and it quickly drew attention. Musk sparked controversy because he initially shared the video without disclosing that it had originally been published as a parody.
The video uses footage from a genuine Harris presidential campaign ad, but the voice-over is an AI-generated impersonation of the likely Democratic nominee. The realism of the fake has raised concerns that AI could be used to mock or mislead the public about political figures, particularly during election season.
After facing criticism, Musk reposted the video, this time quoting the original user's caption identifying it as a parody, and added, "Parody is legal in America." But the initial post, which carried no disclaimer, underscored the fine line between satire and misinformation: despite the later correction, many viewers had already been led to believe the content was genuine.
"We believe the American people want the real freedom, opportunity, and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump," said Mia Ehrenberg, a spokesperson for the Harris campaign, in an email to The Associated Press. The statement underscores the campaign's concerns about the potential effects of such manipulated media on democracy and public opinion.
The incident has also drawn attention to X's policies on synthetic and manipulated media. Under the platform's rules, users "may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm." Memes and satire are exempt, however, provided they do not cause "significant confusion about the authenticity of the media." Because Musk's initial post did not disclose that the video was satire, it was unclear whether these rules had been broken.
Two experts in AI-generated media who reviewed the audio of the fake ad confirmed that it was created with AI. Hany Farid, a digital forensics specialist at the University of California, Berkeley, commented on the video's skillful use of deepfake and generative AI techniques. "The voice generated by AI is very good," he wrote in an email. "The video is so much more powerful when the words are in Vice President Harris' voice, even though most people won't believe it."
Farid stressed that generative AI companies must ensure their tools are not used in ways that harm people or democracy. Rob Weissman, co-president of the advocacy group Public Citizen, offered a bleaker assessment. "I don't think that's obviously a joke," Weissman said in an interview. "I'm sure the majority of people who see it don't think it's a joke. Although it's not excellent, the quality is sufficient. And most people will take it seriously since it reinforces prevailing ideas that have been talked about in relation to her."
Weissman, whose organization advocates for regulation of generative AI, warned that this incident exemplifies the very risks his group has been raising. Deepfakes and other AI-generated content are spreading rapidly on social media, posing serious problems because they can easily mislead the public and distort political debate.
This is not an isolated incident. Deepfakes and other AI-generated content are being used increasingly often in a range of contexts, from disinformation campaigns to scams. Experts warn that as AI technology becomes more advanced and accessible, the potential for its abuse in politics and other arenas will only grow, underscoring the need for strong regulation and greater public awareness to protect democracy.
As the 2024 election season heats up, the use of AI in politics will likely remain a divisive and closely watched topic. The incident involving Musk and the Kamala Harris video serves as a sobering reminder of both the power and the dangers of artificial intelligence in shaping public opinion, and of the pressing need for safeguards against its misuse.