Jadoun, 31, says he refuses to do work that is defamatory or deceptive. But he expects many consultants will be willing to bend reality in the world's biggest election, as more than 500 million Indian voters head to the polls.
“Only our ethics can stop the creation of unethical deepfakes,” Jadoun told the Post. “But this is very difficult to stop.”
India's elections began last week and will run until early June, offering a preview of how the explosion of AI tools is transforming democratic processes and making it easy to produce seamless fake media around campaigns. More than half of the world's population lives in the more than 50 countries holding elections in 2024, making it a pivotal year for global democracy.
It's unclear how many AI fakes about politicians have been created, but experts say they're seeing a rise in election-related deepfakes globally.
“I’m seeing more [political deepfakes] this year than last year, and the ones I’m seeing are more sophisticated and convincing,” said Hany Farid, a computer science professor at the University of California, Berkeley.
A regulatory vacuum is emerging as policymakers from Brussels to Washington rush to craft legislation restricting AI-generated audio, images and video in election campaigns. The European Union's landmark AI law will not come into force until after parliamentary elections in June. A bipartisan bill in the U.S. Congress that would ban the use of AI to falsely portray federal candidates is unlikely to pass before the November election. Several U.S. states have enacted laws punishing those who make deceptive videos about politicians, creating a patchwork of policies across the country.
As a result, there are limited guardrails to prevent politicians and their allies from using AI to deceive voters, and enforcement officials are rarely a match for fakes that can spread quickly across social media and group chats. The democratization of AI means it will be up to individuals like Jadoun, not regulators, to make the ethical choices that stave off AI-induced electoral disruption.
“Let's not stand by and watch elections descend into chaos,” Sen. Amy Klobuchar (D-Minn.), chair of the Senate Rules Committee, said in a speech at the Atlantic Council last month. “This is a ‘hair on fire’ moment, not a ‘let's wait three years and see what happens’ moment.”
“More sophisticated and convincing”
For years, nation-state groups have mass-disseminated misinformation on Facebook, Twitter (now known as X) and other platforms. But as AI lowers the barrier for smaller actors to join in, the fight against falsehoods is becoming a fragmented and difficult task.
In a memo, the Department of Homeland Security warned election officials that generative AI could be used to enhance foreign influence campaigns targeting elections. AI tools could allow malicious parties to impersonate election officials and spread misinformation about voting methods and the integrity of the election process, DHS said in the memo.
These warnings are becoming reality around the world. Earlier this year, state-sponsored attackers used generative AI to interfere in Taiwan's elections. On election day, a group affiliated with the Chinese Communist Party posted AI-generated audio of a prominent politician, who had withdrawn from Taiwan's presidential race, throwing his support behind another candidate, according to a Microsoft report. But the politician, Foxconn founder Terry Gou, never made such an endorsement, and YouTube removed the audio.
Taiwan ultimately elected Lai Ching-te, a candidate opposed by the Chinese Communist Party leadership, suggesting the limits of such influence campaigns.
Microsoft expects China to adopt a similar strategy in India, South Korea and the United States this year. “China's increased experimentation in augmenting memes, videos, and audio is likely to continue and may prove more effective in the future,” Microsoft's report states.
But the low cost and wide availability of generative AI tools have made it possible for people without state backing to carry out schemes comparable to nation-state campaigns.
In Moldova, an AI deepfake video showed the country's pro-Western president, Maia Sandu, appearing to resign and urge people to support a pro-Putin party during local elections. In South Africa, a digitally altered video of the rapper Eminem showed him endorsing a South African opposition party ahead of May's general election.
In January, a Democratic political operative faked President Biden's voice in robocalls urging New Hampshire primary voters not to go to the polls, an act he said was meant to raise awareness of the problem.
The rise of AI deepfakes could change the demographics of who runs for office, as bad actors disproportionately use synthetic content to target women.
Bangladeshi opposition politician Rumin Farhana has faced sexual harassment online for years. But last year, an AI deepfake photo of her in a bikini surfaced on social media.
Farhana said it was unclear who created the image. But in Bangladesh, a conservative Muslim-majority country, the photo drew hateful comments from the public on social media, with many voters believing it was real.
Farhana said this kind of character assassination could deter women candidates from entering politics.
“No matter what new thing happens, it is always used against women first. They are the victims in every case,” Farhana said. “AI is no exception.”
“Wait before sharing”
In the absence of congressional action, states are acting on their own, and international regulators are pressing companies for voluntary commitments.
About 10 states have adopted laws punishing those who use AI to deceive voters. Last month, Wisconsin's governor signed a bipartisan bill imposing fines on those who fail to disclose AI in political ads. Michigan law punishes anyone who knowingly distributes AI-generated deepfakes within 90 days of an election.
But it's unclear whether the penalties — which in some jurisdictions run as high as a $1,000 fine and 90 days in jail — are harsh enough to deter potential offenders.
With limited detection technology and few designated personnel, enforcement officers may struggle to quickly confirm whether a video or image was actually generated by AI.
In the absence of regulation, government officials are seeking voluntary agreements from politicians and technology companies alike to control the spread of AI-generated election content. European Commission Vice President Vera Jourova said she had written to the main political parties in European member states with a “plea” to resist the use of manipulative AI techniques. However, she said, politicians and political parties will face no penalty if they do not comply with her request.
“I can't say whether they will follow our advice or not,” she said in an interview. “It would be very sad if they didn't. If parties have ambitions to govern member states, they must also show that they can win elections without using dirty means.”
Jourova said that in July 2023 she asked major social media platforms to label AI-generated works ahead of the election. The request received mixed reactions in Silicon Valley, she said, where some platforms told her it would be impossible to develop technology to detect AI.
OpenAI, which develops chatbot ChatGPT and image generator DALL-E, has also sought to forge relationships with social media companies to address the distribution of AI-generated political material. At the Munich Security Conference in February, 20 leading technology companies pledged to work together to detect and remove harmful AI content during the 2024 election.
“This is a society-wide issue,” Anna Makanju, vice president of global affairs at OpenAI, said in an interview with Post Live. “It is not in our interest for this technology to be leveraged in this way, especially as we have learned lessons from past elections and the past few years.”
However, companies face no penalties if they fail to fulfill their promises, and there was already a gap between OpenAI's stated policy and its enforcement. A super PAC backed by Silicon Valley insiders launched an AI chatbot of presidential candidate Dean Phillips built on the company's ChatGPT software, in violation of OpenAI's ban on the use of its technology in political campaigns. The company did not ban the bot until The Washington Post reported on it.
Jadoun, who produces AI content for political campaigns in India's election, said the deepfake epidemic cannot be solved by government alone; the public needs to be better educated.
“Anytime content stirs a strong emotion, stop and wait before sharing it,” he said.