- Meryl Sebastian
- BBC News, Kochi
Last November, Muralikrishnan Chinnadurai was watching a livestream of a Tamil event in the UK when he noticed something odd.
A woman introduced as Dwaraka, daughter of extremist Tamil Tiger leader Velupillai Prabhakaran, was speaking.
The problem was that Dwaraka had been killed in an airstrike in 2009, during the final stages of Sri Lanka's civil war, more than a decade earlier. The body of the 23-year-old woman was never found.
And now, here she was – seemingly a middle-aged woman – encouraging Tamilians around the world to advance their political struggle for freedom.
Chinnadurai, a fact-checker from the southern Indian state of Tamil Nadu, watched the video closely and noticed flaws in it, quickly identifying it as a person generated by artificial intelligence (AI).
The potential problem quickly became clear to Mr. Chinnadurai. “This is an emotional issue in the state [Tamil Nadu],” he said. “And with an election right around the corner, misinformation can spread quickly.”
As India heads to the polls, voters will encounter a wealth of AI-generated content: election campaign videos, personalised voice messages in various Indian languages, and even automated phone calls made to voters in a candidate's own voice.
Content creators like Shahid Sheikh have even used AI tools to show Indian politicians in never-before-seen avatars: wearing athleisure, playing music and dancing.
But as tools become more sophisticated, experts are concerned about the impact they can have in making fake news appear real.
“Rumours have always been a part of campaigning. [But] in the age of social media, they can spread like wildfire,” says SY Qureshi, the country's former chief election commissioner.
“It could actually set the country on fire.”
Indian political parties are not the first in the world to take advantage of recent advances in AI. Across the border in Pakistan, jailed former Prime Minister Imran Khan addressed a rally through an AI-generated voice clone.
And back home in India, Prime Minister Narendra Modi has already taken full advantage of the emerging technology to campaign effectively, using a government-developed AI tool called Bhashini to translate his Hindi speeches into Tamil in real time as he addresses audiences.
But it can also be used to manipulate words and messages.
Last month, two videos of Bollywood stars Ranveer Singh and Aamir Khan apparently campaigning for the opposition Nationalist Congress Party went viral. Both actors filed police complaints, alleging that the deepfakes were created without their consent.
Then, on 29 April, Prime Minister Modi expressed concern that AI was being used to distort speeches by him and other senior ruling-party leaders.
The next day, police arrested two people, one each from the opposition Aam Aadmi Party (AAP) and the Nationalist Congress Party, in connection with a doctored video of Home Minister Amit Shah.
Mr. Modi's Bharatiya Janata Party (BJP) faces similar criticism from opposition leaders in the country.
The problem, experts say, is that despite the arrests, there are no comprehensive regulations in place.
This means, “if you're caught doing something wrong, you're probably going to get a slap on the wrist at best,” said Srinivas Kodali, a data and security researcher.
In the absence of regulation, creators told the BBC they must rely on personal ethics to decide what work they do or don't do.
The BBC has learned that requests from politicians have included pornographic images of rivals and the morphing of rivals' videos and audio to damage their reputations.
“I've been asked to make an original video look like a deepfake, because if the original video were widely shared, it would make the politician look bad,” revealed Divyendra Singh Jadoun.
“So his team wanted me to create a deepfake that could be passed off as the original.”
Jadoun, founder of The Indian Deepfaker (TID), which uses open-source AI software to help create campaign material for Indian politicians, says he insists on adding a disclaimer to everything he creates, making it obvious that the content is not genuine.
Even so, such content remains difficult to control once it is released.
Sheikh, who works for a marketing agency in the eastern state of West Bengal, has seen his work shared by politicians and political pages on social media without permission or credit.
“A politician used my image of Mr Modi without any context and without mentioning that it was created using AI,” he says.
And creating deepfakes is now so easy that anyone can do it.
“What used to take seven to eight days to create now takes three minutes,” Jadoun explains. “All you need is a computer.”
In fact, the BBC has seen first-hand how easy it is to create a fake phone call between two people, in this case myself and former US President Donald Trump.
Despite the risks, India initially said it was not considering enacting AI legislation. But in March this year, the government changed tack following an uproar over Google's Gemini chatbot's response to the question: “Is Prime Minister Modi a fascist?”
Rajeev Chandrasekhar, the country's junior information technology minister, said it was a violation of the country's IT laws.
Since then, the Indian government has required tech companies to obtain its explicit permission before publicly releasing “unreliable” or “poorly tested” generative AI models and tools. It has also warned against AI responses that “threaten the integrity of the electoral process”.
But experts say that is not enough. Fact-checkers say that debunking such content is a constant challenge, especially during election periods, when misinformation is at its peak.
“Information travels at a speed of 100 kilometres per hour,” says Chinnadurai, who runs a media watchdog in Tamil Nadu. “But the corrected information we put out to debunk it travels at 20 kilometres per hour.”
And these fakes have found their way into mainstream media, Kodali said. Despite this, “the Election Commission remains publicly silent about AI.”
“Overall, there are no rules,” Kodali said. “They're letting the technology industry self-regulate instead of creating actual regulations.”
There's no surefire solution, experts say.
“But [for now] if action is taken against people forwarding fake information, it may scare others away from sharing unverified content,” Qureshi said.