A sensational article about the Israeli prime minister's "psychiatrist" that exploded online was generated by AI, researchers warn, and it originated on one of hundreds of websites churning out technologically enabled fictions disguised as news.
Propaganda-spouting websites once relied on armies of writers, but generative artificial intelligence tools now offer a far cheaper and faster way to fabricate content that can be hard to distinguish from real information.
Hundreds of AI-powered sites imitating news organizations have sprung up in recent months, fueling an explosion of false reporting on everything from wars to politicians. Researchers say they are on high alert this year, with high-stakes elections taking place around the world.
An article headlined "Israeli prime minister's psychiatrist commits suicide" went viral online in November after appearing on the Pakistani digital outlet Global Village Space, complete with unsubstantiated claims about a suicide note denouncing Prime Minister Netanyahu. It still sits atop the site's "Popular Articles" list.
A "significant portion" of the site's content, including that article, appears to have been lifted from mainstream sources using AI tools, according to an analysis by NewsGuard, a US organization that tracks misinformation.
After scanning the site for error messages characteristic of AI chatbot output, NewsGuard found significant similarities between the text about Netanyahu's "psychiatrist" and a fictitious 2010 article on a satirical website.
NewsGuard analyst Mackenzie Sadeghi said that when she prompted Microsoft-backed OpenAI's ChatGPT to rewrite the original satirical article for a general news audience, the result was "very similar" to the Global Village Space article.
"The rapid increase in AI-generated news and information sources is worrying because these sites could be perceived by the average user as legitimate and trustworthy sources," Sadeghi told AFP.
– "Promoting propaganda" –
The fabricated article, which appeared as Prime Minister Benjamin Netanyahu pressed ahead with the war against Hamas militants in the Gaza Strip, bounced around social media platforms in multiple languages, including Arabic, Farsi and French.
Several sites posted obituaries for the fictitious "psychiatrist."
The falsehood was also picked up by a TV show in Israel's arch-enemy Iran, whose host directed viewers to read the full article on Global Village Space.
The website relabeled the Netanyahu article as "satire" after criticism, but did not respond to AFP's request for comment.
NewsGuard has identified at least 739 AI-generated "news" sites across multiple languages that operate with little or no human oversight and carry generic names such as "Ireland Top News."
But even that list is probably just "low-hanging fruit," says Darren Linvill of Clemson University.
Linvill is among the university's disinformation experts who discovered Russia-linked websites copying news outlets and promoting Kremlin propaganda about the Ukraine war ahead of November's US presidential election.
Among them is DC Weekly, which NewsGuard says uses AI to rewrite articles from other sources without credit.
The site, believed to be owned by John Mark Dugan, a former US Marine who defected to Russia, has posted numerous false claims, including that Ukrainian President Volodymyr Zelenskiy bought two luxury yachts worth millions of dollars in US aid.
Illustrating the power of AI-driven misinformation to influence policy decisions, some US lawmakers repeated the false claim during a critical debate on aid to Ukraine.
– “Camouflage” –
“Automatically generated misinformation is likely to be a big part of the 2024 election,” Gary Marcus, a professor at New York University, told AFP.
“Fraudsters are using (generative) AI left, right and center.”
Linvill told AFP that AI-generated content on websites such as DC Weekly serves as a "kind of camouflage" lending credibility to false articles written by humans.
These websites highlight the potential of AI tools (chatbots even more than image generators or voice cloners) to fuel misinformation, while further undermining trust in traditional media, researchers say.
The polarizing content, which can sow confusion and sway political beliefs, is designed to attract attention and generate advertising revenue.
Many of these websites rely on programmatic advertising for revenue, meaning top brands may inadvertently be funding them. Researchers say governments may also find it difficult to crack down on the sites for fear of violating free speech protections.
"I'm particularly concerned about its use by commercial companies," Linvill said.
“If we don't stop and pay attention, the already blurred line between reality and fiction will only further erode.”