This year's presidential campaign will be the first to confront widely available generative artificial intelligence, the technology behind popular apps such as ChatGPT that lets non-experts create fake but realistic-looking text, video, and audio perfectly suited to political manipulation. At the same time, many major social media companies have backed away from some of their earlier promises to promote “election integrity.” The November election will also be the first to register the impact of TikTok's enormous popularity; the app uses a recommendation algorithm that some experts believe is especially well suited to spreading misinformation.
Let's start with the rise of generative AI, which enables virtually anyone to produce persuasive text, images, and audio from relatively simple natural-language prompts. In January, a fake AI-generated image of Donald Trump sitting next to Jeffrey Epstein, the disgraced financier and sex offender, aboard Epstein's private jet spread on Facebook. In February, a Democratic consultant working for a rival candidate admitted to commissioning AI-generated robocalls that impersonated President Joe Biden in an attempt to dissuade thousands of voters from participating in the New Hampshire primary. The state's attorney general has opened a criminal investigation.
The United States is not alone. An audio clip posted on Facebook last September, just two days before Slovakia's parliamentary elections, appeared to capture a candidate from the pro-NATO, pro-Ukraine Progressive Slovakia party discussing how to rig the results. The AI-generated fake audio went viral during the pre-election media moratorium, limiting how thoroughly it could be debunked, and the candidate lost to a pro-Russia rival.
Analysts are struggling to grasp the possibilities. A study published in February in the academic journal PNAS Nexus found that political ad makers equipped with generative AI tools could build “a highly scalable 'manipulation machine' that targets individuals based on their unique vulnerabilities, without the need for human input.” This raises the concern that the kind of Russian digital operatives who meddled in the 2016 election on behalf of Donald Trump could use AI as a force multiplier to further inflame an already polarized American electorate.
But AI doesn't tell the whole story. The threat of artificially generated disinformation is made even more terrifying by a more familiar technology: social media.
Despite the chaos and violence caused by Trump's attempts to undermine the 2020 election results, and the threat of similar upheaval this year, major platforms such as Facebook, YouTube, and, most dramatically, X (formerly Twitter) have retreated from some of their past election integrity policies, according to a new report we co-authored for the Center for Business and Human Rights at New York University's Stern School of Business.
Any discussion of backsliding starts with X, which Elon Musk acquired in October 2022. Competition from Meta's new Threads app and Musk-related controversies have cost X users, but the platform still draws an average of more than 220 million visitors each month and retains the loyalty of many influential politicians, journalists, and entertainers. By spring 2023, Musk had laid off some 6,000 people, or 80% of X's workforce. In September 2023 he confirmed that the platform's election integrity team had been cut as well: “Oh, they're gone,” he said. The new owner's rollback of content moderation in the name of promoting free speech has contributed to a surge in racist and antisemitic language on the platform, prompting advertisers to abandon X en masse.
Following their pandemic-era hiring sprees, other social media companies have also carried out mass layoffs, and many have cut back their “trust and safety” teams, the groups that create and enforce content policies.
In the lead-up to the 2020 election, Meta established a 300-person team dedicated to election integrity. But despite the chaos of Trump supporters storming the Capitol on January 6, 2021, the company has since shrunk the team to about 60 people, and its leaders no longer hold regular meetings with CEO Mark Zuckerberg. Meta officials said that several former team members, now in new roles, still contribute to election integrity efforts and that top management is kept informed of the work. But the signals are clearly mixed.
To its credit, Meta still funds an industry-leading network of more than 90 external fact-checking organizations that help the platform de-emphasize and label false content. But the company continues to exempt politicians' statements from fact-checking. Meanwhile, YouTube has reversed the policy under which it removed tens of thousands of videos falsely claiming that the 2020 election was stolen. Both companies argue that the new approach promotes free speech.
Citing the same goal of promoting free debate, Meta has also relaxed its rules to allow political ads on Facebook and Instagram that question the legitimacy of the 2020 presidential election. Sure enough, in August 2023, Facebook allowed the Trump campaign to run a spot declaring that “there was election fraud in 2020.”
TikTok, now averaging more than 1 billion monthly users worldwide and more than 170 million in the United States alone, presents a new and altogether different set of challenges. Its ties to its Chinese parent company, ByteDance, have raised suspicions, so far unproven, that the authoritarian government in Beijing exerts influence over the U.S. platform. The House of Representatives recently approved a bill that would force ByteDance to sell TikTok or face a U.S. ban on the short-video app.
There are also questions about TikTok's recommendation algorithm, which selects the content shown to users. Other major platforms rely on a “social graph,” choosing content based on whom you follow and what those accounts are sharing. TikTok, by contrast, populates its “For You” page with short videos selected by algorithmic content recommendations that reach outside a user's social network. This difference may help explain TikTok's success at serving users videos they find novel and appealing.
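The distinction can be sketched schematically. The snippet below is a toy illustration only, not TikTok's actual system (whose details are proprietary): a graph-based feed surfaces only posts shared by accounts a user follows, while a content-based recommender scores every post in the pool against a user's inferred interests, so an item can reach users with no social connection to its creator.

```python
# Toy contrast between a social-graph feed and a content-based
# recommender. Purely illustrative; real platforms are far more complex.

def graph_feed(user, follows, posts):
    """Surface only posts authored by accounts the user follows."""
    return [p for p in posts if p["author"] in follows.get(user, set())]

def content_feed(user, interests, posts, k=2):
    """Rank every post in the pool by topic overlap with the user's
    inferred interests, regardless of who authored it."""
    scored = sorted(posts,
                    key=lambda p: len(p["topics"] & interests[user]),
                    reverse=True)
    return scored[:k]

posts = [
    {"author": "alice", "topics": {"cooking"}},
    {"author": "stranger", "topics": {"politics", "election"}},
]
follows = {"bob": {"alice"}}          # bob follows only alice
interests = {"bob": {"politics"}}     # but his inferred interest is politics

# The graph feed never shows bob the stranger's post...
print([p["author"] for p in graph_feed("bob", follows, posts)])
# ...but the content recommender can, because ranking ignores the graph.
print(content_feed("bob", interests, posts, k=1)[0]["author"])
```

The second feed is what lets novel (or false) videos jump between otherwise disconnected audiences, which is precisely the property the researchers quoted below worry about.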
But researchers at New York University's Center for Social Media and Politics warned in January that this distinction could pose a danger during elections: as generative AI makes it easier to create fake videos, “political misinformation may reach users” on TikTok whom it could not reach on other, social graph-based platforms. Moreover, TikTok's users skew younger, and studies have shown that younger people are more likely to believe false information. TikTok said the analysis was “completely speculative” and reflected “an insufficient understanding of how our platform works.”
There is still time for the platforms to take precautions before the election. They could impose limits on the rampant resharing of content, one of the main ways misinformation spreads. They could apply “circuit breakers” to certain viral posts, giving content moderators a chance to determine whether the material is malicious. They could restaff their depleted trust and safety teams. And they could remove clearly false content from users' feeds while keeping a labeled, archived copy that cannot be shared but remains available to anyone tracking the misinformation.
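The first two safeguards, reshare limits and viral circuit breakers, can be sketched in a few lines. This is a hypothetical illustration of the general idea, not any platform's actual implementation; the depth cap and velocity threshold are invented numbers: resharing is blocked past a fixed chain depth, and a post whose share velocity crosses a threshold is held for human review before it spreads further.

```python
# Hypothetical sketch of two misinformation safeguards:
# (1) a cap on reshare chain depth, and (2) a "circuit breaker"
# that pauses distribution of fast-spreading posts pending review.
# Both constants are invented for illustration.

MAX_RESHARE_DEPTH = 3      # block reshares beyond this hop count
VELOCITY_THRESHOLD = 1000  # shares per hour that trips the breaker

def can_reshare(reshare_depth: int) -> bool:
    """Allow a reshare only while the forwarding chain is shallow."""
    return reshare_depth < MAX_RESHARE_DEPTH

def check_circuit_breaker(shares_last_hour: int,
                          held_for_review: set,
                          post_id: str) -> bool:
    """Pause distribution of a post spreading unusually fast, giving
    moderators a chance to inspect it. Returns True if the post may
    keep circulating."""
    if shares_last_hour >= VELOCITY_THRESHOLD:
        held_for_review.add(post_id)
    return post_id not in held_for_review

held = set()
print(can_reshare(1))                          # shallow chain: allowed
print(can_reshare(5))                          # deep chain: blocked
print(check_circuit_breaker(40, held, "p1"))   # normal velocity: flows
print(check_circuit_breaker(5000, held, "p2")) # viral spike: held
```

The point of both mechanisms is the same: slow the mechanical amplification of content long enough for human judgment to catch up.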
The social media industry should view these and other safeguards as the cost of doing business responsibly during what is shaping up to be another volatile election season. Inaction could make a genuine crisis for American democracy even worse.