TOPEKA — Facebook's unrefined artificial intelligence misclassified a Kansas Reflector article about climate change as a security risk and, in a cascade of failures, blocked the domains of news sites that published the article, according to technology experts interviewed for this story and an official statement from Facebook.
The assessment is consistent with an internal review by Kansas Reflector's parent organization, States Newsroom, which blames the episode on flaws in Facebook's AI and on the company's lack of accountability for its mistakes.
It's unclear why Facebook's AI judged the article's structure or content to be a threat, and experts say Facebook may not actually know which attributes triggered the false flag.
“Facebook appears to have used excessive and unreliable AI to misidentify the Reflector article as a phishing attempt,” said Chris Fitzsimon, president and publisher of States Newsroom. “Facebook's response to this incident was confusing and difficult to decipher. It falsely suggested to our followers on the platform that our content somehow posed a security risk, damaging our reputation and credibility, and the misinformation it delivered to our readers remains uncorrected.”
On April 4, Facebook blocked Kansas Reflector from sharing an opinion column written by Dave Kendall, then deleted every post by any user that linked to the story on the Kansas Reflector website. Facebook restored those posts about seven hours later but continued to block the column.
The next day, Kansas Reflector attempted to share the column as it appeared on News From The States, which is operated by States Newsroom, and on The Handbasket, a newsletter run by independent journalist Marisa Kabas. Facebook rejected those posts and removed all links pointing to both sites, just as it had done to Kansas Reflector the day before.
In removing the posts, Facebook sent users a notification that falsely identified the news sites as a cybersecurity risk.
Meta, which operates Facebook, Instagram, and Threads, has publicly apologized for a “security error.” But a Meta spokesperson said Facebook would not follow up with users to correct the false information it sent them.
Kansas Reflector continues to hear from readers who are confused about the situation. Facebook's actions also disrupted the Kansas Reflector's reporting efforts at the end of the legislative session, and had dire implications for other news outlets.
Daniel Kahn Gillmor, a senior staff technologist at the American Civil Liberties Union, said Facebook's actions demonstrate the dangers of a society that relies so heavily on a single communications platform to decide what is worth discussing.
“That's not their core competency,” Gillmor said. “On one level, you can see Facebook as being out over its skis. Facebook was originally a dating app for college students, and now, all of a sudden, people are using it to separate fact from fiction.”
“Welcome to AI”
Adam Mosseri, the head of Instagram at Meta, attributed the error to a machine learning classifier, a type of AI trained to recognize characteristics associated with phishing scams, which attempt to trick people into divulging personal information.
In a Threads post, Mosseri said the classifier evaluates millions of pieces of content every day and sometimes gets it wrong. He did not respond to a Threads post seeking more details.
Jason Rogers, CEO of Invary, a cybersecurity company based at the University of Kansas Innovation Park that uses NSA-licensed technology, reviewed Kendall's column as it appeared on Kansas Reflector, News From The States, and The Handbasket.
Rogers said Facebook's filters can be sensitive to attributes such as the large number of hyperlinks in the column or the resolution of the photos displayed on the page. Still, he said, “It's strange to be flagged as a 'cyber' threat by AI.”
“Welcome to AI and why it's not as 'ready' as some people think,” Rogers said.
He said that by directing people to read Kendall's column at KansasReflector.com and then trying to share the same column from other sites, Kansas Reflector may have looked to the AI as if it were trying to circumvent Facebook's filters, a pattern consistent with phishing behavior, which could have caused it to block the domains of all three sites.
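The experts' descriptions suggest a classifier scoring surface signals rather than reading the article itself. As a purely illustrative sketch, the features, weights, and threshold below are invented for this article; Facebook's actual classifier is a trained model whose internals are not public. It shows how link density and cross-domain reposting, the kinds of signals Rogers describes, could push an ordinary column over a phishing cutoff:

```python
# Toy "phishing-likeness" scorer over surface features. Everything here
# (features, weights, threshold) is hypothetical; it is not Facebook's model.

def phishing_score(num_links: int, num_words: int, repost_domains: int) -> float:
    """Return a score in [0, 1]; higher means more 'phishing-like'."""
    link_density = num_links / max(num_words, 1)
    score = min(link_density * 5, 0.5)        # link-heavy pages look suspicious
    score += min(repost_domains * 0.2, 0.4)   # same story pushed from many domains
    return min(score, 1.0)

THRESHOLD = 0.6  # invented cutoff for blocking

# A link-heavy opinion column shared from three sites can cross the cutoff
# even though it is ordinary journalism: a false positive.
column_score = phishing_score(num_links=40, num_words=800, repost_domains=3)
blocked = column_score >= THRESHOLD
```

Note that nothing in this sketch examines what the article actually says, which mirrors the experts' point: such systems flag structural attributes rather than meaning, and afterward it can be hard to say which attribute set off the alarm.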
Sagar Samtani, director of the Data Science and Artificial Intelligence Lab at Indiana University's Kelley School of Business, said false positives and false negatives are common with this type of technology.
He said Facebook is going through a “learning process” as it assesses how people around the world view different types of content and how to protect its platform from bad actors.
“Facebook is just trying to learn what is good and appropriate content,” Samtani said. “So in that process, there's always going to be some 'oops' where you're like, 'I shouldn't have done that.'”
And Facebook may not be able to explain why its technology incorrectly classified Kansas Reflector as a threat, he said.
“Sometimes it's actually very difficult to say something like that because the model doesn't necessarily output exactly the features that might have set off the alarm,” Samtani said. “That may not be something that is within their technical capabilities.”
“Where does the responsibility lie?”
Kendall's column criticized Facebook after the platform refused to run ads promoting his film about climate change, telling him the topic was too controversial.
In two calls on April 5, Meta spokesperson Andy Stone insisted that Facebook's actions against the three news sites that published Kendall's column had nothing to do with the column's content.
Gillmor, the ACLU technologist, questioned that explanation.
“They're acting as a filter for their readers, trying to keep them away from things that are considered negative, whatever that means,” Gillmor said. “I would be extremely shocked if there was nothing that a normal person would consider 'content' feeding into these detectors.”
He said it would actually be difficult to program an AI to ignore the meaning of articles.
“They know how people react to the media they read,” Gillmor said. “They know how long people stay on an article. They know a lot of information. I don't see how or why they would exclude it from the classifier.”
He also said the AI system may be unable to provide an explanation “that a normal human would understand” for why it rejected Kendall's column and blocked the domains of the news sites that published it.
Stone, the Meta spokesperson, declined to answer questions for this article, including: How does Facebook think it should be held accountable for its mistakes? Does Facebook actually know what caused the error? What changes have been made to prevent it from happening again? And is Facebook's own Oversight Board reviewing the situation?
Gillmor's work with the ACLU focuses on how technology affects civil liberties such as free speech, freedom of association, and privacy.
“This is a great example of one of the big problems with relying so heavily on one ecosystem to distribute information,” Gillmor said. “And the explanation you're getting from them is, 'Well, we screwed up.' Well, you screwed up, but the consequences are there for everyone.”
“Where does the responsibility lie in this case?” he added. “Is Facebook going to hold its AI systems accountable?”