Audio and video forensics experts say there is overwhelming evidence that a recording of a Baltimore County school principal making racist and antisemitic comments was generated by AI.
The two experts, the director of a university media forensics laboratory and the CEO of an artificial intelligence detection company that has worked with firms such as Visa and Microsoft, say the audio bears the hallmarks of a fake.
Audio that circulated on social media in January purports to capture Pikesville High School Principal Eric Eiswart making derogatory comments about students and staff, including a reference to “ungrateful Black kids who can't test their way out of a paper bag.” The outrage it sparked prompted a Baltimore County Public Schools investigation that has stretched on for nearly two months with no word on its outcome.
Eiswart has maintained all along that the audio is fake. Audio and video created using artificial intelligence, known as deepfakes, have been used to spread misinformation about public figures like President Biden, but it is unusual for the technology to be used to tarnish the reputation of a local figure like a school principal.
Speaking for the first time since the audio was released, Eiswart told The Banner that the audio was created to damage his career.
“I did not say these things, and these ideas are not what I believe as an educator or as a human being,” he wrote in a statement about the disturbing audio.
A spokesperson said Baltimore County Public Schools does not comment on ongoing investigations. The district will notify the public once the investigation is complete.
Audio has AI “features”
Siwei Lyu, director of the Media Forensics Laboratory at the University at Buffalo, part of the State University of New York, said the audio was not particularly sophisticated. Lyu develops technology to detect audio and images created using artificial intelligence.
The recording is “not a hard case for an algorithm. I think someone just created this using an AI voice generator,” Lyu said, adding that he did not believe its creator put much effort into the work. Online voice-generation tools like ElevenLabs are available to anyone and tout the ability to instantly create voices that are indistinguishable from human speech.
But there is clear evidence that the audio was manipulated, Lyu said.
“There are some signs of editing, like piecing together different parts,” he said. “It has the features of AI-generated audio. The tone is a little flat.”
AI-generated audio tends to have unusually clean background sound or to lack consistent breathing and pauses, Lyu said.
In recent months, universities and companies have been developing ways to use artificial intelligence to detect deepfakes in ways the human ear cannot, and their methods have improved over time. Lyu, for example, created the DeepFake-o-meter platform.
Lyu, who conducts research in digital media forensics, computer vision, and machine learning, said his team ran the audio through five recent deepfake audio detection methods: three developed by his lab and two by other researchers. Four of them judged the audio to be AI-generated with 99% certainty; the fifth put its certainty at 74%.
Lyu said the less certain method works by identifying “vocoder artifacts,” traces of the step that converts synthesized speech into an audio waveform, and is a less reliable way of detecting deepfakes than the others.
Reality Defender has developed proprietary methods that CEO and co-founder Ben Colman says are 99% accurate. The company has worked with governments and with companies including Visa, Microsoft, and NBC to detect deepfaked images, text, and audio.
Colman's team also determined that the Eiswart audio was almost certainly generated by AI.
“Not only did our platform find that it was likely manipulated, but when our team investigated, it bore the signature of an AI-generated voice. We also found that the recording may have been played through speakers and recorded on another device, masking its generated nature,” Colman said in a statement.
“Aside from the AI result, there are moments in the audio where there is clearly no sound,” Colman said. “There are clear, sudden, and incredibly brief pauses between snippets of dialogue that indicate the absence of audio, which in itself indicates some level of file manipulation.”
Lyu and Colman said detection techniques cannot establish with 100% certainty whether AI was involved. The tools, Lyu said, are not as definitive as DNA testing or fingerprinting.
Eiswart's life changed completely
Billy Burke, president of the administrators' association that represents Eiswart, said the audio was “manufactured to harm Principal Eiswart, but it also harmed students, staff, and the Pikesville community, and the lack of information exacerbated the harm.”
Many were quick to draw their own conclusions about the recording's authenticity. Eiswart's former colleagues told The Banner that he would never say such things and that the comments were out of character for him. Others said the opposite: his former students took to social media to declare that the voice on the recording was his, and Pikesville High School students also told The Banner they believed those were his words.
Burke maintained Eiswart's innocence and said others should have done the same. He said in January that he was disappointed that people assumed Eiswart was guilty before the investigation was complete, leading to harassment and threats. At the Jan. 23 county school board meeting, Burke said the school system had arranged for police to be present at Eiswart's home.
In a statement, Eiswart said his track record of more than 25 years in the school system is “unblemished,” that he believes all students can succeed, and that he has “created programs to celebrate diversity and excellence.”
“This is what makes this crime so insidious,” he wrote. “Believing in the potential of everyone has been at the heart of my entire career.”
Exploding technology, little regulation
The world is starting to realize how problematic deepfakes are. Nieman Journalism Lab cites multiple examples of AI-generated voices influencing elections in Slovakia, Pakistan, and Bangladesh. In January, an AI-simulated voice of President Joe Biden was used to discourage New Hampshire voters from voting in the primary. While Nieman warns readers about AI's impact on elections, a less discussed question is how the technology will affect people outside the public eye.
According to NBC News, Biden signed a “wide-ranging” executive order in October introducing regulations for AI companies. Among other things, it directs the Commerce Department to develop guidance on watermarking AI-generated content to make clear it was not created by humans.
Colman said the technology available to create deepfakes has exploded in popularity over the past year, with thousands of tools now readily available to create AI-generated human audio and video. Lyu and Colman said there aren't enough federal regulations in place to prevent deepfakes, and few people are prosecuted.
“What's even scarier is that you don't have to be an expert in computer science, cybersecurity, or AI to use these tools,” Colman said. “The tools are everywhere, but the protections are not.”
Nieman reports that a one-minute recording of someone's voice is enough for it to be simulated by an AI tool that costs $5 a month. Obtaining a sample of a principal's voice is not difficult: a three-minute video of Eiswart from 2018 is available online.
Melba Pearson, vice chair of the American Bar Association's criminal justice section, said she could not think of a criminal charge prosecutors could bring against whoever faked a voice impersonating Eiswart. Nothing was stolen. No computer was hacked. Because the audio was broadcast widely, there might be grounds for some vaguely defined federal charge; otherwise, the creator may get away with it.
“I think we're in pretty uncharted territory because of the fact that artificial intelligence has really taken off over the last couple of years,” said Pearson, who directs prosecution-related projects at Florida International University's Jack D. Gordon Institute for Public Policy.
Eiswart could sue, but without criminal charges he could win only monetary damages, she said. Pearson would like to see legislation that prevents people from using AI to destroy lives.
Lyu said he was not aware of any case in which a deepfake creator was prosecuted.
“A court may not accept this analysis as evidence. It is not conclusive; I think other evidence would need to accompany it,” Lyu said.
The Eiswart audio shows how dangerous artificial intelligence can be when used to harm individuals, Lyu said. Deepfakes aimed at celebrities or well-known politicians are easier to expose because the abundance of authentic video and audio of them makes it more likely the public will see through the fake. “If they're focusing on people who are less visible … the damage they're causing is even greater.”