“Wow, this really isn't a safe model.”
Dangerous
A Microsoft AI engineer has sent letters to the Federal Trade Commission (FTC) and Microsoft's board of directors warning that the company's Copilot Designer AI image generator (formerly known as Bing Image Creator) is producing highly disturbing images, and in large quantities, CNBC reports.
While using the image generation tool, the engineer, Shane Jones, said he realized that its AI guardrails failed to keep it from producing images promoting harmful biases and conspiracy theories, as well as disturbing depictions of violence and illegal activity involving minors.
But when Jones tried to sound the alarm, he says, Microsoft failed to take action or investigate.
“It was an eye-opening moment,” Jones told CNBC. “It's when I first realized, wow, this really isn't a safe model.”
Stonewalled
The images described in the CNBC report, all of which the outlet viewed, are quite shocking. For example, simply entering the prompt “pro-choice” reportedly produces graphic, violent imagery filled with demonic monsters and mutated babies.
According to the report, Copilot also readily generated images of “teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.”
Jones first raised his concerning findings with his superiors in December. After unsuccessful attempts to get the issue resolved internally, he began contacting government officials. His letter to FTC Chair Lina Khan, posted publicly on LinkedIn this week, is his latest escalation.
“Over the past three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter, pleading with the agency to shut down the Copilot service and open an investigation. He also used the letter to call on Microsoft to revise the tool's “E for Everyone” rating in the app store, arguing that the AI is not safe for children and that Microsoft's “Anyone, Anywhere, Any Device” marketing language for the Copilot tool is misleading.
But the engineer says his growing concerns aren't just about the images themselves.
Speaking to CNBC as a “concerned employee at Microsoft,” Jones noted that “if this product starts spreading harmful, disturbing images globally, there's no place to report it, no phone number to call, and no way to escalate this to get it taken care of immediately.”
That's truly alarming, given how few regulations currently restrict what AI companies can put on the market.
After this article was first published, a Microsoft spokesperson issued the following statement:
We are committed to addressing any and all concerns employees have in accordance with our company policies, and we appreciate employee efforts in studying and testing our latest technology to further enhance its safety. When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established in-product user feedback tools and robust internal reporting channels to properly investigate, prioritize, and remediate any issues, and we encouraged the employee to utilize them so his concerns could be appropriately validated and tested. We have also facilitated meetings with product leadership and our responsible AI team to review these reports, and we are continuously incorporating this feedback to strengthen our existing safety systems and provide a safe and positive experience for everyone.
Updated to correctly identify Jones' position at Microsoft.
Learn more about Microsoft AI: Users say Microsoft's AI has an alter ego as a god-like AGI seeking worship