The National Institute of Standards and Technology today announced the launch of NIST GenAI, a new initiative aimed at evaluating generative artificial intelligence models and creating systems that can identify AI-created text, images, and videos.
The launch of the new program comes as NIST unveiled its first draft publication on AI risks and standards.
NIST GenAI will work to create new AI benchmarks and attempt to build so-called “content authenticity” detection systems that can detect AI-generated media such as text and “deepfake” videos. This is an effort to counter the dangers of AI-generated false and misleading information.
NIST said in a press release that the new program “issues a series of challenge questions” aimed at assessing the capabilities and limitations of generative AI models. The agency will use these assessments to pursue strategies that can “promote information integrity and encourage the safe and responsible use of digital content.”
NIST GenAI's first project is an effort to build a system that can accurately identify whether content was created by a human or an AI system, starting with text, according to the program's new website. Although existing tools claim to be able to detect things like deepfakes, which are videos manipulated by AI, various studies have shown that they are not particularly reliable.
That's why NIST is inviting teams from academia, the AI industry, and other research groups to submit what it calls “generators” and “discriminators.” A generator is an AI system that produces content, while a discriminator is a system designed to identify whether content was AI-generated.
In the study, NIST requires submitted generators to produce summaries of 250 words or less on a specific topic from a given set of documents. Discriminators, in turn, are tasked with detecting whether a summary was created by a human or an AI. To ensure fairness, NIST GenAI will prepare its own test data. The agency added that systems trained on publicly available data in ways that do not comply with applicable laws and regulations will not be accepted into the study.
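The two roles in the study can be illustrated with a minimal sketch. Note that this is purely hypothetical: NIST has not published a submission API, and both the function names and the naive lexical heuristic below are illustrative assumptions, not anything resembling a real detection method.

```python
# Hypothetical sketch of the generator/discriminator roles in the
# NIST GenAI text study. Not NIST's actual interface or method.

def generate_summary(documents: list[str], limit: int = 250) -> str:
    """Toy 'generator': joins the first sentence of each document and
    truncates to the study's word limit (250 words)."""
    first_sentences = [doc.split(".")[0].strip() for doc in documents if doc]
    summary = ". ".join(first_sentences)
    return " ".join(summary.split()[:limit])

def discriminate(summary: str) -> str:
    """Toy 'discriminator': labels a summary 'ai' or 'human' using a
    naive lexical-diversity heuristic (low type-token ratio -> 'ai').
    A real discriminator would use a trained classifier."""
    words = summary.lower().split()
    if not words:
        return "human"
    type_token_ratio = len(set(words)) / len(words)
    return "ai" if type_token_ratio < 0.5 else "human"

docs = [
    "Deepfakes are rising fast. Studies show detectors are unreliable.",
    "NIST launched a new program. It evaluates generative AI models.",
]
print(discriminate(generate_summary(docs)))
```

The point of the sketch is only the shape of the evaluation: generators emit bounded-length summaries, and discriminators map each summary to a human-or-AI label, which NIST can then score against ground truth on its own held-out test data.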
NIST plans to move quickly: registration for the study opens May 1, ahead of an August 2 deadline. The study will then begin, with results expected by February 2025.
The study comes at a time when AI-generated misinformation and disinformation appears to be becoming more widespread. A recent study by deepfake detection firm Clarity found that the number of deepfakes published since the start of the year has increased by more than 900% compared to the same period in 2023.
The implication is that as generative AI becomes more widely available, misleading content will become more of a problem. People have expressed concerns about the dangers of AI-generated content, and a recent poll by YouGov found that 85% of Americans are worried about being misled by deepfakes.
Draft AI policy documents
In addition to launching the new program, NIST also released a series of draft proposals aimed at shaping U.S. government policy on AI. These include a draft document aimed at identifying the risks of generative AI and strategies for deploying the technology safely. It was created with input from a public working group of more than 2,500 researchers and experts and will be used in conjunction with NIST's existing AI risk management framework.
In addition, NIST published a draft companion resource to its existing Secure Software Development Framework that outlines best practices for developing generative AI applications and dual-use foundation models. The document describes a dual-use foundation model as one trained on “broad data,” applicable across a wide range of contexts, that exhibits, or could be modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or a combination of those matters.
Other documents released by NIST today concern mitigating the risks associated with synthetic content, along with plans to develop global AI standards. All four documents are open for public comment until June 2.
Laurie Locascio, director of NIST and undersecretary of commerce for standards and technology, said the agency believes generative AI carries risks that are “very different” from those seen with traditional types of software. “These guidance documents not only inform software creators about these inherent risks, but also help them develop ways to reduce them while supporting innovation,” she said.
Today's announcement is NIST's response to U.S. President Joe Biden's executive order on AI, which established rules requiring AI companies to be more transparent about how their models work. The order also established standards for labeling AI-generated content, among other measures.
Image: Microsoft Designer