NIST Launches GenAI to Combat AI-Generated Misinformation and Deepfakes

  • Editor
  • May 7, 2024

The National Institute of Standards and Technology (NIST) announced the launch of NIST GenAI on Monday, a groundbreaking initiative focused on evaluating and mitigating generative AI technologies, including AI that generates text and images.

This move comes as part of a broader effort to combat the rising tide of AI-generated misinformation and deepfakes.

NIST GenAI aims to establish benchmarks, develop systems for detecting “content authenticity,” and foster the creation of tools that can identify the origins of fake or misleading AI-generated information.


According to the announcement made on the newly launched NIST GenAI website and through a press release, the program will feature a series of challenge problems designed to test the capabilities and limitations of generative AI technologies.

The initiative’s first project is a pilot study that focuses on distinguishing between content created by humans and that generated by AI, particularly text.

While there are existing services that claim to detect deepfakes, their reliability, especially concerning text, remains questionable.

 NIST GenAI is calling on teams from academia, industry, and research labs to participate in this study by submitting either “generators” or “discriminators.”


Generators will produce short summaries based on provided documents and topics, while discriminators will work to identify whether these summaries were written by AI.

To maintain a level playing field, NIST GenAI will supply the necessary data for testing the generators. Systems trained on publicly available data that fail to comply with relevant laws and regulations will not be accepted.
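The two roles in the pilot can be pictured with a minimal sketch. Everything here is an illustrative assumption, not NIST's actual submission interface: the function names, the first-sentences "generator," and the crude sentence-length heuristic in the "discriminator" are stand-ins for the trained systems real teams would submit.

```python
# Hypothetical sketch of the pilot study's two roles.
# Names and heuristics are illustrative assumptions, not NIST's interface.

def generate_summary(document: str, max_sentences: int = 2) -> str:
    """Toy 'generator': summarize a document by returning its first sentences."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def discriminate(summary: str) -> float:
    """Toy 'discriminator': score in [0, 1], higher = more likely AI-written.

    Uses a crude stylometric cue (average sentence length) purely for
    illustration; real entries would rely on trained classifiers.
    """
    sentences = [s for s in summary.split(".") if s.strip()]
    if not sentences:
        return 0.5  # no signal either way
    avg_words = sum(len(s.split()) for s in sentences) / len(sentences)
    # Assumption for the sketch: uniform, mid-length sentences lean "AI".
    return min(1.0, max(0.0, 1.0 - abs(avg_words - 20) / 20))
```

A generator entry would be scored on how often discriminators (human and machine) mistake its summaries for human writing, while a discriminator entry would be scored on how reliably it separates the two sources.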

Registration for the pilot study opens on May 1, with the first round concluding on August 2. The final results are expected to be published in February 2025.

This initiative is part of NIST’s strategy to address concerns highlighted in President Joe Biden’s executive order on AI, which demands greater transparency from AI companies and establishes new standards for AI-generated content labeling.

The news has been widely shared on Twitter, offering a measure of relief to many concerned about security in the world of AI.


The launch also marks the first major AI initiative since Paul Christiano, a former OpenAI researcher known for his cautionary views on AI’s potential risks, joined NIST’s AI Safety Institute. 

Despite some controversy surrounding his appointment, NIST asserts that the insights gained from NIST GenAI will directly inform the AI Safety Institute’s efforts to ensure the safe and responsible deployment of AI technologies.

With the rapid increase in AI-generated content and the potential for its misuse, initiatives like NIST GenAI are critical in establishing standards and tools that help safeguard public discourse from the dangers of sophisticated digital misinformation.

For the latest and most exciting AI news, visit www.allaboutai.com.


Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
