OpenAI Prioritizes Safety, Forms Committee as It Trains New Frontier Model

  • Editor
  • May 29, 2024

OpenAI has announced the formation of a new Safety and Security Committee to ensure the safe and responsible development of its machine learning research. This move comes in response to recent internal changes and external criticisms regarding the company’s commitment to AI safety.

The newly formed committee will consist of nine members and will be led by board chair Bret Taylor, alongside CEO Sam Altman and directors Adam D’Angelo and Nicole Seligman.

The committee also includes OpenAI engineering executives Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki.

This panel’s creation follows the disbandment of OpenAI’s internal Superalignment team, which focused on mitigating AI risks and was led by co-founder Ilya Sutskever and former head of alignment Jan Leike.

The dissolution of the Superalignment team has raised concerns about OpenAI’s dedication to AI safety. Jan Leike, who recently joined Anthropic PBC, expressed frustration on social media, citing a lack of support for his team’s efforts.

Leike described a challenging environment within OpenAI, stating that his team had been “sailing against the wind” for several months. Sutskever and Leike’s departures have resulted in the remaining Superalignment team members either resigning or transitioning to other roles within the company.

Despite these challenges, OpenAI has started training its next frontier model, aiming to enhance capabilities and progress toward Artificial General Intelligence (AGI). The company has been leveraging Microsoft’s public cloud infrastructure for this purpose.

OpenAI CTO Mira Murati has hinted at a major update to GPT-4, expected to be more substantial than recent enhancements. Microsoft CTO Kevin Scott has compared the new model to a “whale,” suggesting a significant parameter increase compared to GPT-4.

The newly established committee’s first priority is to evaluate and improve OpenAI’s AI risk mitigation workflows. Within 90 days, the committee will submit recommendations to OpenAI’s full board, with plans to publicly disclose which recommendations will be adopted. The company aims to demonstrate transparency and responsiveness to both internal and external safety concerns.


Additionally, OpenAI plans to consult external safety and security experts, including former NSA cybersecurity director Rob Joyce and former DOJ official John Carlin.

This advisory role is intended to bring a broader perspective and enhance the committee’s recommendations. The committee is designed to function as an advisory body, making recommendations to the board rather than enforcing policies directly.


OpenAI’s recent announcements also coincide with industry-wide discussions about AI safety. The company has advocated for international standards and backed measures like a “kill switch” for advanced AI models, intended to halt development if certain risk thresholds are exceeded.

This initiative has sparked a debate about the balance between innovation and safety in AI development. Proponents argue that such measures are necessary to prevent catastrophic failures, while critics question the feasibility and potential overreach of these safeguards.

The formation of the Safety and Security Committee marks a considerable step in reinforcing the company’s commitment to developing AI responsibly. As the industry evolves, OpenAI’s actions will likely influence broader discussions on AI safety and governance.


Dave Andre
