
Former Palantir CISO Dane Stuckey to Lead Security at OpenAI!

  • August 22, 2025 (Updated)

Key Takeaways:

  • Dane Stuckey, former CISO of Palantir, has joined OpenAI as co-CISO, emphasizing the company’s focus on security.
  • Stuckey will be responsible for maintaining high standards of compliance and trust to protect OpenAI’s hundreds of millions of users.
  • OpenAI appears to be strengthening ties with government sectors, including the U.S. Department of Defense, following the lifting of its military AI ban.
  • Recent security-focused job postings highlight OpenAI’s commitment to developing secure AI infrastructure as a foundation for safe AGI.

Dane Stuckey, previously the Chief Information Security Officer (CISO) at Palantir, has joined OpenAI in a dual role as co-CISO alongside Matt Knight, OpenAI’s head of security.

Stuckey announced his move in a post on X (formerly Twitter), emphasizing that security is fundamental to OpenAI’s mission.

Stuckey began his career at Palantir in 2014 as a detection engineer, and his background includes over 10 years in digital forensics, incident response, and security program development for both government and commercial clients.

His extensive experience with government contracts at Palantir could signal OpenAI’s intention to expand its own relationships with federal agencies.

According to Forbes, OpenAI is reportedly partnering with Carahsoft, a government contractor, to seek closer ties with the U.S. Department of Defense.

Since lifting its military AI application ban in January, OpenAI has collaborated with the Pentagon on multiple software projects, including those focused on cybersecurity.

OpenAI also recently appointed retired Gen. Paul Nakasone, former head of the NSA, to its board.

To further its security initiatives, OpenAI has been enhancing its operational infrastructure.

In line with this, the company recently listed a new role for a head of trusted computing and cryptography to oversee the development of secure AI infrastructure.

This role will involve creating advanced access controls and performing security evaluations to safeguard AI systems, underscoring OpenAI’s commitment to a secure and responsible AI future.

October 15, 2024: Microsoft’s VP of GenAI Research Departs to Join OpenAI!

October 14, 2024: OpenAI’s New ‘Swarm’ Framework Sparks Debate on AI-Driven Automation!

October 14, 2024: OpenAI Predicts Five Years Until Achieving Profitability!

October 10, 2024: OpenAI Seeks Dismissal of Musk Lawsuit, Labels It ‘Harassment Campaign’!

October 10, 2024: OpenAI Reports Ongoing Threat Actor Attempts to Influence Elections!

For more news and insights, visit AI News on our website.


Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

He’s known for turning dense papers and public filings into plain-English explainers, quick on-the-day updates, and practical takeaways. His work includes live coverage of major announcements and concise weekly briefings that track what actually matters.

