Debate in Silicon Valley: Should AI Weapons Be Allowed to Kill?

In late September, Brandon Tseng, co-founder of Shield AI, asserted that fully autonomous lethal weapons—systems that use AI to decide whether to kill without human intervention—would never be adopted in the United States. He remarked: “Congress doesn’t want that. No one wants that.” Tseng’s assertion that such technology lacks political and public support was soon challenged.
Five days later, Palmer Luckey, co-founder of Anduril, presented a different perspective, expressing skepticism about blanket opposition to autonomous weapons. He emphasized that U.S. adversaries such as Russia and China often make emotionally charged but strategically inconsistent arguments against AI weaponry. During a recent talk at Pepperdine University, he questioned the moral consistency of such arguments, noting: “The U.S.’s adversaries use phrases that sound really good in a sound bite: Well, can’t you agree that a robot should never be able to decide who lives and dies? And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?”
When asked to clarify, Anduril spokesperson Shannon Prior explained that Luckey was not advocating for a carte blanche approach to autonomous systems but was instead voicing concerns about the risks of “bad people using bad AI.” His stance aligns in some ways with that of Trae Stephens, another Anduril co-founder, who has emphasized human accountability in lethality decisions. Stephens previously stated: “I think the technologies that we’re building are making it possible for humans to make the right decisions about these things, so that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.”
The U.S. government has so far adopted an ambiguous stance on autonomous weapons. Although the military has not pursued fully autonomous lethal systems, it continues to use mines and missiles—weapons that operate with some autonomy. No binding bans prevent companies from developing these technologies or selling them internationally. Instead, voluntary AI safety guidelines for military applications are in place, though they stop short of explicitly prohibiting fully autonomous systems.
This lack of regulatory clarity has drawn varied responses from Silicon Valley. At a Hudson Institute event, Palantir co-founder Joe Lonsdale criticized the binary framing of the autonomous weapons debate. He described a scenario in which China fully adopts AI weaponry while the U.S. imposes manual confirmation requirements for every action, noting: “You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I’m a staffer who’s never played this game before. I could destroy us in the battle.” Lonsdale stressed that it is not the role of defense technology companies to set AI policy; that responsibility lies with elected officials. He elaborated on this point, saying: “The key context to what I was saying is that our companies don’t make the policy, and don’t want to make the policy: it’s the job of elected officials to make the policy. But they do need to educate themselves on the nuance to do a good job.”
The war in Ukraine has further complicated the debate over autonomous weapons, providing combat data that defense tech companies can use to refine AI technologies. Ukrainian officials have advocated for increased automation in weaponry, seeing it as a potential advantage over Russian forces. Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation, shared his perspective in an interview, stating: “We need maximum automation. These technologies are fundamental to our victory.”
A prevailing concern in Washington, D.C. and Silicon Valley is that China or Russia might deploy fully autonomous weapons before the U.S., potentially forcing an American response. During a UN debate on AI weapons last year, a Russian diplomat acknowledged the broad global preference for human oversight but added: “We understand that for many delegations, the priority is human control. For the Russian Federation, the priorities are somewhat different.”
At the Hudson Institute event, Lonsdale also highlighted the importance of educating American policymakers about the potential of AI for national defense, saying that the tech sector must “teach the Navy, teach the DoD, teach Congress” about AI’s capabilities to ensure that the U.S. remains competitive with China. Anduril and Palantir have collectively invested over $4 million in lobbying efforts to influence Congress on AI-related defense policies. The debate over AI-powered autonomous weapons in Silicon Valley reflects a wider societal conversation about technology’s role in national security.
Reflecting on this regulatory ambiguity, U.S. officials have remarked that it’s “not the right time” to pursue a ban on such weapons. With key figures like Tseng, Luckey, and Lonsdale advocating different approaches, the future of AI in defense remains uncertain.