
AI Nuclear Weapons


Can an LLM tell you how to make a nuclear weapon? The Department of Energy’s National Nuclear Security Administration partnered with Anthropic to test its Claude LLM. The Wired article doesn’t say much about what was tested, but a while back, Alex Wellerstein, a historian of nuclear weapons, tried out ChatGPT and Gemini to see how they would do at designing nuclear weapons. The thread starts here. I’ll post a couple of the more amusing tries.

Out of curiosity, I ran the same prompts through Gemini, and got similarly garbled output. Its Fat Man design is much closer to the reality than any of the others, and its Teller-Ulam design at least has some awareness of the relevant parts (although it too garbles them). Its Little Boy gets an F.

— Alex Wellerstein (@wellerstein.bsky.social), 2025-08-15

Don't forget our friend Hugh Explosive

— Alex Wellerstein (@wellerstein.bsky.social), 2025-10-22

It’s hard to understand why these are so bad. Wikipedia, for example, has generally accurate diagrams that must be part of the training data.

The Wired article says nothing about how Claude was trained or what the prompts were.

So my provisional answer to the top question is “Definitely not.”

Beyond the design, there’s one more thing needed to make a nuclear weapon: fissile material, which is difficult, if not impossible, to obtain. Nuclear weapons designed by LLMs won’t be a problem any time soon, perhaps ever. But I can see why NNSA wanted to look into it.
