LLMs cannot run war game simulations. They produce plausible text, and that's it: no strategy, no cost-benefit analysis, nothing of the sort. What they DO do is reproduce likely text based on a training corpus, and if that corpus includes, say, works of fiction involving nuclear war... this happens
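To make the point concrete, here is a deliberately tiny sketch (a bigram counter, nothing like a real transformer, and the corpus is invented for illustration): the "model" has no goals or reasoning, it just emits whichever token most often followed the previous one in its training text. If the training text is escalation fiction, escalation fiction is what comes back out.

```python
from collections import defaultdict, Counter

# Toy "corpus" (made up for this example) containing nuclear-war fiction.
corpus = (
    "the president ordered a first strike . "
    "the general launched the missiles . "
    "the president ordered a first strike ."
).split()

# Count which token follows which -- this is the entire "model".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    """Return the most frequent next token seen after `prev`, or None."""
    counts = bigrams[prev]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, n=6):
    """Greedily continue from `start` for up to n tokens."""
    out = [start]
    for _ in range(n):
        tok = next_token(out[-1])
        if tok is None:
            break
        out.append(tok)
    return " ".join(out)

print(generate("the"))  # -> "the president ordered a first strike ."
```

No cost-benefit analysis happened anywhere in that loop; "first strike" is the output purely because it was the statistically dominant continuation in the corpus. Real LLMs are vastly more sophisticated predictors, but the objective is the same: likely text, not good strategy.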