Iran, En-Lai, Napoleon, Mike Tyson and Model Collapse
March 11, 2026
Every model that is incapable of recognizing its own failure has already crossed the event horizon into model collapse.
Let’s start by stipulating I am addressing commentary on the war, not the war. As I noted in my post The War (March 1, 2026), war takes lives; it is not a chess game or an abstraction.
Commentary, as I noted in Perverse Incentives Have Created a Runaway Media Monster, is all about making money via clicks / “engagement.” Exaggerating claims of expertise / certainty and touting predictions all activate “engagement,” even when the claims are false and the predictions are nothing more than click-bait.
Today’s topic–model collapse–is difficult. It doesn’t lend itself to 10-second sound bites or tweets. But three quotes offer insightful entry points.
The first is by Chou En-Lai, the People’s Republic of China’s first foreign minister and Premier: “It’s too soon to tell.”
The second is by Napoleon: “Do you know what amazes me more than anything else? The impotence of force to organize anything.”
The third is by boxer Mike Tyson: “Everybody has plans until they get hit for the first time,” which has been recast as a more visceral “Everyone has a plan until they get punched in the face” (or mouth).
I discussed the first two in Channeling Napoleon and Chou En-Lai (January 5, 2026).
The context here is the non-linear nature of warfare. Our models for planning and understanding war are inherently linear because there is no way to project what emergent properties the war will generate, or anticipate all the second-order effects (consequences generate their own consequences) unleashed by these dynamics.
What emergent properties describe is the way complex interactions generate dynamics with properties distinct from the initial conditions. We start with systems we think we understand–military forces, logistics, political structures, etc. To manage these complex systems, we distill them into models that enable us to control the systems.
But once these complex systems interact, the interactions generate knock-on effects which manifest properties that operate outside the models. The leadership attempts to make sense of fast-moving events within the frame of reference established by the model, unaware that the model is incapable of making sense of emergent dynamics operating outside the model’s limits.
Put another way, these non-linear dynamics disrupt their model’s OODA (observe, orient, decide, act) loop, Colonel John Boyd’s decision-making process. The coherence of the model’s causal (i.e. predictive) capacity is lost, and every step of the OODA loop becomes incoherent: observations miss what’s critical, the orientation / frame of reference no longer maps reality, and so the decisions and actions are disastrously misguided.
The keys here are 1) the leadership’s confidence in the model’s value and 2) the fatal time lag between the disorientation generated by the model’s failure and the recognition that the model has failed. By this stage, there is no longer enough time to construct a more coherent model that more accurately maps events in real time, and so the model–and the systems it controls–collapse.
In a peculiar irony, AI programs illuminate this human behavior. AI tools are programmed to apply their probabilistic algorithms to their data sets on the assumption that all the knowledge and information needed to generate a high-probability solution is available.
The AI tool doesn’t “know” when its model has failed, and so it hallucinates “solutions” that are catastrophically out of touch with reality, as if these hallucinations were facts. This is what happens when models break down in warfare and other fast-moving events in which complex interactions generate emergent dynamics operating outside the model’s orientation / “understanding”:
The humans operating within the failed model are hallucinating “solutions” while fully believing they are dealing with facts and responding appropriately. The result is model collapse: the model has become incoherent and is generating hallucinations that are taken as “solutions” by those who are incapable of recognizing the limits and failure of their model.
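This failure mode is easy to demonstrate in miniature. The toy sketch below (the weights and inputs are invented for illustration, not taken from any real system) shows a simple classifier that, by construction, always returns a single confident answer–even for inputs wildly outside anything it was built for. It has no channel to report “my model no longer applies”:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that always sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A toy "model": fixed linear weights, standing in for whatever was
# learned from in-distribution training data (values are made up).
WEIGHTS = [[2.0, -1.0],   # class 0
           [-1.0, 2.0]]   # class 1

def predict(features):
    """Return (best_class, confidence) -- note it ALWAYS answers."""
    logits = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs[best]

# A familiar, in-distribution input: a confident answer.
print(predict([1.0, 0.0]))     # class 0, confidence ~0.95

# An extreme input the model was never built for: instead of flagging
# its own failure, the model becomes MORE confident.
print(predict([50.0, -50.0]))  # class 0, confidence ~1.0
```

The point of the sketch is the second call: the further the input drifts from the world the model was built on, the more certain the output looks. Confidence and correctness have decoupled, which is exactly the dynamic the essay describes.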
Which brings us back to the three quotes.
“It’s too soon to tell.” Every prediction of outcomes is nothing more than a wild guess because non-linear emergent dynamics are inherently unpredictable. The second-order effects may play out for years or decades, so “It’s too soon to tell.”
“Do you know what amazes me more than anything else? The impotence of force to organize anything.” Implicit claims of expertise in military strategy, tactics, weaponry, etc. are proliferating at the same rate as war-related AI slop. All this click-bait churn distracts us from the limits of force and the overlooked aspects of power, which Napoleon summarized: “There are only two powers in the world: the spirit and the sword. In the long run, the sword will always be conquered by the spirit.”
Copyright © OfTwoMinds.com

