Yeah, I know it’s Thursday — GET OFF MY CASE.
My colleague at the Monterey Institute, Ward Wilson, has a new book out entitled Five Myths About Nuclear Weapons. Ward and I disagree about some things, but I am always up for an argument about the epistemology of nuclear deterrence and manual approaches to elephant exclusion. (I hear snapping fingers has a strong empirical record.)
So, I’ve invited Ward to submit a weekly post looking at a number of case studies. And because he is a W.W., I figured I would post them on Wednesday. UNLESS I DON’T GET AROUND TO IT. GET OFF MY CASE.
If it helps him sell a few books, all the better. Here’s the first one, after the jump.
Doubts about Nuclear Deterrence, Part I
by Ward Wilson
I’ve been writing a book that the publisher talked me into naming Five Myths About Nuclear Weapons. “Five” because book titles with numbers sell. “Myths” because people love conspiracy, secrets, and mysterious knowledge. (“Click here to learn one old-fashioned trick to lose thirty pounds!”) I wanted to call it Some Pretty Reasonable Doubts Based on Historical Reinterpretations but apparently the marketing department said no. Go figure.
As part of the research for the book I went back and reexamined a bunch of the Cold War crises that involved nuclear weapons. I wanted to double-check what everyone basically knows: nuclear deterrence works really well because nuclear weapons are really scary. You can count on people to be more sensible if nuclear weapons are in the mix, even in a crisis when emotions are running high. I didn’t expect to find anything; I just wanted to look. What I found was pretty disturbing. It seems as if nuclear deterrence failed regularly during Cold War crises. None of those failures led to catastrophic nuclear war (obviously), but there were a number of places where deterrence theory would have predicted that leaders would back off, and instead they took risky, aggressive actions that made the crisis worse. What I found seems to be a pretty serious blow to the idea that nuclear deterrence works reliably and robustly.
Nuclear deterrence is different from ordinary deterrence. Ordinary deterrence is, say, keeping little Jimmy from taking cookies by warning him he’ll get spanked. Or beheading adulterers. Or even using conventional military forces to send warnings about what the consequences of aggressive action will be. All of these would be ordinary deterrence. Nuclear deterrence is using the threat of nuclear retaliation to warn someone not to take aggressive action. The assumption has always been that ordinary deterrence works some of the time. Maybe–I don’t know–sixty percent. (I made that up. There’s debate about this. Most people obey the law, for example, not because they’re deterred but because they’ve developed the habit of obeying the law. How large and effective a role deterrence actually plays is uncertain. But there’s general agreement that ordinary deterrence works some of the time.)
Nuclear deterrence, on the other hand, is assumed to work much closer to 100 percent of the time. Because the consequences of nuclear war would be mind-bogglingly horrible, people assume that nuclear deterrence is much, much more effective than ordinary deterrence. Nearly or even absolutely perfect. Which is a really good thing. But what if nuclear deterrence is only about as effective as ordinary deterrence? What if nuclear deterrence fails thirty or forty percent of the time? Human beings are not terribly rational creatures. It would make sense if people failed to be afraid–even though any reasonable person would–about forty percent of the time. Since any failure of nuclear deterrence runs the risk of escalating to catastrophic nuclear war, it’s pretty serious news if there are obvious failures of nuclear deterrence in the historical record.
The bad news? That’s what I found.
“But Ward,” I hear you saying, “there’s forty years of IR analysis and historical accounts that say you’re wrong.” Which is true. Most writers on this subject paint a pretty rosy picture about nuclear deterrence. In almost all of the serious, scholarly writing about these crises, nuclear deterrence (reassuringly) works almost every time. But this sort of group consensus can be wrong. Listen to what Ludwig Wittgenstein and Richard Ned Lebow have to say about this. Wittgenstein (the famously difficult-to-follow Austrian philosopher) pointed out that our feelings often get in the way of clear understanding:
What makes a subject hard to understand—if it’s something significant and important—is not that before you can understand it you need to be specially trained in abstruse matters, but the contrast between understanding the subject and what most people want to see.
Lebow makes something of the same point, but talking about the effect of a strong theory on observational objectivity:
Philosophers of science have observed that scientists tend to fit data into existing frameworks even if the framework does not do justice to the facts. Investigators deduce evidence in support of theory. Theory, once accepted, determines to which facts they pay attention. According to Thomas Kuhn, the several fields of science are each dominated by a “paradigm,” an accepted body of related concepts that establishes the framework for research. The paradigm determines the phenomena that are important and what kinds of explanations “make sense.” It also dictates the kinds of facts and explanations that are to be ignored because they are outside the paradigm or not relevant to the problem the paradigm has defined as important. Paradigms condition investigators to reject evidence that contradicts their expectations, to misunderstand it, to twist its meaning to make it consistent, to explain it away, to deny it, or simply to ignore it.
In our case we have both a strong theory (nuclear deterrence) and really strong emotions (fear of nuclear war) pushing us to focus only on the successes of nuclear deterrence. Most people really hope that nuclear deterrence is one hundred percent effective. As a result, objectivity was in pretty short supply during most of the Cold War.
What I found in going back over the evidence is that good news about nuclear deterrence is repeated (and even exaggerated) in the literature. Bad news–potential failures–is largely ignored. You may disagree that the events I’m going to review are nuclear deterrence failures. History is always open to interpretation. But I don’t think there’s any doubt that these potential failures have been largely overlooked in the literature. Not completely, but enough so that they hardly stand as well-known and oft-discussed landmarks in the debate.
Think about it this way: when a plane crashes and maybe a couple of hundred people die, the NTSB (here in the United States) does a painstaking review of what went wrong. Months or even years of effort and millions of dollars are spent on reconstructing exactly what happened. Sometimes they even reconstruct the plane from all the scattered little bits that fell out of the sky. One of the reasons flying is so safe is this tenacious attention to failure. The opposite appears to be true in nuclear weapons scholarship. What appear (to me) to be clear failures of nuclear deterrence have been consistently ignored. Shouldn’t potential failures of nuclear deterrence (where millions of lives are at stake) receive at least as much careful, cautious, objective attention as we give airplane crashes?
Jeffrey Lewis has kindly invited me to write a series of posts about failures of nuclear deterrence during various Cold War crises (and one non-Cold War crisis). I’m going to be writing about the Cuban Missile Crisis, the Middle East war of 1973, the Falkland Islands War, and the Gulf War, and then wind up with some general thoughts about nuclear deterrence. Each of these events points to remarkable failures of nuclear deterrence. Taken together they raise the question of whether nuclear deterrence is reliable and robust.
Because I’ve read extensively on the issues treated in the book, I wasn’t sure how much of it I’d find time to read. But, when I started leafing through it, I was drawn in and finished it the same day! That hasn’t happened in a long time, and is a tribute to both Ward Wilson’s masterful writing style and his concise arguments.
One point in this column that deserves elaboration is the question of how perfect nuclear deterrence needs to be in order for us to reasonably rely on it. Even if we could expect it to work for 1,000 years before failing (a time horizon most people see as wildly optimistic), that would correspond to almost a 10% risk over the 80-year life expectancy of a child born today (80/1,000 = 8%). And if the time horizon is 100 years, that child has worse than even odds. It’s high time we devoted more energy to reducing that risk.
Martin
Pedantry alert for the first paragraph!
Martin,
I’m not sure if you’re aware, but working (reliably) “…for 1,000 years before failing…” is a very different probability from failing once in a thousand years. If we’re assuming a once-in-a-thousand-years failure rate, then (assuming the failure is equally possible in any one year) that implies a 7.7% chance of failure in any 80-year span.
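For readers who want to check the arithmetic, here is a minimal sketch of the two calculations, assuming (purely for illustration, as in the comments above) a constant, independent 1-in-1,000 chance of deterrence failing in any given year:

```python
# Naive linear estimate vs. compounded probability of at least one
# failure, under the illustrative assumption of a constant, independent
# 1/1000 chance of deterrence failing in any single year.
annual_p = 1 / 1000   # assumed annual failure probability
years = 80            # life expectancy of a child born today

naive = annual_p * years                  # linear approximation
compounded = 1 - (1 - annual_p) ** years  # chance of >= 1 failure in 80 years

print(f"naive estimate:      {naive:.1%}")       # 8.0%
print(f"compounded estimate: {compounded:.1%}")  # 7.7%
```

The two numbers are close here only because the annual probability is small; for larger annual risks or longer horizons, the linear estimate increasingly overstates the compounded one.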
Pedantry aside, what’s more interesting is how the failure rate might be developing. I would argue that while the failure rate might appear to be decreasing between established nuclear powers because of improving communications, protocols, inspections, and the like, in fact this effect is being overtaken by opportunities for failure created by the increasing number of nuclear weapon owners. If we assume failure occurs between two protagonists, then the number of possible pairings, i.e. the number of opportunities for a mistake or failure to occur, grows very rapidly with the number of actors who have nuclear weapons (it is the binomial coefficient C(n, 2) = n(n-1)/2). For example, two nuclear powers can only pair in one way, whereas seven nuclear powers produce 21 pairings, and ten nuclear powers produce 45 pairings.
In general, each new entrant into the nuclear club increases the possible number of pairings by a value equal to the number of members already in the club. Thus once we went beyond just a handful of nuclear weapon states, the increase in risk caused by having one additional member started to outweigh the decrease in risk resulting from treaties, diplomacy, protocols, etc.
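This counting argument can be sketched in a few lines, assuming (again, purely for illustration) that every pair of nuclear-armed states represents one distinct opportunity for failure:

```python
from math import comb

# Distinct bilateral pairings among n nuclear-armed states: C(n, 2).
# Each new entrant adds (n - 1) new pairings, one with every state
# already in the club.
for n in [2, 7, 10]:
    print(f"{n} nuclear powers -> {comb(n, 2)} pairings")
```

C(n, 2) grows quadratically, which is the point: doubling the number of nuclear states roughly quadruples the number of bilateral relationships that can fail.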
As written elsewhere in this thread, it seems only a matter of time before something goes horribly wrong, and basic mathematics suggests that in order to reduce our risk we should put more effort into reducing the number of players, i.e. eliminating nuclear weapons completely in as many players as possible, than into simply trying to reduce bilateral risks. For example, from a world-risk point of view it would be preferable for the UK or France (or India?) to give up nuclear weapons entirely and put themselves under an American nuclear umbrella than for them to seek bilateral risk reduction.
To take the analysis to extremes, one could even conceive of two multi-national ICBM nuclear weapons facilities (call them “pink” and “yellow”) aiming warheads at each other or at a target designated by any of their members. Existing nuclear weapons states would give up their own facilities and instead would choose to join either the pink umbrella or yellow umbrella, with any nation under an umbrella able to initiate a launch. This would massively simplify the issues of trust, false alarms, verification, etc and thus reduce the risk of a mistake occurring.
Of course, a pink and yellow solution is a pipe-dream – but it does exemplify how the risk of unintentional nuclear war is determined more by the number of players than by the quality of any bilateral “risk-reduction” measures. Note: This ignores the difference between USA and Russia having thousands of weapons compared to, say, North Korea with just a handful; thus the levels of damage are not equal between any randomly selected pairings.
I had the privilege of reading ‘Myths’ prior to publication. There are points I take issue with and others I do not find very compelling, but Ward is stirring a pot that desperately needs stirring, so: Hurrah!
One thing I will be looking for is a critique of the idea that anyone ‘wins’ a nuclear crisis. US accounts of the Cuban missile crisis are full of self-congratulation: Kennedy ‘won’; Khrushchev ‘lost’. But nuclear deterrence claims that one need never lose, and that should logically apply to all sides. Arguing that Khrushchev lost because he made a deceptive move, and that Kennedy won because he had right on his side, doesn’t wash. There are many situations (including Cuba) in which right and wrong are genuinely perceived differently. Thus any nuclear crisis represents a failure of deterrence, and every peaceful resolution represents the success of something other than deterrence.
The only well-founded arena for deterrence is deterrence of nuclear attack itself. But even that depends on a tit-for-tat balance that we now know is not actually realizable. I am talking, of course, about catastrophic climate change; but more on that later.
Looking forward to reading these case studies immensely – and I don’t mind what day they appear, because as the late Douglas Adams reminded us, time is relative (and lunchtime doubly so).
If you observe Ward in action, you’ll note that his message resonates with — and provokes — the audience. Just want to thank Martin Hellman again for doing the math that shows that nuclear deterrence, at best, has a short shelf life.
Thanks, Russ! If anyone wants more details on how we can estimate the risk of an event (e.g., nuclear war) which has not yet happened, my paper in the Bulletin (see link below) is a short summary, and my Briefing Paper on the 50th anniversary of the Cuban Missile Crisis provides some current risks and ways to reduce them.
I’ll also reply to Hairs’ comment above. Given the very rough estimates possible here, the kind of differences he mentions are lost in the noise.
Martin
Bulletin:
http://www-ee.stanford.edu/%7Ehellman/publications/75.pdf
50th Anniversary of Cuban Missile Crisis Briefing Paper
http://wagingpeace.org/articles/pdfs/2012_hellman_nuclear_poker.pdf
I’m certainly not an ACW, but I am also looking forward to this series of posts.
One of the first things that came to mind while reading the introduction was a possible connection between nuclear deterrence and Taleb’s Black Swan theories. Taleb certainly seems to have generated an enviable cash flow by placing relatively small bets for (or against) events with apparently minuscule odds of occurring. The frightening thing is that the unexpected Black Swan happens often enough to make this kind of financial bet profitable. Nuclear war, not so much.
Benjamin Walthrop: I bought a copy of the Black Swan once. Now you make me want to read it.
“nuclear deterrence works really well because nuclear weapons are really scary”
Depends how you define “nuclear deterrence works.” And whereas “really scary” might describe a movie, we are talking about the gut terror of imminent death along with family and world destruction, which you know is absolutely real if you make a mistake, and maybe even if you don’t.
“It seems as if nuclear deterrence failed regularly during Cold War crises.”
I think most people would say that deterrence only “fails” if nuclear weapons are used again to actually kill and destroy, and it “works” if it actively prevents that, despite how convenient it might sometimes be for one head of state to phone another and say “Do X or I’ll nuke you.”
The fact that owning nuclear weapons does not enable presidents to do the latter does not mean that nuclear deterrence does not work, although it is evidence that nuclear compellence may not work.
“You can count on people to be more sensible if nuclear weapons are in the mix, even in a crisis when emotions are running high.”
More sensible than what? What is the definition of sensible? Who says this is the definition of nuclear deterrence? It’s clear that nuclear weapons induce all kinds of craziness, but since 1945, no actual nuclear war.
“…there were a number of places where deterrence theory would have predicted that leaders would have backed off, and instead they took risky, aggressive actions that made the crisis worse.”
What deterrence theory? Whose deterrence theory? Did the leaders think their actions were risky and aggressive, and would make the crisis worse? As compared with what alternatives? What did they think the risks of the alternatives were? What were the actual risks of alternative moves, from our superior vantage point?
“Nuclear deterrence is using the threat of nuclear retaliation to warn someone not to take aggressive action.”
Whoa. It’s one thing to say “If you roll tanks across the central front our main force armies will be engaging directly, and if we seem to be losing we may resort to using nuclear weapons, even though we both know what that risks.” It would be quite another to say “If you arm the guerrillas in our client state we’ll nuke you.” The latter would not be credible.
“forty years of IR analysis and historical accounts”
67 years of no nuclear war.
“Most people really hope that nuclear deterrence is one hundred percent effective.”
That’s the real point: it can’t be. Sooner or later it’s bound to fail, one way or another, which is why we must de-alert, dismantle, and abolish nuclear weapons.
I think you can point to a lot of examples where we have come dangerously close to the failure of nuclear deterrence.
But nuclear deterrence hasn’t failed yet, and isn’t a myth. That’s good, because it helps to make nuclear abolition possible.
In a world without nuclear weapons, why wouldn’t someone secretly make some and use them to take over the planet? Because that wouldn’t work; it would only lead to nuclear rearmament and then quite likely to nuclear war.
Any major aggression using non-nuclear weapons would likewise run a similar risk of nuclear rearmament and nuclear war.
Thus, even after nuclear weapons are abolished, nuclear deterrence will still work.
With respect, actual occurrence of conflict is scarcely the datum most relevant to an analysis like this. I would think that trigger systems and near-miss events suggest that deterrence had remarkably little effect on the psychology of chief war makers and the systems that they put in place. The fact is that the systems designed contained risk-avoidance mechanisms that were less than proportionate to the risk of particular levels of damage and death that would result. For example, the events referred to in
http://en.wikipedia.org/wiki/World_War_III
I remember the lack of horror that Hermann Kahn’s delightful work ‘On Thermonuclear War’ produced in the strategic studies world in those days.
I disagree with regard to the relative “lack of horror” in the strategic studies world at the time only insofar as the universe of those pondering and preaching on nuclear war, weapons effects, and strategy was wider than those self-identified nuclear strategists. Linus Pauling, for example and among many others, was moved by “On Thermonuclear War” to write “No More War!” in direct refutation. Furthermore, Kahn’s delightful work, fascinating as a historical artifact of the day’s abstraction and positivism, is widely credited with ending most public and congressional interest in civil defense. Correct me if I’m wrong.
Interesting. Some observations nonetheless:
1. In risk management the outcome has to be considered; not all probabilities carry the same weight. Rolling a die to win a $5 bet and playing Russian roulette have the same probability, but certainly not the same consequence.
2. While learned men calculate the odds, the roll of the “dice” is done by people who see an outcome in terms of win or lose; their rise to the top of their profession has been by making winning moves.
3. Deterrence is not a rigorous fact but a psychological one, and it is not symmetrical either: one side could be using a chess outlook while the other is playing poker.
4. It is in the nature of things that the gap in perception be exploited to the utmost. There is only one reasonable assumption in deterrence: having one’s homeland invaded without conventional means of protecting it is a red line (the scared-dog-in-a-corner situation). If you think your opponent is rational, then it can be advantageous to appear irrational; it allows the pushy player to gain advantages in the grey zone of uncertainty, because the opponent will have to back off further in a confrontation.
In anticipation of what promises to be a fascinating “WWW series,” I hope Ward and/or informed commentators will be addressing one personal, psychological deterrence issue that can only be addressed by the experts.
With respect to people who are trained in, and knowledgeable in, and spend their lives assessing threats to the species by the species – i.e., ACWs – does the fact that such a person with pertinent data in hand decides to bring beautiful children into the world necessarily imply that that person believes deterrence will be 100% effective within their children’s lifetimes? Within their grand-children’s projected lifetimes?
Another way to put this is: For people with the means to make rational assessments of the probability of the occurrence of an anthropogenic event that would destroy the species or cause global suffering, at what probability level of such an event do such experts decide that the joy of having and raising children is offset by the likelihood that those children will suffer the consequences of such an event — of failed deterrence? 20% likelihood? 10%? 5%?
I would argue that the decision to have children, vel non, is the acid test – perhaps the only test – as to whether one really “believes” in the efficacy of deterrence.
Wonks, like most folks, have only a limited ability to stitch “rationality” together into a patchwork of domains within which only an approximation of reason is achieved, and between which any degree of contradiction may exist.
What is the reason for having children, anyway? One knows that one will die, and worse, that one’s children will be mortal, too. Yes, bringing children into the world means exposing them to the risk of nuclear holocaust, or for that matter the collapse of civilization for other reasons. And in any case to the certainty of personal pain, anxiety, fear, sorrow and loss.
I don’t think wonks know any better than other people do whether nuclear war is likely; they may have a better idea of how it may occur and what makes it more or less likely, but they should also know that even among experts opinions vary widely and there are many aspects of the matter which they do not know and which may be unknowable.
I think it is actually far more rational to admit that one does not know how likely the world is to come to one bad end or another than to affect either optimism or pessimism, buttressed by whatever facts and arguments strike one as definitive, and pretend that everyone who disagrees is an idiot.
Denis:
Absolutely brilliant and profound question/observation.
Hard to answer, so let me start at the trivial level. I have rooted all my life for the Boston Red Sox. The team was awful when I was growing up, and it’s awful now. Many baseball fans find themselves in the same boat. And yet, we look forward to spring training. Hope springs eternal — the expectation of failure notwithstanding. The expectation of team failure certainly does not prevent us from carrying on the wonderful tradition of taking our children to the ball park.
Paul Warnke — another Red Sox fan and a former Director of the Arms Control and Disarmament Agency, when we had one — used to joke that rooting for the Red Sox was good training for arms control. We experienced all kinds of failure at ACDA, because we were pygmies compared to the Pentagon. We also experienced some success, and we helped to avoid the biggest failure of all — or so, at least, we liked to think.
Holocaust survivors decided to raise children. Survivors from other war zones have decided to raise children. People with far less reason to be pessimistic about the future decide to raise children. You don’t have to be an optimist by nature to want to raise kids. The reasons for this commitment are so powerful and run so deep that the prospect of mushroom clouds doesn’t enter into it.
MK
No wonder I chose a career in arms control.
Jonathan Haidt has a wonderful sequence on the extent to which we consciously control ourselves in his book The Happiness Hypothesis. He says the basic notion of consciousness is the race car driver analogy. In this, your conscious mind is like a race car driver: you turn the wheel left and you go left. There’s little doubt who’s in control.
The second way of looking at it is Freudian. This is the horse and buggy analogy. Here your conscious mind is the Ego. You are the driver of a horse and buggy, and your Id is the horse: fractious, difficult to control, constantly trying to run off in all directions. And while you struggle with this beast, your father sits in the back seat criticizing everything you do.
Haidt’s version of conscious control is the little boy on the elephant. As long as the elephant is willing, the little boy can direct it wherever he chooses. But if the elephant gets some other idea in mind, then the little boy’s remonstrances are useless.
I think we imagine our conscious, rational minds are like the race car driver. But in fact we’re much more like the little boy. The fact that almost none of us (and this certainly includes me) are our ideal weight demonstrates that there is at least a certain lack of rationality in our makeups.
I would guess the “decision” to have children is far more driven by desires and drives below the surface beyond our rational control than we are likely to admit. So I’d hesitate to draw conclusions about matters like nuclear deterrence from the proliferation of children.
Oops, hit enter before I finished my comment. No wonder I chose a career in arms control, I grew up in Fenway Park. I’ve been a RS fan since 1967… I know how to live with lousy odds. What I found difficult was how to live with success. I (and many of my relatives) felt adrift after the Sox won the series. I suspect I would feel the same way if we actually resolved this debate over the value of deterrence and the role of nuclear weapons. What would I do with my life then????
Back to the subject at hand. First, I didn’t factor the risk of nuclear weapons into my decision to have children. There are far greater and more immediate risks that could end my child’s life (cancer, car crashes, etc.). And any prospective parent knows life is a roll of the dice and decides to take the chance anyway.
Second, I suspect that ACWs actually rate the chance of nuclear war lower than the rating given by your average man on the street. That’s because we know just how horrific nuclear use would be and hope (maybe without cause) that political leaders will understand that horror and be deterred if/when the time came. I suspect most people, especially those who have come of age since the end of the Cold War, don’t know or think that nuclear use would be so horrific. This is either because a conflict that escalates from a regional standoff is far less likely than the U.S.-Soviet stand-off to lead to “global nuclear war” or because the Bush Admin seemed somewhat cavalier with its nuclear deterrent threats, convincing those who were just learning that nuclear war may not be so awful for the United States.
In either case, I think people weigh far more immediate risks (and dismiss most of them) when deciding to have children. And a whole new generation of these kids needs to learn about the horrors of nuclear weapons if we are to reduce or eliminate the risk of nuclear war.
Exactly.
On the decision to have children, my biggest worry was paying the bills. It never occurred to me to worry about nuclear war. Work was separate from family.
MK
Thank you, gentlemen, for your thoughts. My question had its origins in a couple of recent grumbles from Jeffrey about baby-burping and diaper-changing, and I thought it wonderful, not just that he has a new baby but that the people who see the arms data still see the world as being sufficiently safe to go it one more generation.
And I think that’s what I’m hearing you saying: the risks of a global melt-down are so small they are not a part of the calculations for these critical personal decisions.
And in MK’s point that the Fenway experience is just as valuable regardless of how the game ends (and I have verified that — multiple times!), I hear strains of H.G. Wells: “If at the end your cheerfulness is not justified, at any rate you will have been cheerful.” IOW, the beer tastes just as good whether or not they win, which was the excuse we used.
Good counsel. Thanks. I guess I was looking for an informed alternative or response to that durn Doomsday Clock, which may be a mass-contraception plot by the ZPG folks.
Denis:
Not a big fan of the Doomsday Clock.
Didn’t go to Fenway often — we were scrimping — but I did see Ted Williams hit two home runs.
MK
I’m hearing you saying: the risks of a global melt-down are so small they are not a part of the calculations for these critical personal decisions.
I don’t know about Michael or anon, but that’s not exactly what I was saying.
I don’t know what the probability of nuclear war is. I don’t know if it is large or small. I don’t think anybody knows, and while I find attempts to estimate it somewhat interesting, I put very little stock in them, because when I look at how they are arrived at, I always see questionable assumptions both of fact and of framing.
I’m not even sure it makes sense to talk about the issue in terms of a probability, given that people are actively involved, multiple players, responding to each other and to the evolving situation.
What I am sure of is that it’s never going to be a good day for a nuclear holocaust. Therefore, national decision makers will always look for a way out of any situation that they fear is leading to it. This, I think, is the essence of nuclear deterrence.
I am also certain that, with arsenals set up in confrontation and ready to go at it, nuclear holocaust is a very real possibility.
Why should we continue to tolerate this threat? There is no reason, rather, this represents the failure of reason, a giant fault line in the fabric of human community and rationality, which we need to mend.
I don’t think we should immobilize ourselves either with complacency that, after all, nuclear war is very unlikely, or with despair that it is inevitable. This is why I really think it is better to admit that we just don’t know, and can’t know, what will happen tomorrow or the next day. We can only try to make good things happen, and prevent bad.
The weapons are real. The danger is real. The bad trends are real. So are some good ones. So is the possibility of putting this danger to rest, or at least, burying it.
We go on living because that is what we do. We have children because that is life. We try to make the world safe because we want to go on living, we want our children to live, and their children…
It’s not a matter of placing a bet. This is the situation we find ourselves in. We just have to try.
What I’m saying is that regardless of the risk, which I cannot quantify, fathering children was crucial.
MK