Speaking of that ranch in Montana …

Dick Garwin tells Congress that, in his formidable opinion, the chance of a successful terrorist use of a nuclear weapon in the United States or Europe is *twenty percent* per year:

GARWIN: What we are missing is really the response to a terrorist nuclear explosion in a Western city. I think Senator Nunn alluded to this. We need to organize ourselves so that if we lose a couple hundred thousand people, which is less than a tenth percent of our population, it doesn’t destroy the country politically or economically.

But we need to have a way to survive such an attack, which I think is quite likely—maybe 20 percent per year probability, with American cities and European cities included. And we need to be able to survive that. We have no real planning to do it in the business community or in the government.

EDWARDS: I’m sorry. What did you say, Dr. Garwin, the probabilities were? Twenty percent?

GARWIN: Yes, to have a nuclear explosion—not just a contamination dirty bomb—in the next year, 20 percent in my estimation. Could be 10 percent, not 100 percent.

EDWARDS: If that doesn’t wake up this country, I don’t know what would.

[Full text in the comments]

As co-author (with Peter Zimmerman) of “The Bomb in the Backyard,” I am also worried, of course, about terrorists acquiring a nuclear device. But this is a bit much. Twenty percent chance per year compounds to nearly 90 percent chance over ten years and 99 percent over twenty years.

In other words, a virtual certainty.
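The compounding is simple arithmetic under the assumption that each year is an independent draw with the same probability; a quick sketch:

```python
# If each year is independent with the same attack probability p, the
# chance of at least one attack in n years is 1 - (1 - p)**n.
def cumulative_risk(p_yearly, years):
    """Probability of at least one event over `years` independent years."""
    return 1 - (1 - p_yearly) ** years

print(f"{cumulative_risk(0.20, 10):.1%}")  # ~89.3% over ten years
print(f"{cumulative_risk(0.20, 20):.1%}")  # ~98.8% over twenty years
```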

I would have said the probability was an order of magnitude lower—which is to say unlikely but still very dangerous—given the evident difficulty of acquiring the fissile material.

My friend Matthew Bunn’s PhD dissertation, *Guardians at the Gates of Hell: Estimating the Risk of Nuclear Theft and Terrorism*, includes a detailed, plausible calculation that placed the annual risk at just over three percent. Although the model, as he concedes, isn’t definitive, it does “make explicit the assumptions about the key factors affecting the risk and provide a tool for assessing the effectiveness of alternative policies.”

The calculation also appears as an article, “A Mathematical Model of the Risk of Nuclear Terrorism,” in *The Annals of the American Academy of Political and Social Science* 607, September 2006. I’ve stripped out the math, just to give you a little of the flavor:

Suppose, as one plausible estimate, that the factors in the equations for Pc and Rc have the following numerical values:

Number of plausible nuclear terrorist groups, Nn = 2

Yearly probability of an acquisition attempt by a particular group, Pa(j) = 0.3

Probability of choosing an acquisition attempt based on outsider theft, Po(j) = 0.2

Probability of choosing an acquisition attempt based on insider theft, Pi(j) = 0.3

Probability of choosing to attempt to purchase black market material, Pb(j) = 0.3

Probability of choosing to … convince a state to provide material, Ps(j) = 0.2

Probability that an outsider theft attempt will succeed, Pos(j,k) = 0.2

Probability that an insider theft attempt will succeed, Pis(j,k) = 0.3

Probability that a black market acquisition attempt will succeed, Pbs(j,k) = 0.2

Probability that an acquisition attempt from a state will succeed, Pss(j,k) = 0.05

Probability of … convert[ing] acquired items to nuclear capability, Pw(j,k) = 0.4

Probability of delivering and detonating bomb once acquired, Pd(j,k) = 0.7

Consequence of terrorist nuclear attack, Cc = $4 trillion

In this example, the number of plausible nuclear terrorist groups in the world is small, but greater than zero. For simplicity, assume for the sake of this example that the various probabilities are the same for all groups in the set Nn and for all acquisition attempts of a given type by those groups.

[snip]

With these values, one would expect a significant acquisition attempt roughly once every other year … The probability that such an acquisition attempt would be successful, and would lead to the detonation of a terrorist nuclear bomb somewhere in the world, would be in the range of 5 percent. … The yearly probability of nuclear terrorism would be just over 3 percent. … The probability of nuclear terrorism over a ten-year period, Pc(10), would be just under 30 percent.
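For readers who want to see how the quoted results fall out of the listed parameters, here is a minimal sketch in Python. The way the route-choice and route-success probabilities combine below is my reading of the model, not a transcription of Bunn’s published equations:

```python
# Example parameters from Bunn's illustration (see the list above).
Nn = 2    # plausible nuclear terrorist groups
Pa = 0.3  # yearly probability that a given group attempts acquisition
Pw = 0.4  # probability of converting acquired items to a workable bomb
Pd = 0.7  # probability of delivering and detonating the bomb

# (probability of choosing the route, probability the route succeeds)
routes = {
    "outsider theft": (0.2, 0.2),
    "insider theft":  (0.3, 0.3),
    "black market":   (0.3, 0.2),
    "state supply":   (0.2, 0.05),
}

# Probability that a single acquisition attempt ends in a detonation.
p_acquire = sum(choose * succeed for choose, succeed in routes.values())
p_attempt = p_acquire * Pw * Pd

# Yearly probability across all groups, then the ten-year extrapolation.
p_yearly = 1 - (1 - Pa * p_attempt) ** Nn
p_decade = 1 - (1 - p_yearly) ** 10

print(f"expected attempts per year: {Nn * Pa}")   # ~0.6, once every other year
print(f"success per attempt: {p_attempt:.3f}")    # ~0.056, "in the range of 5 percent"
print(f"yearly probability: {p_yearly:.4f}")      # ~0.0333, "just over 3 percent"
print(f"ten-year probability: {p_decade:.3f}")    # ~0.287, "just under 30 percent"
```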

Check out the real thing. The Σ won’t bite.

Here is the transcript of the Hearing.

I’m prepared to credit Matt Bunn as Doctor of Nuclear Philosophy, but his “Mathematical Model” is pseudoscientific and at best a framework for analysis. His “plausible set” of numbers is seemingly chosen freely to give a “plausible” answer: in other words, a guess that is no better than that of Richard Garwin (an actual scientist). Apart from building a structure on which to hang various arguments about which factors are the most important or the best targets for improving nuclear security, Bunn might have given some deeper analysis of the stability of his result with respect to changes in his assumptions, including the values of his parameters, possible correlations between them, and the structure of the model itself. Would a somewhat different way of factorizing the problem lead to different results? How about a dynamic approach, instead of a static probability assumption? Are the various players just sitting around rolling dice, or are they engaged with one another in a continually changing strategy game?

It is, of course, very hard to argue with specific selections of numbers for each of the different contributions, so let’s, for the moment, accept them all. Now, let’s run the calculation backward in time to see what the probability of a nuclear terrorist attack would have been from the fall of the Soviet Union to now (a period of 16 years). Actually, given the increased, but still not perfect, security of Russian nuclear weapons, the probability of a nuclear terrorist attack should have been larger for the previous 16 years than it will be for the next 16 years. This calculation is important because, while we might not have a good sense of the net future probability, we can make some interesting comments about the past period, where we know Al Qaeda’s past activities.

The numbers presented here say that the probability of such an attack was 42%, or nearly 1 out of 2. I don’t believe we were in that much danger. After all, Al Qaeda attacked the US using conventional means. Furthermore, they actively discouraged Jose Padilla from a nuclear attack, saying that it was unrealistic.
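The 42% figure is straightforward to check; a quick sketch compounding the example’s yearly probability (about 3.3%) over 16 years:

```python
# Compound Bunn's example yearly probability (~3.3%) backward over the
# 16 years since the Soviet collapse.
p_yearly = 0.0333
p_16yr = 1 - (1 - p_yearly) ** 16
print(f"{p_16yr:.0%}")  # ~42%, nearly 1 in 2
```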

So in the past, Al Qaeda judged the probability of a successful nuclear terrorist attack as much less than 50%. That is perhaps a self-reinforcing judgement, but I nevertheless think it implies that the numbers presented here are too large.

I would question the numbers for success of theft attempts. Perhaps a better estimate for those would be down by a factor of 10. (I’m not sure about the 5% probability that a nation would give a terrorist group a bomb, but that too might be a factor of 10 too high.) If so, then the yearly probability drops to 0.3% per year.
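The arithmetic does come out near 0.3% per year under those revised numbers; a sketch reusing the example’s other parameters unchanged (the factor-of-ten reductions are this comment’s hypothetical, not Bunn’s):

```python
# Bunn's example parameters, but with each acquisition-success
# probability cut by the comment's hypothetical factor of ten.
Nn, Pa, Pw, Pd = 2, 0.3, 0.4, 0.7
routes = {
    "outsider theft": (0.2, 0.02),   # was 0.2
    "insider theft":  (0.3, 0.03),   # was 0.3
    "black market":   (0.3, 0.02),   # was 0.2
    "state supply":   (0.2, 0.005),  # was 0.05
}
p_attempt = sum(c * s for c, s in routes.values()) * Pw * Pd
p_yearly = 1 - (1 - Pa * p_attempt) ** Nn
print(f"{p_yearly:.2%}")  # ~0.34%, i.e. roughly 0.3% per year
```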

Let me be clear: I am not saying that just because we have not had a nuclear terrorist attack in the past 16 years, the probability is low. I am saying that the fact that Al Qaeda assessed the probabilities as much lower than the example gives, over a past 16 years in which it was much easier to get nuclear material, means the different components in the example are too high.

Dear Jeffrey Lewis,

here is a link to a European study, which deals with this issue, by Mycle Schneider, International Consultant on Energy and Nuclear Policy, on the risk of nuclear weapons proliferation in a rapidly changing world. It is called:

The Permanent Nth Country Experiment –– Nuclear Weapons Proliferation in a Rapidly Changing World

It can be found at:

http://www.cornnet.nl/~akmalten/07-03-18_MycleNthCountryExperiment-2.pdf

Peace, or saved by the pigeon,

Ak Malten, Global Anti-Nuclear Alliance

Let’s assume we know nothing at all about the yearly attack probability (except that it’s between 0 and 1). Starting from a uniform distribution and updating on the 16 attack-free years we have observed, the odds that Garwin’s numbers are right would be:

1 in 44 for p > 20%

1 in 6 for p > 10%

And the expected value of p would be about 5.5% per year. This does not confirm Matt’s calculations, but it is consistent with them. At the same time, Geoff’s comments are also consistent with the observed facts.
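On my reading, the closed forms behind these figures come from a uniform prior on p updated on 16 attack-free years, giving a posterior density proportional to (1 - p)^16; a sketch:

```python
# Uniform prior on the yearly probability p, updated on 16 attack-free
# years: posterior density proportional to (1 - p)**16.  Closed forms:
#   P(p > a | data) = (1 - a)**17
#   E[p | data]     = 1 / 18
years = 16
odds_20 = 1 / (1 - 0.2) ** (years + 1)  # ~44: "1 in 44" that p > 20%
odds_10 = 1 / (1 - 0.1) ** (years + 1)  # ~6:  "1 in 6" that p > 10%
expected_p = 1 / (years + 2)            # ~0.0556: about 5.5% per year
print(round(odds_20), round(odds_10), round(expected_p, 4))
```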

The only person who is inconsistent with the observed facts is Garwin.

I don’t doubt that Garwin could well be the world’s expert in these matters. Nonetheless, his comments seem to conflict with observations at the 95% confidence level. That doesn’t mean that he is wrong, just that he needs to articulate more facts for his estimate to be credible. For example, he could argue that the risk is getting higher over time, or that there is a 10 year lag time, or something like that.

I assume that Garwin would say that his estimate cannot be extrapolated past about 5 years due to skewness in the probability distribution of probabilities.

Lacking support though, I think Garwin’s estimate is not credible.

While an interesting academic exercise, I fail to see any pragmatic utility in attempts to calculate the probability of a such a scenario. It’s great for politicians and those in the Homeland Security industry to secure funding and contracts but not much else. Additionally, given the significant assumptions required to generate a “yearly attack probability” the method’s accuracy is suspect at best. No accuracy + no precision = no utility in my book.

I suspect that some of the commenters have read Jeffrey’s summary and the comments, but not the actual article. The article makes it very explicit that all of the input parameters are profoundly uncertain, and that the utility in such a model comes from using it as a tool to try to think through systematically how different factors and different potential policy interventions might reduce the risk, NOT from the particular probability estimate that results from the set of input parameters I use as an example (which is described in the article ONLY as an example of the use of the model). In my view, such a model can help make the discussion more systematic, narrow the range of disagreements, and identify promising areas for policy intervention and for collection of additional information to reduce the uncertainty. As far as I am aware, the article represents the only available discussion that at least attempts such a systematic approach, and walks through what little evidence there is about what the values of each of the input parameters of such a model might be. (And it does include, contrary to the suggestion of some commenters, some extended discussions of the sensitivity of the results to changes in the input parameters.)

I find it particularly interesting that some commenters are willing to sling quite serious charges like pseudo-science without being willing to attach their names to their remarks. I would prefer to keep the debate on a more substantive level.

I’m afraid I don’t understand Geoff’s argument that the record shows al Qaeda estimated a less than 50% chance of success over the past 16 years. At no time during that period, as far as we know, did al Qaeda actually acquire nuclear bomb material, despite multiple attempts to do so; in the absence of such material or of a nuclear weapon, its chances of succeeding in launching a nuclear attack would obviously have been zero. The fact that they didn’t think Jose Padilla could make a nuclear bomb likely reflects a correct judgment of his very limited capabilities, not necessarily a judgment about what other groups of operatives might have been able to do. (However, the fact that in those discussions, even quite high-level al Qaeda operatives apparently suggested using uranium in a dirty bomb suggests that at that time those individuals, at least, had quite a limited understanding of nuclear matters.) Anthony Wier and I discuss the record of al Qaeda’s efforts (what little of it is publicly available that is), as well as Aum Shinrikyo’s, in “The Demand for Black-Market Fissile Materials,” available at http://www.nti.org/e_research/cnwm/threat/demand.asp

I believe that when one examines the specifics of security arrangements at nuclear sites around the world and the kinds of insider and outsider threats they would likely be able to defend against, and the kinds of capabilities that terrorists and thieves have shown they can put together to successfully attack or steal from guarded facilities, it is very difficult to sustain the argument that the real probabilities of successful thefts are an order of magnitude lower than those I postulated. What I believe we should be trying to achieve is a world in which these probabilities ARE an order of magnitude lower (or even two orders of magnitude lower) than those I postulated, and the risk of nuclear terrorism correspondingly lower, but I think we’re a long way from being there so far.

Geoff is quite correct in saying that nuclear stockpiles in the former Soviet Union were less secure in the 1990s than they are today. I would add an additional factor: then, al Qaeda had a centralized command structure with a secure national home base, which probably gave it a higher chance of carrying out something as complicated as a nuclear weapons project than they have in their current, more scattered state (though unfortunately that only reduces, but does not eliminate, the risk that a cell such as the one Jeffrey and Peter described in their “Terror Farm” article could pull off such an enterprise).

Matthew,

I did read the article and I still stand by my comments. If, as you admit, the input variables are “profoundly” uncertain then of what use is any yearly probability calculation based on those variables? What use is a systematic methodology if the conclusion will invariably be suspect? The issue with policy, in my view, is that with such uncertain variables, policymakers are likely to pick numbers that support their predisposed policy objectives. Rather than informing policy, your system is much more likely to be misused by those who control budgets and policy. But maybe I’m just cynical.

Another issue I have is extrapolating an already dubious single-year probability out to ten years or more. In such a case, small differences in a few parameters can multiply into great disparities over a decade. (I read the article on LexisNexis, which unfortunately did not include the actual equations.) Such extrapolation also ignores that the parameters themselves will change significantly over time, and therefore change the extrapolated probability.
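A quick illustration of how modest differences in the single-year figure diverge over a decade (the yearly values here are purely hypothetical):

```python
# Small shifts in an assumed yearly probability compound into large
# differences over ten years.
results = {p: 1 - (1 - p) ** 10 for p in (0.02, 0.03, 0.05)}
for p, p10 in results.items():
    print(f"yearly {p:.0%} -> ten-year {p10:.0%}")
```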

Even if you had left out the overall probability calculation and focused exclusively on the parameters themselves there are still significant problems. For example, the article states, “It also makes it possible to identify policy options to modify each of the parameters to reduce the risk and to explore quantitatively what the effect of such policy options might be.” The problem here is the effect of a particular policy option on a particular parameter or parameters comes down to judgment. How does one quantify, say, the installation of new physical security measures at a particular site or multiple sites? Or the allocation of money to Russian nuclear security? It’s easy to say that such policy options will likely improve nuclear security and reduce the probability of an attack, but it’s a much more difficult proposition to identify by what degree security is improved and more difficult yet to quantify that judgment into a precise number.

Because systematic estimation is difficult does not imply it’s worse than relying on intuition. I continue to believe that a systematic approach helps in focusing the discussion, identifying areas of disagreement, identifying areas where additional information would reduce the range of uncertainty, and, yes, offering an at least somewhat more focused approach to assessing which policy options might be most important. I claim neither more nor less.

That’s quite reasonable, Matthew. My perspective, as someone who spent a career in Intelligence, is that no system, no matter how thorough, well-thought-out, or systematic, can make up for a fundamental lack of data. I’ve seen similar methodologies, which may provide value to analysts, misused by policymakers to reach and justify unsupported conclusions. That is the danger, based on my experience. I can only hope that some agenda-driven think tank does not use your systematic approach to further its agenda.