Missile Defense: To Test or Not to Test
Jeffrey Lewis

One year after Boeing began sticking missile defense interceptors in Alaska’s frozen tundra, Alex Fryer at the Seattle Times has a long article on the travails of the US Ballistic Missile Defense System.

After three flight test failures, MDA doesn’t seem eager to take another shot. An anonymous military official told the AP that MDA wouldn’t resume flight tests “until this fall at the earliest.”

The question of whether and when to test again is important, but Fryer sidesteps it in a passage employing the hackneyed “he said, she said” journalistic device:

Philip Coyle, former director of Operational Test and Evaluation at the Defense Department, likens the evolutionary approach to the Winchester Mystery House, the Victorian mansion with so many architectural oddities that it has become a tourist attraction in San Jose, Calif.

“You can build a house the same way, where you don’t have a floor plan. But it’s pretty expensive and you may not be happy with the result.”

[snip]

During a congressional hearing, Sen. John Cornyn, R-Texas, articulated the White House view: “If we waited until we went through a traditional test and operation before we then concerned ourselves with possibly deploying these in the case of emergency, it really might be too late.”

Well, that really helps an informed reader balance speedy deployment against technical risk, doesn’t it?

It is too bad Fryer didn’t make the effort, because this question—how to balance risk against speedy deployment—is the missile defense debate right now.

A few weeks ago, an Independent Review Team told the Missile Defense Agency:

In the development of complex systems, such as Ground-based Midcourse Defense, there are four program management variables—performance, schedule, cost, and risk. The Ground-based Midcourse Defense program has well-defined performance requirements, and to meet the increasing ballistic missile threat, has had essentially no schedule margin, while at the same time proceeding with minimal budget reserve.

As development problems are encountered, which happens on all complex system development efforts, the only remaining variable is risk, which then grows as the system development continues. To achieve mission success, risk in the Ground-based Midcourse Defense system must be constrained such that schedule and/or cost provide the relief. If risk remains the relief valve, additional test failures will likely occur that will result in schedule slips and cost increases.

The IRT recommended the BMDS testing program become “event-driven rather than schedule-driven.” In other words, slow down.

I criticized the IRT Report because I suspected the panel intended its recommendations to stop—not just slow—testing, largely to shield MDA from embarrassment.

Some readers expressed doubt about my interpretation:

On an admittedly cursory reading, I see nothing to indicate that the purpose of the recommendations is anything other than as advertised: to move the program from a calendar-based to an event-based operation.

[snip]

I’m writing to ask whether I missed something in my cursory reading of the report that supports your interpretation or whether there is any corroborating info in Brad Graham’s reporting or elsewhere.

The IRT’s decision to consider non-technical factors—such as likely North Korean perceptions—struck me as suspicious. Such considerations exceed the panel’s Terms of Reference as summarized in the Final Report. Moreover, the Independent Review Team members had no obvious expertise in how North Korea would be likely to perceive such a system. (Do any members, for example, speak Korean?)

I mentioned this to Shachtman, who declared “’Nuff said.”

The focus of the IRT on non-technical reasons to avoid test failures is a substantial departure from the attitude expressed by a previous review panel chaired by General Larry Welch.

The Welch Report also proposed slowing testing to avoid what it termed a “rush to failure.” But the Welch Panel was not worried about test failures per se; rather, it worried about the degree to which such failures indicated technical risk.

The Welch Report called the Sprint missile “a successful, high-priority development program … executed under intense schedule pressures.” Yet the Sprint program was marred by test failures:

With this highly compressed test schedule, the first 10 tests were characterized by a high failure rate. However, this failure rate was made tolerable by the extensive planned series of tests that followed.

The attitude of the Welch Report was one of learning from mistakes, not eliminating them, with the primary goal of building a system that works.

If MDA accepts the IRT recommendations, I suspect few tests will be flown. The implication seems to be that if the system is a dud, it is better not to know. The AP story announcing no tests until fall “at the earliest” is suspicious, to say the least.

Related: I will have a post about Lowell Wood’s continuing effort to pimp Brilliant Pebbles over at DefenseTech.