Believe it or not, the gray battleship of the establishment’s fleet, Foreign Affairs, published an article decrying the use of cluster bombs during the Vietnam War. I can believe it because I was the author. How in the world did this staid journal publish an essay by an anti-war activist titled, “Weapons Potentially Inhumane: The Case of Cluster Bombs,” in its April 1974 issue?
Here’s my theory: The Council on Foreign Relations had recently chosen William Bundy, brother of McGeorge, as its new editor of Foreign Affairs. Both men were heavily involved in the Vietnam War, McGeorge as the national security adviser for Presidents Kennedy and Johnson, Bill as the Assistant Secretary of State for East Asian and Pacific Affairs. The latter’s appointment as editor of Foreign Affairs elicited howls of protest within the Council which, like the rest of U.S. society, was riven by divisions over the war. My hunch is that Bill Bundy, who didn’t know me from Adam, published this article as a rejoinder to those who opposed his appointment, and as a personal declaration of editorial independence.
My article was a period piece, written when creating a norm against the use of cluster bombs was completely beyond the pale. Instead, I proposed feeble operational and procedural limits on their use. For those who think that arms control is a hopeless affectation or a cadaver waiting to be buried, consider this: In 2008, representatives from over 100 nations gathered in Oslo to adopt the text of a Convention to ban the development, production, possession, transfer and use of most types of cluster munitions.
When treaties aren’t possible, codes of conduct that establish norms will do. Curtailing the use of cluster bombs, like land mines, has been an exercise in progressive norm-building. In both cases, arms control has not been driven from the top down. The United States, which usually is behind the wheel for arms-control initiatives, has barely been in the back seat for Conventions dealing with cluster bombs and land mines.
It’s easy to take potshots at the Convention on Cluster Munitions. There are plenty of important outliers besides the United States, including Russia, China, India, Pakistan, Israel, Iran, and Brazil. Many Arab states have also kept their distance. A nuclear-related treaty without these signatories would not help very much. Why, then, seek normative measures in this instance? One reason is because cluster munitions, unlike nuclear weapons, have often been used on battlefields.
Norms are broken, especially at the outset, and especially by outliers. Since the turn of this century, cluster bombs have reportedly been used in civil wars within Syria, Sudan, and Libya; during clashes between Russian and Georgian troops; by Israeli forces against Hezbollah in Lebanon; by American and British forces in Iraq; and by U.S. forces in Afghanistan. The outliers listed above aren’t surprising. What’s most notable about this list is that it includes the United States, Great Britain, and Israel.
The continued battlefield use of cluster munitions provides more ammunition to skeptics of the CCM. What’s the use of arms control that doesn’t change the behavior of outliers but constrains the military options of responsible states? This familiar argument, if carried to its logical conclusion, would result in a world without any sustainable controls over weapons that have extraordinary lethality and/or indiscriminate effects.
Norms matter. They set standards for responsible behavior. They clarify outlier status. Without norms, bad behavior is far less condemnable and harder to stop. If and when outliers want to improve their forlorn status, accepting norms is one way to do so. Conventions also have value for states that wish to disassociate themselves from outliers. The United Kingdom signed the CCM in 2008 and ratified it in 2010. Martin Butcher emailed me that, as of September 2012, seventy per cent of the UK’s stocks have been destroyed. He adds, “The UK does not allow the US to stockpile cluster munitions in the UK. We do have an issue with some UK companies continuing to finance clusters abroad, but that is getting worked out.”
When major powers fail to sign Conventions, outliers enjoy greater political cover. When major powers use these weapons because of perceived military imperatives, they place themselves in very poor company. When major powers avoid using “branded” weapons but don’t sign up, they find themselves in limbo, an awkward cul-de-sac where leadership contracts.
Norms relating to branded weapons, such as cluster munitions and landmines, have already demonstrated their utility. They have reaffirmed outlier status and provided mechanisms to change it. They have shaming and shrinking effects on non-outlier states that use branded weapons. Over time, these clarifying effects strengthen norms.
How does the leadership of a great power contract when it dismisses an emerging ‘norm’?
In a world ruled by law, I could see it. We live in a world ruled by force, by the gun.
Ben,
Think about the CTBT. In practice, the US accepts the norm after taking a leading role in trying to create it. But US ratification is in limbo. Would you agree that this is harmful to the global standing of the US?
MK
US ratification limbo appears to be an unfortunate political reality on this issue, among numerous others, at this point. The arguments in favor of ratification to strengthen norms could benefit the United States, but I have to wonder what other reasons explain why we have yet to see participation in the CCM.
One idea which comes to mind has to do with a comprehensive approach to the jurisdiction of warfare. With the detention of so-called unlawful enemy combatants, the Executive Branch has made a series of decisions increasingly designed to avoid international and domestic scrutiny. Approached from this mindset, do we still see a situation where the United States would benefit in terms of global standing, or would we be better off acknowledging different classes of standing?
Considering the goals of United States participation, the CCM might not offer the direct benefits that immediate ratification could allow, but perhaps there is more we can learn from what we have not seen. If we can identify the features which make opposing the use of cluster munitions impossible (for the United States), perhaps we could identify a manner by which these features could be circumvented.
The latest effort to outlaw weapons by “branding” them is the Campaign to Stop Killer Robots announced by Human Rights Watch and its coalition partners. For several years the International Committee for Robot Arms Control has called for a ban on autonomous weapons (AW), limits on teleoperated weapons, and bans on robotic nuclear and space weapons.
HRW builds its case primarily around arguments about the inability of robots to make the kinds of judgment calls required by international humanitarian law (IHL) or the law of war. It argues that AW will endanger noncombatants and shift the burden of risk in war away from soldiers and onto civilians.
However, these arguments are based on assumptions about technical limitations which are a function of time, and also about how AW will be used, in particular that AW would be used in ways that increase risk to civilians, contrary to the responsibility of humans. The Department of Defense has already responded with a Directive promising that AW will be thoroughly tested and used under rules, doctrines and “appropriate levels of human judgment” to ensure their compliance with all relevant humanitarian standards.
The arms control case for banning killer robots is that we stand at the cusp of automating warfare to a dangerous degree and propelling the world into an arms race that no one will win. As recently as five years ago, military sources routinely denied even considering the idea of “taking the human out of the loop”; now we have a DoD Directive on Autonomous Weapons, setting a policy which is basically full speed ahead.
Except that in some applications, autonomy has been standard for a long time.
CAPTOR mine, a mine with a torpedo in it, for anti-sub use. Sure, the target is a whole sub full of people, and not a Predator plus a small cluster of Afghan militiamen, but it’s autonomous in operation.
Plenty of fire-and-forget missiles. And torpedoes. Fired at long range, almost all anti-air missiles cruise to a location and then turn on, looking for a target. Hopefully the only aircraft in their target box at that point will be hostile military aircraft. Antiship missiles quite often cruise for some time before coming within radar range of their targets. What if there’s a cruise ship in the target box when they arrive? Torpedoes with 20+ mile range have a few-mile-range seeker and are fired with either inertial profiles or a wire-guidance rig so that they’re steered into the target box. If the wire’s cut, the standard setting is for the torpedo to keep going to the target box. Again, if there’s a cruise ship there when it arrives…
There’s somewhat more to the idea of letting the weapons pick their own target points, yes. At least with all of the above, a person said “go there” or “target anything coming here”. So that is an escalation.
George, you are absolutely correct about the CAPTOR mine, and there are other examples of existing weapons which meet any reasonable definition of AW (in particular, the Pentagon’s definition). Frequently cited are point-defense systems like Aegis, Iron Dome and C-RAM. These cases need to be reviewed. Some may be defined out of the class of Autonomous Weapons for the purposes of a treaty. Others may be explicitly allowed, subject to limits, or banned. My sense would be that CAPTOR is a fully autonomous offensive weapon which should be banned, but defensive, human-supervised, semi-autonomous weapons (DoDD 3000.09) that defend human-inhabited territory or vehicles should be allowed. Landmines should be defined out of the class of AW.
The missile examples you cite are very problematic but the resolution is probably as you suggest: as long as the weapons are targeted to a kill box and we accept that the responsibility to ensure that such kill boxes contain only combatants and legitimate military objectives falls on the people who make the decision to launch the weapons, we can treat these weapons as missiles.
In order for this definition to make sense, the size of kill boxes has to be limited, otherwise the robot goes on a hunt of unlimited extent after its arrival “on target”.
However, it is hard to object if, within a kill box of limited size, technology could spare noncombatants, if any are present, by directing lighter fire more precisely and discriminately. It would also be hard to object to the weapon being equipped with an option to abort or delay execution of its mission at the last moment in case it determines that the target identification was incorrect, or if it spots civilians (or humans) near the target. These would be evolutionary “improvements” from the point of view of safety and protection of noncombatants, but would move the weapon further in the direction of AW.
I think we need to think about where we are going if we decide it’s OK for machines to decide when and where and against whom or what to use violent force. First of all, the machines will make mistakes, because we are already doing this and the technology is still very crude. In fact, we are using drones to kill thousands of people and one quarter to one third of them are civilians. Someone may argue that robots that only make mistakes a third of the time are good enough.
But let’s say the technology gets really good. So good that only robots are qualified to fight wars, and only AI systems are qualified to direct them. You’ve seen Terminator so you know where this leads. Very funny, right? But that’s where the Pentagon is going with this. I like to say, “Stupid armed robots are dangerous. Smart armed robots are even more dangerous.”
International humanitarian law (IHL) demands that military forces discriminate between combatants and noncombatants and that reasonable judgment be used in weighing harm to civilians against military objectives. For robots to meet these requirements without direct human control would require artificial intelligence with human-like levels of sophistication and performance.
If anybody succeeds in creating such computers, I do not want them to be created to be soldiers and I do not want them to be put in control of weapons. In fact, I want them air-gapped from the internet, let alone any military systems.
Mark,
Interesting comments, but I feel compelled to challenge you on a few points.
First, I don’t think we’re really on the cusp of automating warfare. Targeting in a dynamic combat environment is extremely difficult at the best of times, and even state-of-the-art AI is woefully inadequate for anything beyond highly scripted and simplistic demonstrations. What we can expect is continuing improvement in precision, sensors and other secondary aspects of weapons design. We are, however, not approaching the capability for weapons to reliably find targets and confirm them as hostile, much less the ability for higher-order reasoning like the application of LOAC principles in a multitude of combat scenarios in the context of specific ROE. That is still the stuff of science fiction.
Secondly, drones. I hate that word, but it’s become the vernacular. Anyway, you inserted a comment about drones killing civilians in between sentences on robots and AI, implying that drones are somehow autonomous:
I would reverse the implication and state categorically that the armed drones in use today are some of the LEAST autonomous weapons systems we have today, subject to more C2 than just about any system short of nukes. There is, simply put, zero autonomy involved in any drone lethal action. Teleoperated weapons are actually subject to more controls and greater oversight than more traditional weapons. The civilians killed in those drone attacks are dead because of human actions and decisions, not because of any autonomy provided by drones (which doesn’t exist).
To explain further, drones provide greater human influence in kill decisions than most other weapons systems because of their capabilities. Drones can pipe a live video feed all over the planet, which allows decision makers (up to and including the President!), analysts and even lawyers to provide input before a decision to employ weapons is made. Drones can loiter for longer periods than other systems, which gives decision makers the option of waiting to strike at a more opportune time. A manned aircraft pilot and his/her wingman must make these judgments and decisions with less information than is available to drone crews on a much tighter timeline – and they must do so while piloting their aircraft in a combat environment. Plus, at least in the case of military drones used to support military forces (which is where I have experience), the video is recorded and archived along with reams of meta-data, which allows for easy investigation of any incidents. There aren’t dozens of people looking over an F-16 pilot’s shoulder as she flies her mission. Same for a rifleman on the ground or an artilleryman showering an unseen grid-square with HE rounds. Simply put, drones are not autonomous, they are not robots and they do not “make mistakes.” Since you brought up the Terminator, however, I’ll just say that the direction drones are taking us is closer to Ender’s Game (teleoperated forces via superluminal communication) than autonomous killing Schwarzeneggers.
Lastly, your point on kill boxes:
That is an impossible standard – there is, in most cases, no way to ensure a “kill box” only contains legitimate enemy forces. Besides, if we actually had that level of fidelity, we’d know precisely where the legitimate targets are and thus wouldn’t need kill boxes to begin with. But even the concept of this proposed legal restriction doesn’t make much sense to me. Restricting one class of weapon will mean that other weapons would be used instead – are the alternatives really better? I can’t see how a legal regime which restricts cruise missiles, yet allows massed artillery fire on the same target, makes any sense from a humanitarian perspective.
Andy,
Fully autonomous weapons already exist. The United States has just adopted a formal policy to develop, acquire and use autonomous weapons, full speed ahead. So you can argue about being “on the cusp” but we are certainly automating warfare, and not only at the level of weapons targeting and fire decisions. We are using automated battle management, command, control, communications, intelligence and logistics planning systems to run our wars. All this is real and there is a policy to continue it without limit.
Your point about drones being heavily overseen at present is correct, but to say there is no autonomy here is not correct. The drones have a great deal of technical autonomy, in some cases very nearly flying themselves in response to “go here” commands from the “pilots.” However, there has been a great deal of oversight of fire decisions involving drones, and in spite of this, a third of the victims are civilians. So, if robots could be that discriminating, some might think it good enough.
Some people propose that drone technology should be pursued with human-in-the-loop or human-on-the-loop by remote control to the greatest extent possible, in preference to full autonomy with artificial intelligence in control of fire decisions. However, difficulties with this approach may create a military necessity to avoid reliance on communications links, the limited speed of human reactions and the high bandwidth needed if humans are to process the raw sensor data. If pure machines can outperform human/machine hybrids in machine vs. machine combat, and are cheaper to feed and house than even cubicle warriors, it will be hard to justify continued adherence to human-on-the-loop, or systems with operators.
You write that
“there is, in most cases, no way to ensure a “kill box” only contains legitimate enemy forces.”
But this is unacceptable; IHL requires that nations not direct indiscriminate fire at civilians, and it is therefore your responsibility to know what you’re shooting at. If you are laying down a pattern of artillery, you need to have determined that civilians are not present in the locations affected. If you send in a robotic weapon that may kill anyone or say, any airplane that is present, you need to have made sure, to some reasonable degree, that there were no civilian persons or airplanes in the kill box. How you do this is your own business, but it is your responsibility to distinguish and weigh proportionality before using an indiscriminate weapon.
Mark,
I get the sense that we have different ideas of what is “fully autonomous.” You say that fully autonomous weapons already exist – how about a few examples?
As for the US military being “full speed ahead” that is one interpretation, but I think a closer look would show that the military is skeptical of fully autonomous systems. The US military certainly does favor systems which provide for quicker response, shorten “sensor-to-shooter” processes, provide more certainty in decision-making, etc. but that is much different than ceding engagement control, not to mention tactical and operational planning.
Drones have no more “technical autonomy” than any other aircraft and, if you’re talking something like a Predator or Reaper, the autonomy is actually less than what you’d get with a manned aircraft. I wouldn’t expect that to last, however, and no doubt that future drones will have comparable systems to manned aircraft. But again, just like with manned aircraft, we are a very long way away from drones being able to acquire, track and engage targets independently to say nothing of applying higher-order thought like the application of ROE to a given situation.
“If pure machines can outperform human/machine hybrids in machine vs. machine combat,”
That is a very, very big “if” and one I don’t think is likely anytime soon, if ever. “Spoofing” automated systems is still pretty easy and there are, indeed, lots of people who do nothing but discover and devise methods to exploit automation at all levels from a single weapon sub-system all the way up to integrated battle-management systems. This is one area where I think the sum is greater than the parts and I doubt that pure machines will ever stand a chance against humans teamed up with machines.
“But this is unacceptable; IHL requires that nations not direct indiscriminate fire at civilians, and it is therefore your responsibility to know what you’re shooting at. If you are laying down a pattern of artillery, you need to have determined that civilians are not present in the locations affected.”
It is rarely, if ever, possible to determine that civilians are completely absent before an attack is made. Nor does IHL state that attacks are forbidden whenever civilians are present or are likely to be affected by an attack. And knowing that civilians ARE present does not automatically mean that an attack cannot be made either. The decision to attack is based on military necessity and, as you note, proportionality.
Personally, my view is that fully autonomous systems are already illegal since they are incapable of applying these LOAC principles. Perhaps one day science fiction will become science fact and machines will be able to make such decisions, but that is a long, long way off. Today machines can barely navigate from point A to point B on their own.
Andy,
I mentioned a number of examples of existing AW in my previous reply. For the purpose of defining “autonomous weapon” we are interested in lethal autonomy, or the ability of the weapon system to autonomously make a lethal or otherwise fateful decision in the use of violent force. Here “autonomously” means simply that the system’s decision process is contained locally within the system; it may be observed remotely, but the action decision is not taken by a remote operator.
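To put that definition in schematic form, here is a rough sketch. The category labels follow the human-in-the-loop / human-on-the-loop / autonomous usage that comes up elsewhere in this thread, and the example classifications simply reflect how those systems are described in this discussion; none of this is an official taxonomy.

```python
# Rough sketch: classify systems by where the lethal decision is taken,
# not by how sophisticated the platform is. Labels and examples follow
# the discussion in this thread; this is an illustration, not doctrine.

from enum import Enum

class Control(Enum):
    HUMAN_IN_THE_LOOP = "a remote operator takes each fire decision"
    HUMAN_ON_THE_LOOP = "the system selects and engages; a human supervises and can veto"
    AUTONOMOUS = "the fire decision is contained locally within the system"

examples = {
    "Predator/Reaper strike": Control.HUMAN_IN_THE_LOOP,
    "Aegis or C-RAM in automatic mode": Control.HUMAN_ON_THE_LOOP,
    "CAPTOR mine": Control.AUTONOMOUS,
}

for system, mode in examples.items():
    print(f"{system}: {mode.name} ({mode.value})")
```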
I recognize that there has been considerable reluctance within the US military in the past to embrace autonomous weapons such as LOCAAS and NetFires. However, the policy set by DoDD 3000.09 is clearly full-speed ahead, regardless of any potential issues with safety, IHL compliance, or concerns about an arms race.
I don’t know what you mean saying that for “Predator or Reaper, the autonomy is actually less than what you’d get with a manned aircraft.” This certainly is not true for manned combat aircraft. Even if a computer intervenes between the controls and the flaps, there is a very tight loop between the pilot’s actions, the plane’s response, and the pilot feeling the plane’s response.
Sure, autopilot systems are very good. That’s one of the technical enablers for drones which explains why we have them now and didn’t 20 years ago.
You are probably right that medium altitude long endurance (MALE) drones like Predator and Reaper aren’t going to be picking out their own targets any time soon. This wouldn’t make much sense for the wars we’re using them in. But UCAVs like the X-47 and nEUROn will be expected to enter contested airspace. Communications links will be vulnerable, and rapid responses may be required when threats are detected.
Ground and urban combat are another arena where autonomous robots may outperform teleoperated robots at least in the pitch of battle. It is especially hard to maintain comms links with underwater vehicles, which accounts for the Navy’s keen interest in autonomous weapons.
Mark,
“However, the policy set by DoDD 3000.09 is clearly full-speed ahead, regardless of any potential issues with safety, IHL compliance, or concerns about an arms race.”
I think we’ll have to agree to disagree on that one, as I don’t see your interpretation in the directive at all. To me it clearly says at the outset that humans must be in the loop, and most of the rest is concerned with preventing systems from making mistakes (“unintended engagements”).
“I don’t know what you mean saying that for “Predator or Reaper, the autonomy is actually less than what you’d get with a manned aircraft.” This certainly is not true for manned combat aircraft. ”
Predator and Reaper are remotely piloted aircraft and by modern standards, they lack many of the sophisticated avionics and automated features of manned aircraft. With the exception of lost-link procedures, they have very little actual autonomy.
“Sure, autopilot systems are very good. That’s one of the technical enablers for drones which explains why we have them now and didn’t 20 years ago.”
Well, not really. We’ve had reliable autopilots for quite a lot longer than 20 years. And we had remotely piloted drones as far back as Vietnam.
“But UCAVs like the X-47 and nEUROn will be expected to enter contested airspace. Communications links will be vulnerable, and rapid responses may be required when threats are detected.
Ground and urban combat are another arena where autonomous robots may outperform teleoperated robots at least in the pitch of battle.”
The problem of communication links is one I often point out to those who think all manned aircraft can be replaced by remotely piloted aircraft. By contrast, here you argue that the reliability of comm links will be a driver for more autonomy. I tend to go the other direction and think it will be a driver for maintaining human primacy in warfare. The reason is that autonomous systems aren’t remotely on the cusp of replacing humans in war in terms of capabilities.
To replace humans in ground and urban combat they have to be able to reliably get from point A to point B just to start. Robot ground vehicles are currently barely able to navigate a predetermined route without crashing or breaking down. That is a fundamental function that robots have yet to master. Once they actually reach an area where the enemy might be located, they must be able to reliably identify friendly, enemy and civilian personnel, vehicles, structures, etc. and then engage appropriate targets with appropriate weaponry consistent with tactical objectives, the commander’s intent and applicable ROE. Harder yet are real, adaptive tactics, the integration of tactics with operations and all the other higher-order tasks involved in warfare. We are a long, long, long way from a future with “Terminators.”
Andy,
First, as a matter of fact: DoDD 3000.09 does not “clearly says at the outset that humans must be in the loop”. To the contrary, the Directive pre-authorizes the development, acquisition and use of non-kinetic anti-materiel weapons in which humans are fully out of the loop and the weapons detect, identify, authenticate and engage targets autonomously. It also sets up an approval process for kinetic and anti-personnel weapons which are fully autonomous and unsupervised, and which could in principle decide which humans to kill and when. I do not agree to disagree about this. It is not a matter of interpretation. The language of the Directive is unambiguous.
As to why the vulnerability of comms (not a simple subject; comms can be hardened) points to more autonomy rather than “maintaining human primacy in warfare,” the latter comes at the price of human vulnerability. There is a big push to get “warfighters” out of harm’s way. Today’s technology may still look crude, but it is creating great excitement. Underlying trends of exponential gains in information technology continue to unfold.
So, I’m not sure what “We are a long, long, long way from a future with “Terminators.”” means, but we have killer robots in the present and lots more coming in the near future, and if we ask what kind of future that leads to, Terminator is one kind of answer.
As a point of honest curiosity, what’s the issue?
If a military is dropping weapons it is rationally expected that they are diligently trying to kill their targets (and destroy associated materiel).
Due to geometric attenuation effects, a weapon obtains significantly larger kill effect with properly deployed submunitions than with unitary warheads of equivalent explosive mass. If the concern is unexploded ordnance becoming a persistent hazard to non-combatants post-conflict, then to prevent that issue, don’t you really want to argue for a ‘reliability standard’ that all submunitions provide prompt explosive yield?
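For those who want to see the geometric point in numbers, here is a back-of-the-envelope sketch. It assumes lethal radius scales with the cube root of charge mass, a standard blast-scaling approximation, and the total mass and scaling constant are purely illustrative.

```python
# Back-of-the-envelope comparison: one unitary warhead versus the same
# explosive mass split into N submunitions. Assumes lethal radius scales
# as (charge mass)**(1/3); the constant k and the masses are illustrative.

import math

def lethal_area(mass_kg, k=5.0):
    """Approximate lethal area (m^2) for a single charge of mass_kg."""
    radius = k * mass_kg ** (1.0 / 3.0)   # cube-root blast scaling
    return math.pi * radius ** 2

total_mass = 200.0  # kg of explosive, hypothetical
for n in (1, 10, 100):
    area = n * lethal_area(total_mass / n)
    print(f"{n:>3} submunitions: ~{area:,.0f} m^2 covered")

# Total covered area grows roughly as n**(1/3): splitting the same mass
# into 100 submunitions covers several times the area of one warhead,
# which is the geometric advantage described above.
```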
We certainly aren’t talking about things like the rationale behind banning chemical munitions (I understand that studies determined that, tonne for tonne, high explosives had far greater military effect), as those bans were in the interests of all the militaries concerned.
The conventions against biological agents are even more in the common-sense category, as such agents have extremely dubious military effect and would present extreme complexity for attempted use in terms of logistics and support. Again, in the interests of the militaries concerned.
Nuclear explosives are probably too large (and have numerous undesirable persistent side effects) for any rational conflict on a planet that you continue to occupy. So conventions, treaties and protocols restricting their use are sensible for all parties.
What I am asking is: if you are to consider a treaty restricting a type of weapon, why would the militaries that rationally benefit from that weapon type accept a limitation on a legitimate means of military effect in conflict?
I am not looking for a moral argument, just a rational one that explains why submunitions / cluster bombs are bad for the user that drops them.
j_kies writes:
If the concern is unexploded ordnance becoming a persistent hazard to non-combatants post-conflict, then to prevent that issue, don’t you really want to argue for a ‘reliability standard’ that all submunitions provide prompt explosive yield?
That was tried; the more reliable weapons used recently weren’t as improved as their target reliability levels indicated they should be.
Inherently self-safing weapons, such as ones where environmental exposure causes the explosive to desensitize itself, say, may be another story, but they haven’t been tested much yet.
Reactive fragments may be another story. Well, are another story – the materials are flammable but not detonable afterwards, if any don’t impact fast enough to trigger. So there’s no EOD hazard in the traditional sense.
But a case can be made that traditional cluster munitions aren’t getting to “reliable enough” fast enough.
Classical cluster munitions used very large numbers of “bomblets.” Hundreds per unit, tens of thousands in an airstrike, millions per war. So, if even a small fraction of the safety devices fail (in practice it is on the order of 10%), live munitions are left in the soil. Decades after the war, people are still dying and losing hands and feet.
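To make the scale of the dud problem concrete, here is a toy calculation using round numbers of the same order as those above; none of the figures refer to any specific weapon or conflict.

```python
# Order-of-magnitude dud arithmetic using the round numbers above.
# All figures are illustrative, not drawn from a specific conflict.

bomblets_per_unit = 200   # "hundreds per unit"
units_per_strike = 100    # tens of thousands of bomblets per airstrike
strikes_per_war = 100     # millions of bomblets per war
dud_rate = 0.10           # safety-device failure on the order of 10%

bomblets_total = bomblets_per_unit * units_per_strike * strikes_per_war
unexploded = int(bomblets_total * dud_rate)

print(f"Bomblets dispensed: {bomblets_total:,}")       # 2,000,000
print(f"Live duds left in the soil: {unexploded:,}")   # 200,000
```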
This has no practical effect on those who have no shame.
The modern ultra-reliable sensor-fuzed cluster bomb units carry only 10 submunitions, each of which is multiply safed. They are targeted on materiel, not personnel. As a result, such weapons are explicitly excepted from the Convention on Cluster Munitions.
The argument is a rational and moral argument – removing an indiscriminate and lingering threat.
But I’m not entirely convinced of anti-CCM measures. They seem to me to be a reaction to a rather particular legacy of munitions development and production, which has subsequently departed of its own accord. Cold war miniature DPICMs have passed into history along with their targets, massed armor formations crossing the Fulda Gap.
Optimum sizing for cluster munitions has increased because of target evolution. I’d say up to around the equivalent of a 105mm shell, providing for structure demolition and useful long-range fragmentation. An important coincidental result of these larger submunitions is larger, more reliable fuzing, which does away with the original problem. And I haven’t even begun to consider the rise of unitary thermobarics, the future of anti-personnel weapons IMHO.
JO-
What makes you think the cluster bomb “departed of its own accord”? As far as we can tell, some countries still use them, and only relentless pressure from the CCM states has managed to cage their use and production by the refractory powers. The “sensor-fuzed munition” opens the door to a new world of issues and weapons.
I guess you’re right: for indiscriminate slaughter, “unitary thermobarics” are a reliable standby.
Mark Gubrud –
In the last 30 years the target environment changed. In the Cold War, the targets envisaged were a column of 100 vehicles, most of them armored; an airbase; or hundreds of troops, lightly dug in or on the move.
Now the target is a vehicle, or 2 or 3 vehicles (usually soft), a single building from which fire is coming, a fortified compound or a machine gun post above a bunker. The old-style cluster bomb is too big.
And the daisy-cutter effects of close-bursting small munitions aren’t effective in built-up areas or against reasonable fortification and entrenching. Their kill outcomes are spotty: they are best against massed troops in the open without body armor, which is not what militaries currently face. Attacking huge columns of armor with unguided tiny AT bombs is almost unthinkable now.
The optional fusing on modern cluster type weapons allows for building demolition – this is probably the most important advantage of larger submunitions.
Consider this: a 250 lb bomb that stays within the CCM can have 9 programmable fuzed units of 10 kg each, 105mm equivalent (actually better, given the greater HE and directed fragmentation). The programmability lets you optionally, at the point of release, level the building instead of going for fragmentation. The fragmentation can be directed down into fortifications and sized to defeat body armor. And really this theoretical example is total overkill: what target needs 10 rounds of 105mm in this day and age?
The new stuff is just all-around improved, and it doesn’t test CCM treaty limits at all; rather, it’s going in the other direction.
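As a rough check of that theoretical example against the treaty text, here is a sketch. The conditions below paraphrase the cumulative exemption criteria of Article 2 of the Convention on Cluster Munitions (fewer than ten submunitions, each over 4 kg, each designed to detect and engage a single target object, each with electronic self-destruction and self-deactivation), and the weapon parameters are the hypothetical 250 lb example above, not a real system.

```python
# Sketch: check a hypothetical dispenser against a paraphrase of the
# cumulative exemption conditions in Article 2 of the Convention on
# Cluster Munitions. The weapon below is the hypothetical example from
# the comment above, not a real system.

from dataclasses import dataclass

@dataclass
class Dispenser:
    submunition_count: int
    submunition_mass_kg: float
    detects_single_target: bool
    electronic_self_destruct: bool
    self_deactivating: bool

def outside_ccm_definition(w: Dispenser) -> bool:
    """True if the dispenser meets every exemption condition."""
    return (w.submunition_count < 10
            and w.submunition_mass_kg > 4.0
            and w.detects_single_target
            and w.electronic_self_destruct
            and w.self_deactivating)

hypothetical = Dispenser(9, 10.0, True, True, True)
print(outside_ccm_definition(hypothetical))  # True: not a "cluster munition" under the CCM
```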
There is a straightforward technical solution for safing cluster munitions (and landmines, for that matter) after a time, borrowing some safing concepts from nuclear weapons.
Consider a munition containing only an insensitive high explosive (TATB) and using an exploding foil detonator to generate the initiation shock. The munition is powered by a supercapacitor which is charged before weapon delivery and has, in addition, a built-in power drain that draws the power down so that the munition goes inert over a set interval.
Such a device cannot explode without having adequate capacitor power, and cannot remain live indefinitely.
There is an uncertainty distribution about the average “dead capacitor” time, but power is finite and it will go dead. Even if the drain did not work somehow (but if it is an integral part of the firing circuit it is hard to see how), the supercapacitor itself self-discharges over the course of a month or so (you could probably deliberately design them to discharge faster).
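To illustrate the timing, here is a sketch assuming a simple exponential self-discharge model, V(t) = V0 * exp(-t/tau). The initial voltage, firing threshold and time constant are made-up numbers, not specifications for any real supercapacitor or exploding foil detonator.

```python
# Sketch of the "dead capacitor" timing idea, assuming exponential
# self-discharge V(t) = V0 * exp(-t / tau). All numbers are illustrative,
# not specs for a real supercapacitor or exploding foil detonator.

import math

V0 = 2000.0          # volts on the firing capacitor at weapon release
V_FIRE_MIN = 1200.0  # minimum voltage assumed for the detonator to fire
TAU_DAYS = 10.0      # effective discharge time constant set by the drain

def days_until_inert(v0, v_min, tau_days):
    """Days until the capacitor can no longer fire the detonator."""
    return tau_days * math.log(v0 / v_min)

print(f"Goes inert after ~{days_until_inert(V0, V_FIRE_MIN, TAU_DAYS):.1f} days")
# With these numbers, about 5 days. Sizing the drain sets tau, and hence
# how long a dud could remain live after delivery.
```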
Carey, your observations about IHE and electrical foil or EBW-type initiators describe one of the technology directions that was being pursued last I looked. My concern with it was whether the IHE would remain chemically stable in the soil, given possible reactions with acidic or alkaline soils, heat and cold, etc. It at least has the advantage of not having an eternally poised fuze system.
The moral dimension is tenuous. We are talking about killing people efficiently in a cost-effective manner.
Does it matter if somebody dies in face-to-face combat (a la Vietnam, where you see the guy and you wonder if he’s got a family, mother, etc., but can’t afford to hesitate), or if you push a button flying at 10,000 meters to kill people whose faces you don’t see, so you don’t feel guilty? And if the people die of bayonet or shrapnel wounds or radiation or blast or gas, does it really matter?
Combat is dehumanizing, and I’m sorry to say, you either kill or you’re killed…
I remind myself of Auschwitz, where officers would supervise the factory of death during working hours, and then go home to their family to play with the kids and play Beethoven at the piano…
Ara writes:
The moral dimension is tenuous. We are talking about killing people efficiently in a cost-effective manner.
Restricted only in time and place to the battlefield, perhaps. But UXO is a gift that keeps on giving for decades, and cluster bomblets are insidious like mines in terms of size and likelihood of civilian interaction long past the war.
Ara
I am a weaponeer, and I am unashamed to beat plowshares into swords to address the needs that my extended family / tribe / society has for organized violence. I am responsible for assuring that the weapons I build achieve known and constrained results when applied appropriately. I perform research on appropriate constraints to weapon effects, including asking people philosophically opposed to such views in order to attain better insights into the issues and arguments. I learn from such exchanges, and I hope it improves my work, as a battlefield victory that leads to an unsustainable peace wasn’t much of a victory.
One of my mentors had a numeric tattoo, so I do not appreciate certain comparisons; they are inappropriate and possibly intellectually lazy.
The Nazis recognized the debilitating effect that the factory slaughter had on their own people, so they organized the victims themselves to perform the tasks, hence their deserved status as purest evil.
j_kies
You are missing the point, i.e. that human nature is very malleable and fragile given the circumstances, and the line between good and evil is very thin.
It is “easier” to kill somebody you don’t see than to look the other guy in the eye before you blow his brains out, so the distinction is not “academic”. Some of our drone operators who view the mayhem experience the same type of psychological damage as front-line soldiers…
As far as cluster bombs vs. other instruments of death, we are playing semantics; it is in the eye of the beholder. It is the same as the exaggerated claims Hollywood and Silicon Valley “instant experts” make on the subject of water or fracking while they live in utter luxury and a fantasy world, and they pour millions/billions into the 3rd world, while we have miserable conditions right here at home, like in Appalachia… but they and the news media are not interested enough to see or care…
I have been at Auschwitz, so please, no cheap shots. My point was to illustrate the dichotomy of human morality/immorality.