
Communications of the ACM


When Will We Learn?

Every major software incident requires a thorough and public analysis (#6)

Bertrand Meyer

"#6" is a lower bound on the number of times I have made this argument here, starting almost ten years ago. I have lost count (see a few samples here) but am going to continue ad nauseam (or ad kickedoutam). The argument, to quote myself once more, was (abridged):

Airplanes today are incomparably safer than 20, 30, 50 years ago: 0.05 deaths per billion kilometers. That’s not by accident. Rather, it’s by accidents.

What has turned air travel from a game of chance into one of the safest modes of traveling is the relentless study of crashes and other mishaps. ...

Now consider software. No week passes without the announcement of some debacle due to "computers" — meaning, in most cases, bad software... Software disasters attract attention when they arise, and inevitably some kind of announcement is made that the problem is being corrected, or that a committee will study the causes; almost as inevitably, that is the last we hear of it...

Only with a thorough investigation of software projects gone wrong can we help the majority of projects to go right.

Whether it is a law as suggested then or simply an obligatory professional practice, a change of attitude remains as necessary and urgent as ever.

Latest example (thanks to Martin Robillard for tweeting about it): on May 7th, the first test of Canada's brilliant new nationwide alert system failed. See here. Let us refrain from commenting (beyond a couple of exclamation marks) on the supposed cause: a "coding error" (!) that could not cope with an extraneous space (!) in an input. But if you are thinking "do I not know about this already?" you are right on substance if not on history and geography. The article recalls that an eerily similar incident happened a few months ago in Hawaii, and goes on to explain:

"The [Canadian regulatory agency] has no insights with respect to what occurred in Hawaii, other than what has been reported in the media," the regulator said.

But it added that Canada has safeguards in place to prevent false signals from being distributed to mobile devices.

In other words: we do not know about history and are cheerfully doomed to repeat it.
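The article gives no details of the code, but the reported failure mode, an extraneous space in an input defeating the system, is the classic argument for normalizing inputs before validating them. A minimal hypothetical sketch in Python (the field name and format are invented for illustration; this is not the actual alert system's code):

```python
def parse_alert_field(raw: str) -> str:
    """Normalize an input field before validating it.

    Stripping surrounding whitespace first means a stray
    leading or trailing space no longer causes a spurious
    rejection (or worse, a crash) downstream.
    """
    value = raw.strip()  # tolerate extraneous spaces
    if not value:
        raise ValueError("empty field")
    return value

# A value with a stray trailing space still parses cleanly:
print(parse_alert_field("ALERT-TEST "))
```

The point is not the one-line fix but the process: without a public postmortem, the next alert system will make the same mistake.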

Let me place a bet: as usual, this will be the last we hear of the matter, although sure enough there will be some inquiry, which will point to "human error". So long (until installment #7).


Antonio Costa

Nice article. Are we doomed to repeat mistakes in software over and over?

