
Communications of the ACM

Viewpoint

Tony's Law



Credit: Andrij Borys Associates / Shutterstock

Someone did not tighten the lid, and the ants got into the honey again. This can be prevented by placing the honey jar in a saucer of water, but it is a nuisance, occupies more counter space, and one must remember to replenish the water. So we try at least to remember to tighten the lid.

In the context of security, the software industry does not always tighten the lid. In some cases it fails to put the lid on at all, leaving the honey exposed and inviting. Perhaps the most infamous example of recent years is the WINvote voting machine, dubbed the worst voting machine in the U.S. A security analysis by the Virginia Information Technologies Agency in 2015 found, among other issues, that the machines used the deprecated WEP encryption protocol, that the WEP password was hardcoded to "abcde," that the administrator password of the underlying Windows XP (which had not been patched since 2004) was set to "admin" with no interface to change it, and that the votes database was not secured and could be modified.7 These machines had been used in real elections for more than 10 years.

Such cases constitute malpractice and call for regulation. Regulation is necessary because not everything can be trusted to market forces, as many other industries demonstrate. The sale of alcohol to minors is prohibited. Construction and housing may not use asbestos or lead-based paint, due to public health concerns. The automotive industry is required to install seat belts and report pollution levels. Aviation is strictly regulated, including airspace utilization (distances between planes), aircrew work schedules, aircraft noise levels, and more. Advertisers are required to place warning labels on advertisements for cigarettes and other tobacco products.

Computers are regulated in terms of electrical properties, such as the FCC regulations on radiation and communication. But the software running on computers is not regulated. Nearly 40 years ago, in his ACM A.M. Turing Award acceptance speech, Tony Hoare had the following to say about the principles that guided the implementation of a subset of Algol 60:2 "The first principle was security. [...] A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at runtime against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law." [emphasis added].
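To make the kind of check Hoare describes concrete, the following C sketch shows a runtime subscript check that fails loudly rather than reading or corrupting adjacent memory. It is a minimal illustration only: the helper checked_get and its abort-on-error policy are assumptions made here for the example, not part of Hoare's Algol implementation or of any real compiler or library.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative sketch of a runtime subscript check in the spirit of the
       checks Hoare describes. The name checked_get and the abort-on-error
       policy are hypothetical choices made for this example. */
    static int checked_get(const int *arr, size_t len, size_t i)
    {
        if (i >= len) {
            /* Subscript outside the declared bounds: fail loudly instead of
               silently reading (or corrupting) adjacent memory. */
            fprintf(stderr, "subscript %zu out of bounds [0, %zu)\n", i, len);
            abort();
        }
        return arr[i];
    }

    int main(void)
    {
        int counts[4] = {1, 2, 3, 4};
        printf("%d\n", checked_get(counts, 4, 2));  /* in bounds: prints 3 */
        printf("%d\n", checked_get(counts, 4, 9));  /* out of bounds: aborts */
        return 0;
    }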

Hoare said this when personal computers and the Internet were in their infancy, long before the Web, DDoS attacks, and data breaches. Indeed, a lot has changed during this time (see Table 1). But one thing that has not changed is the lack of any meaningful regulation on the software industry.

Table 1. Changes in software and computing in the last 30 years.

In retrospect, Hoare's pronouncement exhibited great foresight. To this day buffer errors represent the single most common class of vulnerability,a even more so among high-severity vulnerabilities (see Figure 1 and Figure 2). Just imagine that a law requiring bounds checks had been enacted more than 40 years ago, and that consequently there were no buffer overflows today. As it stands, Microsoft, for one, instituted its Security Development Lifecycle as a mandatory policy in 2004. This includes—among many other features—the option to require compilation with flags that insert bounds checks and the option to ban unsafe library functions. On the one hand, this demonstrates that such practices are just a matter of deciding to use them. On the other hand, they are still not universally required, and indeed even Microsoft products still occasionally suffer from buffer issues.b
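As a hedged sketch of what banning an unsafe library function looks like in practice, consider the C fragment below: an unbounded string copy is replaced by one that respects the destination size. The helper copy_name is hypothetical, and the hardening flags mentioned in the comment (GCC/Clang -fstack-protector-strong and -D_FORTIFY_SOURCE=2 with optimization enabled) are common examples of compiler-inserted checks, not a description of Microsoft's specific toolchain or policy.

    #include <stdio.h>

    /* Hypothetical helper illustrating the safer pattern. Compiler hardening
       flags such as -fstack-protector-strong or -D_FORTIFY_SOURCE=2 (with
       optimization) can add further runtime checks around such code. */
    static void copy_name(char *dst, size_t dst_size, const char *src)
    {
        /* Banned pattern: strcpy(dst, src); overflows dst when src is longer. */
        snprintf(dst, dst_size, "%s", src);  /* truncates instead of overflowing */
    }

    int main(void)
    {
        char buf[8];
        copy_name(buf, sizeof buf, "a deliberately over-long input");
        printf("%s\n", buf);  /* buf holds at most 7 characters plus the '\0' terminator */
        return 0;
    }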

Figure 1. The number of software vulnerabilities cataloged by the NIST National Vulnerability Database skyrocketed in 2017, and the fraction of vulnerabilities involving buffers (either categorized as "buffer error" or containing the keyword "buffer") kept pace.

Figure 2. According to the National Vulnerability Database, since the beginning of the decade approximately 15% of all vulnerabilities have been related to buffer errors, and this rises to between one-quarter and one-third of the vulnerabilities if only those with a high severity score are considered.

Similar sentiments have been repeated several times since Hoare's speech. Twelve years ago, ACM President David Patterson put forward the "SPUR manifesto,"3 suggesting the development of 21st-century computer (software) systems should focus on security, privacy, usability, and reliability—SPUR. The goal should be to be as safe as 20th-century banking, as low maintenance as 20th-century radio, and as reliable as 20th-century telephony. But more than a decade has passed, and it seems the focus on low cost, multiple features, and above all time to market is as strong as ever. Manufacturers of home appliances compete, among other ways, by offering superior warranties for their products. The software industry, in contradistinction, has been getting away with software that comes "without warranty of any kind, expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose."

Indeed, lectures such as Patterson's are typically either ignored or met with a chorus of naysayers. The typical arguments are the perceived monetary costs, the difficulty or even impossibility of implementation, and the fear of reduced innovation and technological progress. Schneider, in a recent Communications Viewpoint, also notes the need for a detailed cost/benefit analysis to ascertain what society is willing to pay for improved security, where the costs also include reduced convenience (due to the need for authentication) and functionality (due to isolation).4 And indeed all regulations are, by definition, limiting. But do we really need to wait for a large-scale security catastrophe, possibly including significant loss of life, before we act at all? As the Microsoft example shows, extensive technological solutions and best practices already exist. It is just a matter of making their use pervasive.

So why are software security faults tolerated? A possible explanation is that software deficiencies have so far been less tangible than those of traditional industries. Many people install multiple locks on their doors and would consider confronting intruders in their homes at gunpoint, yet fail to take sufficient safeguards to protect their home computers from hackers. The problems resulting from identity theft are much more common, but they are also bureaucratic, boring, and far less visual than dramatic failures such as exploding gas tanks in pickup trucks.

But above all else, it seems there is a market failure in incentivizing the industry to take the required actions.1,6 Buyers will not pay a premium for value (security) they cannot measure, and which in many cases does not affect them personally and directly. Approaches suggested by economists to measure the value of protection do not help, because the cost of a security catastrophe is anyone's guess. This has prevented an insurance industry for software producers from emerging, and as Anderson and Moore write, "if this were the law, it is unlikely that Microsoft would be able to buy insurance."1 In practice, the reduction in a company's stock value after it discloses a vulnerability is less than 1%.5 The abstract danger of large-scale attacks leading to financial loss and even loss of human life is not enough to change this.

At the same time, we are inundated by ever more reports of data breaches and of hackers infiltrating various systems (see Table 2 for prominent recent examples). Some of these incidents demonstrate that extensive physical civil infrastructures are at risk across the globe, including hospitals, power plants, water works, transportation systems, and even nuclear facilities. And the root cause, at least in some cases, is the failure of the software's developers to take appropriate precautions.

Table 2. Notable security incidents, 2007–2017.

The software systems in a modern car—not to mention a passenger plane or a jet fighter—are of a scope and complexity that rival any operating system or database produced by the traditional software industry. Indeed, every industry is now a software industry, and the products of every industry are vulnerable due to software defects. In such a context, the required software regulation includes:

  • Transparency: the obligation to investigate and report all exploits, including their technical details.
  • The prohibition of dangerous practices, such as failing to use type-safe languages or appropriate encryption.
  • Holding companies accountable for their unsafe practices.

These requirements need the backing of legal regulation, because market forces push the industry not to invest too much in security. The market promotes a race to the bottom: except in niche applications, whoever is faster to market and cheaper wins, and whoever is tardy due to extra investment in security loses. Regulation is the only way to level the playing field, forcing everybody to invest in what they know is needed but feel they cannot afford as long as the competition does not invest either.

Of course, it will not be easy to implement these ideas and to agree on the myriad details that need to be settled. Who gets to decide what is a "dangerous practice"? How do we deal with installed systems and legacy code? Who is charged with enforcing compliance? Moreover, it is not clear how to make this happen at the political level, and no single country has jurisdiction over all software production. A system of certification is therefore also required, to enable software developers to identify reliable software and to perform due diligence in selecting what other software to use.

International frameworks already exist that demonstrate these issues can be solved. The EU General Data Protection Regulation (GDPR), which concerns the rights of individuals to control how their personal information is collected and processed, is an encouraging example. Another is the Common Criteria for Information Technology Security Evaluation, an international framework for the mutual recognition of secure IT products. But it covers only high-level desiderata for security, not the regulation of low-level technicalities. This gap is partly filled by the Motor Industry Software Reliability Association (MISRA), which has defined a set of suggested safe coding practices for the automotive industry. However, these practices are not required by any formal regulation.




Protracted discussions on what to do and what we are willing to pay for are counterproductive. Such things cannot be planned in advance. Instead we should learn from the iterative approach to constructing software: try to identify the regulations that promise the highest reward for the lowest cost, work to enact them, learn from the process and the results, and repeat.

Regulation is in the interest of the long-term prosperity of the software industry no less than in the interest of society as a whole. Software vendors with integrity should stop resisting regulation and instead work to advance it. The experience gained will be extremely important in discussing and enacting further regulations, both in a preemptive manner and—in the worst-case scenario—in the aftermath of a security catastrophe.


References

1. Anderson, R. and Moore, T. The economics of information security. Science 314, 5799 (Oct. 26, 2006), 610–613; https://bit.ly/2GctSYd.

2. Hoare, C.A.R. The emperor's old clothes. Commun. ACM 24, 2 (Feb. 1981), 75–83; DOI: 10.1145/358549.358561.

3. Patterson, D.A. 20th century vs. 21st century C&C: The SPUR manifesto. Commun. ACM 48, 3 (Mar. 2005), 15–16; DOI: 10.1145/1047671.1047688.

4. Schneider, F.B. Impediments with policy interventions to foster cybersecurity. Commun. ACM 61, 3 (Mar. 2018), 36–38; DOI: 10.1145/3180493.

5. Telang, R. and Wattal, S. An empirical analysis of the impact of software vulnerability announcements on firm stock price. IEEE Trans. Softw. Eng. 33, 8 (Aug. 2007), 544–557; DOI: 10.1109/TSE.2007.70712.

6. Vardi, M.Y. Cyber insecurity and cyber libertarianism. Commun. ACM 60, 5 (May 2017); DOI: 10.1145/3073731.

7. Virginia Information Technologies Agency. Security assessment of WINvote voting equipment for Department of Elections (Apr. 14, 2015); https://bit.ly/2EgvBct.


Author

Dror G. Feitelson (feit@cs.huji.ac.il) is the Berthold Badler Chair in Computer Science at The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel.


Footnotes

a. The NIST National Vulnerability Database uses 124 of the nearly 1,000 types listed in the Common Weakness Enumeration to categorize vulnerabilities. In 2015–2017, buffer errors (CWE-119) accounted for 15.2%–18.4% of all vulnerabilities each year. The next highest categories were information leak/disclosure (CWE-200) at 9.3%–10.9%; permissions, privileges, and access control (CWE-264) at 8.2%–10.0%; and cross-site scripting (CWE-79) at 7.3%–11.2%.

b. One example: Microsoft Office Equation Editor stack buffer overflow; see https://bit.ly/2zTngss


Copyright held by author.


Comments


Lawrence Brunelle

While the author has truly stated many of the problems with which we are beset, I find I cannot agree with regulation as the solution, any more than with the many-times-promoted "solution" of certification.

Here's why.
Regulation and certification are both concepts dependent on someone (well, a large body of agreed someones) bearing the correct knowledge AND authority to compel correct behavior. There are indeed bodies who CLAIM to have the knowledge. My observation is that they do AND they don't. Certain prescriptive practices are perhaps appropriate for one context and quite the opposite for some others. We have all seen the ghastly results of municipal ordinances run amuck, because the local governing body was not competent to write them, and we have seen (all too few) examples to the contrary. We can't reasonably expect government regulators to get it right any more frequently.

Don't believe me? Think of regulations as requirements. We all who write software know (or should) that if we want to regard a requirement as meaningful, we ought to be able to test it. And yet, extracting conditions of satisfaction, stated meaningfully so as to resolve to a set of tests, is frequently the greatest part of the development effort. And that is when dealing with those whose business interest is in gaining reliable software producing known and desirable output. Some regulator working for some government has less interest than that, and may be perfectly happy to shut down a whole company - it isn't HIS livelihood.

But supposing his diligence and good faith, and even his competence, when and how will he and his fellows examine any body of software? Are we going to simply examine only the gross failures? Maybe we insist on a record of tests passed? What would actually be effective? Let's not forget that, as the author says, "every industry is now a software industry." That means that the scope of software development is beyond measurement. What regulator will have THAT scope of knowledge?

But who is the most likely to know a body of code and what its vulnerabilities are? That would be the very people who develop it, who are intimate with it. And I report that, after over 20 years of writing C++ on various Unix flavors, in a number of firms, I find that the best defense there would be people who actually CARE about the work they produce. Yes, the technical education is indispensable. And a decent testing environment is of great importance. But the people you want are those who will look for and find a way to know they got the code right, no matter the methodology. That is a better solution than certification or regulation. And it has this additional virtue: it is attainable, today.


