
Communications of the ACM

Privacy and security

The Research Value of Publishing Attacks



Credit: Stuart Bradford

Information security is booming. Companies are making money selling fear and countermeasures. The research community is also extremely active, churning out papers featuring attacks on systems and their components. This includes attacks on traditional IT systems as well as IT-enhanced systems, such as cars, implantable medical devices, voting systems, and smart meters, which are not primarily IT systems but have increasing amounts of IT inside. Moreover, any new paper on analysis methods for critical systems is now considered incomplete without a collection of security-relevant scalps on its belt. Pretty much every system imaginable, critical or not, is now a target of attacks.

There are good reasons for this trend. Fear sells! Headlines are good for conference attendance, readership, and tenure cases. Moreover, negative messages about successful attacks are simple and understandable by the general public, much more so than other research results. And security and insecurity are, after all, two sides of the same coin.


Seek and Ye Shall Find

Systems have bugs and large, complex systems have many bugs. In its recent analysis of open source projects, Coverity2 used a static analysis tool to find 16,884 defects in approximately 37.5 million lines of source code from well-managed open source projects, which is approximately 0.45 defects per 1,000 lines of code. These were medium- to high-risk defects, including typical security-critical vulnerabilities such as memory corruption problems and API usage errors. For large-scale projects, developers cope with the seemingly infinite number of bugs in their products by employing triage processes to classify which bugs they work on and which they ignore. There are simply too many to address them all.
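As a quick sanity check on these figures (the numbers are those of the Coverity report cited above), a few lines of Python reproduce the arithmetic:

    # Defect density from the 2011 Coverity Scan report:
    # 16,884 defects in approximately 37.5 million lines of code.
    defects = 16_884
    lines_of_code = 37_500_000

    density_per_kloc = defects / (lines_of_code / 1_000)
    print(f"{density_per_kloc:.2f} defects per 1,000 lines of code")  # prints 0.45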

This should not come as a surprise. Complexity is at odds with security. Moreover, economic factors are often at play, where timeliness and functionality are more important than security. But there are other reasons why insecurity is omnipresent.

To begin with, systems undergo constant evolution. There has been a recent surge of attacks on once-closed systems, like medical devices and cars, that have been opened up and enhanced with new communication interfaces (for example, see Francillon et al.,3 Halperin et al.,4 and Rouf et al.6). The problem here is that the extended capabilities were usually not anticipated in the original design, often resulting in vulnerabilities that are easy to exploit. Not surprisingly, adding wireless communication without measures to ensure the confidentiality and authenticity of the transmitted data results in a system vulnerable to eavesdropping and spoofing. This problem is particularly acute for products manufactured by traditional industries that did not previously require expertise in information security.

Systems not only interface with the outside world, they also interface with each other. For their composition to be secure, the assumptions of one subsystem must match the guarantees of the other. However, economics and market availability often dictate the choices made, especially for hardware components where manufacturing one's own components is often not an option.

Finally, even when a system's security is carefully analyzed, this analysis depends on the deployment scenarios considered and, in particular, on the associated adversary model. What kind of adversary should the system withstand? A system secure against a network attacker may be completely insecure against one with a screwdriver and physical access to the server. Many IT-enhanced systems have been developed using proprietary protocols and communication technology, leading to the belief that it would be difficult for outsiders to interface with them. However, for wireless communication, the increasingly widespread availability of tools and equipment, such as Universal Software Radio Peripherals, has made it easy and inexpensive for nonspecialists to communicate with even the most exotic systems, dramatically changing the adversary's capabilities. As scenarios and adversaries change over time, so do the possible attacks.

While publishing attacks was controversial in the past, attack papers have become commonplace. Indeed, there are now markets in vulnerabilities, with both companies and governments participating in them.

Summing up, it is not surprising to see so many system attacks reported, in particular on IT-enhanced systems. But what makes attacks worthy of scientific publication? Are all these attacks of the "yet another buffer overflow" variety? Is there any point in publishing research papers that feature attacks on systems that were not designed to resist attacks, not used as they were designed, or used in scenarios for which they were not designed?


Learning from Attacks

A hallmark of good research is the generality of the insights gained. In security, these are insights into the problem and countermeasures.




Increasing awareness is a common argument for publishing attack papers and has its merits. In particular, a heightened awareness of problems and their severity may lead to the system in question being withdrawn from service; alternatively, others can follow up with designs that solve the documented problems. Such attacks have, in the past, raised awareness among policy makers of the immaturity of existing technologies and the associated risks. This is particularly valuable for new systems and technologies. Here, the novelty of the kind of attack is less relevant than the novelty of the system and the impact of its compromise.

Although raising awareness is important, it can backfire as too much sensationalism numbs the readers' sensitivity to what the real problems are. And there is usually limited research value in just showing how standard problems can be exploited in yet another setting. It is clear that unauthenticated communication opens the door to spoofing attacks, whether we are talking about cars, medical implants, or personal robots. The same holds for standard, well-studied, software vulnerabilities. In contrast, a paper that refines an existing attack, demonstrates a novel kind of attack, or contributes to new attacker models can have tremendous research value.

One benefit of studying attacks is a better understanding of the cause of the underlying vulnerability, for example, whether it is the result of a design or implementation error, the unavailability of solutions on the market, improper system usage, or an oversight in the risk analysis. This last reason occurs surprisingly frequently; systems are often left unprotected because the designers simply do not believe they need to be protected or assume the systems are sufficiently closed or obscure and therefore unlikely to be reverse-engineered by attackers (or determined researchers). As recent attacks on medical devices and modern cars show, these assumptions are incorrect.

An attack paper can also explicate what is required for a successful attack. Is the exploitation of a vulnerability straightforward or only possible by well-funded, technically sophisticated attackers? The devil is in the details! A good attack paper can show how to construct an exploit and the cost of doing so. Moreover, it can help refine the conditions under which the attack may succeed and its success probability. An attack might be conditioned not only on the attacker's computational capabilities but also on its physical location, antenna size, transmission power, and other factors. For example, the success of spoofing attacks on Global Positioning System receivers strongly depends on the locations and characteristics of the attackers' antennas.

To expand on our last point, what makes security special is the role of the adversary. A system's security can only be evaluated with respect to a model of the adversary, that is, a description of the adversary's capabilities. Thus, in our view, the most important reason for studying attacks is that they can help refine this model for the domain at hand. We give two examples: the first from security protocols and the second from relay attacks.

In 1978, Needham and Schroeder proposed one of the first authentication protocols. Their protocol used public key cryptography to achieve mutual authentication between two principals in the presence of an attacker who can eavesdrop and spoof messages. Eighteen years after its publication, Lowe5 showed that the protocol could be attacked by a man-in-the-middle, who executes the protocol as an insider in two interleaved sessions. This attack sensitized the security protocol community to the importance of considering adversaries who have insider capabilities. Later, motivated by attacks on long-term keys stored in memory, weak random number generators, and the ability of adversaries to read out part of an agent's session state, cryptographers developed a host of more refined adversarial models and security definitions reflecting these enhanced capabilities. These new models have led to improved protocols as well as methods and tools for reasoning about the security of protocols and systems, with respect to these refined adversarial models; for example, see Basin and Cremers.1
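To make the interleaving concrete, the following sketch (our own illustration, not taken from Lowe's paper; the agent and variable names are hypothetical) traces the attack in Python, modeling public-key encryption symbolically: a ciphertext is a tagged tuple that only its intended recipient can open.

    # Lowe's man-in-the-middle attack on the Needham-Schroeder
    # public-key protocol (simplified three-message core).

    def enc(recipient, payload):
        # Symbolic public-key encryption: only `recipient` can open it.
        return ("enc", recipient, payload)

    def dec(agent, ciphertext):
        # Symbolic decryption; fails unless the agent holds the right key.
        _tag, recipient, payload = ciphertext
        assert recipient == agent, f"{agent} cannot open a message for {recipient}"
        return payload

    NA, NB = "nonce_A", "nonce_B"  # Alice's and Bob's fresh nonces

    # Session 1: Alice runs the protocol with the intruder I,
    # who is a legitimate insider.
    msg1 = enc("I", (NA, "A"))             # A -> I    : {Na, A}pk(I)

    # Session 2: I re-encrypts Alice's first message for Bob,
    # masquerading as Alice.
    na, _ = dec("I", msg1)
    msg1_forged = enc("B", (na, "A"))      # I(A) -> B : {Na, A}pk(B)

    # Bob replies toward Alice; I cannot read this message, but
    # simply forwards it to Alice in session 1.
    msg2 = enc("A", (dec("B", msg1_forged)[0], NB))  # B -> A : {Na, Nb}pk(A)
    nb_for_alice = dec("A", msg2)[1]

    # Alice completes her (legitimate) session with I, which hands
    # I the nonce Nb that Bob created for Alice.
    msg3 = enc("I", (nb_for_alice,))       # A -> I    : {Nb}pk(I)
    nb_learned = dec("I", msg3)[0]

    # I completes session 2; Bob now believes he authenticated Alice.
    msg3_forged = enc("B", (nb_learned,))  # I(A) -> B : {Nb}pk(B)
    assert dec("B", msg3_forged)[0] == NB
    print("Bob accepts the run as coming from Alice; I knows Nb:", nb_learned)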




Second, more recent examples are relay, mafia-fraud, and wormhole attacks, in which the attackers simply relay messages, unmodified, between the two communicating parties. Such attacks have recently been used to compromise entry and start systems in cars3 and payment systems that rely on near-field communication. These attacks showed that the success of relay attacks on such systems depends strongly on the speed at which attackers can process signals. They further demonstrated that existing technology enables attackers to build relays with practically undetectable processing delays. This was particularly important in the case of entry and start systems for cars: the attacks revealed that these systems can only detect relays that introduce delays longer than several microseconds. This led to refined attacker models and also motivated new security solutions, for example, distance bounding protocols.
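To see why microsecond-level delays are the threshold that matters, note that radio signals travel at roughly the speed of light, so each microsecond of relay processing delay inflates a round-trip-time distance estimate, of the kind distance bounding protocols rely on, by only about 150 meters. A back-of-the-envelope calculation (our own sketch, not taken from the cited papers) makes this explicit:

    # How much farther away a device appears for a given relay
    # processing delay in a round-trip-time measurement.
    C = 299_792_458.0  # speed of light in m/s

    def apparent_extra_distance(relay_delay_seconds):
        # The delay is added to the round trip, so it corresponds to
        # half that much one-way distance at the speed of light.
        return C * relay_delay_seconds / 2

    for label, delay in (("10 ns", 10e-9), ("1 us", 1e-6), ("1 ms", 1e-3)):
        meters = apparent_extra_distance(delay)
        print(f"relay delay {label:>5} -> appears {meters:>11,.1f} m farther away")

A relay with sub-microsecond processing delay thus adds well under 150 meters of apparent distance, which is why systems that cannot resolve delays finer than several microseconds will not detect it.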


Conclusion

As our physical and digital worlds become more tightly coupled, the incidence of attacks will increase, as will their consequences. Many of these attacks will be newsworthy, but most will not be research-worthy. This does not mean papers featuring attacks on highly visible systems should not find their way into research conferences; having had such papers published, the authors of this column do appreciate that the community accepts results of this kind. However, as researchers we should have high aspirations. Every attack paper is an opportunity to truly contribute to the community with new insights into both systems and their vulnerabilities, and adversaries and their capabilities. We believe one should seize this opportunity: after discovering an attack, take a step back, reflect on what can be learned from it, and only then present it to the community.


References

1. Basin, D. and Cremers, C. Modeling and analyzing security in the presence of compromising adversaries. In Computer Security—ESORICS 2010, volume 6345 of Lecture Notes in Computer Science. Springer, 2010, 340–356.

2. Coverity Scan: 2011 Open Source Integrity Report. Coverity, Inc. San Francisco, CA, 2011.

3. Francillon, A., Danev, B., and Capkun, S. Relay attacks on passive keyless entry and start systems in modern cars. In Proceedings of the Network and Distributed System Security Symposium, 2011.

4. Halperin, D. et al. Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses. In Proceedings of the 2008 IEEE Symposium on Security and Privacy, SP '08. IEEE Computer Society, Washington, D.C., 2008, 129–142.

5. Lowe, G. Breaking and fixing the Needham-Schroeder public-key protocol using FDR. Software—Concepts and Tools 17, 3 (1996), 93–102.

6. Rouf, I. et al. Security and privacy vulnerabilities of in-car wireless networks: A tire pressure monitoring system case study. In Proceedings of the 19th USENIX Conference on Security, USENIX Security'10. USENIX Association, Berkeley, CA, 2010, 21.


Authors

David Basin (basin@inf.ethz.ch) is a professor in the Department of Computer Science at ETH Zurich and the founding director of the Zurich Information Security and Privacy Center.

Srdjan Capkun (srdjan.capkun@inf.ethz.ch) is an associate professor in the Department of Computer Science at ETH Zurich and the director of the Zurich Information Security and Privacy Center.


Copyright held by author.



 
