
Communications of the ACM

Forum


I strongly disagree with the implication in Steven M. Bellovin et al.'s "Inside Risks" column "Internal Surveillance, External Risks" (Dec. 2007) that because the U.S. is a transit point for so much transcontinental international Internet traffic, U.S. government agencies should be able to intercept international communications transiting the U.S. Internet infrastructure. The authors said that if the source or destination of the traffic is in the U.S., the agencies should discard the data before any "further processing is done." This seems to mean that international traffic is fair game. Perhaps a better rule would be that whenever the source and destination of the traffic are both in the U.S., or both are outside the U.S., the agencies should discard the traffic before any "further processing is done."

The government has no right to monitor international traffic that legally passes through the U.S. The transit points are by nature as international as the United Nations or the World Bank, and their sovereignty must be respected.

The U.S. should thus set a standard for integrity by not touching data with an international origin and destination that legally passes through U.S. transit points, unless it has a legal right to do so.

Vir V. Phoha
Ruston, LA

Authors' Response:

While we may agree with Phoha's sentiment, it is settled law in the U.S. that the Constitution does not grant legal rights to noncitizens outside the U.S. Publishing in Communications, an international, technical publication, we wrote as technologists addressing a technical problem. The issue Phoha raises should certainly be addressed by the U.S. Congress, and perhaps by the legal system as well.

Steven M. Bellovin
Matt Blaze
Whitfield Diffie
Susan Landau
Jennifer Rexford
Peter G. Neumann

Account for Unknowns, as well as Knowns

I would like to compliment Communications on the quality of two columns: Peter J. Denning's "The Profession of IT" column "The Choice Uncertainty Principle" and David Lorge Parnas's "Viewpoint" "Stop the Numbers Game" (both Nov. 2007).

The former reminds me of some of the fundamental issues and factors that sometimes stay hidden from most of us and thus are rarely, if ever, taken into consideration. I'm reminded of something along these lines that happened to me some years ago. An analog-to-digital converter appeared to be malfunctioning, producing wrong numbers. When we looked it over, every stage of the conversion process appeared to be working perfectly, and in fact it was. The problem was very high-frequency (for the time) noise on the conversion-completed-ready-to-read line, which caused the register holding the result to be read prematurely, before it had stabilized on the correct value.

The cause of the problem was detected only by accident because the time base had been set incorrectly on the oscilloscope. We never thought a premature readout could be the cause of the apparent malfunction.
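For readers who want to see that failure mode concretely, here is a minimal, purely hypothetical simulation of such a race in Python; the ADC model, the noise figure, and the debounce threshold are all illustrative assumptions of mine, not details of the original hardware.

import random

class ADC:
    """Toy model: the result register settles only after enough steps."""
    def __init__(self, true_value=2048, settle_steps=10):
        self.true_value = true_value
        self.settle_steps = settle_steps
        self.step = 0

    def tick(self):
        # Advance the conversion by one step.
        self.step = min(self.step + 1, self.settle_steps)

    def ready_line(self, noise=0.05):
        # Ready goes high when the conversion is done, but noise can assert it early.
        return self.step >= self.settle_steps or random.random() < noise

    def register(self):
        # Returns an unsettled (wrong) value until the conversion completes.
        if self.step >= self.settle_steps:
            return self.true_value
        return self.true_value >> (self.settle_steps - self.step)

def naive_read(adc):
    # Trusts a single sample of the ready line; one glitch causes a premature read.
    while not adc.ready_line():
        adc.tick()
    return adc.register()

def debounced_read(adc, required=3):
    # Requires the ready line to stay high for several consecutive samples.
    consecutive = 0
    while consecutive < required:
        consecutive = consecutive + 1 if adc.ready_line() else 0
        adc.tick()
    return adc.register()

random.seed(1)
print("naive read:    ", naive_read(ADC()))      # may print an unsettled value
print("debounced read:", debounced_read(ADC()))  # almost always the true value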

Parnas's "Viewpoint" interests me because my work has long involved science and technology statistics. Some of the most heavily used (and unfortunately frequently abused) indicators are publications and patent counts and citations. They are easy to measure but only poor proxies for what one actually wants to know. The holy grail is an accurate measure of scientific and technological output. Publication counts, citation counts, and patent statistics are most certainly not the holy grail.

In his splendid "Viewpoint," Parnas pointed out many of the reasons we should not simply be counting publications and/or citations in computer science (or in any other subject). He delivered one of the best critiques I've ever read of bibliometrics and the likely effects of its misuse. It would be so good if this message would get across to all those inclined to use (and abuse) such data.

Ian Perry
Brussels, Belgium

Diligence-Based Security Preferred

The article "Necessary Measures" about security risk assessment by Wade H. Baker et al. (Oct. 2007) took an excellent scenario-based approach to risk assessment rather than trying to assess a single limited type of attack to justify a particular countermeasure. However, the authors fell into the trap of describing security risk assessment through an incomplete example that ultimately was not really risk assessment. However, I do support their conclusion that valid security risk assessment is difficult and may not succeed due to a lack of valid data and excessive complexity.

The example was supposed to involve a risk assessment for an organization afflicted by email-borne malcode (malware) attacks of known frequency and impact, using a 2004 survey of the loss experience of 40 organizations. But the results (total cost of $32.7 billion without countermeasures, $44,000 with countermeasures) are estimates of actual costs, though the article labels them risks.
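To make that distinction concrete, here is a minimal sketch in Python with purely hypothetical figures of my own (not the survey's or the article's numbers): an actual cost is a sum of losses that already occurred, while a risk is a probability-weighted expectation over losses that may or may not occur.

# Hypothetical figures only; they do not come from the article or the 2004 survey.

# Actual cost: the sum of losses from incidents that did happen.
observed_losses = [12_000, 30_000, 5_000]            # dollars, illustrative
actual_cost = sum(observed_losses)

# Risk: the expected loss over incidents that might happen.
scenarios = [
    {"probability": 0.30, "loss": 50_000},           # illustrative malcode outbreak
    {"probability": 0.05, "loss": 400_000},          # illustrative large-scale incident
]
expected_loss = sum(s["probability"] * s["loss"] for s in scenarios)

print(f"actual cost (what the example measured): ${actual_cost:,}")
print(f"risk, i.e., expected loss (what it was labeled): ${expected_loss:,.0f}")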

The considerable cost of risk assessment and the initial costs of countermeasures, along with various losses, seem to have been omitted from the example calculations. Lost sales, reputation damage, and regulatory and legal fees were omitted for "pedagogical reasons." Any of these losses, along with lost talent, project delays, and other business adversity, could exceed all other losses in a single incident.

Security risk deals with adversity that might or might not happen; in the example, the adversity was already occurring. Countermeasures against ongoing adversity can be tested on a trial basis (without risk assessment) and their effects measured to reach conclusions, justify further action, and calculate trade-offs. Meanwhile, the authors included another factor, countermeasure effectiveness, adding yet another dimension of uncertainty and complexity. They could have tried to measure residual risk against countermeasure effectiveness. Unfortunately, they identified only the effectiveness of deterrence, protection, detection, and recovery. Other important factors (such as avoidance, mitigation, audit, transference, insurance, motivation, and correction) should also be considered to achieve the full value of the countermeasures.

Also not accounted for was the possibility that the identified countermeasures might be effective against many other types of attacks, potentially adding significant value to them. Moreover, reducing the risk of one type of attack could increase or reduce the risk of other types of attacks, depending on the strategies and tactics of unknown perpetrators, thereby raising or lowering the organization's overall security risk.

Another problem is the unknown complexity of the ways risks, frequencies, vulnerabilities, countermeasures, victims, and perpetrators affect one another; for example, a malcode attack could result in an extortion attempt.

The main reason security risk is so difficult to measure is that no one can know how unknown perpetrators at unknown times in unknown circumstances might exploit vulnerabilities, whether known or unknown. Moreover, good security inhibits the sharing of confidential and potentially damaging loss and security-experience information with outsiders, which severely limits the collection of data.

Intangible risk-based security ought to be replaced with diligence-based security through known countermeasures, standards, laws, regulations, and competitive business practices.

Donn B. Parker
Los Altos, CA

A Good Serial Programmer

In "Parallel Computing on Any Desktop" (Sept. 2007), Ami Marowka explored the principles involved in Gant charts, CPM, and PERT. It's good to see these things being discussed, but I wonder about all this concern over how to implement multithreads in a multiprocessor environment. It's been done since the 1970s.

This is not to suggest that all parallel computing issues have been resolved. Where once multithreading was almost exclusively invoked by system software and primarily concerned efficient use of system resources, we are now moving that capability into applications to speed task throughput by having multiple parts of the task run simultaneously.

As Marowka framed it, a central issue is when and how to multithread and how to determine the throughput improvements that might be achieved. Invoking and completing a thread incurs processing overhead, so a poorly defined thread might at best be an exercise in futility and at worst hurt throughput. This also raises the case of the application that simply does not lend itself to multithreading.
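As a rough, hypothetical illustration of that overhead point (the task size, iteration count, and timings below are my own assumptions, not Marowka's), the following Python sketch runs the same total work once in a plain loop and once with a thread created for each tiny unit of work; when the units are small enough, thread start-up and completion costs dominate.

import threading
import time

def tiny_task():
    sum(range(200))                      # a deliberately tiny unit of work

N = 5_000

# Baseline: run the tiny task N times in the current thread.
start = time.perf_counter()
for _ in range(N):
    tiny_task()
serial_time = time.perf_counter() - start

# One thread per tiny task: pays thread creation, start, and join costs N times.
start = time.perf_counter()
for _ in range(N):
    worker = threading.Thread(target=tiny_task)
    worker.start()
    worker.join()
threaded_time = time.perf_counter() - start

print(f"serial loop:         {serial_time:.3f}s")
print(f"one thread per task: {threaded_time:.3f}s  (overhead typically dominates)")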

I pretty much agree with Marowka, except that the "complexity" issue seemed to be overstated. My take is that a good serial programmer with, say, five years of experience already has the skills needed to take on multithreading.

Don Purdy
Rutherford, NJ

Please address all Forum correspondence to the Editor, Communications of the ACM, 2 Penn Plaza, Suite 701, New York, NY 10121-0701; email: crawfordd@acm.org.

Footnotes

DOI: http://doi.acm.org/10.1145/1314215.1314217


©2008 ACM  0001-0782/08/0200  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2008 ACM, Inc.


 
