
Communications of the ACM

Practical Programmer

A Research Project with Important Practitioner-Oriented Findings

I've been known, over the years, for being critical of computing research. I've said things like "computing research fixates on ideas of dubious value" (for example, formal methods), "computing research does more advocacy research than evaluative research," and "computing research all too often disdains practice." Although I am a student of computing research, as shown by the several studies some colleagues and I did on the essential nature of that research (for example, [2]), I am aware that it is both a successful field and, all too often, an unsuccessful one.

All of that makes it a special pleasure for me to state in this column that I've found a recent computing research project I unequivocally like. The project studies something very important (to both research and practice), was reasonably well conceived, is written in a way that makes it accessible to everyone in the computing field, and produced findings that are quite significant.

Here's what the research in question (see [1]) is about. The three authors chose to evaluate an international standard for software product development in the context of a new software project. That standard—ISO/IEC 9126—is about product quality. The authors noted that, at the beginning of their study, there had been substantive criticism of the standard in the computing literature, but that it had since been updated (in 2001 and 2002) and they were interested in seeing if those criticisms were still valid.

They chose specifically to examine the standard in the context of establishing product quality characteristics following the design phase of a software product. The standard takes the position that it is possible to do so, and from a research perspective examining the use of the standard following the design phase made it simpler to perform the evaluation without having to proceed with the implementation. The participants in the research were in their final year of college, were each given the same relatively real project with the same basic project inputs, and were asked to produce a design (the application was a network project management tool capable of managing a large number of autonomous collaborating partners). Outputs of the project were to be a process model in DFD format, a relational database storage model in table format, and an interface model with screen shots. The result of the design process, in other words, was to be capable of being handed to developers/programmers for implementation. In addition, the participants were to examine the quality of their resulting design, and were provided with a unified set of quality attributes and metrics for their evaluation, drawn from the standard whose use was being explored.

What was the bottom line of the study? There were serious problems in attempting to use the standard and the study is quite specific and detailed about what those problems were. In general, the standard was too open to interpretation, ambiguous in its meaning, and incomplete in what it achieved. Specifically, the researchers identified these problems:

  • Some concept definitions are ambiguous;
  • Some concept definitions overlap;
  • Overlapping definitions can lead to ambiguous metric-counting rules;
  • Some quality attributes, such as reliability and scope, are ignored;
  • Traceability measures are insufficient;
  • Some metrics require information that a designer could not have; and
  • No guidelines are provided for aggregating individual metrics into an overall evaluation.
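The last point is worth dwelling on: without an aggregation rule, any "overall" quality score is arbitrary. To make the gap concrete, here is a minimal sketch of one possible approach — a normalized weighted sum. To be clear, this scheme, the characteristic names, and the weights are all invented for illustration; the standard prescribes nothing of the kind, which is exactly the authors' complaint.

```python
# Hypothetical sketch: ISO/IEC 9126 leaves metric aggregation unspecified.
# One naive possibility (NOT part of the standard) is a weighted sum of
# normalized per-characteristic scores. All names and weights below are
# illustrative assumptions, not drawn from the standard or the study.

def aggregate_quality(scores, weights):
    """Combine per-characteristic scores (each in [0, 1]) into one number.

    scores:  mapping of quality characteristic -> normalized score
    weights: mapping of quality characteristic -> relative importance
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same characteristics")
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Invented scores for a design evaluated against some 9126 characteristics.
scores = {"functionality": 0.8, "usability": 0.6, "maintainability": 0.7}
weights = {"functionality": 3, "usability": 1, "maintainability": 2}

overall = aggregate_quality(scores, weights)  # (0.8*3 + 0.6*1 + 0.7*2) / 6
```

Even this trivial scheme forces decisions — which characteristics count, what the weights are, how raw metrics are normalized into scores — and the standard answers none of them, leaving each evaluator to improvise.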

The authors conclude, based on these findings, that the standard "in its present format fails to achieve its objectives and be useful to its users."

Now, let's step back for a moment and reevaluate the results reported here. The authors have considered an important topic—software product quality—through the lens of an international standard that was created to allow practitioners to better understand the quality of the products they are producing. They explored using that standard in a reasonably practical setting, and found it seriously wanting. They explored the specifics of using the standard from a hands-on point of view, and made specific recommendations for the ways in which the standard needs improvement.

Nice. Very nice. I would assert that this is the kind of research the computing field most desperately needs. Explore a topic with theoretical underpinnings and practical implications. Do an evaluation of that theory and the practice that results from its use. Make recommendations on the strengths and weaknesses in the theory, and ways of improving it.

I can find only two flaws in this study. It is, you may have noticed, the all-too-frequent research study using a relatively small project and student subjects. Given the cost of doing research using real projects and real practitioner subjects, this is of course understandable. The question that must always be asked in these circumstances is "Could these research-in-the-small findings be scaled up to research-in-the-large reality?" One of the reasons I am enthusiastic about this particular research is that it is easy to imagine the answer to that question is "yes." The issue here is the adequacy of a standard. I find it easy to imagine that if the standard is not comprehensible to reasonably bright students, it is probably also going to be incomprehensible to reasonably bright practitioners.

The other flaw? It's where this finding was published. This is the kind of research result that needs to be trumpeted to the field, announced emphatically from the highest platform. But the authors presented it to a limited audience at a relatively small conference held in a somewhat remote location.

In fact, that flaw is the primary reason I have chosen to write about these research results here. I think the findings themselves need the wider audience of Communications readers. And I also think it is important for practitioners like me to point out to researchers the kinds of research that we really find to have value.

References
1. Al-Kilidar, H., Cox, K., and Kitchenham, B. The use and usefulness of the ISO/IEC 9126 quality standard. In Proceedings of the International Symposium on Empirical Software Engineering (ISESE 2005), Noosa, Australia.

2. Glass, R.L., Vessey, I., and Ramesh, V. An analysis of the research in computing disciplines. Commun. ACM 47, 6 (June 2004).



Robert L. Glass is the publisher/editor of the Software Practitioner newsletter and editor emeritus of Elsevier's Journal of Systems and Software. He is currently an honorary professor in the ARC Center for Complex Systems at Griffith University, Brisbane, Australia.

©2007 ACM  0001-0782/07/1100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.


