Communications of the ACM

Inside risks

The Risks of Self-Auditing Systems

Credit: Alicia Kubista / Andrij Borys Associates

Over two decades ago, NIST Computer Systems Laboratory's Barbara Guttman and Edward Roback warned that "the essential difference between a self-audit and an external audit is objectivity."6 In that writing, they were referring to internal reviews by system management staff, typically for purposes of risk assessment—reviews with inherent potential conflicts of interest, as there may be disincentives to reveal design flaws that could pose security risks. In this column, we call attention to the additional risks posed by reliance on information produced by electronically self-auditing subcomponents of computer-based systems. We define such self-auditing devices as those that display internally generated data to an independent external observer, typically for purposes of ensuring conformity and/or compliance with particular parameter ranges or degrees of accuracy.

Our recent interest in this topic was sparked by the revelations regarding millions of Volkswagen vehicles whose emission systems had been internally designed and manufactured such that lower nitrogen oxide (NOx) levels would be produced and measured during inspection-station testing (triggered by the use of the data port) than would occur in actual driving. In our earlier writings, we had similarly warned about voting machines potentially being set to detect election-day operations, such that pre-election testing would show results consistent with practice ballot inputs, but the actual election-day ballots would not be tabulated accurately. These and other examples are described further in this column.
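The common mechanism in both examples is a component that changes its reported behavior when it believes it is being observed. The following minimal sketch illustrates that pattern; the class, attribute names, and numeric values are all hypothetical and chosen only to make the structure of such "defeat" logic concrete, not to model any real controller.

```python
# Hypothetical sketch of the self-auditing "defeat device" pattern:
# the component reports a compliant value whenever it detects that a
# test harness is attached, and its true value otherwise.

class EmissionsController:
    def __init__(self):
        # In the Volkswagen case, test mode was reportedly inferred
        # from use of the diagnostic data port; modeled here as a flag.
        self.diagnostic_port_active = False

    def measured_nox_ppm(self) -> float:
        true_output = 420.0            # illustrative real-world emissions
        if self.diagnostic_port_active:
            return 40.0                # illustrative "compliant" figure
        return true_output

ecu = EmissionsController()
ecu.diagnostic_port_active = True      # inspection-station conditions
print(ecu.measured_nox_ppm())          # 40.0 — passes the test
ecu.diagnostic_port_active = False     # actual driving conditions
print(ecu.measured_nox_ppm())          # 420.0 — real behavior
```

The point of the sketch is that no amount of testing through the instrumented path can reveal the discrepancy; only independent measurement of the real-world behavior can.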


CACM Administrator

The following letter was published in the Letters to the Editor section of the September 2016 CACM.
--CACM Administrator

Overall, the Inside Risks Viewpoint "The Risks of Self-Auditing Systems" by Rebecca T. Mercuri and Peter G. Neumann (June 2016) was excellent, and we applaud its call for auditing systems by independent entities to ensure correctness and trustworthiness. However, with respect to voting, it said, "Some research has been devoted to end-to-end cryptographic verification that would allow voters to demonstrate their choices were correctly recorded and accurately counted. However, this concept (as with Internet voting) enables possibilities of vote buying and selling." This statement is incorrect.

While Internet voting (like any remote-voting method) is indeed vulnerable to vote buying and selling, end-to-end verifiable voting is not. Poll-site-based end-to-end verifiable voting systems use cryptographic methods to ensure voters can verify their own votes are correctly recorded and tallied while (paradoxically) not enabling them to demonstrate how they voted to anyone else.

Mercuri and Neumann also said, "[end-to-end verifiability] raises serious questions of the correctness of the cryptographic algorithms and their implementation." This sentence is potentially misleading, as it suggests confidence in the correctness of the election outcome requires confidence in the correctness of the implementation of the cryptographic algorithms. But end-to-end verifiable voting systems are designed to be "fail safe": if the cryptographic algorithms in the voting system are implemented incorrectly, the audit will indeed fail. Poor crypto implementations in the voting system will not allow an audit to approve an incorrect election outcome.
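The "fail safe" property Kiniry and Rivest describe can be illustrated with a toy commitment scheme. This is not a real end-to-end protocol (a genuine one opens commitments selectively to preserve ballot secrecy); it is only a sketch, with hypothetical data, of the one-sidedness of the guarantee: an independent verifier recomputes everything, so a broken or dishonest implementation cannot produce a record that verifies.

```python
import hashlib

def commit(vote: str, nonce: str) -> str:
    # Hash commitment to a single vote (toy scheme for illustration only).
    return hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest()

# The voting system publishes commitments and a claimed tally.
ballots = [("alice", "n1"), ("bob", "n2"), ("alice", "n3")]
bulletin_board = [commit(v, n) for v, n in ballots]
claimed_tally = {"alice": 2, "bob": 1}

def audit(opened_ballots, board, tally) -> bool:
    # Independent verifier: recompute every commitment and the tally
    # from scratch, trusting nothing the voting system computed.
    if [commit(v, n) for v, n in opened_ballots] != board:
        return False          # record inconsistent: audit fails safe
    counts = {}
    for v, _ in opened_ballots:
        counts[v] = counts.get(v, 0) + 1
    return counts == tally

print(audit(ballots, bulletin_board, claimed_tally))   # True

# An altered or buggy record cannot pass the independent check:
bad_board = bulletin_board[:]
bad_board[0] = commit("bob", "n1")
print(audit(ballots, bad_board, claimed_tally))        # False
```

The asymmetry is the key design point: a correct implementation passes the audit, while an incorrect one fails it; no implementation error makes the audit *approve* a wrong outcome.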

Finally, we note that end-to-end verifiable election methods are a special case of "verifiable computation," whereby a program can produce not only a correct result but also a "proof" that it is the correct result for the given inputs. Of course, the inputs need to be agreed upon before such a proof makes sense. Such methods may thus be useful not only for election audits but elsewhere.

Joseph Kiniry
Portland, OR
Ronald L. Rivest
Cambridge, MA



We cannot fully elucidate here the flaws in each of the many proposed cryptographically verifiable voting subsystems. Their complexity, and that of the surrounding systems environments, undemocratically shifts the confirmation of correct implementation to a scant few intellectually elite citizens, if it is even accomplishable within an election cycle. Moreover, all of these methods have vulnerabilities similar to the Volkswagen emission system; that is, stealth code can be triggered situationally, appearing correct externally while internally shifting vote tallies in favor of certain candidates over others. We have previously discussed the incompleteness of cryptographic solutions embedded in untrustworthy infrastructures, which potentially enable ballot contents to be manipulated or detected via vote-selling tags (such as write-in candidates or other triggers). The mathematics of close elections also requires that a very high percentage of ballots (over 95%) be independently checked against the digital record, which is not likely to occur, leaving the results unverified.
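The arithmetic behind the authors' 95% figure can be made concrete for the extreme case of an election decided by a handful of ballots, where even a single altered ballot matters. Assuming uniform random sampling without replacement (the numbers below are illustrative, not from the column), the chance that a sample misses all k altered ballots is C(N-k, n) / C(N, n); for k = 1, this is simply (N-n)/N, so holding the miss probability below 5% requires checking more than 95% of the ballots.

```python
from math import comb

def miss_probability(N: int, k: int, n: int) -> float:
    # Probability that a uniform random sample of n ballots (out of N)
    # contains none of the k altered ballots: C(N-k, n) / C(N, n).
    return comb(N - k, n) / comb(N, n)

N = 10_000   # illustrative number of ballots cast
k = 1        # one altered ballot can flip a near-tied race
for frac in (0.50, 0.90, 0.95, 0.99):
    n = int(N * frac)
    print(f"check {frac:.0%} of ballots -> miss prob {miss_probability(N, k, n):.3f}")
```

With larger margins (larger k), far smaller samples suffice, which is why sampling-based audits work well for decisive races but, as the authors note, break down for very close ones.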

Rebecca T. Mercuri
Hamilton, NJ
Peter G. Neumann
Menlo Park, CA

