# Communications of the ACM

Research Highlights

## Technical Perspective: What Does Provable Security Mean for Cryptographic Schemes?

In the early days of cryptography, a scheme was considered secure until an attack broke it. But such an attack may take years to be discovered, by which time it is too late. In the 1980s, a new paradigm emerged with the notion of provable security, built on three important steps: the security model, the computational assumptions, and the proof by reduction.

The most important step is the formalization of a security model, which precisely states what security should mean, or alternatively, which attacks one wants to withstand. For public-key encryption, as studied in the following paper, one could simply expect one-wayness, which requires that it be difficult to recover the plaintext from a ciphertext without the decryption key. But this is a very weak security notion. It has since been strengthened into semantic security, which can be seen as a computational variant of perfect privacy: no adversary can recover even one bit of information about the plaintext from the ciphertext without the decryption key.
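Semantic security is usually formalized as an indistinguishability game between an adversary and a challenger. The following toy sketch makes the game concrete; the scheme used here (a one-time pad) and all names are illustrative stand-ins, not taken from the paper.

```python
import secrets

# Sketch of the indistinguishability game behind semantic security.
# The encryption scheme here is a one-time pad, which hides the
# plaintext perfectly when each key is used once.

def keygen(nbytes: int = 16) -> bytes:
    return secrets.token_bytes(nbytes)

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg))

def ind_game(adversary) -> bool:
    """One run of the game: the adversary picks two messages, receives
    the encryption of a random one, and must guess which.  Returns True
    if the guess is correct."""
    key = keygen()
    m0, m1 = adversary.choose()          # two equal-length messages
    b = secrets.randbits(1)              # challenger's secret bit
    guess = adversary.guess(encrypt(key, (m0, m1)[b]))
    return guess == b

class BlindAdversary:
    """Baseline adversary: guesses without using the ciphertext.
    Semantic security says no efficient adversary does noticeably
    better than this baseline."""
    def choose(self):
        return b"attack at dawn!!", b"retreat at dusk!"
    def guess(self, ciphertext):
        return 0

wins = sum(ind_game(BlindAdversary()) for _ in range(2000))
print(wins / 2000)   # close to 0.5: no advantage over random guessing
```

A scheme is semantically secure when every efficient adversary's winning probability stays negligibly close to the 1/2 achieved by guessing at random.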

For a given cryptographic scheme, since perfect security is almost never achievable, one relies on a computational assumption: a well-defined problem that is widely accepted as difficult. Some longstanding problems are famous, such as integer factoring, the discrete logarithm problem, or lattice problems. Some of them have quite simple statements and have been scrutinized for many years. Their intractability is thus widely admitted, and a new efficient algorithm to solve one of them would be of great importance from the algorithmic point of view.
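The discrete logarithm problem illustrates the asymmetry these assumptions rest on: modular exponentiation is fast, while recovering the exponent has no known efficient algorithm. The parameters below are deliberately tiny so the brute-force "attack" terminates; real deployments use groups of 2048 bits or more, where the same search is hopeless.

```python
# Toy illustration of the discrete logarithm assumption.
p = 2039            # a small prime (demo size only)
g = 7               # a generator of the multiplicative group mod p
x = 1234            # the secret exponent
h = pow(g, x, p)    # one modular exponentiation: cheap, even at real sizes

# The "attacker" has to search the exponent space; the cost grows
# exponentially in the bit length of p.
recovered = next(e for e in range(p) if pow(g, e, p) == h)
print(recovered)    # 1234
```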

Finally, one can state and prove the security claim: under the intractability of the hard problem, the cryptographic scheme achieves the security notion. To prove such a claim, one assumes the existence of an efficient adversary against the security notion and uses it, as a subroutine, in an algorithm that efficiently solves the underlying hard problem. The latter algorithm cannot exist; hence, by contradiction, neither can the adversary. This is a proof by reduction, in the sense of complexity theory: one shows that the long-standing problem reduces to breaking the security notion, so the latter is at least as difficult as the former.
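Such a reduction can be run forwards on a toy example. The classical reduction below turns any adversary that breaks the one-wayness of ElGamal encryption (the scheme discussed below) into a solver for the computational Diffie-Hellman (CDH) problem; here the "adversary" is simulated by brute force in a tiny group, and all names are illustrative.

```python
import secrets

# A proof by reduction, run forwards: an ElGamal plaintext-recovery
# adversary becomes a CDH solver.  Toy parameters, for illustration only.
p, g = 2039, 7                      # small prime and a generator (demo size)

def dlog(h):
    # Brute force, only feasible at this toy size; it stands in for a
    # hypothetical efficient adversary.
    return next(e for e in range(p - 1) if pow(g, e, p) == h)

def elgamal_inverter(pk, c1, c2):
    """Simulates an adversary recovering the plaintext from an ElGamal
    ciphertext (c1, c2) = (g^r, m * pk^r) given only public values."""
    r = dlog(c1)
    return (c2 * pow(pk, -r, p)) % p

def cdh_from_inverter(ga, gb):
    """The reduction: solve CDH(g^a, g^b) = g^(ab) with one call to the
    adversary, by handing it (pk, c1, c2) = (g^a, g^b, y) for random y.
    The 'plaintext' it returns is m = y * g^(-ab), so g^(ab) = y / m."""
    y = secrets.randbelow(p - 1) + 1
    m = elgamal_inverter(ga, gb, y)
    return (y * pow(m, -1, p)) % p

a, b = 123, 456
assert cdh_from_inverter(pow(g, a, p), pow(g, b, p)) == pow(g, a * b, p)
```

Since CDH is assumed intractable, no such efficient plaintext-recovery adversary can exist: that is the contradiction the proof exploits.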

The ElGamal public-key encryption scheme, as explored in the paper, is well known to have secure instantiations in appropriate groups: against chosen-plaintext attacks, it achieves semantic security under the decisional Diffie-Hellman (DDH) assumption. One then usually calls it a secure public-key encryption scheme. But if the adversary has more information than just the public key, security is no longer guaranteed. In real life, concrete implementations of provably secure schemes may be insecure because of incorrect behavior: randomness without enough entropy, or extra information leaking through legitimate accesses or through side-channel attacks that exploit power consumption, execution time, or shared cache memory, among others.
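For concreteness, here is textbook ElGamal over a toy group. The parameters are illustrative only; real instantiations work in a large prime-order subgroup, which is precisely what makes the DDH assumption plausible.

```python
import secrets

# Textbook ElGamal over a toy group (demo only; real deployments use
# 2048+ bit parameters and a prime-order subgroup).
p = 2039          # toy prime
g = 7             # a generator of the multiplicative group mod p

def keygen():
    x = secrets.randbelow(p - 2) + 1      # secret key
    return x, pow(g, x, p)                # (sk, pk = g^x)

def encrypt(pk, m):
    r = secrets.randbelow(p - 2) + 1      # fresh randomness each time
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(sk, c1, c2):
    return (c2 * pow(c1, -sk, p)) % p     # m = c2 / c1^sk

sk, pk = keygen()
m = 1001                                   # message encoded as a group element
assert decrypt(sk, *encrypt(pk, m)) == m
```

The fresh randomness `r` in each encryption is essential: with a fixed `r`, two encryptions of the same message would be identical, immediately breaking semantic security.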

The authors perfectly illustrate that provable security is not a certification for all applications nor all implementations: the ElGamal public-key encryption scheme in OpenPGP is vulnerable to several kinds of attacks, and for multiple reasons. First, the standard that describes ElGamal is open to several different and incompatible interpretations, leading to cross-configuration attacks. Second, in a scenario stronger than the chosen-plaintext setting, a side-channel attack can recover the decryption key.
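The gap between the proof and a deployed configuration can be made concrete with the classic counterexample for ElGamal run over the full group Z_p^* instead of a prime-order subgroup: ciphertexts then leak the quadratic residuosity of the plaintext, so semantic security fails even though one-wayness may still hold. This is the textbook leak, shown here with toy parameters; it is distinct from the cross-configuration and side-channel attacks developed in the paper.

```python
import secrets

# ElGamal over the full group Z_p^*: the Legendre symbol of the
# plaintext leaks from public values alone.  Toy parameters, demo only.
p, g = 2039, 7          # g is a non-residue, hence generates all of Z_p^*

def keygen():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

def encrypt(pk, m):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def is_qr(a):
    return pow(a, (p - 1) // 2, p) == 1   # Euler's criterion

def leaked_qr(pk, c1, c2):
    """Recover whether the plaintext is a quadratic residue using only
    public values: since g is a non-residue, the parities of x and r
    are visible in the Legendre symbols of pk = g^x and c1 = g^r."""
    x_odd = not is_qr(pk)
    r_odd = not is_qr(c1)
    mask_is_qr = not (x_odd and r_odd)    # QR status of pk^r = g^(x*r)
    return is_qr(c2) == mask_is_qr        # QR status of m = c2 / pk^r

sk, pk = keygen()
assert leaked_qr(pk, *encrypt(pk, 9)) is True    # 9 = 3^2 is a residue
assert leaked_qr(pk, *encrypt(pk, g)) is False   # g is a non-residue
```

One public bit of the plaintext is recovered without the decryption key, so the scheme cannot be semantically secure in this group: the proof holds, but only for the group structure it assumes.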

Provable security guarantees that an adversary is unlikely to break a specific security notion, provided the scheme is correctly implemented and used in appropriate situations. It is thus of primary importance to identify the security notion required by the specific application (avoiding excessive security, for efficiency reasons) and to have a correct implementation that prevents any unwanted leakage. Standardization bodies should also make all requirements precise, to avoid ambiguities that are likely to introduce weaknesses.

### Author

David Pointcheval is a senior researcher at CNRS, head of the crypto team at INRIA, and chair of the ENS computer science department, Paris, France.

### Footnotes

To view the accompanying paper, visit doi.acm.org/10.1145/3592835