# Communications of the ACM

Review articles

## The GCT Program Toward the P vs. NP Problem

Geometric complexity theory (GCT) is an approach via algebraic geometry and representation theory toward the P vs. NP and related problems.9,13,15,29 It was proposed in a series of papers4,18,19,20,21,22,24,25,26 and was developed further in Bürgisser and Ikenmeyer,7 Bürgisser et al.,8 and Landsberg et al.14 This article gives an informal overview of GCT. It is meant to be an update on the status of the P vs. NP problem reported in Fortnow.11 See Mulmuley23 for a more detailed, formal overview of GCT.

Let us begin by recalling an algebraic variant of the P vs. NP problem introduced in a seminal paper.29 It can be formulated in a very concrete form as the permanent vs. determinant problem. Here the permanent of an n × n variable matrix X is defined just like the determinant but without signs.

Specifically,

perm(X) = ∑_σ ∏_{i=1}^{n} x_{iσ(i)},

where the x_{ij}'s denote the entries of X and σ ranges over all permutations of the integers from 1 to n. Let K, the base field or ring of computation, be ℚ, ℝ, ℂ, or a finite field F_p of p elements, p being an odd prime. We say that perm(X) can be linearly represented as the determinant of an m × m matrix if perm(X) = det(Y) for some m × m matrix Y whose entries are linear combinations (possibly nonhomogeneous) over K of the variable entries of X. The permanent vs. determinant conjecture in Valiant29 is that perm(X) cannot be linearly represented as the determinant of an m × m matrix when m is small, that is, when m is polynomial in n, or more generally, when it is O(2^(log^a n)) for some constant a > 0.
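These definitions are easy to exercise on small matrices. The sketch below (plain Python, brute force over all n! permutations, so for illustration only) computes both polynomials and checks the classical fact that the 2 × 2 permanent is linearly representable as a 2 × 2 determinant, since perm([[a, b], [c, d]]) = ad + bc = det([[a, −b], [c, d]]); the variable names are our own.

```python
from itertools import permutations
from math import prod

def sign(s):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])
    return -1 if inv % 2 else 1

def det(M):
    n = len(M)
    return sum(sign(s) * prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def perm(M):
    # the permanent: the determinant "without signs"
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

a, b, c, d = 2, 3, 5, 7
X = [[a, b], [c, d]]
Y = [[a, -b], [c, d]]   # a linear representation of perm(X) with m = n = 2
assert perm(X) == det(Y) == a * d + b * c
```

The conjecture asserts that no analogously small Y exists as n grows: every determinantal representation of the permanent must have super-polynomial size.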

It is known3,29 that this conjecture, when K is ℚ or F_p, and m is polynomial in n, is implied by a stronger (nonuniform) versiona of the P ≠ NP conjecture or even the weaker #P ≠ NC conjecture. Here, #P denotes the class of functions,b like the number of satisfying assignments of a Boolean formula, that count the number of solutions of the problems in NP, and NC denotes the class of functions,b like the determinant, that can be computed efficiently in parallel in polylogarithmic time using polynomially many processors. The implication of the permanent vs. determinant conjecture from the (nonuniform) #P vs. NC conjecture is based on the fact that the permanent is #P-complete29 (in the spirit of the well-known NP-completeness) and that the determinant is (almost) NC-complete. It is also known that the permanent vs. determinant conjecture, when K is a large enough finite field F_p and m = O(2^(c log² n)) for some large enough constant c > 0, implies the #P ≠ NC conjecture. As such, the permanent vs. determinant conjecture is, strictly speaking, an algebraic analogue of the #P vs. NC conjecture, not the P vs. NP conjecture. There is also an analogous algebraic variant of the P vs. NP conjecture21,25 which, when K is a large enough finite field, implies the usual P ≠ NP conjecture. But its story is similar to that of the permanent vs. determinant conjecture. Hence, for simplicity, we only focus on the permanent vs. determinant conjecture here.
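For intuition on the counting flavor of #P (not, of course, on its hardness), the following toy sketch counts the satisfying assignments of a small CNF formula by brute force; the DIMACS-style clause encoding is our choice, not the article's.

```python
from itertools import product

def count_sat(clauses, n):
    # clauses: list of clauses over variables 1..n;
    # literal i means "variable i is true", -i means "variable i is false"
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause) for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2) has exactly two satisfying assignments
assert count_sat([[1, 2], [-1, 2]], 2) == 2
```

The #P-complete problem is to compute such counts for arbitrary formulas; the brute-force loop above takes time 2^n, and the #P ≠ NC conjecture asserts, roughly, that no dramatic parallel shortcut exists for #P-complete counting.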

By the arithmetic case of this conjecture, we mean the case when K = ℚ, ℝ, or ℂ. This case for K = ℚ is implied by the case when K = F_p and also, as already mentioned, by the (nonuniform) #P vs. NC conjecture. The arithmetic case is easier than the case when K = F_p, because it avoids complications in algebra that arise in the case of finite fields.

Hence, let us first discuss the arithmetic case when K = ℂ, which implies the cases when K = ℚ or ℝ. The advantage of dealing with the arithmetic conjecture over ℂ, in contrast to the original Boolean conjectures, is that this arithmetic conjecture is a statement about multivariate polynomials over ℂ. Hence, we can use techniques from algebraic geometry, which is the study of the common zeroes of sets of multivariate polynomials. These techniques work best when the base field is algebraically closed of characteristic zero, such as ℂ. Since the permanent and the determinant are characterized by their symmetries, we can also use techniques from representation theory, which is the study of groups of symmetries. As such, the GCT approach that goes via algebraic geometry and representation theory is very natural in the arithmetic setting.

The articles by Mulmuley and Sohoni25,26 reduce the arithmetic permanent vs. determinant conjecture to proving existence of geometric obstructions that are proof certificates of hardness of the permanent. The very existence of these obstructions for given n and m implies that the permanent of an n x n variable matrix cannot be linearly represented as the determinant of an m x m matrix. The geometric obstructions are objects that live in the world of algebraic geometry and representation theory. Their dimensions can be large, exponential in n and m. But they have short classifying labels. The basic strategy of GCT, called the flip,21,22 is to construct the classifying label of some geometric obstruction explicitly in time polynomial in n and m when m is small. It is called the flip because it reduces the lower bound problem under consideration to the upper bound problem of constructing a geometric obstruction label efficiently. The flip basically means proving lower bounds using upper bounds. Its basic idea in a nutshell is (1) to understand the theory of upper bounds (algorithms) first and (2) to use this theory to prove lower bounds later. But one may wonder why we are going for explicit construction of obstructions, when proving existence of an obstruction even nonconstructively suffices in principle. This is because of the flip theorem in Mulmuley,21,23 which says that in the problem under consideration we are essentially forced to construct some proof certificate of hardness explicitly.

The upper bound problems that arise in the context of the flip turn out to be formidable problems at the frontier of algebraic geometry. The flip theorem mentioned above also says that stronger versions of the permanent vs. determinant conjecture and a standard derandomization conjecture12 in complexity theory together imply solutions to upper bound problems in algebraic geometry that are akin to the ones that arise in the flip. Furthermore, the article by Mulmuley22 gives evidence that even the upper bound problems that arise in the flip may be essentially implications of these conjectures in complexity theory. This suggests a law of conservation of difficulty, namely, that problems comparable in difficulty to the ones encountered in GCT would be encountered in any approach to the (nonuniform) P vs. NP problem (of which the arithmetic permanent vs. determinant conjecture over ℂ is an implication). This does not say that any approach to the P vs. NP problem has to go via algebraic geometry. But it does suggest that avoiding algebraic geometry may not be pragmatic, since doing so would essentially amount to reinventing, in some guise, the wheels of this difficult field that have been developed over centuries.

There is also another reason why the explicit construction of geometric obstruction labels turns out to be hard. At the surface it seems that for such efficient construction one may need to compute the permanent itself efficiently, thereby contradicting the very hardness of the permanent that we are trying to prove. By the flip theorem in Mulmuley,21,23 this self-referential difficulty akin to that in Gödel's Incompleteness Theorem is also not specific to GCT. Any approach would have to cope with it. The article by Mulmuley22 shows how it can be tackled in GCT by decomposing the lower bound problem under consideration into subproblems without this difficulty. Conceptually, this is the main result of GCT in the arithmetic setting.

Finally, let us discuss the permanent vs. determinant conjecture over finite fields that implies the #P ≠ NC conjecture, the story for the algebraic variant of the P vs. NP problem in Mulmuley and Sohoni21,25 that implies the usual (Boolean) P ≠ NP conjecture being similar. Here, the GCT plan is to first prove the arithmetic case via algebraic geometry over ℂ as outlined above, and then extend this proof to finite fields by proving additional results in algebraic geometry over algebraically closed fields of characteristic zero such as ℂ. At the surface, this plan may seem counter-intuitive. After all, how can one hope to prove statements about finite fields using algebraic geometry over ℂ? A basic prototype for this plan is the analogue of the usual Riemann hypothesis for finite fields proved in Deligne10 using algebraic geometry over algebraically closed fields of characteristic zero. The proof of this result, a crowning achievement in mathematics, shows that difficult statements about finite fields can be proved using algebraic geometry over algebraically closed fields of characteristic zero. In the same spirit, the GCT approach in the arithmetic setting can be extended so that it applies to the usual (Boolean) #P vs. NC and P vs. NP conjectures. But this story is beyond the scope of this article. It will be described in a later paper.17 In this article, we confine ourselves to the arithmetic permanent vs. determinant problem, which captures the crux of the P vs. NP problem.

The rest of this article is organized as follows. We describe the notion of geometric obstructions for the arithmetic permanent vs. determinant problem. This is followed by a description of the flip strategy that goes toward explicit construction of geometric obstruction labels in polynomial time. We state the upper bound problems in algebraic geometry that arise in this context. We also describe the self-referential difficulty in the problem under consideration and how GCT tackles it by decomposing the problem into subproblems without this difficulty.

### Geometric Obstructions

We now describe the GCT approach to the arithmetic permanent vs. determinant problem29 over ℂ based on the notion of geometric obstructions (proof certificates of hardness).

The starting point of the approach is the classical result that the permanent and determinant are completely characterized by their symmetries in the following sense.25

(D): Let Y be a variable m × m matrix. Then, det(Y) is the unique polynomial (up to a constant multiple) of degree m in the variable entries of Y such that, for any m × m invertible complex matrices A and B with det(A) det(B) = 1, det(Y) = det(AY*B), where Y* is Y or its transpose.

(P): Let X be a variable n × n matrix. Then, perm(X) is the unique polynomial (up to a constant multiple) of degree n in the variable entries of X such that, for any diagonal or permutation matrices A and B, perm(X) = perm(AX*B), where X* is X or its transpose, and the product of the entries of A is one when A is diagonal, and similarly for B.
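Property (P) can be sanity-checked numerically. The sketch below verifies, for one 3 × 3 integer matrix, that the permanent is preserved under row and column permutations, under transposition, and under a diagonal scaling whose entries multiply to one (exact arithmetic via Fraction; the specific matrices are illustrative choices of ours).

```python
from itertools import permutations
from fractions import Fraction

def perm(M):
    # permanent by its definition: sum over permutations of products of entries
    n = len(M)
    total = 0
    for s in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= M[i][s[i]]
        total += p
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def perm_matrix(s):
    # permutation matrix with a 1 in position (i, s[i])
    return [[1 if s[i] == j else 0 for j in range(len(s))] for i in range(len(s))]

X = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
Xt = [list(row) for row in zip(*X)]                    # transpose
A = perm_matrix((2, 0, 1))                             # permutation matrix
D = [[Fraction(2), 0, 0], [0, Fraction(3), 0], [0, 0, Fraction(1, 6)]]  # entries multiply to 1

assert perm(Xt) == perm(X)                                             # transposition
assert perm(matmul(matmul(A, X), perm_matrix((1, 2, 0)))) == perm(X)   # row/column permutations
assert perm(matmul(D, X)) == perm(X)                                   # diagonal scaling
```

The last check works because the permanent is multilinear in the rows, so scaling the rows multiplies it by the product of the diagonal entries, here 2 · 3 · (1/6) = 1.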

The goal is to solve the problem under consideration by exploiting these properties. Toward this end, the article25 constructs algebraic varieties Δ[perm, n, m] and Δ[det, m] such that if perm(X), where X is an n × n variable matrix, can be linearly represented as the determinant of an m × m matrix, then

Δ[perm, n, m] ⊆ Δ[det, m].    (1)

Here, by an algebraic variety we mean the set of common solutions of a system of multivariate polynomial equations over ℂ. These are generalizations of the usual curves and surfaces. For example, the set of common solutions in ℂ⁴ of the two polynomial equations

x₁² + x₂² + x₃² + 2x₄² − 1 = 0 and x₄ − x₁² − x₂² − x₃² = 0

is a two-dimensional variety Z formed by intersecting the three-dimensional ellipsoid corresponding to the first equation with the three-dimensional paraboloid corresponding to the second equation. By the coordinate ring of a variety we mean the space of polynomial functions on it. This is obtained by restricting the polynomial functions on the ambient vector space containing the variety to the variety. For example, the coordinate ring of Z here is the space of polynomial functions on ℂ⁴ restricted to Z.
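To make the Z example concrete, the sketch below takes an illustrative ellipsoid/paraboloid pair (this specific pair of equations is an assumption of ours, chosen to match the description) and checks one point of Z and the value of a coordinate-ring element at it.

```python
from math import isclose, sqrt

# illustrative equations (an assumed pair, not taken from the article):
# ellipsoid:   x1^2 + x2^2 + x3^2 + 2*x4^2 - 1 = 0
# paraboloid:  x4 - (x1^2 + x2^2 + x3^2)       = 0
def f1(x):
    return x[0]**2 + x[1]**2 + x[2]**2 + 2 * x[3]**2 - 1

def f2(x):
    return x[3] - (x[0]**2 + x[1]**2 + x[2]**2)

# on Z, writing s = x1^2 + x2^2 + x3^2, the equations force 2*s^2 + s - 1 = 0,
# so s = 1/2 over the reals; one such point:
p = (1 / sqrt(2), 0.0, 0.0, 0.5)
assert isclose(f1(p), 0.0, abs_tol=1e-12) and isclose(f2(p), 0.0, abs_tol=1e-12)

# an element of the coordinate ring of Z: a polynomial on the ambient space,
# restricted to the points of Z
g = lambda x: x[0]**2 + x[1]**2 + x[2]**2
assert isclose(g(p), 0.5)
```

Solving polynomial systems like this in general is the business of algebraic geometry; the point of the snippet is only that a variety is cut out by explicit equations and its coordinate ring consists of restrictions of ambient polynomials.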

The varieties Δ[det, m] and Δ[perm, n, m] are formally defined later. Intuitively, the points in the variety Δ[det, m] correspond to the functions in the arithmetic analogue of NC called VP29 or the "limits" of such functions, and the points in Δ[perm, n,m] correspond to the functions in the arithmetic analogue of #P called VNP29 or the "limits" of such functions. Since the permanent vs. determinant conjecture is the arithmetic analogue of the #P vs. NC conjecture, it thus suffices to show that the inclusion (1) does not hold when m is small.

The goal is to show, using algebraic geometry and representation theory, that the inclusion (1) is impossible, as conjectured in Mulmuley and Sohoni,25 when m is polynomial in n. We call this the strong permanent vs. determinant conjecture. It implies the original conjecture and is almost equivalent to it in the sense that if (1) holds then perm(X) can be approximated infinitesimally closely by a linear representation of the form det(Y'), where Y' is an m × m matrix. The following is a partial result toward the above stronger conjecture.

Theorem.14 The inclusion (1) is impossible if m < n²/2.

This implies the earlier quadratic lower bound16 for the permanent but is a bit stronger.

As an aid to prove the strong permanent vs. determinant conjecture in general, Mulmuley and Sohoni26 define the notion of a geometric obstruction to the inclusion (1). Informally, a geometric obstruction is a representation-theoretic object that lives on Δ[perm, n, m] but not on Δ[det, m]; see Figure 1. The very existence of such an obstruction serves as a guarantee that the inclusion as in (1) is not possible, because otherwise the obstruction would be living on Δ[det, m] as well.

To define geometric obstructions precisely, we need to recall some basic facts from representation theory. Let G = GLk(ℂ) be the general linear group of k × k complex invertible matrices. We call a vector space W a representation of G if there is a homomorphism from G to the group of invertible linear transformations of W. For example, ℂ^k with the usual action of G is its standard representation. There are, of course, far more complex representations of G. Their building blocks were classified by Hermann Weyl.30 He showed that irreducible (polynomial) representations of G are in one-to-one correspondence with nonnegative integer sequences (called partitions) λ = (λ1,...,λl), where λ1 ≥ λ2 ≥ ... ≥ λl ≥ 0 and l ≤ k. An irreducible representation of G in correspondence with λ is denoted by Vλ(G). It is called a Weyl module of G. For example, the standard representation ℂ^k of G mentioned above is the Weyl module corresponding to the partition (1) consisting of just one integer 1. The Weyl module Vλ(G), when λ = (r), is simply the space Symr(z1,..., zk) of all homogeneous polynomials of degree r in the variables z1,..., zk with the following action of G. Given a polynomial f(z) = f(z1,..., zk) ∈ Symr(z1,..., zk) and σ ∈ G, map f(z) to f(σ⁻¹ · z), treating z = (z1,..., zk) as a column vector.

Each finite-dimensional representation of G is like a complex building that can be decomposed into the building blocks: the Weyl modules. The fundamental significance of Weyl's classification from the complexity-theoretic perspective is the following. The dimension of each Weyl module Vλ(G) is in general exponential in the bit length of λ. But it has a compact (polynomial-size) specification, namely, the labeling partition λ. The existence of such compact specifications of irreducible representations of G plays a crucial role in what follows.
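The contrast between the size of a Weyl module and the size of its label can be made concrete with the classical hook content formula for dim Vλ(GLk): the dimension is the product, over the cells (i, j) of the Young diagram of λ, of (k + j − i)/h(i, j), where h is the hook length. A sketch with exact arithmetic:

```python
from fractions import Fraction
from math import comb

def weyl_dim(lam, k):
    # dimension of the Weyl module V_lambda(GL_k) via the hook content formula
    lam = [p for p in lam if p > 0]
    if not lam:
        return 1
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]  # conjugate partition
    d = Fraction(1)
    for i, row in enumerate(lam):
        for j in range(row):
            content = j - i
            hook = (row - j) + (conj[j] - i) - 1
            d *= Fraction(k + content, hook)
    return int(d)

assert weyl_dim((1,), 5) == 5                    # the standard representation C^5
assert weyl_dim((4,), 3) == comb(3 + 4 - 1, 4)   # Sym^4 of C^3
assert weyl_dim((1, 1), 4) == 6                  # the exterior square of C^4
assert weyl_dim((20, 10, 5), 30) > 10**12        # a few bytes of label, a huge module
```

The last line is exactly the phenomenon the text exploits: the label (20, 10, 5) is tiny, while the module it names is astronomically large.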

If W is a representation of G, then the elements of G act on W, moving its points around via invertible linear transformations. More generally, a group can similarly act on a variety too. As a simple example, consider the ellipsoid E in ℂ³ with the equation x1² + x2² + x3²/a = 1, a > 0. Let U be the unit circle. It becomes an additive group if we identify each point in U with its polar coordinate θ and let the usual addition of angles play the role of the group composition. The group U has a natural action on E: let θ ∈ U act on E by rotating E around the x3 axis by the angle θ; see Figure 2. Let ℂ[E] be the coordinate ring of E. This is the space of polynomial functions on ℂ³ restricted to E. Then, this action of U on E also makes ℂ[E] a representation of U: given θ ∈ U, just map any polynomial function f(x) = f(x1, x2, x3) on E to f(θ · x), where θ · x denotes the point obtained by rotating x ∈ E around the x3 axis by the angle θ.
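The circle action is easy to check numerically. The sketch below fixes the illustrative value a = 2 (our assumption) and verifies that rotation about the x3 axis maps the ellipsoid to itself and hence induces an action on its polynomial functions.

```python
from math import cos, sin, pi, isclose

A = 2.0  # illustrative ellipsoid parameter a > 0

def on_E(x):
    # membership in the ellipsoid E: x1^2 + x2^2 + x3^2/a = 1
    return isclose(x[0] ** 2 + x[1] ** 2 + x[2] ** 2 / A, 1.0, abs_tol=1e-9)

def act(theta, x):
    # the element theta of U rotates x about the x3 axis
    c, s = cos(theta), sin(theta)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1], x[2])

p = (0.6, 0.8, 0.0)           # a point of E, since 0.36 + 0.64 = 1
q = act(pi / 3, p)
assert on_E(p) and on_E(q)    # the action maps E to itself

# the induced action on C[E] sends f to the function x -> f(theta . x);
# this particular f is invariant under every rotation about the x3 axis
f = lambda x: x[0] ** 2 + x[1] ** 2
assert isclose(f(q), f(p))
```

Invariant functions like f are the simplest U-subrepresentations of the coordinate ring; the same mechanism, with GLk(ℂ) in place of the circle, gives the G-representation structure used in the next paragraphs.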

Similarly, the group G = GLk(ℂ), with k = m², acts on the varieties Δ[det, m] and Δ[perm, n, m], moving their points around (Figure 1), and this action of G on the varieties makes their coordinate rings (the spaces of polynomial functions on them) representations of G. A formal definition of the action of G and the representation structures of the coordinate rings of Δ[det, m] and Δ[perm, n, m] are discussed later.

These representation structures turn out26 to depend critically on the properties (D) and (P), respectively. Specifically, the properties (D) and (P) put strong restrictions as to which irreducible representations of G can occur as G-subrepresentations of these coordinate rings.

Formally, a geometric obstruction to the inclusion (1) for given n and m is an irreducible representation Vλ(G) of G (a Weyl module) that occurs as a G-subrepresentation in the coordinate ring of Δ[perm, n, m] but not in the coordinate ring of Δ[det, m]c; see Figure 1. The partition λ here is called a geometric obstruction label. The existence of such an obstruction guarantees that the inclusion as in (1) is impossible, because otherwise the obstruction would occur as a G-subrepresentation in the coordinate ring of Δ[det, m] as well.

Thus, to solve the (strong) permanent vs. determinant conjecture, it suffices to show the following:

Geometric obstruction hypothesis (GOH).26 A geometric obstruction exists when m is polynomial in n.

It is conjectured in Mulmuley22 that GOH, or rather its slightly relaxed form, is equivalent to the strong permanent vs. determinant conjecture.

### The Flip

With the help of GOH, we have reduced the nonexistence problem under consideration to an existence problem. For general varieties, such an existence problem is hopeless. But we can hope to prove existence of a geometric obstruction using the characterization by symmetries provided by the properties (P) and (D). We turn to this story here.

The strategy is to construct, for any n and m polynomial in n, a geometric obstruction label λ explicitly in time polynomial in n and m by exploiting the properties (P) and (D). We call this strategy the flip, because it turns the nonexistence problem under consideration into the problem of proving the existence of a geometric obstruction, and, furthermore, reduces the lower bound problem to the upper bound problem of constructing a geometric obstruction label in polynomial time.

The following is a stronger and precise explicit form of GOH that says geometric obstructions can indeed be constructed explicitly.

Flip Hypothesis (FH).22,23 The geometric obstruction family is explicit in the sense that it satisfies the following properties:

FH[Short]: A short geometric obstruction label λ, with bit length polynomial in n and m, exists if m is polynomial in n.

FH[Verification]: Given n, m, and a partition λ, it can be verified whether λ is a valid geometric obstruction label in time polynomial in n, m and the bit length of λ.

FH[Discovery and construction]: Given n and m, it can be decided whether a geometric obstruction exists in time polynomial in n and m. If an obstruction exists, one such geometric obstruction label λ can also be constructed in the same time. By FH[Short], this discovery algorithm always succeeds if m is polynomial in n.

FH[Det]: For given m and λ, it can be verified whether Vλ(G) occurs as a G-subrepresentation in the coordinate ringd of Δ[det, m] in time polynomial in m and the bit length of λ.

FH[Perm]: For given n, m and λ, it can be verified whether Vλ(G) occurs as a G-subrepresentation in the coordinate ring of Δ[perm, n, m] in time polynomial in n, m and the bit length of λ.

The flip strategy can now be elaborated further in three steps: (1) Prove FH[Det] and FH[Perm]. This clearly implies an efficient criterion for verifying a geometric obstruction label as in FH[Verification]. (2) Use this criterion to design an efficient algorithm for discovering an obstruction as in FH[Discovery and construction]. (3) Prove that this discovery algorithm always succeeds if m is polynomial in n. For this strategy to succeed, it is not enough that the verification and discovery algorithms are efficient in theory. They should also have simple enough mathematical structure to carry out step (3). Otherwise, they have to be made simpler and simpler until (3) succeeds.
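The shape of steps (2) and (3) can be sketched generically: enumerate partition labels in order of size and certify each candidate with a verifier. The verifier below is a toy stand-in predicate, emphatically not the real FH[Verification] criterion (which is unknown); only the search skeleton is meaningful.

```python
def partitions_of(total):
    # all partitions of the integer total, parts in nonincreasing order
    def gen(rem, mx):
        if rem == 0:
            yield ()
            return
        for first in range(min(rem, mx), 0, -1):
            for rest in gen(rem - first, first):
                yield (first,) + rest
    yield from gen(total, total)

def discover(n, m, verify, max_size):
    # FH[Discovery and construction]-style loop: try short labels,
    # certify each with the verification criterion
    for size in range(1, max_size + 1):
        for lam in partitions_of(size):
            if verify(n, m, lam):
                return lam
    return None

# toy stand-in verifier (NOT the real criterion): accepts labels whose
# largest part exceeds m - n, just to exercise the loop
toy_verify = lambda n, m, lam: lam[0] > m - n
assert discover(3, 5, toy_verify, max_size=4) == (3,)
assert discover(3, 5, lambda n, m, lam: False, max_size=3) is None
```

The point of FH is precisely that, for the real criterion, both the verification and the guaranteed success of such a search should run in polynomial time; the naive enumeration above is, of course, exponential.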

Later, we discuss why FH should hold. There is a huge gap between FH and what can be proved at present. Currently, the best algorithms for verification and construction of a geometric obstruction label based on general-purpose algorithms in algebraic geometry and representation theory take triple exponential time in n and m in the worst case. FH says this time bound can be brought down to a polynomial. This may seem impossible.

### Why Go for Explicit Proofs?

If so, one may ask why we should go for explicit construction of obstructions when proving the existence of obstructions even nonconstructively suffices in principle. The reason is provided by the strong flip theorem.21 It says that any proof of the arithmetic (strong) permanent vs. determinant conjecture can be converted into an explicit proof, assuming a stronger form of a standard derandomization hypothesis12 in complexity theory that is generally regarded as easier than the target lower bound. By an explicit proof, we mean a proof that also yields an algorithm for efficient construction of some proof certificate of hardness of the permanent, called an obstruction, that is analogous to the geometric obstruction above in the following sense: (1) its very existence for given n and m guarantees that the inclusion (1) is impossible, and (2) the family of obstructions satisfies analogues of FH[Short], FH[Verification], and FH[Discovery and construction]. Thus, by the strong flip theorem, the strong permanent vs. determinant conjecture essentially forces an explicit proof, modulo derandomization.

There are similar flip theorems21 for other lower bound problems, such as the usual permanent vs. determinant and the arithmetic P vs. NP problems, and a certain average case stronger form of the Boolean P vs. NP problem. These results are the main reason why we are going toward explicit proofs, that is, toward explicit construction of obstructions, right from the beginning.

The derandomization hypothesis mentioned here is the following. Its importance rests on the fundamental result in Kabanets and Impagliazzo12 that derandomization means proving circuit lower bounds. Let Y(X) be an m × m matrix, each of whose entries is a complex linear combination (possibly nonhomogeneous) of the variable entries of X. The problem is to decide if det(Y(X)), for given Y(X), is an identically zero polynomial in the variable entries of X. There is a simple and efficient randomized algorithm for this test. Let A be a matrix obtained from X by substituting for each entry of X a large enough random integer of bit length polynomial in n and m. Evaluate det(Y(A)) modulo a large enough random integer b. If it is nonzero, then det(Y(X)) is certainly a nonzero polynomial. If it is zero, then det(Y(X)) is an identically zero polynomial with high probability. This randomized test is a black-box test in the sense that it only needs to know the value of det(Y(X)) for a given specialization of X to A. It does not need to know Y(X). The derandomization hypothesis mentioned earlier is essentially that this randomized black-box determinant identity test can be derandomized so as to get an efficient deterministic black-box determinant identity testing algorithm. (The required hypothesis is actually a bit stronger.21) This derandomization hypothesis, which is somewhat different from the one in Kabanets and Impagliazzo,12 is essentially equivalent to proving a determinantal lower bound for a multilinear function that can be evaluated in exponential time.1 This is generally regarded as easier than proving a determinantal lower bound for the permanent, since #P is conjecturally smaller than EXP, the class of functions that can be computed in exponential time.
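The randomized black-box test just described can be sketched as follows. Exact integer determinants are computed with fraction-free (Bareiss) elimination; for brevity this sketch substitutes random integers and evaluates exactly, omitting the modular-reduction step mentioned in the text.

```python
import random

def det(M):
    # exact integer determinant via Bareiss (fraction-free) elimination
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i], sign = M[i], M[k], -sign
                    break
            else:
                return 0  # the whole column vanishes: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]

def looks_identically_zero(Y, num_vars, trials=20, bound=10**6):
    # black-box test: uses only the values det(Y(a)) at random integer points a
    for _ in range(trials):
        a = [random.randint(-bound, bound) for _ in range(num_vars)]
        if det(Y(a)) != 0:
            return False   # certainly not the zero polynomial
    return True            # identically zero with high probability

assert det([[1, 2], [3, 4]]) == -2
assert looks_identically_zero(lambda a: [[a[0], a[0]], [a[0], a[0]]], 1)   # det == 0
assert not looks_identically_zero(lambda a: [[a[0], 0], [0, a[1]]], 2)     # det = x*y
```

The hypothesis asks for a deterministic replacement of the random points, that is, a small explicit "hitting set" of evaluation points that no nonzero det(Y(X)) can vanish on entirely.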

### Why Should GOH and FH Hold?

The strong flip theorem21,23 actually shows something much more. It shows that stronger forms of the permanent vs. determinant and derandomization conjectures together imply an analogue of FH in algebraic geometry of comparable difficulty. This reveals that formidable upper bound problems in algebraic geometry are hidden underneath the fundamental hardness and derandomization conjectures in complexity theory. This may explain why these conjectures, which look so elementary at the surface, have turned out to be so formidable. In view of the strong flip theorem, problems of comparable difficulty can be expected in any approach, even if the approach does not go via algebraic geometry. We refer to this as the "law of conservation of difficulty."

The article by Mulmuley22 gives evidence based on the strong flip theorem and additional results in algebraic geometry, which suggests that FH itself may be in essence an implication of the strong permanent vs. determinant and derandomization conjectures together. At present, this is the main evidence for FH and, hence, GOH. Further evidence is provided by a recent article,7 which constructs explicit geometric obstructions in the analogous setting for the lower bound problem for matrix multiplication, albeit for a problem of very modest size. Explicit computation for any larger example is difficult at present due to the difficulty of the problems that arise.

The strong flip theorem for the permanent vs. determinant conjecture and analogous results in Mulmuley21 for other fundamental hardness conjectures in complexity theory, such as the arithmetic P vs. NP conjectures, show a fundamental difference between such hardness conjectures, which are at least as hard as the derandomization conjectures, and the known lower bound results in restricted models of computation such as constant depth5 or monotone27 circuits. The lower bounds in these restricted models are statements about the weakness of those models. In contrast, by the strong flip theorem, the permanent vs. determinant problem is a statement about the strength of the complexity class NC (or rather its arithmetic analogue29 VP), for which the determinant is essentially complete. It does not say that NC (or rather VP) is small and weak, but rather that it is big and strong, strong enough to assert that "I am different from #P" (or rather its arithmetic analogue VNP29), for which the permanent is complete. Similarly, by an analogous flip theorem for the (arithmetic) P vs. NP problem, this problem is a statement about the strength of the complexity class P. It does not say that P is weak and small, but rather that it is big and strong, strong enough to assert that "I am different from NP."

It should also be remarked that FH will almost never hold for functions not characterized by their symmetries (in place of the determinant and the permanent), since the characterization by symmetries plays a crucial role in the proof of the strong flip theorem that forms the crux of the justification of FH. This is why the characterization by symmetries is so crucial for the flip strategy. It is indeed a fortunate coincidence that the fundamental complexity classes such as #P and NC have complete functions characterized by their symmetries.

### Frequently Asked Questions

Can GCT be used to prove some modest lower bounds first?

Given the difficulty of the fundamental hardness conjectures, one may ask if GCT can be used to prove some modest lower bounds first. That is indeed so. Currently, the best known lower bounds in the context of the P vs. NC and strong permanent vs. determinant problems are both based on GCT. The first lower bound is a special case of the P ≠ NC conjecture proved in Mulmuley.18 It says that the P-complete max-flow problem cannot be solved in polylogarithmic time using polynomially many processors in the PRAM model without bit operations. This model is quite realistic and natural, in contrast to the constant depth5 or monotone27 circuit models used for proving lower bounds earlier. This lower bound is currently the only known super-polynomial lower bound that is a nontrivial implication of a fundamental separation conjecture like the P ≠ NC conjecture and holds unconditionally in a natural and realistic model of computation. Its proof is geometric and quasi-explicit. No combinatorial or elementary proof is known so far. This result was the beginning of the GCT approach to the fundamental hardness conjectures. The second lower bound based on GCT constructions, specifically the varieties Δ[det, m] and Δ[perm, n, m], is the quadratic lower bound14 stated earlier in the context of the strong permanent vs. determinant conjecture. It is a stronger form of the earlier quadratic lower bound16 for the usual permanent vs. determinant problem. The proof in Mignon and Ressayre16 is elementary and does not need GCT. The difference between the strong and usual versions of the permanent vs. determinant problem in Landsberg et al.14 and Mignon and Ressayre16 is akin to the difference between the tensor rank and usual versions of the lower bound problem for matrix multiplication.6

See also the lower bounds for matrix multiplication based on the fundamental work28 that introduced invariant theory in complexity theory.

Are explicit proofs necessary?

By the strong flip theorem, we know that any proof of the strong permanent vs. determinant conjecture leads to an explicit proof modulo derandomization. This does not say that explicit proofs are necessary. There may be nonexplicit proofs that avoid derandomization altogether. But it does suggest that if derandomization is indeed easier than the fundamental hardness conjectures,12 as complexity theory suggests, then even such nonexplicit proofs would essentially contain the mathematical ingredients needed to construct proof certificates of hardness efficiently a posteriori. If so, it makes sense to go toward this efficient construction right from the beginning. This allows us to use the theory of algorithms, the main tool of complexity theory, in the study of the fundamental lower bounds. Indeed, it is unrealistic to expect that we can prove P ≠ NP without first understanding the complexity class P and the theory of algorithms in depth, as the flip strategy suggests.

The situation here may be compared to that for the well-known four color theorem.2 In principle, this theorem may be proved nonconstructively. Yet, the fact remains that all known proofs of this theorem are explicit in the sense that they also yield efficient algorithms for finding a four coloring as a byproduct. The flip theorem suggests that the story of the fundamental hardness conjectures in complexity theory may be similar.

In this sense, these conjectures are fundamentally different from other conjectures in mathematics such as the Riemann Hypothesis. Since there is no analogous flip theorem for the Riemann Hypothesis, it may have a nonconstructive proof that gives no hint on how to test efficiently if the n-th zero of the Riemann zeta function lies on the critical line.

Is algebraic geometry necessary?

According to Valiant,29 the arithmetic permanent vs. determinant conjecture over ℚ is implied by the #P vs. NC conjecture. By the strong flip theorem,21,23 stronger forms of the fundamental hardness and derandomization hypotheses in the arithmetic setting imply an analogue of FH in algebraic geometry of comparable difficulty. We have already argued on the basis of these results as to why it is not pragmatic to avoid algebraic geometry, even though it is not formally necessary.

Further concrete evidence for the power of algebraic geometry, even in the Boolean setting, is provided by the proof of the special case of the P ≠ NC conjecture.18 It must be emphasized that, unlike the earlier lower bounds in the algebraic model,6 this lower bound is Boolean, not algebraic: it is stated in terms of the bit length of the input, though the PRAM model in Mulmuley18 does not allow bit operations. At present, to our knowledge, this is the only nontrivial implication of a fundamental hardness conjecture that can be proved unconditionally in a natural and realistic model of computation. If we cannot prove even this easier implication of the P ≠ NC conjecture by elementary techniques, it seems unrealistic to expect that we can prove the far harder P ≠ NC (or P ≠ NP) conjecture by elementary techniques.

### When can we expect a hard lower bound?

The modest lower bounds based on GCT and the earlier modest lower bounds3,6 are separated from the fundamental hardness conjectures, which are at least as hard as derandomization, by the circle of self-referential difficulty (see Figure 3). To break into this circle, we have to show that P contains formidable explicit construction problems in algebraic geometry and representation theory, such as the ones that arise in the strong flip theorem or FH. By the law of conservation of difficulty based on the strong flip theorem, a comparable understanding of P is needed in any approach. Unfortunately, our current understanding of P is very modest. Until we understand P (the theory of algorithms) and geometry in the required depth, we may not expect further lower bounds that are fundamentally different from the modest lower bounds discussed previously.

### Conclusion

GCT has broken the circle of self-reference around the fundamental hardness conjectures in the arithmetic setting and, in the process, has revealed deep explicit construction and positivity problems at the crossroads of algebraic geometry, representation theory, and complexity theory hidden underneath the fundamental hardness conjectures in complexity theory. Given the formidable nature of these problems, this is undoubtedly only the beginning.

### Acknowledgments

The author is grateful to Josh Grochow, Jimmy Qiao, Janos Simon, and the referees for helpful comments. The work on this paper was supported by NSF grant CCF-1017760.

### References

1. Agrawal, M. Proving lower bounds via pseudo-random generators. In Proceedings of the FSTTCS (2005), Springer-Verlag, Berlin, Germany, 92–105.

2. Appel, K., Haken, W., Koch, J. Every planar map is four colourable. Illinois J. Math. 21 (1977), 439–567.

3. Arora, S., Barak, B. Computational Complexity: A Modern Approach. Cambridge University Press, Cambridge, England, 2009.

4. Blasiak, J., Mulmuley, K., Sohoni, M. Geometric complexity theory IV: nonstandard quantum group for the Kronecker problem. arXiv preprint cs.CC/0703110 (2011).

5. Boppana, R., Sipser, M. The complexity of finite functions. Handbook of Theoretical Computer Science. J. van Leeuwen, ed. Volume A, MIT Press, 1990, 757–804.

6. Bürgisser, P., Clausen, M., Shokrollahi, M. Algebraic Complexity Theory, Springer-Verlag, 1997.

7. Bürgisser, P. and Ikenmeyer, C. Geometric complexity theory and tensor rank. arXiv:1011.1350v1 (2010).

8. Bürgisser, P., Landsberg, J., Manivel, L., Weyman, J. An overview of mathematical issues arising in the geometric complexity theory approach to VP ≠ VNP. arXiv:0907.2850v1 [cs.CC] (2009).

9. Cook, S. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing (1971), ACM, New York, NY, 151–158.

10. Deligne, P. La conjecture de Weil. II. Publ. Math. Inst. Hautes Études Sci. 52 (1980), 137–252.

11. Fortnow, L. The status of the P versus NP problem. CACM 52, 9 (2009), 78–86.

12. Kabanets, V., Impagliazzo, R. Derandomizing polynomial identity tests means proving circuit lower bounds. Comput. Complex. 13 (2004), 1–46.

13. Karp, R. Reducibility among combinatorial problems. Complexity of Computer Computations. R. Miller and J. Thatcher, eds. Plenum Press, New York, 1972, 85–103.

14. Landsberg, J., Manivel, L., Ressayre, N. Hypersurfaces with degenerate duals and the geometric complexity theory program. arXiv:1004.4802v1 [math.AG] (2010).

15. Levin, L. Universal sequential search problems. Probl. Inform. Transm. 9 (1973), 115–116.

16. Mignon, T., Ressayre, N. A quadratic bound for the determinant and permanent problem. Int. Math. Res. Not. 79 (2004), 4241–4253.

17. Mulmuley, K. Geometric Complexity Theory IX. Technical report, under preparation.

18. Mulmuley, K. Lower bounds in a parallel model without bit operations. SIAM J. Comput. 28, 4 (1999), 1460–1509.

19. Mulmuley, K. Geometric Complexity Theory VII: Nonstandard Quantum Group for the Plethysm Problem. Technical Report TR-2007-14, Computer Science Department, The University of Chicago, 2007.

20. Mulmuley, K. Geometric Complexity Theory VIII: On Canonical Bases for the Nonstandard Quantum Groups. Technical Report TR 2007-15, Computer Science Department, The University of Chicago, 2007.

21. Mulmuley, K. Explicit proofs and the flip. arXiv:1009.0246 v1 [cs.CC] (2010).

22. Mulmuley, K. Geometric Complexity Theory VI: The Flip via Positivity. Technical report, The Computer Science Department, The University of Chicago, 2010.

23. Mulmuley, K. On P vs. NP and geometric complexity theory. JACM 58, 2 (2011).

24. Mulmuley, K., Narayanan, H., Sohoni, M. Geometric complexity theory III: on deciding nonvanishing of a Littlewood-Richardson coefficient. J. Algebr. Combinator. (Nov. 2011), 18.

25. Mulmuley, K., Sohoni, M. Geometric complexity theory I: an approach to the P vs. NP and related problems. SIAM J. Comput. 31, 2 (2001), 496–526.

26. Mulmuley, K., Sohoni, M. Geometric complexity theory II: towards explicit obstructions for embeddings among class varieties. SIAM J. Comput. 38, 3 (2008), 1175–1206.

27. Razborov, A. Lower bounds on the monotone complexity of some Boolean functions. Dokl. Akad. Nauk SSSR 281 (1985), 798–801.

28. Strassen, V. Rank and optimal computation of generic tensors. Lin. Algebra Appl. 53 (1983), 645–685.

29. Valiant, L. The complexity of computing the permanent. TCS 8 (1979), 189–201.

30. Weyl, H. The Classical Groups. Princeton University Press, Princeton, NJ, 1946.

### Author

Ketan D. Mulmuley (mulmulay@cs.uchicago.edu) is a professor of computer science at the University of Chicago, IL.

### Footnotes

a. This version says that NP contains functions that cannot be computed by polynomial-size circuits.

b. This definition of NC is broader than the usual definition that allows only 0–1 functions.

c. Strictly speaking, we have to use the duals of the coordinate rings here.

d. Actually its dual; and similarly in FH[Perm].

### Figures

Figure 1. A geometric obstruction.

Figure 2. An ellipsoid.

Figure 3. Division in the world of lower bounds by the circle of self-reference.

©2012 ACM 0001-0782/12/0600 $10.00

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2012 ACM, Inc.
