P ≠ NP — A Philosophical Perspective

Is P equal to NP?

It has been called the biggest open question in theoretical computer science, with a formal proof carrying a prize of $1,000,000 from the Clay Mathematics Institute. What interest might this seemingly technical problem have for a philosopher or classicist?

After listening to Melvyn Bragg’s excellent In Our Time dialogue on the question, I have some philosophical intuitions about this topic I would like to develop. While philosophical thought has traditionally preceded mathematical expression, we now promote the advancement of mathematics unaccompanied by a wider metaphysical vision. But this need not be so. How might this mathematical problem be translated into a readily grasped ontological principle, captured in natural language?


The P vs. NP problem may be reduced to an investigation of two fundamental mathematical operations — solving problems and verifying solutions. In natural circumstances, making this distinction would seem very pedantic and unnecessary. To solve a problem is to see the verification of its solution. The effective treatment is proven by its cures. The correct strategy is proven by victory. To use an old proverb, the proof is in the pudding.

Yet computers do not enjoy (or, to put it positively, are not limited by) this immediate feedback loop between attempted solutions and achieved results. When programmed to solve problems involving massive computation, a computer will use an algorithm to try millions of candidate solutions before finding the correct one. A ‘smart’ algorithm may focus on the specific sets of numbers programmers have deemed more likely to yield a solution, but nonetheless it will only find an answer by mining a vast number of candidates.

The computer ‘knows’ it has solved the problem only when one of the candidates mined by the algorithm is confirmed as correct by a separate ‘verification’ algorithm. Once a verified solution is found, the program ends.
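As a minimal sketch of this split (using a toy number and my own illustrative function names, not anyone’s actual code), the ‘solving’ routine must grind through candidates, while the ‘verifying’ routine needs only a single multiplication:

```python
# Toy illustration of the solve/verify split, using integer factorization.

def solve(n):
    """Find a non-trivial factor pair of n by brute-force trial division."""
    for candidate in range(2, n):
        if n % candidate == 0:          # test one candidate after another
            return candidate, n // candidate
    return None                         # no non-trivial factors: n is prime

def verify(n, p, q):
    """Check a proposed factorization in a single step."""
    return p > 1 and q > 1 and p * q == n

n = 2021                                # a small toy semiprime (43 * 47)
pair = solve(n)                         # slow in general: a search through candidates
print(pair, verify(n, *pair))           # fast: one multiplication confirms the answer
```

The work done by the search grows with the size of the number being factored, while verification costs one multiplication no matter how the answer was found; it is this lopsidedness that the rest of the discussion turns on.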

Since these processes occur almost simultaneously from a human perspective, the average computer user does not consider how they are logically separated in a programmer’s code. Yet computer scientists have exploited the separation of these processes to create modern cryptography. The apparently effortless speed and efficiency of computing power breaks down when it is tested by very large numbers. For instance, a computer may be asked to find the prime factorization of RSA-300, a 300-digit number from the RSA Factoring Challenge.

No supercomputer has yet been able to offer a prime factorization for this number, despite the efforts of many laboratories. It is certainly well beyond the computational capability of any ordinary commercial device. Yet, if one day a solution were to be found, it would be very easy for a computer program to verify it by multiplying the proposed primes and recovering the original integer.

Thus, there appears to be a gap between the problems computers can efficiently solve (called the set P) and the problems they cannot efficiently solve but can efficiently check, or verify (called the set NP).

When you hold a cryptographic key, you basically hold the solution to a factorization problem nearly impossible for a computer to solve in a reasonable amount of time. The huge gap in efficiency between finding a solution and verifying it provides the basis for much of internet security. By multiplying prime numbers together into massive new products that cannot be efficiently factored, we can transfer information hidden inside a computational lock to which only we have the key.
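To make that asymmetry concrete, here is a toy sketch in the same spirit (the primes are tiny compared to those used in practice, and none of the machinery of a real cryptosystem such as RSA is included): building the ‘lock’ is one multiplication, while opening it without the key means searching.

```python
# Building the 'lock' is one cheap multiplication; recovering the primes from
# the product requires a search that grows rapidly with the size of the number.
# (Real cryptography uses primes well over a hundred digits long, and far more machinery.)

p, q = 10007, 10009                     # two small primes standing in for huge ones
lock = p * q                            # the lock: instant to build

def crack(n):
    """Recover the prime factors by trial division -- the expensive direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(lock)                             # 100160063
print(crack(lock))                      # (10007, 10009), after roughly 10,000 trial divisions
```

At this scale the search still finishes instantly, but doubling the number of digits in the primes roughly squares the number of candidates the naive search must consider, which is why the lock holds at cryptographic sizes.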

Now, in theory, algorithms could become efficient enough that NP problems collapse into the set P, so that P = NP. Nobody has been able to offer a decisive proof either way, though there is a broad consensus that any eventual proof will establish the negative, that P ≠ NP.


What does this mean in human terms?

First, it is interesting to note that, in this instance, the epistemological debate has moved away from the traditional problem of the possibility or limits of knowledge, and moved towards a new problem arising out of computer science — what is the efficiency of various forms of knowledge? In principle, we understand how to achieve the factorization of RSA-300, but the process is far beyond the computational capacity of any human being or human-designed computer.

NP problems would, in theory, be solved eventually if the algorithm were allowed to run indefinitely. We could also easily verify any factors suggested as an answer. The factors of RSA-300 are unknown only because of their sheer size, which overwhelms our ability to apply basic mathematical principles. This shift moves us from the classical epistemological problem of what knowledge is possible towards the computational problem of what knowledge is achievable.

Yet, crucially, this failure of efficiency occurs only in a very specific realm, the calculation of very large numbers. The science of the 20th century has taught us to use caution when transferring principles between macroscopic and microscopic frames of reference; in a similar shift of scale, what is true in classical mechanics is not true in quantum mechanics.

The size of the large primes forces us to move upward in our basic mathematical units, to move from numbers themselves as objects of interest towards the algorithms we use to factor their enormous products. In traditional mathematics, we operate directly upon numbers with the various tools of arithmetic, while the algorithmic math of computer science operates upon the operators. We are not trying to solve a problem in principle, as in pure math, but to test the processes by which a machine may solve the problem quickly. Our ability to efficiently find the proverbial needle in a haystack has become the central question of knowledge.

Beginning with Plato’s eternal forms, mathematical truth has always been taken as the quintessential example of timeless knowledge. Yet this atemporal approach to mathematical knowledge always referred to the results of mathematical inquiry, rather than the process. If we discover the factorization of RSA-300 after running an algorithm for three millennia, that solution will be timeless, but surely we cannot ignore the method by which it was found. In fact, such a solution may contribute nothing to our ability to solve similar problems for similar numbers, so that no general principle has been found.

It is as if mathematics can now be seen through an inverse mirror. The millions of timeless, eternal, logically unassailable solutions to large factorization problems mean nothing — it is only time, represented in the mathematical “efficiency” of the algorithms, that matters. If we were to state the P vs. NP problem as a general philosophy, it may be as follows — truth endures eternally, but knowledge of it is achieved only momentarily.

One may object that mathematics has always been about discovering how to do things more quickly, and that mathematical abstraction itself removes the clumsy necessity of beginning every enterprise de novo. This may be true, but the discussion has not been central to the philosophy of mathematics, which has focused more on the ontological status of mathematical objects. As the computer scientist Scott Aaronson suggests, the nature of computation may, in and of itself, be a much more philosophically relevant topic than philosophers realize.


But we should not stop here. What can we learn about reality in general by looking at computation itself, at the differences in principle between multiplication and division?

Let’s take a more metaphorical look at the factorization problem. By multiplying very large prime numbers together, cryptographers are able to generate a secure ‘space’ for information. The large products cannot be divided back into their component parts nearly as easily as they were generated. This newly generated information space sits beyond any reasonably efficient attempt to divide it back down into pieces. We might say that multiplication is a creative, generative process, while division is an attempt to limit and control the information space that multiplication yielded. For sufficiently large numbers, division will never catch up to multiplication so long as factoring remains intractable, as a proof that P ≠ NP would suggest.

If multiplication creates a whole and division reduces it to parts, it seems that the whole is stronger than the part. Here we return to a familiar theme in the philosophical tradition. Philosophers have long held that the whole is greater than the part, and a philosophical study of computation reveals one sense in which this may be immediately, dramatically true. While it may be possible to divide all the RSA numbers in theory, it will require far more effort than it did to generate them. Any two large primes may be quickly and effortlessly multiplied to create a number which is, in its resistance to division, almost self-preserving.

This fairly obscure, technical mechanism of cryptography can extend into wider domains of inquiry. Science is based on measurement, and measurement depends upon division. Division, as a process, is closely associated with the progress of modern science. When science is critiqued, it is critiqued precisely for a poor application of division, for “reducing” wholes to incoherent parts in a crude reductionism.

Appropriately, one of the most important achievements of science in the 20th century was a monumental feat of division, “splitting” the atom. As the factorization problem would predict, this division was extremely difficult and inefficient, requiring great amounts of concentrated force. Atoms are like the large products of primes, capable of being split by deliberate effort, but otherwise existing in a basically secure state as a unified whole.

This wider sense of what it means to be an atom goes back to Greek atomic theory. The word atom itself comes from ἄτομος, ‘unable to be split’. In this sense, atoms are not really atoms, as they are able to be split. Nor are subatomic particles true atoms, as they can also be split. The ‘atomic’ state is rather a kind of provisional wholeness analogous to the mathematical status of ultra-large products of primes. An entity may be said to be atomic to the extent that its components (its divisors, mathematically speaking) are bound into it without being easily split back out into separate parts (factorization).


When we conceive of wholeness as a wider property of all being, as an enduring ontological/mathematical resistance to division, we have a new perspective on the problem of parts and wholes which has plagued philosophy since its inception. Wholeness is not absolute, but relative to the efficiency of division. 

And because division will always be less efficient than the generation of new wholes (we can always multiply large primes together more easily than we can crack the code and divide them), wholeness becomes an ontological space, operating on its own terms, defined by its own properties, composed of yet effectively beyond its component parts.

The idea of secured internet ‘space’ derives from the enduring power of the whole over its parts. For society to evolve along with its dominant technology, it should adopt a view of itself and of nature equally respectful of the unique reality of wholes, which have a distinct ontological status even if we abandon old Greek and German attempts at a fully transcendental monism. An epistemology comprised solely of discrete division and analysis should be seen as an ineffective ‘hack’ of holistic thought, which is the product of a productive, generative view of the world — the product of creativity and synthesis.
