Complexity as Proof

Complexity is often taught and discussed within computer science as a method of evaluating the performance of algorithms, data structures, and, more generally, programs. Undergrads and competitive programmers learn how the resource usage of common algorithms scales, how they compare to one another, and how to determine the bounds of new algorithms, and once they can pass the software engineering interview the work is considered done.

A notion that is less commonly explored is the original intention and use of the concept of complexity: the fact that complexity is a fundamental mathematical property of an object, not a yardstick. This means that by reasoning about the complexity of an algorithm or string, we can prove further properties about its behaviour. To illustrate this concept, we will explore a simple intuitive explanation of Gregory Chaitin's proof that there are infinitely many primes, which is really cool and elucidating.

Complexity of Integers

Consider positive integers. As they increase in size, by how much does the spatial complexity of the integer increase? A simple visual way to reason about this is to consider how many characters a string representing the number needs to have. This is analogous to the amount of information needed to completely represent that number. For example, in base 10, all numbers lower than 10 require a single digit: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Numbers lower than 100 require two: 10, 11, 12... We can express this mathematically: take the base-10 logarithm of the integer, round it down, and add 1 - that is, floor(log10(n)) + 1 - and you have the number of characters you need to represent that value in base 10. This applies across all bases, and we can say that in general the representational complexity, or spatial complexity, of any integer n is on the order O(log(n)).
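As a quick sanity check, here is a minimal Python sketch (the function name is my own) comparing the length of an integer's decimal string against the floor(log10(n)) + 1 formula:

```python
import math

def digits_base10(n: int) -> int:
    """Number of characters needed to write a positive integer in base 10."""
    return len(str(n))

for n in [1, 9, 10, 99, 100, 4294967296]:
    formula = math.floor(math.log10(n)) + 1  # floor(log10(n)) + 1
    print(n, digits_base10(n), formula)      # the two counts agree
```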

Importantly, this is the upper bound - there are some exceptions which we can represent in a much more compressed form. For example, every power of 2 can be represented by its exponent, so that 4294967296 can be written as 2^32, using the 2 characters of the exponent instead of the 10 digits of the value. In general, writing a power of 2 in this form requires a complexity of O(log(log2(n))), which is O(log(log(n))).
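A sketch of that compression, assuming we only ever need to store exact powers of 2:

```python
def compress_pow2(n: int) -> str:
    """Encode a power of 2 by its exponent alone (assumes n == 2**k)."""
    k = n.bit_length() - 1        # if n == 2**k, then n has k+1 bits
    assert 1 << k == n, "n must be an exact power of 2"
    return str(k)

n = 2**32                                   # 4294967296: 10 digits in base 10
print(len(str(n)), len(compress_pow2(n)))   # 10 characters vs 2
```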

However, if we don't want to only represent the powers of 2, our pow2 compression isn't very helpful. There will always be integers that require the full log(n) characters to represent.

To illustrate this, let's reverse the question. Consider a number written in base 2. Say you have a string of 0s of any length, and I tell you I have flipped a random bit to a 1. How difficult is it to find that bit in the string? This is the same question as asking you to find the value of the string, since you need to know where the 1 bit is to calculate the value. In the worst case, the 1 bit is the last bit you check, and therefore to know the value of the binary number you need to check every digit. This is even clearer when we flip a random number of bits at random positions - to be sure that you have not missed any 1 bits, you need to check every bit in the string.

Given this, saying that m bits can represent up to 2^m integers is another way of saying that we require at least log2(n) bits to be able to accurately represent any integer up to n. When using higher bases, where each symbol carries more than 1 bit of information, we still require at least O(log(n)) units of information, such as numeric symbols.
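Python exposes this minimum directly as int.bit_length(), which returns floor(log2(n)) + 1 for positive n:

```python
# m bits can distinguish the 2**m integers 0 .. 2**m - 1, so a
# positive integer n needs floor(log2(n)) + 1 bits - no fewer.
for n in [1, 2, 255, 256, 4294967296]:
    print(n, n.bit_length())  # prints 1, 2, 8, 9, 33
```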

Complexity-based Proof for Infinite Primes

Now we've begun the fundamental shift from thinking about complexity as a way of analysing the relative cost of using a data structure or function to thinking about it as a property we can leverage to prove things about the behaviour of the underlying object.

If we can find two representations that are equivalent (they can represent the same set), they must have the same spatial complexity. For example, we have already implicitly shown that the complexity of integers written in base 2 is the same as that of integers written in base 10. The number of digits required in base 2 is on the order of O(log2(n)) and in base 10 it is O(log10(n)), but logarithms in different bases differ only by a constant factor, so both are O(log(n)) - and both representations can describe any positive integer.
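A quick illustration of that constant factor: the ratio between the base-2 and base-10 lengths of an integer tends towards log2(10) ≈ 3.32, which big-O notation absorbs.

```python
for n in [10**3, 10**12, 10**100]:
    bits = n.bit_length()    # ~ log2(n)
    digits = len(str(n))     # ~ log10(n)
    print(digits, bits, round(bits / digits, 2))  # ratio approaches ~3.32
```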

Another way we can represent any positive integer is using prime factorisation. This is taken for granted here, but make sure you believe that this is the case before continuing. If any integer can be written as a product of its prime factors, then any fact about complexity that we have shown must apply to this representation as well.
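Here is a minimal sketch of that representation (trial division, function names are mine): an integer becomes a map from primes to exponents, and can be rebuilt exactly.

```python
def factorise(n: int) -> dict[int, int]:
    """Prime factorisation of n > 1 as {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                   # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def rebuild(factors: dict[int, int]) -> int:
    out = 1
    for p, e in factors.items():
        out *= p ** e
    return out

print(factorise(360))           # {2: 3, 3: 2, 5: 1}
print(rebuild(factorise(360)))  # 360
```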

We notice that this is a kind of expansion of our earlier power of 2 compression, but instead of limiting ourselves to only 2, we can use any prime number, and therefore can represent many more numbers than we could with only 2 and an exponent.

Now let's assume there are only finitely many primes. How does this affect the spatial complexity of the representation? Let's call the "final" prime X, denote the prime factorisation of an integer n as p(n), and follow how the complexity grows as n increases towards and away from X. As we increase n towards X, the spatial complexity of p(n) scales as expected - in order to represent an integer of value n, we require on the order of O(log(n)) units of information.

However, as we move beyond X and "run out" of new prime numbers while n hurtles towards infinity, the complexity of p(n) scales much more slowly than our digit representations of n. Every further number can be described as a product of powers of a finite set of primes. To show exactly how much more slowly p(n) scales, assume that X, the largest prime, is 3. Similarly to the pow2 compression earlier, any number we can represent using products of these primes can be written as 2^a * 3^b. In other words, only a and b scale with n, and a and b are exponents, so the total complexity is on the order O(log(log2(n)) + log(log3(n))). What is crucial is that we are now making the additional claim that any integer n can be represented with just these exponents a and b. Expanding this to many prime factors, with k prime numbers, p(n) only requires a complexity of O(k * log(log(n))) for positive integers.
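To see the gap concretely, here is a sketch set in this imaginary world where 2 and 3 are the only primes, comparing the digit length of n = 2^a * 3^b with the length of its exponent pair (a, b):

```python
for a, b in [(10, 10), (100, 100), (1000, 1000)]:
    n = 2**a * 3**b
    digit_len = len(str(n))               # grows like log(n)
    pair_len = len(str(a)) + len(str(b))  # grows like log(log(n))
    print(digit_len, pair_len)            # e.g. 779 digits vs 8 characters
```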

If this is true, we have a crazy powerful compression algorithm that can represent any integer with a much less spatially complex representation. However, as we explored above, a representational complexity of O(log(n)) for all integers is not simply nice to have, it is a fundamental requirement. If we have less information than this, there will always be integers we cannot represent - and therefore, since finitely many primes would give a representational complexity of only O(log(log(n))), there must be infinitely many primes.
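The same contradiction can be seen by counting. If 2 and 3 were the only primes, the numbers of the form 2^a * 3^b would have to cover every integer, but they thin out dramatically - a sketch:

```python
def count_smooth(limit: int) -> int:
    """Count integers of the form 2**a * 3**b that are <= limit."""
    count, p2 = 0, 1
    while p2 <= limit:
        p3 = p2
        while p3 <= limit:
            count += 1
            p3 *= 3
        p2 *= 2
    return count

for limit in [10**3, 10**6, 10**9]:
    # grows like log(limit)**2, nowhere near the limit integers it must cover
    print(limit, count_smooth(limit))
```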

Further Reading

Naturally, Chaitin's actual proof is much more rigorous than the above (because the above is not rigorous at all). You can read a brief description of the formal rendition here. The book Metamaths referenced at that link is where I first found the proof, and is highly recommended for the original mathematical (read: real computer science) perspective on complexity applied to everything, and even more so for the feeling of the force of the creative process. Chaitin's book is best enjoyed and appreciated under the playful assumption that he is a lunatic in the most free and wonderful sense of the word, which surely all truly creative and colourful men must be.