Channel: Riemann Hypothesis – Gödel’s Lost Letter and P=NP

The Depth Of The Möbius Function



A striking connection between complexity theory and number theory

Peter Sarnak is an eminent number theorist, who has appointments at both the Princeton University Mathematics Department and the Institute for Advanced Study. He is also an editor of the Annals of Mathematics, one of the most prestigious journals in all of mathematics. And he is an expert on the Riemann Hypothesis.

Today I plan on discussing a recent post by Gil Kalai on a conjecture of Peter’s about the Möbius function.

Gil’s post, on his terrific blog, is based on a talk Peter gave on connections between number theory and complexity theory. You could skip the rest of this and just read Gil’s post: he probably states the ideas more clearly than I will. But I have my own take, and perhaps you should read both. Gil thinks Peter’s conjecture looks attackable, and I agree. If you solve it, please thank both of us. Alternatively, pick a random large integer {n}. If {n} is square-free, meaning that no prime squared divides {n}, then thank him; else thank me.

The Möbius Function

The conjecture is about the Möbius function which encodes information about the prime number structure of a natural number. The function is denoted by {\mu(n)} and takes on only values in the set {\{-1, 0, +1 \}}. It is defined by:

  1. Define {\mu(n) = 1}, if {n} is a square-free positive integer with an even number of prime factors.
  2. Define {\mu(n) = -1}, if {n} is a square-free positive integer with an odd number of prime factors.
  3. Define {\mu(n) = 0}, if {n} is not square-free.

Some examples: {\mu(1)=1}, {\mu(p)=-1}, and {\mu(p^2) = 0} for any prime {p}; and

\displaystyle  \mu(30) = -1,

since {30 = 2 \cdot 3 \cdot 5}.
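To make the definition concrete, here is a short Python sketch (the function name and structure are my own, not from the post) that computes {\mu(n)} by trial division:

```python
def mobius(n):
    """Compute the Moebius function mu(n) by trial division."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    num_prime_factors = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:       # d^2 divided the original n: not square-free
                return 0
            num_prime_factors += 1
        d += 1
    if n > 1:                    # one prime factor larger than sqrt(n) remains
        num_prime_factors += 1
    return -1 if num_prime_factors % 2 else 1

print([mobius(n) for n in (1, 2, 3, 4, 30, 49)])  # [1, -1, -1, 0, -1, 0]
```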

There are more-general kinds of Möbius functions that arise from partial orders. The above classic one actually arises from the partial order {x \prec y} if {x} divides {y}.

Why Is It Important?

The Möbius function clearly encodes the multiplicative structure of an integer, but that does not explain its great importance. In particular, why is it the right choice to make {\mu(k)} zero for numbers {k} that have a repeated prime factor? There are many reasons why the current definition is the right one. Here are two basic ones:

{\bullet } Prime Number Theorem: This is the statement that

\displaystyle  \pi(x) = \frac{x}{\ln x} + o(x/\ln x).

Here {\pi(x)} is the number of primes at most {x}. There are many equivalent versions of this key theorem; one of them is the statement:

\displaystyle  \sum_{k \le x} \mu(k) = o(x).

{\bullet } Riemann Hypothesis: This is the statement that the nontrivial zeroes of the Riemann zeta function all have real part 1/2. It is the premier open problem in number theory, and perhaps in all of mathematics. Again there are many equivalent versions; one of them is the statement:

\displaystyle  \sum_{k \le x} \mu(k) = O(x^{1/2+\epsilon}),

for any {\epsilon>0}. Note the gap between what is known and what is believed: the sum

\displaystyle  \sum_{k \le x} \mu(k)

is known to be {o(x)}, but is believed to be much smaller: just above the square root of {x}.
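One can watch this gap numerically. The sketch below (my own code, not from the post) sieves {\mu(k)} up to a bound and prints the partial sums, often called the Mertens function {M(x)}, next to {\sqrt{x}}:

```python
import math

def mobius_sieve(limit):
    """Sieve mu(k) for all 1 <= k <= limit."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):
                mu[m] = -mu[m]              # one more distinct prime factor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0                   # p^2 divides m: not square-free
    return mu

N = 10**5
mu = mobius_sieve(N)
mertens = [0] * (N + 1)                     # mertens[x] = sum of mu(k) for k <= x
for k in range(1, N + 1):
    mertens[k] = mertens[k - 1] + mu[k]

for x in (10**2, 10**3, 10**4, 10**5):
    print(f"M({x}) = {mertens[x]:6d}   vs  sqrt(x) ~ {math.isqrt(x)}")
```

The printed values of {M(x)} stay tiny compared with {x}, consistent with the square-root-size belief.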

One intuition about the Möbius function is that it behaves like a random variable: its values seem very erratic and unpredictable. This intuition, I believe, underlies the belief that the sum

\displaystyle  \sum_{k \le x} \mu(k)

grows much more slowly than {x}. If the values were truly random (they are not, of course, but imagine that they were), then the sum would be close to the square root of {x}; this would be a theorem. Of course the values of {\mu(k)} are not random, since they satisfy many interesting properties. For example

\displaystyle  \mu(a \cdot b) = \mu(a) \cdot \mu(b)

provided {a} and {b} are relatively prime. Another important property of the Möbius function is:

\displaystyle  \sum_{k | x} \mu(k) = 0,

for {x>1}.
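Both properties are easy to check by brute force. The following sketch (the helper is my own trial-division {\mu}, as above) verifies them on small ranges:

```python
from math import gcd

def mobius(n):
    """mu(n) by trial division."""
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

# Multiplicativity: mu(a*b) = mu(a)*mu(b) whenever gcd(a, b) = 1.
for a in range(1, 60):
    for b in range(1, 60):
        if gcd(a, b) == 1:
            assert mobius(a * b) == mobius(a) * mobius(b)

# Divisor sums: sum of mu(k) over all divisors k of x is 0 for every x > 1.
for x in range(2, 300):
    assert sum(mobius(k) for k in range(1, x + 1) if x % k == 0) == 0

print("both identities hold on the test ranges")
```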

But in any event, a good intuition is that {\mu(n)} behaves in a very erratic and unpredictable manner. This reasoning, I believe, is behind Peter’s conjecture.

The Conjecture

Peter’s Conjecture is the following: if {f(k)} is a simple function, then the sum

\displaystyle  \sum_{k \le x} \mu(k) \cdot f(k) = o(x).

Gil suggests that we make “simple” precise by using complexity theory. Think of {k} as written in binary

\displaystyle  k = 2^{n-1}k_{n-1} + \dots + 2k_1 + k_0.

Then insist that {f(k)} is defined by a Boolean function

\displaystyle  f^*(k_{n-1},\dots,k_1,k_0)

which is in {\mathsf{AC^0}}. What we mean is that for each {n}, the value of {f(k)} on {n}-bit binary numbers is computable by a constant-depth, polynomial-size circuit.

Let’s agree to call this the Sarnak-Kalai Conjecture (SKC). The conjecture is plausible, especially if one believes that the Möbius function is essentially random. The conjecture is deep, since it includes the Prime Number Theorem as a special case: just let {f(k)=1} for all {k}.

Even a tiny fraction of the power of {\mathsf{AC^0}} yields interesting results: for example, SKC implies that

\displaystyle  \sum_{k \le x \text{ and } k \equiv 3 \bmod 32} \mu(k) = o(x).

This seems non-trivial to me.
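The congruence {k \equiv 3 \bmod 32} depends only on the five low-order bits of {k}, so its indicator function is certainly in {\mathsf{AC^0}}. A quick numerical sketch (my own code, reusing the trial-division {\mu} from above) of the corresponding sum:

```python
def mobius(n):
    """mu(n) by trial division."""
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def f(k):
    """Indicator of k = 3 (mod 32): reads only the 5 low-order bits of k."""
    return 1 if k % 32 == 3 else 0

for x in (10**3, 10**4, 10**5):
    s = sum(mobius(k) for k in range(1, x + 1) if f(k))
    print(f"x = {x}:  sum = {s:5d},  sum/x = {s / x:+.5f}")
```

The ratio sum/x shrinks toward zero as {x} grows, as SKC predicts.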

SKC seems possible to solve, in the sense that there may be enough known about {\mathsf{AC^0}} to resolve it. But I do not see it as a trivial conjecture. It seems like a lovely conjecture that David Hilbert would have liked; recall that he once said:

{\dots} I should still more demand for a mathematical problem if it is to be perfect; for what is clear and easily comprehended attracts, the complicated repels us.

Factoring

Factoring is my favorite problem, after {\mathsf{SAT}}, and after graph isomorphism, and after factoring. Wait, that is a loop. Well, I like them all.

It is interesting that Peter does not believe the stronger version of his conjecture:

\displaystyle  \sum_{k \le x} \mu(k)\cdot f(k) = o(x),

where {f(k)} is any polynomial-time computable function. The reason is simple: he thinks that factoring is in polynomial time. It is good to know that I am not alone in this belief; it is pretty neat that a great number theorist also believes that factoring is in polynomial time. If it is, then {f(k)} could be {\mu(k)} itself, and the sum would become

\displaystyle  \sum_{k \le x} \mu(k)^2,

which is the number of square-free numbers below {x}. This has long been known to be {\Omega(x)}: in fact it is {\frac{6}{\pi^2}x + o(x)}. So if factoring is in polynomial time, then the allowed functions {f(k)} must be restricted to a complexity class below polynomial time. This implies the following simple theorem:

Theorem: If the complexity class {\mathcal F} satisfies Peter’s Conjecture, then factoring is not in the class {\mathcal F}.

I find this connection quite interesting.
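The density {\frac{6}{\pi^2} \approx 0.6079} is easy to check empirically. A small sketch (my own code) counts square-free numbers up to {x} and compares against the predicted count:

```python
import math

def is_squarefree(n):
    """True when no prime squared divides n."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        if n % d == 0:       # remove the single factor of d (d^2 does not divide n)
            n //= d
        d += 1
    return True

x = 10**4
count = sum(1 for n in range(1, x + 1) if is_squarefree(n))
print(count, round(6 / math.pi**2 * x))   # the count should track (6/pi^2) * x
```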

Open Problems

Solve the SKC. Or, failing that, at least show that some non-trivial subclass of functions satisfies the conjecture. I note that this conjecture seems related to work of Eric Allender, Michael Saks, and Igor Shparlinski that I have previously discussed here.

[fixed typos noted by first two commenters]


