Channel: Riemann Hypothesis – Gödel’s Lost Letter and P=NP

The Specter of Simple(r) Proofs



When has a strikingly simple proof come first?

Cropped from London Times 2017 source

Michael Atiyah is giving a lecture next Monday morning at the Heidelberg Laureate Forum (HLF). It is titled, simply, “The Riemann Hypothesis.” An unsourced image of his abstract says what his title does not: that he is claiming not only a proof of Riemann but a “simple proof using a radically new approach.”

Today we discuss cases where theorems had radically simpler proofs than were first contemplated.

Sir Michael’s talk is at 9:45am Central European time, which is 3:45am Eastern. It will be live-streamed from the HLF site and will appear later on HLF’s YouTube channel. We have not found any other hard information. The titles of all talks are clickable on the program, but abstracts are not yet posted there.

Preceding him from 9:00 to 9:45am, in the opening talk of the week’s proceedings, is John Hopcroft, whom we know so well. If you’ve heard the expression “a hard act to follow,” John has the opposite challenge. He is speaking on “An Introduction to AI and Deep Learning.”

Although all speakers are past winners of major prizes—Abel, Fields, Nevanlinna, Turing, and the ACM Prize in Computing—the forum is oriented forward to inspire young researchers. Our very own graduate Michael Wehar received support to attend in 2016, and one of this year’s young attendees is John Urschel, whom we profiled here.

Simpler Proofs

Naturally, in a blog named for Kurt Gödel we should lead with him as an example. There is an often-overlooked component to the title of his famous paper on incompleteness theorems in logic:

Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I.

The component does not need to be translated. It is the “I” at the end. It does not mean “System 1”—it means that this is Part One of a longer intended paper. Gödel wrote the following at the very end on page 26—OK, now we will translate:

In this work we’ve essentially limited attention to System {P} and have only hinted at how it applies to other systems. The results in full generality will be enunciated and proved in the next installment. That work will also give a fully detailed presentation of Theorem 11, whose proof was only sketched here.

Gödel’s “Part II” was slated to stretch over 100 pages. It never appeared because the essence of Gödel’s argument was perceived and accepted quickly, details notwithstanding. Once Alan Turing’s theory of computation emerged six years later, it became possible to convey the essence in just one page:

  1. There are explicitly-defined sets {D} that are not computably enumerable.
  2. For any effective formal system, the set {E} of cases {x} such that it proves “{x \in D}” is computably enumerable.
  3. Hence {E} cannot equal {D}.
  4. If {E} contains a case {y} that is not in {D}, then the system proves “{y \in D}” but that is false.
  5. Thus if the system is sound, {E} must be a proper subset of {D}.
  6. Then any {z \in D \setminus E} yields a statement “{z \in D}” that is true but not provable in the system.
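The heart of steps 1–3 is Cantor-style diagonalization: the diagonal set {D} disagrees with the {i}-th listed set at index {i}, so it cannot appear anywhere in the list. Here is a toy Python sketch (not from the original post, and of course only over finite sets, whereas the real argument ranges over all computably enumerable sets):

```python
def diagonal(sets):
    """Given a list of sets S_0, S_1, ..., return D = { i : i not in S_i }.

    By construction, D disagrees with each S_i on membership of i,
    so D cannot equal any set in the list -- the analogue of step 3.
    """
    return {i for i in range(len(sets)) if i not in sets[i]}

# A toy "enumeration" of sets:
sets = [{0, 1}, set(), {2, 3}, {0, 3}]
D = diagonal(sets)

# D differs from every sets[i] at index i, hence D is not in the list.
assert all((i in D) != (i in sets[i]) for i in range(len(sets)))
```

In the real argument the list is an effective enumeration of all computably enumerable sets, so {D} is explicitly defined yet not computably enumerable, which is exactly what step 1 asserts.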

The condition that the system is sound is technically stronger than needed, but Gödel didn’t optimize his conditions either—Barkley Rosser found a simple trick by which it suffices that the system be consistent. The Turing-based rollout can be optimized similarly. For one, we can define {D} in such a way that the falseness of “{y \in D}” has a finite demonstration, and can specify “effective” so that exceptional cases {z} are computed. For a fully sharp treatment see pages 34 and 35 of these notes by Arun Debray—scribing Ryan Williams—which I am using this term.

The relevant point of comparison is that Gödel’s essence lay unperceived for decades while the leading lights believed in the full mechanizability of mathematics. Had it slept five years more, the lectures by Max Newman that Turing attended in the spring of 1935 might have ended not with Gödel but with more on the effort to build computing machines—which Newman later joined. See how David Hilbert, more than Gödel, animates Andrew Hodges’s description of Turing’s machines on pages 96–107 of his famous biography, and see what Hodges writes in his second paragraph here. The lack of Gödel might have held Alonzo Church back more.

Then Turing would have burst out with a “simple” refutation of Hilbert’s programme via “a radically new approach.” Equating “simple” with “one page” as above is unfair—Turing computability needs development that includes Gödel’s pioneering idea of encoding programs and logical formulas. But the analogy with Atiyah still works insofar as his use of “simple” is evidently based on the following works:

  • A 1954 paper on algebraic varieties by Friedrich Hirzebruch;
  • Paul Dirac’s seminal 1928 paper, “The Quantum Theory of the Electron”; and
  • A 1936 paper by John von Neumann, “On an algebraic generalization of the quantum mechanical formalism (Part I).” (This “Part I” was actually a “Part II” to an earlier paper by von Neumann with two others.)

We have on several occasions noted the appreciation that methods originating in quantum computing have gained wide application in areas of complexity theory that seem wholly apart from quantum. The quantum flow may run even deeper than anyone has thought.

Other Examples

{\bullet} Abel-Ruffini Theorem. Whether fifth-degree polynomial equations have a closed-form solution had been open far longer than the Riemann Hypothesis has been. Paolo Ruffini in 1799 circulated a proof manuscript that ran over 500 pages and yet was incomplete. Niels Abel, whom we just mentioned, in 1824 gave a full sketch in only six pages, which was later expanded to twenty. This was still six years before the nature of the impossibility was enduringly expounded by Évariste Galois.

{\bullet} Lasker-Noether Theorem. Emanuel Lasker was not only the world chess champion but also a PhD graduate in 1900 of Max Noether, who was Emmy Noether’s father. In 1905, Lasker published a 97-page paper proving the primary decomposition theorem for polynomial ideals. Sixteen years later, Emmy Noether proved a more powerful theorem in a paper of 43 pages. But as Wikipedia’s list of long proofs states,

Lasker’s [proof] has since been simplified: modern proofs are less than a page long.

We invite readers to suggest further favorite examples. This MathOverflow thread lists many cases of finding shorter proofs, but we want to distinguish cases where the later simpler proof ushered in a powerful new theory.

The flip side is when the simpler proof comes first. These are harder to judge, especially in regard to whether longer “elementary” methods could have succeeded. The Prime Number Theorem could be regarded as an example in that the original analytic proofs are elegant and came with the great 19th-century tide of applying analysis to problems in number theory.

Open Problems

Of course the main open problem will be whether Sir Michael’s claims are correct. Even if he walks them back to saying just that he has a new approach, its viability will merit further investigation.

Coincidentally, we note today’s—now yesterday’s—Quanta Magazine article on mounting doubt about the correctness of Shinichi Mochizuki’s claimed 500-page proof of the ABC Conjecture. The challenge comes from Peter Scholze of Bonn and Jacob Stix of Frankfurt. Luboš Motl today has coverage of both stories. Scholze, a newly minted Fields Medalist, will also attend in Heidelberg next week.

Updates 9/21: Stories by IFLScience! (which I neglected to link above) and by the New Scientist. The latter has new quotes by Atiyah. John Cook has more details about both Riemann and ABC. A MathOverflow thread has been started.

