Can we have overlooked short solutions to major problems?
Efim Geller was a Soviet chess grandmaster, author, and teacher. Between 1953 and 1973 he reached the late stages of contention for the world championship many times but was stopped short of a match for the title. The Italian-American grandmaster Fabiano Caruana was similarly stopped last week in the World Chess Federation Candidates Tournament in Moscow. He was beaten in the last round by Sergey Karjakin of Russia, who will challenge world champion Magnus Carlsen of Norway in a title match that is scheduled for November 11–30 in New York City.
Today we salute a famous move by Geller that was missed by an entire team of analysts preparing for the world championship in 1955, and ask how often similar things happen in mathematics and theory.
The 1955 Interzonal Tournament in Gothenburg, Sweden, included three players from Argentina: Miguel Najdorf, Oscar Panno, and Hermann Pilnik. In the fourteenth round they all had Black against the Soviets Paul Keres, Geller, and Boris Spassky, respectively. The Argentines all played a Sicilian Defense variation named for Najdorf and sprang a pawn sacrifice on move 9 that they knew would induce the Soviets to counter-sacrifice a Knight, leading to the following position after Black’s 12th move in all three games:
[Diagram: the position after Black’s 12th move in all three games; source: Chessgames.com]
As related by former US Champion Lubomir Kavalek, who contested four Interzonals between 1967 and 1987, Najdorf indecorously walked up to Geller after Panno had left his chair and declared,
“Your game is lost. We analyzed it all.”
Unfazed, Geller thought for thirty more minutes and improvised 13. Bb5!!, a shocking second sacrifice that the Argentines had not considered. The Bishop cannot be taken right away because White threatens to castle with check and soon mate. The unsuspected point is that after Black’s defensive Knight moves to the central post e5 and is challenged by White’s other Bishop moving to g3, the other Black Knight on b8 cannot reinforce it from c6 or d7 because the rogue Bishop can take it. The Bishop also X-rays the back-row square e8 which Black’s Queen could use.
Whereas Najdorf’s outburst was against decorum, no rule prevented Keres and Spassky from walking over, noticing, and “cribbing” Geller’s move. It is not known if they got it that way (some say Keres already knew the move), but both played it after twenty-plus more minutes of reflection. Panno perished ten moves later and the other Argentines were equally dead after failing to find the lone reply that lets Black live. Though there is strong indication that Keres noted the draw-saving reply 13…Rh7! then or shortly afterward, the first time it was ever played on a board was in the next Interzonal three years later, by the hand of Bobby Fischer.
The Easiest Riemann Proof?
Bernhard Riemann’s famous hypothesis has been in the news a lot recently. The second half of November saw one proof claim in Nigeria and another by Louis de Branges, who acts as a periodic function in this regard but has scored some other hits. We just covered some other news about the primes.
Then last week this paper came to our attention. It is titled, “A Direct Proof for Riemann Hypothesis Based on Jacobi Functional Equation and Schwarz Reflection Principle” by Xiang Liu, Ekaterina Rybachuk, and Fasheng Liu. The paper is short. My reaction from a quick look was,
It’s like saying that in an opening that champions have played for decades they all missed a mate in ten.
I must admit that I’ve spent more time thinking of a real missing mate-in-ten case in chess than probing the paper. The above is the closest famous case I could think of, and it wasn’t like the Argentines had been analyzing for the 157 years that Riemann has been open—they had only been doing it during the weeks of the tournament. Without taking time to find errors in the paper, let’s ask some existential questions:
Is it even possible to have missed so short a proof? What things like that have happened in mathematical history?
And even more existential, what is the easiest possible kind of proof of Riemann that we might not know about? This is a vastly different question from assessing Shinichi Mochizuki’s claimed proof of the ABC Conjecture. No one would be surprised at Riemann yielding to such complexity. Reader comments are welcome and invited.
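Before moving on, a concrete aside. The functional equation that such arguments lean on can at least be poked at numerically. Below is a minimal sketch in Python using the mpmath library; it checks the symmetric form $\xi(s) = \xi(1-s)$ of Riemann's functional equation (the identity that flows from the Jacobi theta relation) at a few sample points of our own choosing. Of course a numerical check of the identity says nothing about the hypothesis itself.

```python
# Minimal numerical check of the symmetric functional equation
# xi(s) = xi(1 - s), where xi is the completed zeta function
#   xi(s) = (1/2) * s * (s - 1) * pi^(-s/2) * Gamma(s/2) * zeta(s).
# This only illustrates the identity invoked by such arguments;
# it proves nothing about the Riemann Hypothesis.
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits

def xi(s):
    s = mp.mpc(s)
    return (mp.mpf('0.5') * s * (s - 1)
            * mp.power(mp.pi, -s / 2) * mp.gamma(s / 2) * mp.zeta(s))

# A few arbitrary sample points, off and on the critical line.
for s in [mp.mpc(0.3, 7.0), mp.mpc(2.5, -1.0), mp.mpc(0.5, 14.134725)]:
    print(s, abs(xi(s) - xi(1 - s)))  # each difference should be tiny, near working precision
```

The third sample point is near the first nontrivial zero; the check behaves the same on or off the critical line, which is exactly why it carries no evidential weight.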
P = NP Status and Gearshifts
Dick had already been intending to make an update post on what is going on with the famous $\mathsf{P}$ versus $\mathsf{NP}$ problem. We know that most, if not almost all, of our colleagues believe on clear principles that $\mathsf{P} \neq \mathsf{NP}$. One of us, Dick, has repeatedly argued that while they may indeed be different, it is not so clear. Recently Donald Knuth has voiced some opinions along those lines.
Major math problems do get solved from time to time. Rarely, however, do the solutions go “from zero to a hundred.” That is, there often are partial or intermediate results, like gearshifts as a car accelerates. For example, the famous Fermat’s Last Theorem was proved for many prime exponents until Andrew Wiles proved it for all cases, and Wiles built on promising advances by Ken Ribet and others. En passant, we congratulate Sir Andrew on winning this year’s Abel Prize.
The recent breakthrough on the Twin Prime Problem by Yitang Zhang is another brilliant example of partial progress. Although his step from prime gaps that were merely small relative to the average logarithmic spacing to a constant bound (initially a huge constant) was unexpected, it was obtained by pushing mostly-known techniques pedal-to-the-metal.
One might expect a similar situation with $\mathsf{P}$ and $\mathsf{NP}$. If they really are not equal, then perhaps we would be able first to prove that SAT requires super-linear time; then prove a higher bound; and finally prove that $\mathsf{P}$ and $\mathsf{NP}$ are not equal. Yet this seems not to be happening.
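In symbols, the hoped-for ladder (our own illustration of the gearshift idea, not a chain of known results) might read:

```latex
% An illustrative ladder of successively stronger statements;
% per the discussion below, even the first rung is not known today.
\mathrm{SAT} \notin \mathsf{DTIME}(O(n)), \qquad
\mathrm{SAT} \notin \mathsf{DTIME}(n^{2}), \qquad \ldots, \qquad
\mathrm{SAT} \notin \bigcup_{k} \mathsf{DTIME}(n^{k}) = \mathsf{P},
```

where the last statement is equivalent to $\mathsf{P} \neq \mathsf{NP}$ because SAT is $\mathsf{NP}$-complete.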
Results and Partial Challenges
There are two kinds of “results” to report on about $\mathsf{P}$ versus $\mathsf{NP}$. We just recently again mentioned Gerhard Woeginger’s page with a clearinghouse of over a hundred proof attempts.
On the $\mathsf{P} = \mathsf{NP}$ side, the usual idea is one that has been tried for decades. Take an $\mathsf{NP}$-complete problem, such as TSP, and supply an allegedly polynomial-time algorithm that solves it. Often the “algorithm” uses Linear Programming as a subroutine, but some do use other methods.
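To make the flavor concrete, here is a bare-bones sketch (our own illustration, not any particular claimed proof) of that first Linear Programming step for TSP: relax the tour to fractional edge variables subject only to degree constraints. The gap between this relaxation and an actual tour is exactly what the claimed algorithms must somehow close.

```python
# Sketch of the usual LP step in algorithmic P = NP attempts for TSP:
# choose each edge fractionally, insisting only that every city meet
# exactly two units of chosen edges.  The optimum is a lower bound on
# the best tour, but the solution may be fractional or split into
# disconnected subtours -- closing that gap is the whole difficulty.
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

# A small hypothetical 5-city instance with symmetric distances.
dist = np.array([
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
], dtype=float)

n = len(dist)
edges = list(combinations(range(n), 2))            # one variable x_e per edge
c = np.array([dist[i, j] for i, j in edges])       # minimize total edge length

# Degree constraints: each city is incident to exactly two chosen edges.
A_eq = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    A_eq[i, k] = 1.0
    A_eq[j, k] = 1.0
b_eq = np.full(n, 2.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * len(edges), method="highs")

print("LP lower bound on the optimal tour length:", res.fun)
for (i, j), x in zip(edges, res.x):
    if x > 1e-6:
        print(f"edge ({i},{j}) taken with weight {x:.2f}")
```

In general exponentially many subtour constraints would have to be added, and known lower bounds on extended formulations show that no polynomial-size LP expresses the TSP polytope exactly.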
There is the issue that certain barriers exemplified by this may prevent large classes of algorithms from succeeding. So we can say at least that a proof might have an intermediate stage of saying why certain barriers do not apply. Otherwise, however, a proof of $\mathsf{P} = \mathsf{NP}$ by algorithm is bound to be pretty direct. Plausibly it would have one new and pivotal algorithmic idea, one that might of itself furnish an explanation of why it was missed.
On the $\mathsf{P} \neq \mathsf{NP}$ side, however, there are several concrete intermediate challenges on which one should be able to demonstrate progress to support one’s belief.
A third of a century ago, Wolfgang Paul, Nick Pippenger, Endre Szemerédi, and William Trotter showed that for the standard multitape Turing machine model,

$$\mathsf{DTIME}(n) \subsetneq \mathsf{NTIME}(n).$$

This proves a sense in which guessing is more powerful than no guessing. Yet a result like

$$\mathsf{NTIME}(n) \not\subseteq \mathsf{DTIME}(n \log n)$$

appears hopeless. Nor have we succeeded in transferring the result to other natural machine models, such as Turing machines with one or more planar tapes.
How about proving that SAT cannot be done in time $O(n^c)$ and space $O(S(n))$ for particular fixed $c > 1$ and reasonable space functions $S(n)$? There do exist a few results of this kind; can they be extended? How come we cannot prove that SAT is not solvable in linear time? This, however, also seems hopeless today.
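For concreteness, the few known results of this kind (a line of work including Lipton and Viglas, Fortnow and van Melkebeek, and Ryan Williams) have, to the best of our recollection, the following shape:

```latex
% Known time-space lower bounds for SAT: no algorithm runs simultaneously
% in time n^{c-\epsilon} and subpolynomial space, for c up to about 1.80.
\text{For every } \epsilon > 0:\quad
\mathrm{SAT} \notin \mathsf{DTISP}\!\left(n^{\,c-\epsilon},\; n^{o(1)}\right),
\qquad c = 2\cos(\pi/7) \approx 1.8019 .
```

Pushing the exponent to 2, let alone beyond, already appears to require new ideas.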
A related example: can we prove that SAT needs Boolean circuits of size at least $cn$ for some concrete constant $c$, let alone super-linear circuits? Can we prove that some natural problems cannot be solved by quadratic or nearly-linear sized circuits of depth $O(\log n)$?
Open Problems
Is it worth making a clearinghouse—even more than a survey—of attempts on these intermediate challenges?
What are some possible kinds of mathematical proof elements that we might be missing?