Saturday, May 2, 2009

The non-Gödel papers

As you probably know, I think theory is in a deep crisis: our obsession with turning into philosophy (providing unappreciated service to disciplines we do not understand), stemming from our deep commitment to become as ignorant and irrelevant as possible inside the CS department; the influx of people who like to pretend we're Mathematics, and have no idea what computation means; the obsession with the "polynomial" versus "non-polynomial" and its many consequences; etc. etc. etc. Since I am not inclined to write novels, I have never been able to put all these arguments into one coherent text.


Fortunately, every once in a while, the ball is raised at the net and I can say something short that expresses my feelings. Richard Lipton talks about the Gödel prize, and some papers that could also have won it. His examples are all in complexity, which is reasonable since he's a complexity theorist. But looking at the list of winners on Wikipedia, what it lacks most is algorithms — you know, papers that try to understand how fast some problems can be solved (the thing non-theorists believe we're studying).

Here is my own "forgotten list," from which I am certain to have forgotten some papers. Please add your favorite examples in the comments — but note that I am deliberately discarding old papers that were ineligible, like splay trees, classic flow algorithms, persistence, etc.

In no particular order:
  • [Fredman-Willard] on the fusion tree. This kick-started a major field (integer search) and directly led to at least two dozen follow-up papers in STOC-FOCS-SODA. Candidates for the Gödel Prize do not get any better.

    This paper has passed its eligibility period, unfortunately. Another good candidate in this line of work would be [Thorup] solving undirected shortest paths in linear time.

  • Karger's papers on mincut, which have been very influential in the use of randomization in graph algorithms.

  • Clarkson's introduction of randomization to geometry (thanks, Suresh!).

  • Chazelle's series on "Lower Bounds for Orthogonal Range Searching" is a remarkable contribution. However, Chazelle should probably just win the Knuth Prize, since his contributions are too varied (MST, fractional cascading, triangulation, etc.).

  • the power of two choices is an influential and popular idea (see the toy simulation after this list). I'm not sure who should get the award, though; as with PCPs, it can perhaps be shared by a couple of papers.

  • dynamic graph algorithms are a broad, deep, and popular research topic. [Henzinger-King] and [Holm-de Lichtenberg-Thorup] could win a joint award for polylogarithmic connectivity. These papers basically restarted the whole area and led to maybe two dozen publications in STOC-FOCS-SODA. The latter is also a popular data structure to teach, for its basic beauty.

    Another candidate pair for a joint award would be [Demetrescu-Italiano] and [Sankowski], which give remarkable solutions to all-pairs shortest paths and transitive closure.
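
To make the two-choices idea concrete, here is a toy simulation (my own sketch, not code from any of the papers above): each of n balls samples d bins uniformly at random and goes into the least loaded of them. With d = 1 the maximum load is Theta(log n / log log n) with high probability; with d = 2 it drops to O(log log n), an exponential improvement bought with one extra random probe.

    import random

    def max_load(n_balls, n_bins, d, rng):
        """Throw n_balls into n_bins; each ball samples d bins uniformly
        at random and is placed in the least loaded of them. Return the
        maximum load over all bins."""
        load = [0] * n_bins
        for _ in range(n_balls):
            choices = [rng.randrange(n_bins) for _ in range(d)]
            best = min(choices, key=lambda b: load[b])
            load[best] += 1
        return max(load)

    rng = random.Random(42)
    n = 100000
    print("d = 1:", max_load(n, n, 1, rng))  # Theta(log n / log log n) whp
    print("d = 2:", max_load(n, n, 2, rng))  # O(log log n) whp

A typical run prints a maximum load around 8 for d = 1 and around 4 for d = 2.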

Add your own favorite examples in the comments.

So, am I claiming an anti-algorithmic bias because these papers did not win? No, all sorts of randomness are involved in prizes. I am claiming an anti-algorithmic bias because, in the current environment, it seems such papers are not even viable candidates.

13 comments:

Anonymous said...

Karger did get a Tucker prize, which is probably not the same as the Fulkerson/Gödel prize, but it is recognition. Getting an MIT professorship/tenure is also recognition. I think it is the papers that do not get *any* prize and/or author(s) who do not get recognition that can be a problem.

Anonymous said...

Does the so-called applied theory come into any of these algorithmic breakthroughs? Classic examples might be Lamport's solutions to Byzantine agreement.

Anonymous said...

It seems to me that

1. Shor's Quantum factorization.

2. Agrawal, Kayal, and Saxena's primality test

3. Alon, Matias, and Szegedy's frequency moment estimation

4. Freund and Schapire's AdaBoost

and even

5. Vardi and Wolper's model checking

were all papers whose main contributions were the algorithms that they produced. Even Spielman and Teng's definitions were primarily interesting because of the new way of thinking about algorithms that they introduced.

To ignore these is to have a pretty narrow view of what constitutes algorithms.

What makes it tough for some of the papers you cite is that shaving off log factors is often viewed as much less interesting than larger improvements. The standards for judging papers that shave off log factors (or sub-logarithmic factors) seem different: if you are going to do this then, as Michael Mitzenmacher keeps suggesting, you ought to implement your new algorithm and show how much better it is in practice. I think there is a general skepticism that such improvements come at the expense of more complex algorithms that are worse in practice.

Clearly, some of the randomized algorithms you cite, like Clarkson's or Karger's, don't suffer this problem in general, and the power-of-two-choices work has been widely implemented.

The polylogarithmic dynamic connectivity and minimum spanning trees of Thorup were a huge surprise. Maybe a better question than your comment about algorithms in general would be: can a paper on data structures win the Gödel prize?

Mihai said...

Karger did get a Tucker prize
Ok, I was not aware of this. It probably seems fair, since all the work was in his PhD. I agree with the general principle that it doesn't make sense to give 2 prizes for the same work.

Michael Mitzenmacher said...

Mihai,

While I share some of both your general and specific concerns raised in this post, I have to admit that on reflection, I'm not sure that one can glean too much by looking at prizes in general (and the Gödel Prize in particular). Perhaps the Gödel Prize is somewhat biased toward the more theoretical, as there's also the Kanellakis award for "theory and practice"; also, more practical theoretical work has more potential play in other venues. (The work on Tornado codes I was involved in won an IEEE Information Theory Society Prize, though its first incarnation was a FOCS/STOC paper.) Perhaps it's that the theory community has unconsciously decided Jon Kleinberg should be our default "practical-theory" prize winner; he's certainly not under-represented for prizes and is an outstanding ambassador for theory (both inside and outside CS departments).

The theory community, historically, has been shy about prizes, from my understanding. Perhaps we've been too much so, particularly in algorithms and data structures.

Mihai said...

Anon 3: Look, you can't simply call something "algorithms" just because it has algorithmic content. By this criterion, half the winners of the Fields medal are algorithmists. The community that makes progress by reading the paper matters.

By this account, I cannot consider:

2. Agrawal, Kayal, and Saxena's primality test
5. Vardi and Wolper's model checking

... as algorithms. They are in their own specialized community, and the general algorithmic community does not learn anything from those papers.

The PRIMES in P paper also has a rather strong complexity flavor: we had long known how to test primality with a randomized algorithm.

1. Shor's Quantum factorization
A distinguishing feature of algorithmic research is that it is done in real models. For now, Shor's algorithm belongs to quantum computing, not algorithms.

But I will certainly count as algorithms:

3. Alon, Matias, and Szegedy's frequency moment estimation
4. Freund and Schapire's AdaBoost*

*though it won the prize flying the machine learning flag...

[6.] Even Spielman and Teng's ...*

*though it doesn't address the concern about the polynomial / non-polynomial obsession.

I will also add:

7. Jerrum and Sinclair on approximating #P problems*

*with the same caveat as Spielman-Teng.

Anonymous said...

I understand what you mean about "turning into philosophy," but I don't understand what you mean by "the influx of people who like to pretend we're Mathematics, and have no idea what computation means" or by "the obsession with the 'polynomial' versus 'non-polynomial' and its many consequences."

I'm not asking for a novel, or for naming names, but could you elaborate on these two points a bit?

Anonymous said...

Add your own favorite examples in the comments.
Alan Siegel, "On Universal Classes of Fast High Performance Hash Functions, Their Time-Space Tradeoff, and Their Applications"

Anonymous said...

"I agree with the general principle that it doesn't make sense to give 2 prizes for the same work."

I think it was sort of dumb to give "Primes is in P" both the Gödel and the Fulkerson prize. It should have gotten (deservedly) one or the other, but not both. It is sort of laziness on the part of the committee.

noam said...

Mihai,

Why don't you consider the PCP paper as algorithms? Its main result is "approximating 3-SAT is NP-hard (and so are hundreds of other algorithmic problems)". This result was certainly used heavily by the algorithms community -- in fact, it is probably the most influential result in ALGORITHMS in the last 20 years...

Should its complexity ancestry disqualify it?

Mihai said...

Michael:

I'm not sure that one can glean too much by looking at prizes in general
Clearly. The sample size is too small, there's too much randomness, etc. Like I said, my beef is not that these papers didn't win; it's the (perceived) impression that they don't stand a chance in the first place.

It is, of course, excellent that TCS papers keep winning prizes in information theory, OR, etc. We should keep those in mind.

Jon Kleinberg and Peter Shor are the first examples everybody mentions when talking about a "poor use" of prizes. A lower prize after you've won a bigger one seems totally useless (as an anonymous commenter said: "laziness on the part of the committee").

Perhaps the Gödel Prize is somewhat biased toward the more theoretical, as there's also the Kanellakis award for "theory and practice"
I see an appalling double standard in the community: an O(lg^3 n) approximation algorithm is great theoretical work, which is not totally disconnected from reality because "insights" from these algorithms sometimes help in practice (an assertion nobody is going to test).

An algorithm that reduces the running time of an important problem is ok, but you need it to be really practical before it becomes cool (it should be backed up by a full set of experiments, and maybe a start-up using the algorithm).

Mihai said...

Noam:

Why don't you consider the PCP paper as algorithms?
I can agree that it qualifies as an approximation-algorithmic paper.

But whenever I say algorithms it should read "algorithms, minus whatever the TCS community is doing in approximation algorithms".

I have serious concerns about our field of approximation algorithms (with important exceptions), and I think it's better to treat it as an entirely separate field.

Not really a theory person said...

Have you nominated any of these papers for the Gödel prize?
Because, as you know, you have to buy a ticket in order to win the lottery...

Nominations take a lot of effort and are a thankless job (often no one even knows you organized the nomination).
But if you want your favorite results / people to win, you should make the effort and nominate them.

You'd be surprised that it sometimes succeeds.