Preface: This position page is intended to explain my current feelings, views and actions regarding STOC/FOCS. That is, my primary motivation here is not to convince others that my views are correct, but rather just to explain my own views and actions to people who do not agree with them. Needless to say, I will be delighted if others share my views, because this will provide a basis for a change.
Postscript (2012): My later essay On Struggle and Competition in Scientific Fields is closely related to the issues discussed in the current webpage. In fact, Sec. 2.2 elaborates on the foregoing discussion, after a theoretical framework is presented in Sec. 2.1.
For the sake of clarity, I will use various methodological dichotomies, which confront notions that do not exist in reality in a pure form. In reality, the conflicting notions are mixed, and the issue is one of balance between them.
My first distinction is between competition and contents. Indeed, any competition refers to some contents, and any human activity can be viewed through the lens of competition. Still, there are contexts where the competition aspect is more dominant and others where it is less dominant. My starting point is my subjective feeling that, in recent years, the competition aspect has become more dominant in STOC/FOCS.
Note that I'm not saying that the competition aspect was not present in STOC/FOCS in the past. On the contrary, my own experience (which dates to the early 1980s) is that STOC/FOCS were always marked by a (flavor of a) competition. But, in recent years, I see a greater obsession with the competition aspects and a decline of interest in anything that is not competition-like, including the actual contents. Concretely, I hear and overhear more discussions of which paper "got in" and which did not "get in", which paper got which awards, and which is better than which. And I hear and overhear fewer discussions of the contents of various works, what makes them interesting, and what can be "carried home" from them.
Can I prove, using hard facts, that the above is the case? Certainly not. But, assuming the distinction between competition and contents, one can easily imagine circumstances in which the competition aspect is less dominant and others where it is more dominant. Thus, the question of whether this aspect can be made less dominant in STOC/FOCS makes sense in any case. It is also common sense that making the competition aspect less dominant provides room for other aspects; specifically, for a keen interest in the actual contents of scientific works.
An alternative way of demonstrating what is currently wrong with STOC/FOCS is to make a distinction between the interests of the authors/speakers and the interests of the readers/attendees. In PCs (and in outside references to the PCs), you often hear references to the legitimate (or illegitimate) interests of the authors, and the entire discourse of fairness revolves around these interests. In contrast, you rarely hear discussion of the interests of the conference's attendees, which is indeed very odd since the conference is supposed to serve the attendees.
Competition promotes superficial measures and attitudes, and thus the dominance of the competition aspect in STOC/FOCS (as well as the decline of interest in the contents themselves) yields disregard of conceptual aspects and focus on technical ones. This leads to the second distinction that I wish to make: a distinction between the conceptual and the technical.
Each real scientific work exhibits a mix of a conceptual message and technical details that support and instantiate this message. I wish to stress that the conceptual message need not be a novel model or definition; this message may be very "technical" in nature (e.g., a novel technique, or even merely the fact that the complexity bound on some problem was improved). Still, there is always a conceptual message (which may be less interesting or more interesting), or else the technical details are meaningless (in the last account). On the other hand, a conceptual message with no technical details does not appear in the domain of science.
Likewise, the evaluation of scientific work (see related posting) is typically a non-trivial function of both its conceptual message and its technical details; that is, in typical cases, neither of these two aspects should dominate the evaluation. Unfortunately (at least according to my feeling), recent PCs of STOC/FOCS have shown little interest in the conceptual messages of the various submissions, and have confined their attention to evaluating the technical details (and specifically their "difficulty"). This is related, in my opinion, to the current dominance of the competition aspect, because technical difficulty is easier to evaluate (and argue about) than the importance of a conceptual message. Indeed, most reviews and discussions tend to focus on the technical difficulty of the various submissions. In general, I note an increase in the fraction of superficial reviews, which is to be expected when the focus of the review process is to serve a competition.
To demonstrate my claim regarding the bias of recent PCs towards technical difficulty, let me consider the profile of typical papers accepted at STOC/FOCS (i.e., the bulk of the program). Recalling that the bulk of the accepted papers are not significantly better than many of the non-accepted submissions (see related posting), I note that submissions of average technical difficulty are preferred to submissions of lesser technical difficulty that carry a much more important/interesting conceptual message. That is, an "advantage" in technical difficulty carries much more weight than a significant advantage in the conceptual message (which, at best, is dismissed as having secondary importance).
[Indeed, a new venue in which I'm involved, called Innovations in Computer Science (ICS), intends to give higher weight to the conceptual message of the submissions. The name chosen for ICS is supposed to reflect this commitment.]
As for the disregard for the conceptual message, this is evident in almost every STOC/FOCS review that I have seen in recent years. I wish to stress that I am referring to the reviews themselves and not to the PC decision (which may be unrelated to all the reviews that one may see from outside the PC). My point is that almost all the reviews that I saw showed no interest at all in the conceptual message and were focused mainly on evaluating the novelty and difficulty of the technical development. I wish to stress that I don't dismiss technical novelty, but I reject it as the SOLE criterion for the evaluation of submissions.
Getting back to ICS, my reading of its texts is more general; that is, I read ICS as claiming that there is a value also to conceptual steps that are not coupled with great technical difficulty. We (ICS) do not dismiss the latter, but are concerned at the common dismissal of the former!
In any case, I see a world of difference between trying to force one vision of what TOC should be doing (e.g., application-driven research) versus arguing for pluralism (in which both conceptual and technical steps are appreciated)!
As a final comment regarding technical difficulty, let me note that technical difficulty is often misinterpreted as using and/or referring to "non-elementary mathematics". Although there is a positive correlation between the two, they are not identical. Consider, for example, Ran Raz's proof of the Parallel Repetition Theorem or Johan Hastad's proof of the Switching Lemma, which are technically difficult while relying on elementary mathematics.
Additional discussion of the recent preoccupation of STOC/FOCS's PCs with technical difficulty appears in Salil Vadhan's comment [June 12, 2009]. Below, I reproduce extracts from his comment.
... there are many different kinds of [scientific] contributions: making progress on known important problems, introducing new models and questions, developing new techniques, bringing simplicity and clarity to previously complex/confused areas, drawing new connections between topics, etc. ...

N.B.: Even if one may claim a positive correlation between learning from a result or a proof and the difficulty of the proof, the two things are not identical. Thus, we should consider what we learned from the paper and not whether the paper is difficult.
Note that some of the above kinds of contributions might be called "technical" (e.g., making progress on existing important problems, developing new techniques) and others "conceptual". But purposely missing from my list is the question of how "difficult" or "easy" the paper is. Indeed, I feel that a paper's "difficulty" is orthogonal to its value, and should not be a significant criterion in deciding whether to accept it. Instead, we should be trying to assess how much we *learn* (or will learn) from a paper, how it contributes to advancing the state of knowledge in the field. We may learn a lot from a difficult paper because of the significance of the final result or because of techniques developed along the way, but either way, the paper's difficulty is not the *reason* for its value. To some extent, the same holds for simplicity - if we prefer simpler solutions (when they exist), it is because they tend to clarify our understanding, tend to be more efficient/practical, etc.
In that essay I study the status of intellectual values in the TOC community during the last three decades. Specifically, analyzing the motivational parts of papers that appeared in several STOC proceedings, I found evidence supporting my feeling that the importance attributed to intellectual values has declined in the last decade (or so). The said evidence is conditioned on a number of assumptions, which are spelled out in the essay. I then discuss three theories that may be used to explain the decline of intellectual values in TOC (or rather three phenomena that may cause this decline).
Most relevant here is a sociological theory regarding the evolution of scientific fields and the competition in them. It asserts that as a field becomes more successful (or, actually, is considered so from the outside), the competition within the field intensifies, and this creates pressures towards "objective" measures of accomplishment that can be reviewed from the outside. Such measures are typically oblivious to intellectual contents. Thus, under the reign of (externally monitored) competition, intellectual values decline.
Back to Oded's page of essays and opinions or to Oded's homepage.