On TCS Conferences [fragments, 1997]

by Oded Goldreich


The following fragmented thoughts regarding TCS conferences were drafted by me in 1997. I have just rediscovered them now (in 2009...). Needless to say, some of these ideas have appeared in other statements of my opinions; see, for example, my opinions on the role of program committees [1999] and on "rejection" [2006].


Instead of an Introduction

The following is a common complaint regarding TCS conferences:
... we received only a generic comment explaining why the paper was rejected for STOC: The results were considered not to be interesting enough for STOC.
The above quotation, taken from an actual mail I have received, reflects some common misconceptions about TCS conferences. The following text, excerpted from my response mail, represents the way I view things.

My main point

The above is a wrong way of looking at things. Rather than saying that it was decided to reject your paper from STOC, I would say that the STOC committee decided not to include your work in the program. My choice of words is not merely a matter of semantics. It reflects a totally different way of looking at things.

A common mistake is to view conferences as competitions among papers, where acceptance (and other awards) is distributed according to some (objectively absolute) scale. In contrast, my view is that conferences are a vehicle for the fast spreading of scientific work. Their program committees receive many submissions out of which they need to form a good program. There is no ``best'' choice, only good ones. Likewise, it makes little sense to attach a rationale to the decision of a program committee, or of any committee in general. A committee decision is the ``average'' (not necessarily in the strict mathematical sense) of the opinions of its individual members. Thus, it is wrong to attribute rationales to the average, although rationales (right or wrong) may be attributed to the individuals. Furthermore, what is really important about a committee (e.g., a program committee) is its output (e.g., the program), not the specific opinions expressed in the process. (Here and below, I am assuming that the committees apply a reasonable decision process. Such a process does not guarantee good decisions, whereas an unreasonable process is bound to result in bad ones; but this is a separate important issue.)

There are many considerations involved in achieving the goal of forming a good program. In particular, there are lower and upper bounds on the number of papers in the program, and the selection has to respect these. (The rationale behind these bounds, as well as the mere existence of a selection at all, is a topic for a separate debate.) Given this fact, my main claim is that choices made around the upper bound do not reflect a ranking of the papers. Let me elaborate. If STOC is to accept, say, 65 papers, then the way I view things is that the rationale for this figure is the desire to ensure that the, say, 10--25 papers of the widest interest/importance be accepted. Note that the quoted range is fairly wide and depends on the specific submissions. If the program committee (PC) misses one of these papers, this is a major mistake. The other 50 (or so) papers are to be accepted in order to achieve two goals: (1) provide margins of error so as to ensure that all of the 10--25 papers above are accepted; and (2) accept some of the other works that are considered to be of interest. Typically, the number of ``works of interest'' is well above the total number of possible acceptances (let's say that for STOC it is above 100).

The PC then tries to make the ``best choice it can'', being aware that it is not possible to make the ``best choice''. One thing to bear in mind is that the importance of papers is typically understood better a few years after they appear. The PC, however, has to make its decisions before that date... Secondly, good fine-grained decisions are infeasible to make, whereas good coarse decisions are feasible. Thus, even assuming an objective and absolute ranking of all submissions (which is indeed a great illusion), I claim that accepting the ``best'' 10--25 papers when accepting a total of 65 papers is a feasible goal, whereas accepting exactly the best 65 papers is an infeasible one. Furthermore, the difference in quality between the so-called 65th best paper and the 66th best one is negligible (if it exists at all). Thus, it is ridiculous to attach any value judgment to the fact that one was accepted while the other was not.

Comment added later: I'm not saying that it does not matter whether or not your paper appears in STOC. Presentation at a conference (and specifically at a major one) provides an excellent way to communicate your ideas and results. It gives your work (and yourself) a stage - an opportunity to ``get across''. There is also a reality by which superficial people evaluate other people based on how frequently their papers appear in these conferences. What I'm saying is that all this does not mean that one should take the bit-decisions of the PC as a valid evaluation of scientific work (i.e., ``accepted'' meaning good work, and otherwise bad work). This is not the role of the PC. Its role is to select a program! Hopefully, this note may help everybody (including the superficial people mentioned above) understand this basic point.

Short statements of some other related opinions

Here are some other related opinions expressed in the same mail.

On evaluating one's own work. In contrast to common sayings, I do believe that authors can make reasonable judgments also about their own work. (Indeed, one should be aware of one's feelings, but that's true in general.)

On evaluating scientific work. Indeed, a wide perspective on the area is always necessary when evaluating scientific (or artistic...) work.

On comments by the PC. I've always objected to the practice of some PCs of sending the authors unscreened comments (and evaluations) written by individual PC members. This annoys the authors and contributes nothing. My opinion (which is typically not accepted) is that the PC should compile a list of constructive comments (not evaluations or value judgments), and pass these (if at all) to the authors.

On the nature of a PC. Just as the community is not uniform, neither are the members of program committees (and other committees). There are wiser people and less wise ones, some who are more knowledgeable, some with better taste, and some with stronger technical taste. Some are more honest, some are more careful, and some are more thorough, etc., etc. Thus, it is surprising that, when faced with the decisions of such committees, some people behave as if these decisions were made by people of utmost wisdom, absolute knowledge, etc., etc.

Still, there are better committees and lesser ones. A good committee should be selected according to the criteria suggested above. The scientific achievements per se of candidate members are irrelevant. Indeed, such achievements are correlated with the desired properties and may be used as an indication (in the absence of more reliable information), but the point stressed here is that the mere fact that person X has made great contributions to science does not mean that he/she will make a good PC member.

