Suggestions to program committees

by Oded Goldreich

The (only) role of a program committee (PC) for a conference is to select a program for that conference. (See further discussion of this tautology.) Going beyond this tautology, the current web-page makes some specific and non-trivial suggestions regarding the selection process.


Using numerical scores with clear verbal interpretation

Program committees use numerical scores, assigned by the PC members to the various submissions, as the basis for an initial ranking of the submissions. This ranking is used to assist the actual process of discussing and selecting the program, where most of the discussion refers to submissions that are typically ranked around the imaginary threshold point. For example, in FOCS/STOC, the PC may have 250 submissions and may want to allocate most of its time to discussing about 100-150 of them, where typically 60-80 are accepted. Thus, having a good initial ranking is of great benefit to the PC.

In order to obtain a good initial ranking, which is produced automatically from the scores given to the submissions, it is important to have a clear scoring scale. In particular, it is important that the scale used by all PC members is the same and is understood in the same way by all of them. Thus, I suggest annotating the possible scores with clear interpretations, stated in concrete and universal terms. A concrete suggestion, tailored towards FOCS/STOC, follows:

Suggestion for an interpretation of score ratings
10: Seminal paper. A paper for the ages.
I'll resign if this is not accepted.
9: Excellent paper. One of the top 3-5 papers accepted to the conference.
I'll consider resigning if this is not accepted.
8: Very good paper. A strong acceptance: One of the top 10-15 papers.
I will fight fiercely for acceptance.
7: Good paper. A clear accept: One of the top 25-35 papers.
I vote and argue for acceptance.
6: Marginally above the acceptance threshold.
I tend to vote for accepting it, but leaving it out of the program would be no great loss.
5.5: A borderline case.
I really cannot make up my mind on it.
5: Marginally below the acceptance threshold.
I tend to vote for rejecting it, but having it in the program would not be that bad.
4: An OK/fine paper, but not good enough. A clear rejection.
I vote and argue for rejecting it.
3: A very clear rejection. I'm surprised it was submitted to this conference.
I will fight fiercely for rejecting it.
2: Trivial or wrong or known. I'm surprised anybody wrote such a paper.
I'll consider resigning if this is accepted.
1: Absolute stupidity. I'm surprised anybody had such thoughts.
I'll resign if this is accepted.

Needless to say, the aforementioned phrases are in my style, and one can easily find less "eccentric" ones... These phrases are tailored for FOCS/STOC, and should be adapted when used for other conferences (especially the phrases used for scores 7-10). Note that I have refrained from providing a list of concrete reasons as to why a specific score is used, and have confined myself to universal terms and to specific actions (or reactions) of the PC members. Also note that each interpretation is associated with a single score rather than with an interval, because the meaning of scores is clearer this way.

Also, in my opinion there is no need to force people to use integer votes. One only loses information by forcing people to round their votes. Furthermore, imposing integer votes forces people to waste time and emotional energy on a futile exercise (because the vote is merely an initial input into a communal evaluation procedure).

Suggestion for an interpretation of confidence ratings
0 - Pass (expressing no opinion):
I didn't read the paper, or I have no opinion, or I have a conflict of interest on this paper.
1 - An educated guess:
I have some idea of what this paper is about, but I'm not all that confident of my judgment on it.
2 - Fairly confident:
I am fairly familiar with the area of this paper, and have read the paper closely enough to be reasonably confident of my judgment.
3 - Expert opinion:
I understand the work and its context in detail.
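Taken together, the two scales suggest how the initial ranking might be computed automatically. The following Python sketch is purely illustrative; the essay prescribes no aggregation rule, and confidence-weighted averaging is just one plausible choice. It averages each submission's scores weighted by the reviewers' confidence, treating confidence 0 ("pass") as abstention.

```python
def initial_ranking(reviews):
    """Hypothetical aggregation rule (not prescribed by the essay).

    reviews: {submission_id: [(score, confidence), ...]}
    Scores follow the 1-10 scale above; confidence 0 means "pass"
    and is ignored.  Returns submission ids sorted from highest to
    lowest confidence-weighted average score.
    """
    averages = {}
    for sub, votes in reviews.items():
        opinionated = [(s, c) for s, c in votes if c > 0]  # drop passes
        total_weight = sum(c for _, c in opinionated)
        if total_weight == 0:
            averages[sub] = 0.0  # every member passed on this one
        else:
            averages[sub] = sum(s * c for s, c in opinionated) / total_weight
    return sorted(averages, key=averages.get, reverse=True)

# Toy example: three submissions, two or three reviews each.
reviews = {
    "A": [(8, 3), (7, 2)],            # strong-accept territory
    "B": [(5.5, 1), (6, 2), (5, 3)],  # borderline: discuss at the meeting
    "C": [(3, 3), (4, 2), (0, 0)],    # clear reject; one member passed
}
print(initial_ranking(reviews))  # ['A', 'B', 'C']
```

The point of such a ranking is only to order the discussion; as stressed above, the votes are merely an initial input into a communal evaluation procedure.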

Use of ex-submission information

There are people (myself strongly excluded!) who believe that no "ex-submissional information" should be allowed. One reason they cite is that such information may cause unfairness between authors who happen to be geographically close to a PC member, or who feel comfortable sending email to one, and authors who are not.

Needless to say, the no-"ex-submissional information" policy is not really implementable, because one cannot fully screen things one may hear on various occasions; for example, a talk one hears after the submission deadline, or even half a year before it (about a submission or work related to it).

Anyhow, I strongly disagree with the above policy, and actually hold the extreme opposite position. I think one may and should get hold of any information available; I even advocate actively seeking such information. At times, I have even contacted authors asking for clarifications of their submissions. I reject the claim that this is unfair; this claim presupposes that additional information always improves the submission's chances, and I see no justification for that assumption.

In general, one should seek and/or accept additional information, but use it critically in one's evaluation of the merits of the submission. That is, the extra information is used to evaluate the submission as is; it does not replace the submission. For example, improvements can only be considered as evidence of potential improvements, new applications can only be considered as evidence of potential applications, and corrections of minor errors are viewed as proof that these errors can be eliminated, but something fundamentally new cannot be considered as establishing a claim that was supposedly established in the submission. In general, "ex-submissional information" should be viewed as additional clarification regarding the original submission. If the "ex-submissional information" introduces a new idea that is needed for establishing the results claimed in the submission (which otherwise do not follow), then it cannot change the status of the original submission from claiming some results and proposing a feasible approach (now demonstrated to be feasible) to establishing these results.

On by-products of the selection process

Typical by-products of the selection process are "reports" or "reviews" written by the PC members and/or their sub-referees. It is indeed a question what to do with such by-products. My own suggestion to a PC would be:

Following these suggestions, such "comments to authors" may be useful (while minimizing the side effect of annoying the authors). I'd also suggest explaining the above guidelines in the preamble sent by the PC chair to the authors (explicitly telling them that the comments are not supposed to provide a justification for the PC's decision). In my opinion, in typical cases, no justification can be provided for the PC's decisions beyond the tautological fact that the PC (or a majority of it) decided this way.

