The (only) role of a program committee (PC) for a conference is to select a program for that conference. (See further discussion of this tautology.) In contrast, the current web-page makes some specific and non-trivial suggestions regarding the selection process.
In order to have a good initial ranking, which is produced automatically based on the scores given to submissions, it is important to have a clear scoring scale. In particular, it is important that the scale used by all PC members is the same and is understood in the same way by all members. Thus, I suggest annotating the possible scores with clear interpretations stated in concrete and universal terms. A concrete suggestion, which is tailored towards FOCS/STOC, follows:
Suggestion for an interpretation of score ratings
Needless to say, the aforementioned phrases are in my style, and one can easily find less "eccentric" ones... These phrases are tailored for FOCS/STOC, and should be adapted when used for other conferences (especially the phrases used for 7-10). Note that I have refrained from providing a list of concrete reasons as to why a specific score is used, and have confined myself to universal terms and to specific actions (or reactions) of the PC members. Also note that the interpretation is associated with a single score rather than with an interval, because the meaning of the scores is clearer this way.
Also, in my opinion there is no need to force people to use integer votes. One only loses information by forcing people to round their votes. Furthermore, imposing integer votes forces people to waste time and emotional energy on a futile exercise (because the vote is merely an initial input into a communal evaluation procedure).
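To illustrate what may be lost, here is a minimal numerical sketch; the votes and the simple averaging rule are hypothetical and assumed only for illustration. The point is that rounding each vote to the nearest integer before averaging may reverse the relative order of two submissions in the automatically produced initial ranking.

    # Hypothetical votes by three PC members on two submissions (purely illustrative).
    def initial_ranking(score_table):
        """Rank submissions by the average of the votes they received (highest first)."""
        averages = {sub: sum(votes) / len(votes) for sub, votes in score_table.items()}
        return sorted(averages, key=averages.get, reverse=True), averages

    fractional_votes = {"A": [6.4, 6.4, 6.4], "B": [5.6, 6.6, 6.6]}
    rounded_votes = {sub: [round(v) for v in votes] for sub, votes in fractional_votes.items()}

    print(initial_ranking(fractional_votes))  # ['A', 'B']: A is ranked above B (6.4 vs. about 6.27)
    print(initial_ranking(rounded_votes))     # ['B', 'A']: forcing integer votes reverses the order (6.0 vs. about 6.67)

Of course, the specific numbers do not matter; the point is that the rounding discards information that the (communal) evaluation procedure could have used.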
Suggestion for an interpretation of confidence ratings
Needless to say, the "no ex-submissional information" policy is not really implementable, because one cannot fully screen out things one happens to hear on various occasions; for example, a talk one hears after the submission deadline, or even half a year before it, about a submission or about work related to it.
Anyhow, I strongly disagree with the above policy, and actually hold the extreme opposite view. I think one may and should get hold of any information available, and I even advocate actively seeking such information. At times, I have even contacted authors asking for clarifications of their submissions. I reject the claim that this is unfair; this claim presupposes that additional information always improves a submission's chances, whereas I see no justification for this assumption. In general, one should seek and/or accept additional information, but use it critically in one's evaluation of the merits of the submission. That is, the extra information is used to evaluate the submission as is; it does not replace the submission (e.g., improvements can only be considered as evidence of potential improvements, new applications can only be considered as evidence of potential applications, and corrections of minor errors are viewed as proof that these errors can be eliminated, but something fundamentally new cannot be considered as establishing a claim that the submission supposedly established). In general, "ex-submissional information" should be viewed as additional clarification regarding the original submission. But if the "ex-submissional information" introduces a new idea that is needed for establishing the results claimed in the submission (which otherwise do not follow), then it cannot change the status of the original submission from claiming some results and proposing a feasible approach (now demonstrated to be feasible) to establishing these results.
A typical by-product of the selection process is the "reports" or "reviews" written by the PC members and/or their sub-referees. It is indeed a question what to do with this by-product. My own suggestion to a PC would be: