Why is anonymous submission a bad idea [2017]

by Oded Goldreich

[draft: Sept 2017]

I was told that the idea of anonymous submissions to conferences has been gaining some support lately. Having opposed this idea for ages (see my 1991 essay Critique of some trends in the TCS community), I wish to articulate my objection once again. I will try to be more gentle towards the current advocates of anonymous submissions, because, unlike in the past, they seem to be relatively non-senior.


The idea of anonymous submissions to conferences (a.k.a. ``double blind reviews'' [1] for conferences) is often proposed as a remedy to a genuine concern by which PCs operate under a significant bias towards submissions that are co-authored by better known researchers (i.e., that the mere name of a better known researcher improves the acceptance probability of the submission, regardless of its merits). In my opinion, as far as reviewing conference submissions in TOC is concerned, this concern is grossly overstated, and I am far more concerned about much larger biases and bad attitudes that have spread in TOC and are far more dominant and harmful.

(One may say that it is to be expected that I'd not be concerned about a bias that operates in my favor (as a better known researcher). However, in my opinion, it is a fact that my own submissions do not enjoy any positive bias, to say the least. In any case, I have served on many PCs and have seen many reviews written by (or for) PCs, and I do believe that such a bias, if it exists at all, is not very significant.)

Yet, generally speaking, a bias in favor of the well-known and/or famous and senior is a well-documented phenomenon, which sociologists have been studying for decades. In 1968, Robert Merton (the founder of the Sociology of Science) termed it the Matthew Effect. Now, when faced with a harmful social phenomenon, one should ask:

  1. In which social group is the phenomenon manifested, to what extent, and why?
  2. How can one protect against it?
Failing to ask these questions and rushing to a ``solution'' is not a good basis for action. It tends to lead to adopting superficial ``solutions'' that, while bestowing an aura of good will on their promoters, fail to address the issue at hand. Furthermore, such ``solutions'' create an illusion that the issue has been addressed, and may actually make the issue more acute.

The Matthew Effect was recently demonstrated in an experiment conducted at a CS conference (see the report on reviewing at WSDM 2017). However, it is extremely naive to extrapolate from what happens in one research community to what happens in another, when the communities are vastly different. In particular, it would have been impossible to conduct such an experiment at a reasonable TOC venue (for reasons outlined below), and the results would surely have been different.

(Let me also stress that purely academic CS conferences (esp., TOC ones) are quite unique in the academic world vis-a-vis the importance attached to conferences, and consequently the amount of work put into composing their programs.)

Since I have no real proof, beyond my own impressions, to support the claim that the Matthew Effect is not very significant vis-a-vis conference reviewing in TOC, let me try to speculate on why this is the case. I believe that the main reason is that TOC is far less hierarchical than other disciplines. (You can easily sense this when you meet scientists from other disciplines or attend their meetings.)

(Of course, one may ask why TOC is less hierarchical than other disciplines, and I guess I will not be pardoned if I don't speculate on this too. I believe that the reasons are rooted in the history of the discipline; specifically, being relatively young, having been born in the sixties and seventies (and being influenced directly and indirectly by the Zeitgeist of that period), and being influenced by conflicting traditions (e.g., EE and Mathematics, or more broadly engineering and science). In addition, its present conditions also do not facilitate the creation of highly hierarchical power structures; here I refer to the relatively small size of TOC, and to the fact that it is neither poor nor filthy rich (i.e., it offers a decent living but no empire-building opportunities).)

Getting back to the topic, as I said, I think the more severe biases in TOC are related to the contents and appearance of papers, and less to the identity of the authors. I am talking about the bias in favor of works that appear to be technically hard and ``mathematically deep'' (and the bias against works that lack these features). As to the identity-based bias, I think that, within TOC, this bias will not be reduced by anonymous submissions, but rather increased by them. Firstly, because anonymous submissions will not be effective in hiding the identity of authors (especially not of the more famous ones), and secondly because they will create an illusion of identity-blindness, which will only facilitate sinful acts of favoritism. Lastly and most importantly, in my opinion, this practice will increase the importance attached to the mere acceptance of a work to a conference, and will reduce the keen interest in the actual contents. (That is, it will amplify the perception of conferences as competitions and hurt their function as educational meetings aimed at the exchange and dissemination of inspiring ideas and results.) Let me elaborate on these aspects, which are related to the direct damages of anonymous submissions.

The first issue refers to the reality in which researchers post their work on their homepages and/or on various repositories prior to or soon after submitting it to conferences. This comes on top of giving talks on their work, with announcements sent over large distribution lists and/or posted publicly on various websites. In such a reality, anonymous submission is a joke, if not worse than that (and forbidding such postings and/or talks is unthinkable if one recalls that the aim of research is sharing its outcomes).

Concretely, suppose that you review a submission that seems related to a talk you heard or looks similar to a paper you saw on some repository. Will it not be important to find out whether these are two independent works or the same work? (This is beneficial not only for handling potential conflicts, but also for saving time in case you did see and understand the work before [2].) Or suppose you want to consult an expert about a submission. Do you want to risk asking the (anonymous) author; that is, risk your own anonymity as a reviewer, let alone the embarrassment? Or suppose you found a problem in the submission and want to contact the authors with questions. (OK, there could be exceptions for some or all of these cases, but then we have an anonymity rule that is often violated, so that the more cases one handles the more violations one gets.)

I wish to stress that the foregoing situations are not imaginary. The point is that TOC is a relatively small discipline and its research community is extremely vibrant. Members of the community, especially experts in a sub-area, tend to hear of new works before they are submitted to conferences or at about the same time. This holds especially with respect to the work of more famous researchers.

In general, anonymous submission adds difficulties to the review process, which is difficult enough as it is. As illustrated above, it creates uncertainties regarding whether or not the result is known (i.e., whether or not it is identical to a result one has seen or heard of), it creates difficulties regarding consulting external experts, and resolving all these uncertainties requires resources.

Furthermore, anonymity creates a gap of knowledge between the reviewers who happen to identify the authors and those who don't. (Need I remind the reader that knowledge is power?) Moreover, if reviewers are dishonest, then they can take advantage of this gap and pretend that they don't know who the authors are, while promoting the authors' ``interests'' (possibly after being privately told of the submission by the authors).

A different consideration is that adopting anonymous submission sends a message by which the reviewers' integrity and/or judgement cannot be trusted, and that the community cannot cope with this phenomenon. In contrast, I think that the community can and should deal with dishonesty and poor judgement, by proclaiming that these will not be tolerated (and acting accordingly). (In fact, to some extent our community does deal with these problems, although much is to be desired on this front; I will definitely support mechanisms for dealing with these problems, but I think we should start with a clear message by which dishonesty is not tolerated and good judgement is a prerequisite for serving on a PC.)

To summarize, I think the idea of anonymous submissions, when raised innocently, represents dangerous naivety and a desire to be ultra-honest. Both naivety and ultra-honesty happen to lead to dishonesty, because if one cannot distinguish a real crime from a potential violation of perfect fairness (which could and should be carefully addressed rather than ignored or pseudo-addressed by superficial and formal rules), then one is on the way to hell...


1. I prefer the term anonymous submissions over double blind reviews, since I believe that the latter term reflects fundamental misconceptions regarding the review process. I believe that the rationale underlying the anonymity of the reviewers requires no elaboration, whereas I dispute the benefits of making the authors anonymous and believe that the damages of this practice go unnoticed.
2. I am aware of the fact that relying on extra-submission information is also controversial, but I believe that those opposing this practice are naive and mistaken. Again, the existence of extra-submission information is unavoidable (unless one is so insane as to expect experts not to follow research in their area in the period before and during their service on a PC), and the effect of such information is inherent to it (i.e., once you understand something, you cannot go back to your prior state of knowledge). Furthermore, obtaining as much information as possible is the best way to reach educated decisions. Note that additional information about a work does not necessarily increase its value; it may well decrease it. Hence, works for which extra-submission information is available are not necessarily favored by this information; they merely get a better chance of being properly evaluated. In short, I advocate obtaining as much information as possible (including extra-submission information) about a work before evaluating it.

