
> Computer Science conference reviewer committees and I have seen no filtering done whatsoever.

The published collection of papers from a conference is more like an anthology of the talks given (Springer is a common example publisher) rather than a quality curation via rigorous peer review. A bunch of experts wasting time on unfiltered crap is probably the norm. Virtually none of those conference paper collections have reputations that accumulate "impact".

Prestigious journals like "Cell", "The Lancet", or the "Journal of the American College of Cardiology" do not forward unfiltered junk papers to reviewers.

Your experience with conferences is a different situation.




Honestly, I agree with every word here. I'd rather see conferences as networking events than quality filters. Conferences should invite already-published work. The conference system just adds noise, and as you suggest, the single round of review results in a zero-sum game with no advocates for getting a paper accepted or improved. So reviewers optimize their strategy: reject everything.

While I still have problems with peer review and still advocate for open publication, I do think many of the specific issues in ML could be resolved by focusing on journals rather than conferences.


The computer science practice you describe is the exception, not the norm. It causes a lot of trouble when evaluating the merits of researchers, because most people in academia are not familiar with it. In many places, conference papers don't even count as real publications, putting CS researchers at a disadvantage.

From my point of view, the biggest issue is accepting/rejecting papers based on first impressions. Because there is often only one round of reviews, you can't ask the authors for clarifications, and they can't try to fix the issues you have identified. Conferences tend to follow fashionable topics, and they are often narrower in scope than what they claim to be, because it's easier to evaluate papers on topics the program committee is familiar with.

The work done by the program committee was not even supposed to be proper peer review but only the first filter. Old conference papers often call themselves extended abstracts, and they don't contain all the details you would expect in the full paper. For example, a theoretical paper may omit key proofs. Once the program committee has determined that the results look interesting and plausible and the authors have presented them in a conference, the authors are supposed to write the full paper and submit it to a journal for peer review. Of course, this doesn't always happen, for a number of reasons.


Except you are missing a huge huge part of what you linked:

> We conclude that the reviewing process for the 2014 conference was good for identifying poor papers

When it comes to peer review, that is the actual goal: have a filter that prevents bad work from getting published.

In an ideal world we would have a process that allows good papers to be published as well.

However, I think we can all agree that is a secondary concern, especially since pre-publication announcements are common anyway, so it isn't as if no one is looking at papers that aren't published.


I think this is good, and may also improve reviews. I doubt that Nature has an issue with substandard reviews, but I have definitely seen it in some of the computer science conferences I submit to and have served on [1]. If reviewers know that their review may be published, I think they may turn in more substantial reviews.

[1] In computer science, conference publications are peer-reviewed and highly competitive. We also have journals, but most novel work appears in conferences. The more theory-oriented parts of computer science tend to publish in journals more often, but they still publish in conferences, too.


Oooh.. this sounds like a great computer science problem.

"How to get an objective rating in the presence of adversaries"

It is probably extensible to generic reviews as well, so things like Amazon review scams. But in contrast to Amazon, conference participants are motivated to review.

I honestly don't see why all participants can't be considered part of the peer review pool, with everybody voting. I'd guess you run a risk of being scooped, but maybe a conference should consist of all submitted papers, with the top N being considered worthy of publication. Maybe the remaining ones could be considered pre-publications... I mean, everything is on arXiv anyway.
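As a toy sketch (purely hypothetical, not how any conference actually runs things), a participant-wide vote could be made a bit more robust to adversarial raters by trimming extreme scores before averaging and then keeping the top N:

    # toy illustration only; each submitted paper gets scores from many
    # participants, we trim the extreme scores to blunt adversarial
    # up/down-voting, then keep the top N papers by trimmed mean.
    def trimmed_mean(scores, trim=0.2):
        s = sorted(scores)
        k = int(len(s) * trim)
        kept = s[k:len(s) - k] or s  # fall back if trimming empties the list
        return sum(kept) / len(kept)

    def select_top_n(papers, n):
        """papers: dict mapping paper id -> list of participant scores."""
        ranked = sorted(papers, key=lambda p: trimmed_mean(papers[p]), reverse=True)
        return ranked[:n]

    votes = {
        "paper-A": [4, 5, 4, 4, 1],   # one adversarial low score
        "paper-B": [3, 3, 4, 3, 3],
        "paper-C": [2, 2, 3, 2, 5],   # one adversarial high score
    }
    print(select_top_n(votes, 2))  # ['paper-A', 'paper-B']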

So instead of bids you have randomization. Kahneman's latest book talks about this, and it's been making the rounds on NPR, the NYTimes, etc...

https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/...


I am also in a conference-centric CS subfield, but I have published quite a lot in journals as well (both because of multidisciplinary collaborations and because my country has a coarse-grained, metric-based evaluation system where conferences don't count for much, so at least part of my work has to go to journals to appease the system).

In my experience, journal reviewing is much worse, and especially much more corrupt, than conference reviewing.

At conferences you get bad actors with almost no oversight or accountability, true. But at least they typically don't have an axe to grind, because accept/reject decisions tend to be binary (there is no "revise and resubmit", at most optional recommendations for the final version that aren't effectively enforced).

At journals, the "revise and resubmit" option gives reviewers and editors a lot of leverage to push papers their way, and I have very often seen reviewers tacitly and subtly hint that the paper would easily be accepted if the authors included three or four references to papers by Author X (which are irrelevant to the subject matter, but of course, Author X will be the reviewer or a friend). Sometimes it's clear that the reviewers didn't read the paper, their sole interest on it is that you include their citations, that's the reason why they accepted to review and all they look at. Editors could in theory prevent this, but often they just echo reviewer comments, and sometimes they are complicit or even the originators of the corruption (it's also typical that the editor asks to cite a bunch of papers from the same journal, even if they're not related, because that inflates impact factor). In any of these cases, since the author is in a position of weakness and there's always the career of some PhD student or other down the line, one doesn't typically dare to complain unless the case is really blatant (which is typically not, because they know how to drive the point subtly). It's easier to go along with it, add the citations (knowing that they will lead to acceptance) and move on.

This has happened to me very often, and I'm not talking about shady special issues in journals that churn out papers like sausages; I'm talking about the typical well-regarded journals from major traditional publishers. In conferences I've gotten all sorts of unfair rejections, of course, but at least the papers I've published accurately reflect my views; I can stand behind them. In journals, maybe half of my papers contain material that doesn't make sense and was added only to appease a corrupt reviewer.

I find that many CS authors who haven't had to publish in journals have a "grass is always greener" mentality and expect that if we moved to journals we would find a fairer review process... and if at some point we do, they are in for a reality check. (Not saying that's your case, of course. There can also be people who have published in journals and disagree with me due to different experiences! And there are some journals that don't tend to engage in that kind of corruption; there just aren't many of them.)


The quality of reviews of conference papers is mostly terrible, though. I've been on both sides of it; here are some major complaints:

* There is only one round of reviews, with a binary accept/reject outcome. In case of rejection, feedback cannot be incorporated. In case of acceptance, there is basically no incentive for incorporating feedback, as the paper is already "through" and putting in any more time is considered a waste.

* Assigning papers to reviewers is very arbitrary, and often the reviewers are not at all familiar with the field. Getting a paper reassigned to another reviewer is often so complicated and time-consuming that reviewers just do the bare minimum (if asked to rate novelty/originality etc. on a 1-5 scale, they give it a "3", which given the usual acceptance rate of <50% means the paper will get rejected).

* Reviewers usually get a lot of papers to review. They don't have time (who does?) but have to finish by a two-week deadline. If one of the papers is relevant to the reviewer's own research or was written by colleagues they know (double-blind does not help; you recognize who wrote it...), that paper will receive attention; the other papers are seen as a distraction and are dealt with as outlined above.

Peer-review is important, though. Some fields of research, particularly those where a lot of research is done by actual practitioners and not just pure research scientists, are mostly driven by conferences with short (one page or less) abstracts without full papers. There are often very few citeable sources that have been peer reviewed; conference websites and abstract books vanish over the years. An arxiv-like repository can help with that, but it does not solve the problem of peer review. Community-run open-access journals seem to provide a solution, but rarely take off.


peer review at CS conferences is normally anonymous and double-blind to reviewers and authors. however, the conference organizers themselves know who the reviewers and authors are, and the conference management websites are set up to avoid assigning reviewers to papers where they have a conflict of interest with the author.

this is usually something like: anyone whose email address has the same domain as yours, plus anyone you have coauthored any paper with, both of these going back a few years. If you are ethical you will report any additional conflicts that this doesn't catch.
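to make that concrete, here's a rough sketch of that kind of automated check (the data layout and function names here are hypothetical; real conference systems like HotCRP or OpenReview implement their own versions):

    # hypothetical sketch of the conflict-of-interest filter described above
    from datetime import date

    COAUTHOR_WINDOW_YEARS = 3  # "going back a few years"; the exact window varies

    def email_domain(email):
        return email.rsplit("@", 1)[-1].lower()

    def has_conflict(reviewer, author, year=None):
        """Flag a conflict on shared email domain or recent coauthorship."""
        year = year or date.today().year

        # same institution, approximated by email domain
        if email_domain(reviewer["email"]) == email_domain(author["email"]):
            return True

        # recent coauthorship: reviewer["coauthors"] maps names to the year
        # of the most recent joint paper
        last_joint = reviewer["coauthors"].get(author["name"])
        return last_joint is not None and year - last_joint <= COAUTHOR_WINDOW_YEARS

    # a colluding pair at separate institutions with no recent joint papers
    # sails straight through this check
    reviewer = {"email": "rs@uni-a.edu", "coauthors": {"J. Doe": 2015}}
    author = {"name": "J. Doe", "email": "jd@uni-b.edu"}
    print(has_conflict(reviewer, author, year=2023))  # False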

so if a cabal wants to make sure they can positively review each other's papers, they have to make sure they don't trip those automated filters. this means they must have been at separate institutions for a while, and must avoid publishing any papers together.

then, during the review process, they can "bid on" (put themselves forward as a reviewer for) each other's papers and ensure they give only positive reviews.


> peer review at CS conferences is normally anonymous and double-blind to reviewers and authors.

tbh, it's really not, this is extremely rare (although increasing).


But isn't the peer-review process the filter?

Yeah, but good CS conferences have diverse reviewer pools, including reviewers from industry or from different subdisciplines. Such reviewers often haven't heard of the lab, or paid attention to which grants are currently funded, the movement of students into postdoc positions at other labs, and so on.

I agree completely. In my experience as a PhD student (comp sci), all conferences keep reviewers anonymous. As a result, there is very little accountability for the reviews you receive for your work. More than once, I have had papers rejected simply because a single reviewer barely read the paper and dismissed it. These reviews are very frustrating to receive, not only because the reviewer failed to understand the basic premises of the paper, but because they contain no useful information on how to improve the paper for future submissions.

Of course, the opposite can happen, where mediocre work slips through, and the reviewers who allowed that should be held accountable too. It's painful to me that so much of the acceptance process for research papers (in my field at least) comes down to luck.

Moral of the story: reviewer anonymity is good, but it comes at the expense of accountability.


The CS field gets a lot of bashing for revolving around conferences rather than journals. Because, you know, journals are supposed to be the serious venue for the grown-ups. But actually, CS conferences (at least the ones I've published in) have a double-blind review system that feels much fairer than the single-blind review in the top journals. Of course it's far from perfect (more often than not the reviewers can guess the affiliation of the authors anyway), but things like almost needing to talk to the editor to get a paper published, or the editor using the author's name as an important acceptance criterion, do not happen AFAIK. In general, my experience with reviews has felt much fairer in conferences with a double-blind system than in the typical journals with editorial boards full of sacred cows. And I don't say that out of spite over rejections; in fact, I've had more rejections at conferences than in journals.

A pity that in my country (Spain) the bureaucratic requirements for funding, tenure, etc. are one-size-fits-all: conferences count for almost nothing and journals are everything, even if in my particular subfield no one cares about journals. So I end up playing a double game: publishing some papers where I know I should, to reach the right audience, and others where I am forced to, in order to survive.


I can tell you one impact: there are huge amounts of ChatGPT-assisted, or entirely ChatGPT-written, junk papers appearing at conferences, which is causing serious problems in getting reviewers. Some large events have introduced a pre-review phase to filter out junk, but it's getting harder, as LLMs are good at producing plausible-looking nonsense.

Honestly, I'm not sure what the long-term solution is here. Conferences and journals may have to introduce a system where you need an existing member of the community to "vouch" for your work before you are allowed to submit (they don't have to say it should be published, just that it's worth reviewing).


Would you mind explaining double-blind review at CS conferences? I'm a doctor by trade, so I'm well acquainted with how peer review works, but I've never heard of conference submissions being reviewed, either before the fact as an acceptance criterion or after the fact as a kind of rating.

Can you please explain?

At CS conferences there is generally a small pool of reviewers for papers in a given sub-area (e.g., at a systems conference, some reviewers might focus on file systems). These reviewers tend to review each other's papers, since to be a reviewer you also must be an active contributor.

I don’t see how any algorithm might detect this pattern of fraud without taking into account the merit of the papers.


Double-blind reviewing is mostly just a computer science conference thing. Very few journals do it.

I don't think conferences have the capacity to do this. Journals, yeah, conferences no.

The difference is that at a conference you're in a zero-sum game, there is no chance for iteration, and the framing is reviewer versus author rather than both being on the same team. Yes, every work can be improved, but the process is far too noisy and we can't rely on that iteration happening between conferences.

From personal experience, I've had very few reviews with meaningful, actionable feedback. Far more frequently I've gotten ones that friends joke GPT could have done better. In my last one, I got a strong reject with high confidence from a reviewer whose only notes were about a "missing" link (we redacted our GitHub link) and a broken citation leading to the appendix. That's it. We reported them, and they then got a chance to write a new, angry review which seemed to convince the other two borderline reviewers. Most frequently I get "not novel" or "needs more datasets" without references to similar work (or with references that are wildly off base) and without explanation of which datasets they'd like to see and/or why. Most of my reviews come from reviewers reporting 3/5 confidence and essentially always giving weak or borderline scores (always biased towards reject). It is more common for me to see a review that is worse than the example of a bad review in a conference's own guidelines than one that is better.

As a reviewer, I've often defended papers that were more than sufficient and that I could tell were making the rounds. I recently had to defend a paper for a workshop that had clearly been turned around from the main conference (it was 10 pages plus appendix when most workshop papers were ~5), and the other two reviewers admitted to not knowing the subject matter but made the same generic statements about how "more datasets would make this more convincing." I don't think this is helping anyone. Even now, I've been handed a paper that's not in my domain for god knows what reason (others are in domain). (I do know: it's because there are >13k submissions and not enough reviewers.)

I've only seen these problems continue to grow, with silly bandaids applied to them. Like the social media ban, which had the direct opposite of its intended effect, and it was quite obvious that would happen. The new CVPR LLM ban is equally silly, because it just states that you shouldn't do what was already known to be unethical and shifts the responsibility to the authors to prove that an LLM wrote the review (which is a tall order). It's like telling a cop that you've been shot, only for them to ask you to identify the caliber of the bullet and the gun that was used. Not an effective strategy for someone bleeding out. It's a laughable solution to the clear underlying problem of low-quality reviews. But that won't be fixed until ACs and meta-reviewers actually read the reviews too. And that won't happen until we admit that there are too many low-quality reviews and no clear mechanism to incentivize high-quality ones. https://twitter.com/CVPR/status/1722384482261508498


> Heck, there have been automatically-generated conference papers that have been accepted and went through peer review

Those papers were used to prove that those conferences weren't actually peer reviewed.

