
O’Peer

by Jess H. Brewer

20 February 2012

Following a Peer Review session at the AAAS meeting this week, I am going to record my thoughts for posterity, proselytizing shamelessly about my vision for the future of peer review.

So... let me begin with the catchy name. The system I propose is entirely reliant on the Internet, and everyone knows that the first requirement for success of any new Internet entity is a catchy name. I trust (especially in context) that the intended connotations are obvious: Peer needs no explanation; the O' prefix stands variously for of or by peers and for a shortening of Open, which you will see is a key feature. That being said, if you want to call it something else, go for it! This is only a suggestion.

Now down to serious business. Here are some axioms I hold as self-evident:

1. The purpose of publishing scientific literature in general is

   (a) to distinguish erroneous drivel from genuine progress worth reading;

   (b) to safely archive the latter so that it may be referenced and consulted in perpetuity without revision.

   The problem is that, for any given journal, this is a binary decision: either the paper is published (in that journal) or it is consigned to obscurity (from that journal's point of view). Binary choices are tidy, but they leave no room for subtlety.

2. In the past, limited reserves of paper and shelf space dictated limits on the archival capacity of journals. No more; we can easily archive every word written on any subject, regardless of worthiness. I need not list examples demonstrating this. The problem today is identifying which are worth investing the reader's limited time/effort resources in reading; this limitation actually imposes much tighter constraints on what should be published within this binary system.

3. The present system fails miserably to achieve the above goals. As many people observed in the session, everything gets published somewhere, and it probably should! After all, someone went to a lot of trouble to do the research and write the paper; it might even be found later to be not as silly as the reviewers first thought. But we not only fail to ensure that all serious efforts get published, we also fail the reader by offering only the crudest form of advice on which papers she should read.

4. The only guidance one can get on what to believe comes in the form of elitism: only read the most prestigious journals, because they are the ones that reject all but the "best" papers. Do they? Really? Well, they try....

5. Despite their best efforts, the most prestigious journals are prey to politics. While the perceptions of rejected authors (that the Editors employ a cadre of referees who work together to preserve the exclusivity of their "old boys' club") may be slightly paranoid, "Just because you're paranoid
doesn't mean no one is out to get you." Furthermore, since funding and promotion hang in the balance, many authors will stop at nothing to pressure Editors into accepting their papers. Thus the present system encourages bad attitudes and bad behaviour in both directions.

6. To paraphrase Churchill, "Democracy is the worst possible form of government, except for all the rest." We despair of wisdom in democratic government, but we really like the idea. The same goes for peer review. Everyone wishes peer review could be "Open" (i.e. accessible to everyone), but if cranks and ignorant fanatics had the same "clout" as established scholars, we would have... well, Twitter!

OK, I have started to mix my conclusions in with my "axioms". Sorry. I will come back and edit this into a more rigorous form later; right now I just want to get it written down.

What we really need is a (multiparameter) "credibility profile" for each reviewer of any paper. If every would-be referee were thus rated, it might be feasible to Open up peer review without erasing its effectiveness.

The solution I propose goes something like this:

Consider arXiv. Most papers in certain fields, and many papers in other fields, are placed on the arXiv site before they are accepted for publication in a journal; this has led to certain conflicts because (in my opinion) the established journals rightly see arXiv as a potential threat to their hegemony. Knowing this, arXiv has avoided implementing open peer review, even though it would be a simple matter to implement, partly to avoid an immediate confrontation with powerful forces and partly because it would not be quite such a simple matter to implement effectively. The following is my prescription for such an implementation — perhaps not through arXiv, but using some equivalent online archive of preprints; I'll keep speaking of arXiv with this understanding.

Upon submitting a new paper to arXiv, its author(s) grant passive read access to everyone in the world, while retaining the right to revise their submission. At this stage, reviews are neither solicited nor accepted by arXiv; of course, anyone with comments or suggestions may send them to the author(s) directly (not anonymously). This much is just a recapitulation of the existing arXiv system.

When the author(s) are satisfied that their paper has converged to a version they would be happy to have published, that version is "frozen" (no more revisions of that version) and the paper is open for review.

Review proceeds as follows: anyone who has registered with the system as a referee (let's call her reviewer $i$) can write a review of paper $j$ and fill in an online form evaluating its various qualities $q^i_{jk}$. Based on some formula $Q$ in terms of $q^i_{jk}$, paper $j$ is assigned a "score" $Q^i_j$ for reviewer $i$. After $N$ reviews (some minimum number may need to be required), paper $j$ is assigned a net "quality factor"

$$ Q_j = \frac{1}{N} \sum_{i=1}^{N} Q^i_j \, C_i $$

where $C_i$ is the credibility of reviewer $i$, which is an absolutely essential feature of this system.
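
To make the arithmetic concrete, here is a minimal sketch in Python of how a score and a quality factor might be computed. The particular choice of $Q$ (a plain average of reviewer $i$'s ratings $q^i_{jk}$) and the sample numbers are illustrative assumptions on my part, not part of the proposal; the real formula is precisely what should be debated.

    # A minimal sketch of the scoring step. The choice of Q below (a plain
    # average of one reviewer's quality ratings) is only a placeholder.

    def reviewer_score(qualities):
        """Q^i_j: collapse reviewer i's ratings q^i_jk of paper j into one score."""
        return sum(qualities) / len(qualities)

    def quality_factor(reviews):
        """Q_j = (1/N) * sum over the N reviews of Q^i_j * C_i.

        `reviews` is a list of (qualities, credibility) pairs, one per reviewer.
        """
        return sum(reviewer_score(q) * c for q, c in reviews) / len(reviews)

    # Example: three reviewers rate paper j on three quality dimensions (0-1 scale).
    reviews_of_paper_j = [
        ([0.9, 0.8, 0.7], 0.90),  # established, highly credible referee
        ([0.4, 0.5, 0.6], 0.50),  # middling credibility
        ([1.0, 1.0, 1.0], 0.05),  # unknown referee: opinion carries little weight
    ]
    print(quality_factor(reviews_of_paper_j))  # ~0.34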

Like the "quality function" $Q(q^i_{jk})$, the "credibility algorithm" $C$ may involve a complicated evaluation of the reviewer's past performance, both as an author and as a referee, yielding the credibility $C_i$ of referee $i$. It is an algorithm rather than a function, because it evolves with time. In the absence of a priori input (which I will discuss later), $C_i$ is initially zero, or perhaps some small value, to indicate that the opinion of an unknown referee carries minimal weight.

Once registered, reviewer $i$ accumulates credibility $C_i$ based on number and quality of papers published in the field, number and quality of reviews of others' papers (yes, reviews can be reviewed too; why not?) and perhaps other criteria to be determined by public debate.
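
Again purely as an illustration of what an evolving credibility algorithm could look like, here is a deliberately naive sketch. The weights, the caps and the saturating form are all assumptions of mine, not part of the proposal; they are exactly the kind of thing that debate would have to settle.

    # One deliberately naive way the credibility algorithm might evolve.
    # Every weight and cap below is an illustrative assumption, not part
    # of the proposal.

    def credibility(num_papers, mean_paper_quality,
                    num_reviews, mean_review_rating, prior=0.05):
        """Return C_i in [0, 1] for a referee with the given track record.

        A referee with no record keeps only the small prior, so an unknown
        referee's opinion carries minimal weight.
        """
        authorship = min(num_papers, 20) / 20 * mean_paper_quality
        refereeing = min(num_reviews, 50) / 50 * mean_review_rating
        return min(prior + 0.50 * authorship + 0.45 * refereeing, 1.0)

    print(credibility(0, 0.0, 0, 0.0))    # 0.05 -- a newly registered referee
    print(credibility(12, 0.8, 30, 0.9))  # ~0.53 -- an established one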

Needless to say, $C_i$ is apt to become a number (or, more likely, an array of numbers; but let's keep it simple for now) of considerable impact on referee $i$'s career, assuming the identity of referee $i$ can be obtained. Anonymity, privacy and security are the most uncertain issues in my proposal, so I will avoid them for now.

Meanwhile, however, we can deal with the inevitable furor over the proper form of algorithm $C$: we should embrace it (the furor) gladly, because it is, in fact, the appropriate topic of debate. I believe a great deal can be learned from a continuing debate over the proper form of the credibility algorithm — and the wonderful thing about this debate is that it converges at each step to substantive action, unlike most hotly contested political debates.

Numerous journals already possess huge databases of referees' credibility profiles. This is perhaps their most valuable possession. When I was a Divisional Associate Editor for Physical Review Letters I used to go to all the Editors' meetings at APS conferences and urge them to "capture" the Editors' expertise in knowing whom to ask (and whom to trust) in an accessible (but "anonymized") database that could be used as a starting point for the idea outlined above. They seemed to think it was a bad idea, although I notice that Phys. Rev. X has now made an appearance....

One way to approach the aforementioned debate constructively would be to test the credibility algorithm against such existing credibility databases maintained by journal editors. Of course, they would have to cooperate. If they were even more cooperative, they could supply their data to the initial system so that it can start off with a refined list of $C_i$ (which would then begin to evolve further according to the algorithm) rather than a perfectly neutral (and uninformative) initial state. I wish I could deem this likely.

Similar research could be performed comparing the conclusions of the quality function with journal decisions on existing publications. This again would require the active cooperation of editors. I wish....
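
As a sketch of what such a comparison might look like, suppose (hypothetically; no such dataset exists yet) that one had quality factors $Q_j$ for a batch of old submissions together with the journal's accept/reject decisions. One could then simply ask how often an accepted paper outranks a rejected one:

    # Hypothetical comparison of quality factors Q_j with a journal's past
    # accept/reject decisions. The numbers are invented; only the shape of
    # the test matters: how often does an accepted paper outrank a rejected one?

    def concordance(accepted_q, rejected_q):
        """Fraction of (accepted, rejected) pairs in which the accepted paper
        has the higher quality factor; 0.5 means no better than chance."""
        pairs = [(a, r) for a in accepted_q for r in rejected_q]
        wins = sum(1.0 for a, r in pairs if a > r) + sum(0.5 for a, r in pairs if a == r)
        return wins / len(pairs)

    accepted_q = [0.71, 0.64, 0.58, 0.80]  # Q_j of papers the journal accepted
    rejected_q = [0.35, 0.52, 0.60, 0.28]  # Q_j of papers it rejected
    print(concordance(accepted_q, rejected_q))  # 0.9375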

The above description of factors, functions and algorithms is unsatisfactory, even to me. Credibility is many-faceted and topical; quality is also multidimensional. I am tempted to use vector notation for both, but I despair of inventing orthonormal bases for their vector spaces (although one might dream of such refinements). Nevertheless I want to put this idea forward in this crude form just to get a discussion going.

I believe that such a system, by generating continuous ranges of paper quality and referee credibility, would serve authors, referees and readers much better than the present dog's breakfast of cutthroat politics and hit-or-miss evaluations. And it would be utterly democratic, which is always nice.

So there you have it. Feel free to take this ball and run with it, adapt it, convert it, do whatever you like with it. I don't want credit (what would I do with it at this stage of my career?); I just want peer review to work better.