LIBLICENSE-L Archives

LibLicense-L Discussion Forum

LIBLICENSE-L@LISTSERV.CRL.EDU

Subject:
From: LIBLICENSE <[log in to unmask]>
Reply-To: LibLicense-L Discussion Forum <[log in to unmask]>
Date: Tue, 5 Jun 2012 17:07:20 -0400
Content-Type: text/plain
From: Sean Andrews <[log in to unmask]>
Date: Tue, 5 Jun 2012 09:30:48 -0500

I can see the utility of this information - and in some ways this
transparency is, in principle, what a more open peer review process
allows, something along the lines of the post-publication peer review
(or "peer-to-peer review") that Kathleen Fitzpatrick talks about:

http://www.tandfonline.com/doi/abs/10.1080/02691728.2010.498929

and that Kent Anderson finds problematic:

http://scholarlykitchen.sspnet.org/2012/03/26/the-problems-with-calling-comments-post-publication-peer-review/

In a sense, what this would show is the conversation that went on
between the reviewers, the editor, and the author - and, therefore,
the process through which that knowledge was produced.  In some cases,
you may not want to see the sausage being made; in others, it gives a
window into the other possible ways an idea could have been developed.
This is what I find useful about authors who use footnotes for side
conversations: they often show tangents that wouldn't fit into the
current articulation of the paper, but that might have surfaced had
the paper taken another tack.

But the other thing I hear you saying is that it would be useful to
have some metric of what those acceptance and rejection rates mean -
more qualitative information that would give some insight into what
the process of peer review amounts to in each case.
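
To make the arithmetic concrete, here is a minimal sketch in Python
of the two journal-level figures in play - the acceptance rate that
journals already advertise, and the "average prior rejections per
article" metric Jim proposes below.  All of the numbers are invented
for illustration; no journal publishes this data today.

    # Hypothetical journal-level data; illustration only.
    submissions_received = 400
    articles_published = 60

    # The figure journals already boast of.
    acceptance_rate = articles_published / submissions_received

    # For each article the journal published, how many other journals
    # rejected it first (counts invented for the sketch).
    prior_rejections = [0, 2, 1, 3, 0, 1]
    avg_prior_rejections = sum(prior_rejections) / len(prior_rejections)

    print(f"Acceptance rate: {acceptance_rate:.0%}")  # 15%
    print(f"Average prior rejections/article: "
          f"{avg_prior_rejections:.2f}")              # 1.17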

In a way, I can see these two converging in an interesting framework
that might help make sense of the emerging diversity of peer review
processes - both within journal editorial boards and on the open web.
The language of "badges" is a bit overdone, but something like a
certification of peer review would let readers know something about
the processes that go on behind the scenes - and begin to offer an
alternative metric for items that are mostly subject to
"post-publication peer review."  I don't know how the authority to
undertake this could be established, but it might be something
individual disciplines would try to do.

For journals, this might mean allowing an audit of their processes:
not only information about acceptance and rejection, but actually
digging into the files on the editors' hard drives or ScholarOne
accounts, using some metric to give a general snapshot of what the
process looks like.  So while it might not tell you what any
particular article went through (e.g., whether, how often, and by
whom a piece had been rejected), it would give you a sense of the
legitimacy of the process.  In establishing guidelines for peer
review in journals - i.e., what would merit a certification - this
could then be transferred to other forms of scholarship, like that
on the open web (or in emergent open access publications).
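
As a purely hypothetical sketch of what such an audit snapshot might
record - none of these field names come from ScholarOne or any real
system; they are invented for illustration - a certifying body could
summarize a journal's process at a level that exposes no individual
manuscript's history:

    from dataclasses import dataclass

    @dataclass
    class ReviewAuditSnapshot:
        """Invented summary a peer-review audit might produce."""
        journal: str
        submissions: int               # pieces received in the audit window
        desk_rejections: int           # turned away without external review
        revise_and_resubmit: int       # sent back for revision
        published: int                 # ultimately accepted and published
        median_referee_reports: float  # reports per externally reviewed piece

        def acceptance_rate(self) -> float:
            return self.published / self.submissions

Publishing aggregate snapshots like this, rather than the raw files,
is one way an audit could ground a certification without violating
reviewer confidentiality.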

If the metric were that a piece should be looked at by at least two
established scholars in the field, this could more easily measure up
to the standard Anderson poses for comments as peer review.  In many
cases, an acceptance is not all that much longer than a lengthy
comment on a blog post.  A rejection or revise-and-resubmit might be
a longer comment plus a required revision of the post to meet some of
those critiques - or at least a lengthy defense by the author.
Likewise, if an author were interested in seeking certification, they
could solicit comments from disciplinary listservs or other spaces to
ensure they got quality feedback and spurred some conversation.

In any case, once an author felt they had fulfilled the peer review
process requirements, they could then apply for some sort of
certification by the peer review board of their discipline - or
something like this.

Maybe this is overdoing it - since the main issue for peer review is
often its place in tenure and promotion procedures, having the
certification be something more internal to departments or colleges
might make more sense.  Either way, it is an interesting question,
and I think it points to the emerging disintegration of systems of
legitimacy in scholarly communication - and the need to shore them up
in creative ways that take into account the diversity of work and the
venues in which it is being produced.

And perhaps I am totally misunderstanding your question.

best,

Sean Johnson Andrews
[log in to unmask]
Assistant Professor of Cultural Studies
Columbia College, Chicago
2011-2013 ACLS Public Fellow
Program Officer
The National Institute for Technology in Liberal Education
http://www.nitle.org | tel. 703-597-6948 | fax 512 819-7684


On Sun, Jun 3, 2012 at 6:36 PM, LIBLICENSE <[log in to unmask]> wrote:

> From: "James J. O'Donnell" <[log in to unmask]>
> Date: Sun, 3 Jun 2012 14:39:56 -0400
>
> There is one experiment with transparency in scholarly communication
> that I have not seen.  I'd be glad to hear if there are any cases where it
> has been tried and to hear comments on the possibility.
>
> The most confidential part of the process of "public"ation is peer
> review.  An author submits an article to a journal and it is accepted
> or rejected; if rejected, the author goes elsewhere and repeats the
> effort to win acceptance.  Journals boast of their acceptance (i.e.,
> rejection) rates.  Something I would like to know - but now cannot
> find out when I read an article - is whether, how often, and by whom
> the same piece has been rejected.  Many editors would be glad to have
> that information about individual items, and "average prior
> rejections/article" would be an interesting metric of the quality of a
> journal.
>
> Publishing this information would also allow for validation of the
> peer review system:  articles with high citation counts and multiple
> rejections would be interesting in one way, but it's likely in most
> fields that the reverse would be the near-universal norm.  Who
> would not benefit from such transparency?  If we are to mandate
> access to results of research -- is this not one of the results?
>
> Jim O'Donnell
> Georgetown
