LIBLICENSE-L Archives

LibLicense-L Discussion Forum

LIBLICENSE-L@LISTSERV.CRL.EDU

Subject:
From: LIBLICENSE <[log in to unmask]>
Reply To: LibLicense-L Discussion Forum <[log in to unmask]>
Date: Sun, 25 Nov 2018 18:53:05 -0500
From: Guédon Jean-Claude <[log in to unmask]>
Date: Sun, Nov 25, 2018 at 2:21 PM

What I am saying is that Plan S forces researchers to re-examine how
they intend to prioritize their journal choices, and suddenly the
dilemma is between the journal with the most desirable JIF and the
journal (or other compliant solution) that may help them get more
grants from a number of funders. An interesting dilemma: instead of
playing the game of publishers, you are invited to play the game of
funders.

Not publishing in the journal with the highest impact factor becomes a
handicap only if evaluation blindly follows the JIF. So the
"disadvantage" has to be examined carefully in each case.

I do not understand the paragraph starting with: "The argument I’m
hearing here ..."

The "unilateral disarmament" argument - in passing what a terrible
metphor - assumes that evaluation works absolutely the same way
everywhere. With cOAlition funders, one may expect a revisiting of the
ways in which evaluation is or should be conducted, or they will look
totally incoherent.

Regarding the big paragraph about metrics, what is a "market for
ideas"? The JIF does not address any "market for ideas", but rather a
market for journals, which is entirely different.

Regarding quality, the JIF does not refer to quality (of what, anyway?
Journals? Based on the average value of a counting exercise done
across a highly skewed distribution of citations from article to
article within the same journal), but rather to the likelihood of being
cited, whatever the means used to achieve that goal. It vaguely refers
to visibility, but Aileen Fyfe is spot on when she says that the IF
refers to mere citability. The logic is that of Nielsen ratings for TV
shows, which never had anything to do with the quality of the
broadcasts. If they did, television would not be as terrible as it has
been for the last few decades.
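As an illustration of that skew, here is a minimal sketch in Python
using purely hypothetical citation counts (not data from any actual
journal): the JIF-style mean can sit far above what a typical article
in the same journal actually receives.

    from statistics import mean, median

    # Hypothetical citation counts for 20 articles published by one journal
    # in the JIF's two-year window: a few highly cited papers, many rarely cited.
    citations = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 8, 30, 60, 120]

    jif_like_mean = mean(citations)      # what a JIF-style average reports
    typical_article = median(citations)  # what a "typical" article receives

    print(f"Mean citations per article (JIF-style): {jif_like_mean:.1f}")   # 12.8
    print(f"Median citations per article:           {typical_article:.1f}") # 3.0

The average is driven by a handful of outliers, which is why it says
so little about the quality of any individual article in the journal.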

Furthermore, when your indicator allows rankings, you are no longer
speaking about quality, but rather excellence: here is the game we are
going to make you compete in and, through rankings, we are going to
award the traditional gold, silver and bronze medals.

In short, a lot of confusion there...

Jean-Claude Guédon
________________________________________

From: Glenn Hampson [[log in to unmask]]
Sent: Friday, 23 November 2018 11:50

Hi Everyone,

This is an interesting conversation for sure. I wish we weren’t
discussing this under duress---under what many see as an impending
threat as opposed to still being in the deliberative stage of policy
making. If anyone on this list needs a neutral primer on Plan S, Rob
Johnson wrote a good piece here
(http://osiglobal.org/2018/10/26/plan-s-explained-part-2/), including
links to some of the less neutral pieces. It’s pretty easy to see from
this that “academic freedom” is just one of a number of concerns being
raised.

With regard to impact factors, Jean-Claude, I’m not clear on how you
see Plan S helping. Can you elaborate?

The argument I’m hearing here (and this may be totally wrong) is that
authors will still be “satisfied” that they’ve been published
regardless of impact. Impact factors won’t be fixed, they’ll just be
“ignored.” Is this correct?

If so, what might happen then is that EU researchers won’t be able to
publish in the highest impact journals but their colleagues in the US
will, which will put EU researchers at a disadvantage, no? This is an
oversimplification of course---impact will shift over time ceteris
paribus---but the short-term impacts of “unilateral disarmament” might
be real. And what happens to the academic freedom argument if/when
researchers are unable to publish in journals that are most widely
read by their colleagues? What kind of noncompliance issues arise
then? Some work has already been produced on this topic (again by Rob
Johnson: https://re.ukri.org/documents/2018/research-england-open-access-report-pdf/
).

We talk a lot about the distortions caused by impact factors. On the
one hand, metrics are here to stay and have real value for authors and
funders. On the other hand, they create an uneven marketplace for
ideas, which is fine if you’re okay with an uneven marketplace, but
horrible if you think knowledge should exist on its merits and not the
merits of its container. The “open information” direction is where
we’re heading in research and society, but we won’t get there by
simply “banning” metrics, which is akin to “banning” quality, desire,
pride and accomplishment---i.e., in some cases (maybe many), metrics
may be signaling something real (though some have argued that journal
impact factors are themselves garbage, resting on statistical
fallacies). In any case, these
signals distort the choices authors would otherwise make in a
marketplace we’re trying to flatten. So, I think we’re all agreed
(except for the folks in the analytics business) that we need to come
up with some way to rebalance incentives in the scholarly publishing
system, which means figuring out how to deal with impact factors. If
you think, Jean-Claude, that Plan S will do this, I’d be curious to
hear how?

Thanks everyone---happy Thanksgiving holidays for those celebrating.
Take a break from email today, watch some football, and get your
Christmas shopping started!

Best regards,

Glenn


Glenn Hampson
Executive Director
Science Communication Institute (SCI) <sci.institute>
Program Director
Open Scholarship Initiative (OSI) <osiglobal.org>
