LIBLICENSE-L Archives

LibLicense-L Discussion Forum

LIBLICENSE-L@LISTSERV.CRL.EDU

From: Lorraine Estelle <[log in to unmask]>
Date: Tue, 19 Nov 2019 09:57:59 +0000

Dear Ted

Thank you for an interesting post about the tool you are developing. I
would like to chip in on one point you raise, about the value of download
statistics for cross-publisher comparisons.



The article you mention, "Do Download counts reliably measure journal
usage: Trusting the fox to count your hens", highlighted an important issue
with Release 4 of the COUNTER Code of Practice. This is sometimes called
the ‘platform effect’, whereby platforms that take a user directly to the
HTML full text of an article were likely to report more successful
requests than platforms that take the user first to an article abstract.
This is because if the user viewed the HTML of an article and then
downloaded a PDF of the same article, it counted as *two* ‘Successful
Full-Text Article Requests’.



Release 5 of the COUNTER Code of Practice, effective from 1 January this
year, addresses this issue with a new metric: ‘Unique_Item_Requests’.

If, for example, a user in the same session accesses the full-text HTML of
an article *and* also downloads the PDF of the same article, this counts as
*one* Unique_Item_Request. This metric allows consistent cross-publisher
comparisons however the platform is configured.
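
As a rough illustration only (this is a simplified sketch, not COUNTER's
own processing code, and the log format is an assumption), the difference
between a Release 4-style request count and the Release 5
Unique_Item_Requests count can be seen in a few lines of Python:

from collections import namedtuple

# Simplified usage log: each event records a session, an item and a format.
Event = namedtuple("Event", ["session_id", "item_id", "format"])

# Hypothetical events: one user views the HTML of article A and then
# downloads its PDF in the same session; another user views article B once.
log = [
    Event("s1", "article-A", "html"),
    Event("s1", "article-A", "pdf"),
    Event("s2", "article-B", "html"),
]

# Release 4 style: every successful full-text request counts, so the
# HTML view plus the PDF download of article A are counted twice.
total_item_requests = len(log)                                         # 3

# Release 5 style: within a session, all requests for the same item
# collapse into a single Unique_Item_Request.
unique_item_requests = len({(e.session_id, e.item_id) for e in log})   # 2

So a platform that serves the HTML full text first no longer gains an
advantage from the extra PDF download.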



Kind regards

Lorraine Estelle
Project Director
COUNTER






From: Ted Bergstrom <[log in to unmask]>

Date: Sun, 17 Nov 2019 13:10:17 -0800

Negotiations between Elsevier and the University of California system over
open access and pricing seem to have reached a stalemate, and the UC no
longer has the Elsevier Big Deal. Currently, no UC campus subscribes to
any Elsevier journals. If the UC chooses not to reenter the Big Deal, the
UC campus libraries will probably find it worthwhile to subscribe to some
Elsevier journals. Which ones should they choose?



A UCSB student, Zhiyao Ma, and I are developing a little tool that we hope
will help UC librarians in making cost-effective selections of Elsevier
journals for subscription. The UC has download statistics for each
Elsevier journal at each of its campuses. Elsevier posts *a la carte*
subscription prices for each of its journals. Our tool allows one to
select a cost-per-download threshold and obtain a list of journals that
meet this criterion, along with their total cost. It also allows separate
thresholds to be used for different disciplines. You can check out the
current version at https://yaoma.shinyapps.io/Elsevier-Project/
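
For illustration only (the tool itself is an R/Shiny application; the data
layout, field names and numbers below are made up for this example), the
core selection step amounts to something like this:

from dataclasses import dataclass

@dataclass
class Journal:
    title: str
    discipline: str
    list_price: float   # a la carte subscription price
    downloads: float    # campus download count for the period

    @property
    def cost_per_download(self) -> float:
        return self.list_price / self.downloads if self.downloads else float("inf")

def select_journals(journals, threshold):
    """Return the journals at or under a cost-per-download threshold,
    together with the total subscription cost of that list."""
    selected = [j for j in journals if j.cost_per_download <= threshold]
    total_cost = sum(j.list_price for j in selected)
    return selected, total_cost

# Hypothetical catalogue: Journal A costs $2.50 per download, B $6.00, C $12.50.
catalog = [
    Journal("Journal A", "Economics", 3000, 1200),
    Journal("Journal B", "Chemistry", 9000, 1500),
    Journal("Journal C", "Economics", 5000, 400),
]

picks, cost = select_journals(catalog, threshold=5.00)
print([j.title for j in picks], cost)   # ['Journal A'] 3000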



Since this project is still under way, we would be interested in any
suggestions from librarians about how to make this tool more broadly
useful. Extending this tool to make comparisons among journals from
multiple publishers is an obvious step. However, we are dubious about the
value of download statistics for cross-publisher comparisons. There is
evidence that download counts substantially overstate usage, because of
repeated downloads of the same article by the same users, and that the
amount of double-counting varies systematically by publisher. This is
discussed in a couple of papers of which I am a coauthor.



"Looking under the Counter for Overcounted Downloads" (with Kristin
Antelman and Richard Uhrig)

https://escholarship.org/uc/item/0vf2k2p0



and



"Do Download counts reliably measure journal usage: Trusting the fox to
count your hens". (with Alex Wood-Doughty and Doug Steigerwald)

https://crl.acrl.org/index.php/crl/article/view/17824/19653



Instead of using download data, we could construct a similar calculator
using price per recent citation as a measure of cost-effectiveness. We
have found that the ratio of downloads to citations differs significantly
between disciplines, so it is probably appropriate for cost-per-citation
thresholds to differ among disciplines.
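
A rough sketch of what discipline-specific cost-per-citation thresholds
could look like (the field names and threshold values below are purely
illustrative, not figures from our data):

def select_by_citation_cost(journals, thresholds, default_threshold=10.0):
    """Keep journals whose price per recent citation is at or under the
    threshold chosen for their discipline."""
    selected = []
    for j in journals:
        cites = j["recent_citations"]
        cost_per_citation = j["list_price"] / cites if cites else float("inf")
        if cost_per_citation <= thresholds.get(j["discipline"], default_threshold):
            selected.append(j)
    return selected

# Hypothetical thresholds reflecting that citation rates, like download
# rates, differ systematically across fields.
thresholds = {"Economics": 20.0, "Chemistry": 8.0}
catalog = [
    {"title": "Journal A", "discipline": "Economics",
     "list_price": 3000, "recent_citations": 200},   # $15.00 per citation
    {"title": "Journal B", "discipline": "Chemistry",
     "list_price": 9000, "recent_citations": 900},   # $10.00 per citation
]
print([j["title"] for j in select_by_citation_cost(catalog, thresholds)])
# ['Journal A']  (Journal B exceeds the Chemistry threshold)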



At any rate, we would value suggestions.



Ted Bergstrom

