From: Sandy Thatcher <[log in to unmask]>
Date: Mon, 22 Dec 2014 22:57:01 -0600
Granted, a library has several important constituencies to serve,
among which faculty are only one. BUT, if the question is value for
long-term use, surely faculty, who tend to stay at universities longer
than most graduate students, will need particular journals over a
longer span of time. If grad students need journals for their
dissertations, their needs might be satisfied equally well by
pay-as-you-go services like the CCC's Get It Now, without committing
to a long-term subscription.
In a survey, faculty could also be asked which journals, if any, they
personally subscribe to, so that the library might not need to
subscribe to journals that faculty in a given discipline already
receive.
> From: "Hinchliffe, Lisa W" <[log in to unmask]>
> Date: Mon, 22 Dec 2014 02:21:17 +0000
> A few more thoughts on why surveys may be mis-matched to documenting
> use and offering an attempt at clarity on my access log and
> self-perception bias comments...
> Whether you can determine the kind of person who is accessing the
> content via access logs will be determined by how you have set up
> access and what you track. If you require logins and capture patron
> type, you will be able to determine use by patron group. If not, yes -
> you'll have everyone's use. I'm probably on the side of thinking that
> postdocs and graduate students are pretty important users of research
> content and, honestly, undergraduates too. But, even if not
> undergrads, I hope we'd find middle ground on thinking that ALL of
> those with research responsibilities (i.e., driving research
> productivity at the University) are key constituents here.
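The patron-group breakdown described above is simple once logs capture patron type; a minimal Python sketch, assuming a hypothetical CSV export with `patron_type` and `journal_title` columns (no real ILS or proxy-log format is implied):

```python
# Sketch: tally journal accesses by patron group.
# The CSV column names below are assumptions for illustration only.
import csv
from collections import Counter

def load_log(path):
    """Read a hypothetical access-log export into (patron_type, journal_title) pairs."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row["patron_type"], row["journal_title"])
                for row in csv.DictReader(f)]

def uses_by_patron_type(rows):
    """Count accesses per patron group, regardless of title."""
    return Counter(patron_type for patron_type, _ in rows)

def uses_by_title_for_group(rows, group):
    """Count per-journal use for a single patron group, e.g. 'faculty'."""
    return Counter(title for ptype, title in rows if ptype == group)
```

If logins are not required, of course, no `patron_type` field exists and the breakdown is off the table, which is the point above.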
> I'm skeptical on the response rate you'd get - I think it would be
> much lower than you'd like. But, let's say it is high ... I think the
> data itself would be really difficult to rely upon. The
> self-perception bias phrase I used is probably more accurately labeled
> "set of cognitive biases"
> (http://en.wikipedia.org/wiki/List_of_cognitive_biases). In this case,
> a number of memory biases seem likely to be an issue. Accurately
> remembering how often one has done anything over the past year is
> not easy, much less remembering at the level of journal title when
> presented with a long list. And, there is also the
> complexity here that there would be motivation for a faculty member to
> over-report use because of the perception he/she has of the importance
> of the journal in the field ("of course I use this journal - it is an
> important one!").
> Having said all this - it would be wonderful if someone had empirical
> data to test this with. A relatively straightforward design would be
> to compare a faculty member's reports of use with the sources they
> cite in publications. This design has obvious limitations, but it
> would allow comparison of faculty-reported use with documented use.
> Better of course would be correlating access log use with faculty
> member reported use, but that would require tracking not just patron
> type but actual individual patrons. In the US, I suspect few
> libraries would have that data readily available, but it might be
> possible in Australia, if I correctly understand their access
> logging practices.
> Lisa Janicke Hinchliffe
> Professor/Coordinator for Strategic Planning
> Coordinator for Information Literacy Services and Instruction
> University Library, University of Illinois at Urbana-Champaign
> [log in to unmask]
> From: Sandy Thatcher <[log in to unmask]>
> Date: Fri, 19 Dec 2014 09:22:57 -0600
> Sure, if you got a very low response rate, but I'm guessing enough
> faculty would find this a very important survey to respond to. And it
> would reveal a lot more than usage counts, which presumably include
> uses by everyone, not just faculty. (I'm not sure what access logs
> are, but do they reveal who is doing the accessing? And how does
> "self-perception bias," whatever that is, enter into the equation?) If
> faculty regularly use a journal instead of just very occasionally
> using an article from it, isn't that a very important piece of
> information? It seems like a pretty straightforward question to ask,
> particularly if you define what "regularly" means.
> Sandy Thatcher
>> From: "Hinchliffe, Lisa W" <[log in to unmask]>
>> Date: Fri, 19 Dec 2014 04:10:38 +0000
>> A survey seems mis-matched to this. Response rates, self-perception
>> bias, etc. Why not just use the access logs?
>> On the original question - I would question why libraries would
>> have to review/compare listings themselves. A pretty standard thing
>> to expect when making a purchase would be "here's a list of what
>> you are buying and here's how it differs from your last list." It
>> doesn't seem like something libraries should have to devote staff
>> time to compiling. Now, as to whether such lists get reviewed ...
>> well, I hope
>> so and annually!
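The year-over-year comparison is a plain set difference; a minimal Python sketch, assuming the library keeps (or the publisher supplies) plain-text title lists, one title per line (filenames are hypothetical):

```python
# Sketch: diff last year's title list against this year's.

def load_titles(path):
    """One journal title per line; blank lines ignored."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def compare_lists(old_titles, new_titles):
    """Return titles dropped from and added to the package."""
    return {
        "dropped": sorted(old_titles - new_titles),
        "added": sorted(new_titles - old_titles),
    }

# Usage with hypothetical files:
#   diff = compare_lists(load_titles("package_2013.txt"),
#                        load_titles("package_2014.txt"))
```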
>> Lisa Janicke Hinchliffe
>> Professor; Coordinator for Strategic Planning;
>> Coordinator for Information Literacy Services and Instruction
>> University Library, University of Illinois at Urbana-Champaign
>> [log in to unmask]
>> From: Sandy Thatcher <[log in to unmask]>
>> Date: Wed, 17 Dec 2014 20:56:58 -0600
>> Would it be all that difficult to do a survey of faculty and ask them
>> which of the journals in a package they use regularly, or not at all?
>> Does any library now do this?
>> Sandy Thatcher
>>> From: Karin Wikoff <[log in to unmask]>
>>> Date: Wed, 17 Dec 2014 07:35:58 -0500
>>> For us, we'd only be likely to notice if something our users use
>>> regularly disappears. 20 journals no one uses could disappear, and we
>>> probably wouldn't notice. But if the one journal some faculty member
>>> uses all the time disappeared, then there'd be a big problem. It's a
>>> good question, and not one I'd given a lot of thought to. I'll be
>>> interested to read other folks' replies.
>>> Karin Wikoff
>>> Electronic and Technical Services Librarian
>>> Ithaca College Library
>>> 953 Danby Rd
>>> Ithaca, NY 14850
>>> Phone: 1-607-274-1364
>>> Fax: 1-607-274-1539
>>> Email: [log in to unmask]
>>> On 12/16/2014 8:17 PM, LIBLICENSE wrote:
>>> From: Ann Shumelda Okerson <[log in to unmask]>
>>> Date: Tue, 16 Dec 2014 20:15:48 -0500
>>> Dear liblicense-l readers. Your listowner/moderator (me) has a
>>> question for you. I would very much welcome the views of anyone on
>>> this list, whether publisher or librarian or someone in the scholarly
>>> communications chain. There's no right answer; in fact, I'm not sure
>>> there is even an answer, but I was in a group that started discussing
>>> this matter and we felt caught short. And we felt we should have a
>>> reasoned opinion, when we did not. Please read on.
>>> Many big-deal journal packages contain language [such as that
>>> below] re. modification to "portions of the Licensed Materials." The
>>> contracts say that if any of the changes make the materials less
>>> useful, the institutions may seek to terminate this agreement for
>>> breach. And, there will likely be language of this sort: "If any such
>>> withdrawal renders the Licensed Materials less useful to Licensee or
>>> its Authorised Users, Licensor shall reimburse XX for the withdrawal
>>> in an amount proportional to the total Fees owed."
>>> My question is this: if my library has a "big [or medium] deal,"
>>> let's pretend it's 300 or 500 or 1000 or 2000 titles, what is a
>>> reasonable expectation for the numbers or percentage of content that
>>> will leave the package before the library or consortium would either
>>> seek reimbursement (more likely) or total termination (less likely)?
>>> Do libraries (or consortia) review the big-deal lists each year to look
>>> for changes? Every 3 years? If there were a loss of previous titles
>>> in the amount of 5%, would it be a concern? How about 10%?
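The percentage question is straightforward arithmetic, dropped titles over the original title count; a sketch with hypothetical numbers (not drawn from any actual license):

```python
# Sketch: does title loss in a package cross a review threshold?
# All counts and thresholds here are hypothetical.

def loss_percentage(original_count, dropped_count):
    """Percentage of the original package that has been withdrawn."""
    return 100.0 * dropped_count / original_count

def needs_review(original_count, dropped_count, threshold_pct=5.0):
    """True if the loss meets or exceeds the chosen 'bright line'."""
    return loss_percentage(original_count, dropped_count) >= threshold_pct

# A 500-title deal losing 30 titles is a 6% loss: over a 5% line,
# under a 10% one.
```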
>>> Or, if not a percentage "bright line," then what would cause a review
>>> of the list and a concerned conversation with the big-deal publisher?
>>> Would it be the loss of a couple of absolutely key titles? the loss
>>> of a particular smaller publisher's journals list? a disciplinary
>>> impact? a dollar impact? If "it depends," what does it depend on?
>>> Do libraries care very much about what's actually in these large
>>> packages, or are we too busy to pay attention to their changes? What
>>> would it take to get libraries' attention?
>>> Thank you, Ann Okerson