From: Jan Velterop <[log in to unmask]>
Date: Mon, 20 Oct 2014 10:21:25 +0200
Could it not simply be that researchers now find it easier to identify
'gems' in the general matrix of journal literature, worthy of being
cited, rather than focusing as strongly as in the past on the
'phenocrysts' of elite journals?
Jan Velterop
On 19 Oct 2014, at 21:05, LIBLICENSE <[log in to unmask]> wrote:
> From: Sally Morris <[log in to unmask]>
> Date: Fri, 17 Oct 2014 11:04:50 +0100
>
> I concur with what Anthony and Bill have said. Surely, though, the
> interesting point in Anurag's findings is that the distribution (of journals
> among the top 1000 cited articles) has changed.
>
> If anything, if the total number of published articles is growing, might one
> not expect the change to be in the other direction (as 1000 is a smaller
> percentage of the total)?
>
> What do you all think is happening here?
>
> Sally
>
> Sally Morris
> Email: [log in to unmask]
>
> -----Original Message-----
>
> From: Anthony Watkinson <[log in to unmask]>
> Date: Thu, 16 Oct 2014 10:36:28 +0100
>
> I do completely agree with Bill. As I see it, researchers have a complex
> hierarchy of journals. Almost all in science would place Science or Nature
> at the top. However, below that there is a hierarchy within their own
> discipline, which is not just a matter of the impact factor but a more
> complex judgement passed down from mentor to mentee. Sometimes the work
> being reported is perceived as of wider interest - to the whole discipline,
> or even to the whole scientific community.
>
> Sometimes the work being reported is more specialised and is of interest
> only to a niche community with its own journal. The research is not
> necessarily inferior (look at the work on transposons, which eventually led
> to a Nobel prize), but sometimes it is not actually very good and one knows
> it. There are special complications with interdisciplinary work. If two
> groups, one in physics and one in clinical medicine, are writing a paper
> together, the clinicians will trust the physics group to submit the physics
> component to an appropriate physics journal, and vice versa. I know there is
> a lot of literature about how groups work, but I am not sure how much
> research there is on how groups decide where to submit and why. I am sure
> there is some literature, but I have not had time to look for it.
>
> I am writing here as a researcher as well as a former publisher and am
> familiar with the behaviour Bill and I describe from both standpoints.
>
> Anthony
>
> -----Original Message-----
> From: <[log in to unmask]>
> Date: Wed, 15 Oct 2014 08:37:31 +0100
>
> As well as the slots in elite journals being constant, this story seems to
> me not to be news, because it just reflects longstanding researcher
> behaviour: as well as wanting to publish in elite journals, a lot of
> researchers also want to engage with the (often tiny) community that is
> specifically interested in their own topic, a community which can offer
> advice, criticism, avenues for further research, collaboration and so on.
> So they also publish in what they regard as the most appropriate journal,
> elite or not. Of course anyone working in acoustics, say, wants the prestige
> of being published in the Journal of the Acoustical Society of America; but
> if his or her particular field is building acoustics, and if feedback is one
> of the things he or she is looking for, that journal won't offer much, since
> its content is mostly concerned with speech, hearing, psychological aspects,
> animal noise and so on, presumably reflecting readership interest.
>
> Bill Hughes
> Director
> Multi-Science Publishing Co Ltd
>
>
> ----- Original Message -----
>
>> From: Joseph Esposito <[log in to unmask]>
>> Date: Mon, 13 Oct 2014 21:18:06 -0400
>>
>> Not persuasive. The number of articles continues to grow, while the number
>> of slots in the so-called elite journals is pretty much constant. If
>> all the seats are taken at Harvard, Princeton, and Yale, do we expect
>> parents to tell their kids not to go to college at all? Would we
>> expect that someone who attends the U. of Michigan or Villanova has no
>> economic contribution to make? The question about this article is why
>> anyone thinks it is newsworthy. Where was it published again?
>>
>> Joe Esposito
>>
>> On Mon, Oct 13, 2014 at 8:17 PM, LIBLICENSE <[log in to unmask]> wrote:
>>>
>>>
>>> From: John Sack <[log in to unmask]>
>>> Date: Mon, 13 Oct 2014 05:49:53 -0700
>>>
>>> I am forwarding this response on behalf of Anurag Acharya at Google
>>>
>>> John Sack
>>> Founding Director
>>> HighWire Press
>>>
>>> -----
>>>
>>> I would like to clarify a couple of things about our paper. My comments
>>> are inline below.
>>>
>>> cheers,
>>> anurag
>>>
>>> Corey Murata writes:
>>>
>>> The basic flaw in the research is centered on how they identify
>>> 'elite journals.'
>>>
>>> First, they are using incredibly broad disciplinary groupings from
>>> Google Scholar Metrics:
>>>
>>> http://scholar.google.com/citations?view_op=top_venues
>>>
>>> Economics, for example, is lumped in with Business and Management, and
>>> if you look at the top ten journals in that broad group, the only
>>> management journal is MIS Quarterly; all the rest are Economics and
>>> Finance.
>>>
>>> [[ANURAG]] As described in the Methods section of the paper, elite
>>> journals are identified for each of the 261 specific subject
>>> categories (e.g. Immunology, Accounting & Taxation, Gender Studies,
>>> or Finance) and NOT at the level of broad areas (e.g. Health & Medical
>>> Sciences or Business, Economics & Management).
>>>
>>> To get an overview of changes within each broad area, we determined
>>> the median, 25th-percentile, and 75th-percentile subject categories
>>> within each area. We then picked the median subject category in each
>>> broad area as the representative for the area and plotted data for
>>> all three (median, 25th-percentile, and 75th-percentile) categories
>>> in the per-area graphs in Figure 2. The median/25th/75th-percentile
>>> categories were computed afresh for every year to ensure that they
>>> remain representative of the area (details are in the Methods
>>> section).
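>>>
>>> In code terms, that per-year selection could be sketched roughly as
>>> follows (an illustration only, not the paper's implementation; the
>>> input structures and names - area_categories, score - are assumed):
>>>
>>>   # Sketch: pick the 25th-percentile, median, and 75th-percentile
>>>   # subject categories within each broad area for a given year.
>>>   # `area_categories` maps a broad area to its subject categories;
>>>   # `score[(category, year)]` is the quantity being compared
>>>   # (e.g. the elite journals' share of top-cited articles).
>>>   def representative_categories(area_categories, score, year):
>>>       result = {}
>>>       for area, categories in area_categories.items():
>>>           ranked = sorted(categories, key=lambda c: score[(c, year)])
>>>           n = len(ranked)
>>>           result[area] = {
>>>               "p25": ranked[n // 4],
>>>               "median": ranked[n // 2],
>>>               "p75": ranked[(3 * n) // 4],
>>>           }
>>>       return result
>>>
>>>   # Recomputed afresh for every year, as described above:
>>>   # per_year = {y: representative_categories(area_categories, score, y)
>>>   #             for y in years}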
>>>
>>> Second, they ignore the increase in the number and specialization of
>>> journals over the period of the study. This increasing availability
>>> of journals that are 'core' to a sub-disciplinary group of scholars
>>> would naturally lead to more high-quality articles being published
>>> outside of the 'elite' journals as defined by the authors of this
>>> paper. The increasing number of journals also means that the ten
>>> 'elite' journals becomes a progressively smaller percentage of the
>>> total scholarly output over time.
>>>
>>> [[ANURAG]] As mentioned above, the list of elite journals was
>>> computed separately for each of the 261 specific subject categories,
>>> which means there are over 2500 journals that are considered elite
>>> each year. As mentioned in the Methods section, the list of elite
>>> (and non-elite) journals for each subject category was recomputed
>>> for each year, so shifts in the focus of a subject category, or new
>>> journals that become part of the "core" set, would be reflected.
>>>
>>> The Methods section of the paper also mentions that the number of
>>> articles considered top-cited each year in a subject category was
>>> fixed at 1000. Therefore, growth in the total number of articles
>>> published isn't a significant factor. The top ten journals in a
>>> subject category, as a group, publish more than 1000 articles per
>>> year.
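>>>
>>> To make that concrete, here is a minimal sketch of the measurement
>>> for one subject category in one year (again illustrative only; the
>>> names and input shapes are assumed, not taken from the paper):
>>>
>>>   # Sketch: `articles` is a list of (journal, citation_count) pairs
>>>   # for one subject category and year; `elite_journals` is the set of
>>>   # that category's ten most-cited journals for that year.
>>>   def elite_share_of_top_cited(articles, elite_journals, k=1000):
>>>       top = sorted(articles, key=lambda a: a[1], reverse=True)[:k]
>>>       in_elite = sum(1 for journal, _ in top if journal in elite_journals)
>>>       return in_elite / len(top)
>>>
>>> Because k stays fixed at 1000 while the total number of published
>>> articles grows, overall growth in output does not by itself dilute
>>> this measure, which is the point made above.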