insight Archives - Digital Science

Increasing the Scope of Researcher Engagement Through Technology (25 November 2015)


It’s an accepted reality that the role of scholarly publishers with respect to their ultimate customers, researchers, is changing. In 2012, Annette Thomas, then CEO of Macmillan Science and Education, gave the opening keynote at the Charleston Library Conference, where she described what she called the ‘Publisher’s New Job’. She set out her personal vision of the ways in which publishing, as we know it, should adapt to the changing face of digital scholarship, providing research and communication tools directly to academics at every phase of the research cycle.

But how exactly can publishers go about doing that? What can publishers do to accelerate the flow of information in a way that is consistent with their business aims? The answer lies in building infrastructure that enables information flow, offering the resulting tools as a service, and learning from the data and metadata those tools generate. Put simply, publishers can and must increase the scope of their interaction with researchers.

“To manage a company well is to manage its future, to manage the future is to manage information” – Marion Harper

Some large commercial publishers are already doing this; take, for example, Elsevier. In a recent post on Elsevier Connect, Tom Reller wrote:

“By all accounts, Elsevier’s 2013 acquisition of academic social network Mendeley has helped make us a more technology-oriented, big data company. It’s had a huge impact on the tools we can provide to digitally enhance the performance of scientists and health professionals.”

It’s certainly clear that Elsevier see the future of academic publishing as being focused on providing tools and enhancing academic workflows. I also think it’s telling that Reller refers to Mendeley as a social network rather than a reference manager. The business models of some of the most successful internet companies, like Google and Facebook, hinge on understanding the behaviours of users and monetising that understanding.

Clearly Elsevier aren’t the only ones focused on information rather than selling licensed content. EBSCO, ProQuest, and of course Thomson Reuters, to name just three, all seem to be thinking along similar lines.

Keeping in contact both downstream and up

At this stage, it’s clear that successful publishers are expanding downstream into how both researchers and institutions consume, use, share, and repurpose information, as well as monitoring and maximising the impact that it has. The rise of altmetrics, research management software, and the open science and open data movements are all testimony to that. More recently, we’re seeing an increased interest in helping researchers and institutions build their reputations and collate evidence of their impact. The most talked-about company working in this space is, of course, Kudos, but there are other examples such as ImpactStory. Monitoring impact is important to institutions as well as individuals: Digital Science portfolio companies Symplectic and Altmetric work in those fields, as does Plum Analytics, which is owned by EBSCO.

Increasing upstream engagement is less well talked about, but is an inevitable continuation of this trend. We’re already seeing a rise in the number of companies offering some kind of collaborative authoring system, from startup Authorea, to Dartmouth Journal Services, to IEEE’s Collabratec, which incorporates Overleaf’s technology.

What’s next?

In the publishing technology sector, we’re constantly asking ourselves and each other what the next big thing will be. Right now, it’s reputation management, collaboration, data sharing and authorship that are grabbing people’s attention. It seems to me that these aren’t isolated pockets of innovation but that there is a common theme here. What’s happening is that publishers are finding more and more ways of expanding the scope of their interaction with the researcher.

In the days of print, publishers had much less opportunity for contact with the academic community. Publishers had the strongest relationships with editors, the very senior academics who represented a very narrow slice of their authorship and readership. Aside from that, the only point of contact with researchers was at submission and peer review. After printing, journals were shipped to libraries. Publishers had no way of knowing directly if they were being read. With the advent of the internet, that has all changed.

Today, we’re moving past the minimal amount of user data given by usage statistics and really starting to understand how researchers think, act, and work. The information that comes out of that relationship enables the design of products that help researchers work more effectively and collaboratively. In turn, those new products broaden the scope of the relationship and further bootstrap the process.

So the question becomes: what’s next in expanding that scope? Perhaps it’s about understanding trends in funding, or supporting academics more directly in the grant application or even the tenure process? Maybe we’ll go further upstream and develop tools that help researchers plan their long-term research programs better, to help them maximize the impact of their careers.

We’re also likely to see improved information flow across the connections that already exist. COUNTER stats, for example, give publishers extremely limited information about usage in their present form, and tell them nothing about what happens beyond the download of the PDF. The STM Association’s voluntary principles on article sharing speak directly to that point. The submission process is also likely to be improved, in terms of both usability for authors and the amount and quality of actionable data generated.

The digitisation of scholarly publishing is sometimes thought of as a burden by publishers. The extra complexity and cost of digital infrastructure can seem like extra responsibility. Instead, I urge publishers to look at the possibilities that digital infrastructure offers. We can use it to forge stronger relationships with our customers, deliver the products and services that they need, and add greater value. If that isn’t a way to help publishers of all types and sizes stay relevant and advance their businesses, I don’t know what is.

On the Benefits of Institutional Identity – A Guest Post by Richard Padley (17 November 2015)

Richard Padley is the Chairman, CEO and co-founder of Semantico, a specialist software company that creates innovative digital publishing solutions for publishers and information providers. He is known for his passion for topics such as access and managing online identities, the semantic web, taxonomies and discoverability, and mobile and cross-platform delivery.

Image: Fingerprint detail on a male finger. Original: https://commons.wikimedia.org/wiki/File:Fingerprint_detail_on_male_finger.jpg

As a community we have made huge advances in providing the infrastructure needed to uniquely identify contributors to scholarly works. The recent launch of the ORCID auto-update functionality adds a missing link by allowing ORCID records to update automatically as new papers are published. However, there is another missing link that I want to focus on here: the link between individuals and their institutions.

1. The missing link

Within our sector there is a general shift in focus from institutions to individuals, which in turn is being accelerated by the increasing adoption of open access revenue models. This adds momentum to the growing uptake of ORCID as a persistent identifier for individual researchers and contributors. We currently lack an equivalent persistent identifier structure for organisations and institutions, although, as I’ll mention later, there are some interesting contenders. Without this structure, it is impossible to unambiguously link a researcher to all of their affiliated institutions. This gap creates a blind spot for metrics and adds friction in a number of ways for readers, researchers, publishers and institutions.

2. Metrics and impact

Whilst conventional measures of impact are calculated at the journal title level, the provision of persistent identifiers for researchers greatly facilitates the calculation of impact and other alternative metrics at the individual researcher level. This is an important step in helping to reduce the inequities in researcher assessment in cases where assessment is still based on the impact factor of the journals in which a researcher has published.

Metrics can also be aggregated and measured at other levels: the recent introduction of Crossref’s Open Funder Registry provides the standardised list of funder names necessary to accurately link published content back to funding organisations, thus allowing aggregate measures of research output, including impact, to be calculated for each funding organisation.
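
To make that concrete, here is a minimal sketch of how a system might resolve a free-text funder name to a canonical registry record via Crossref’s public REST API (the /funders route). The response fields shown reflect my reading of that API and should be checked against its current documentation:

```typescript
// Query the Crossref Open Funder Registry for a canonical funder record.
// Endpoint and field names are based on Crossref's public REST API;
// verify against current documentation before relying on them.
interface FunderRecord {
  id: string;             // registry identifier
  name: string;           // canonical funder name
  uri: string;            // persistent URI for the funder
  "alt-names"?: string[]; // known name variants to match against
}

async function lookupFunder(query: string): Promise<FunderRecord[]> {
  const url = `https://api.crossref.org/funders?query=${encodeURIComponent(query)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Funder lookup failed: ${res.status}`);
  const body = await res.json();
  return body.message.items as FunderRecord[];
}

// Usage: map the free-text funder name from a manuscript to a stable record.
lookupFunder("National Institutes of Health")
  .then((funders) => console.log(funders[0]?.id, funders[0]?.name));
```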

By this line of reasoning it should also be possible to calculate aggregate impact for a given institution or publisher, or, perhaps more interestingly, at the learned society level too. Certainly this would be desirable from the institutional funding perspective, given a political climate in which research assessment places ever-increasing demands on quantifiable and reproducible metrics. I believe this would also be valuable for learned societies, as it would strengthen the rationale around their publishing programmes at a time when these are under stress from the growth in OA.

3. Analytics

Of course, journal impact factors are only one measure in the broader field of analytics. Here too, the same concerns about identity apply. Measuring usage and understanding user behaviour both hinge on the ability to identify individuals and organisations in their journey across the whole scholarly ecosystem. Institutional identifiers also enable our software service infrastructure to deliver business intelligence: turnaway data, which is currently unusable, can be turned into actionable sales information when institutions can be reliably identified.

4. Learned societies

Identity has always been important from the society perspective. Clearly, societies need to identify their members in order to deliver membership services. As these membership services may not always be delivered by the society directly, there is a need for software services delivering identity management both internally and externally to third parties, including partner publishers. Again, this is a place where a stable organisational identifier system would benefit the society, both in managing entitlements to services and in providing analytic and impact data around society activities.

5. Publishers

Publishers need to manage institutional identity in a whole raft of different contexts: subscription systems, CRM, marketing databases, hosting providers and APC processing, to name just a few. Often these functions are managed in separate systems, resulting in complex data unification challenges when deriving business intelligence. Using a stable institutional identifier is the first step towards decreasing the complexity of the data flows between these systems. But an identifier on its own is not enough: simply copying data and identifiers between individual systems leads to drift and synchronisation headaches. A key enabling technology here is the provision of a software service specifically for identity management, ensuring that integration between multiple systems is effective and efficient.
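
As an illustration of the idea (not of any particular product), here is a minimal sketch of how records from separate publisher systems could be unified once they all carry one canonical institution ID; every type and function name here is hypothetical:

```typescript
// Hypothetical sketch: one canonical institution ID keys records
// held in otherwise disconnected publisher systems.
type OrgId = string; // e.g. a GRID-style persistent identifier

interface LocalRecord {
  system: "subscriptions" | "crm" | "marketing" | "apc";
  localKey: string; // the system's own account/customer key
  orgId: OrgId;     // the shared, canonical identifier
}

// With a common OrgId, business intelligence becomes a single group-by
// instead of N pairwise fuzzy-matching exercises between systems.
function unifyByInstitution(records: LocalRecord[]): Map<OrgId, LocalRecord[]> {
  const byOrg = new Map<OrgId, LocalRecord[]>();
  for (const r of records) {
    const existing = byOrg.get(r.orgId) ?? [];
    existing.push(r);
    byOrg.set(r.orgId, existing);
  }
  return byOrg;
}
```

The point of the sketch is the key, not the code: once every system stores the same identifier, unification stops being a matching problem and becomes a simple join.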

6. Existing initiatives

Given all of the benefits I have outlined above, it would be surprising if work had not already gone on in this field. We already have two formally standardised institutional identity systems: ISIL and ISNI. ISIL is focussed on providing identifiers for libraries, and ISNI on identifiers for contributors to creative works (including organisations). In our experience, both of these systems lack organisational infrastructure and have complex data sharing and re-use requirements, which contribute to their low uptake within the scholarly community.

Other initiatives include OrgRef, Ringgold and GRID. GRID, from Digital Science, seems particularly promising in that the data is available under an open CC-BY licence, and value-added services such as disambiguation form part of its revenue model. GRID is brand new, and it’s not yet clear to me just how persistent the standard is or how third parties can create their own IDs, although I’m told that this is the intention.

At Semantico we have a significant interest in this area, as our SAMS Sigma product is focussed on identity and access management. It provides the essential software service – identifying individuals and organisations – needed to unlock the benefits I have described above. SAMS already leverages both individual and organisational identifiers to provide real single sign-on across the scholarly web; stronger uptake of a standard identifier in our community would enable these identities to flow more seamlessly across the scholarly ecosystem.

7. What we need to do

I believe there is an opportunity for the scholarly community to come together and address the challenge of providing a stable persistent identifier for institutions. This conversation should clearly involve the existing stakeholders I have listed above. But a standard alone is insufficient for success; both ORCID and Crossref demonstrate the need for organisational infrastructure, services and outreach in order to drive uptake and ensure success.

What’s so Wrong with the Impact Factor? Part 2 (13 October 2015)

The self-fulfilling prophecy: Oedipus in the arms of Phorbas.

In last week’s perspective I asked ‘What’s so wrong with the Impact Factor?’ (Part 1). Anybody who’s followed the debate over the years will be familiar with many of the common objections to the metric; here’s an example of a blog post on the subject. But how valid are the common objections? Does the Impact Factor (IF) really harm science? If so, is the IF the cause or just a symptom of a bigger problem? Last week I focused on the mathematical arguments against the IF, principal among them that the IF is a mean when it should be a median. This week, I’m going to look more closely at the psychology of the IF and how it alters authors’ and readers’ behavior, potentially for the worse.
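
To see why the mean-versus-median point matters, here is a tiny illustration with invented citation counts: a single highly cited paper drags the mean far above what a typical article in the journal actually receives.

```typescript
// Sketch of the mean-vs-median point. Citation distributions are
// heavily skewed, so the mean (which is what the IF uses) can sit far
// above the typical article. The counts below are invented.
const citations = [0, 0, 1, 1, 2, 3, 4, 7, 12, 250]; // one blockbuster paper

const mean = citations.reduce((a, b) => a + b, 0) / citations.length;

const sorted = [...citations].sort((a, b) => a - b);
const mid = sorted.length / 2;
const median = sorted.length % 2 === 0
  ? (sorted[mid - 1] + sorted[mid]) / 2
  : sorted[Math.floor(mid)];

console.log(mean);   // 28.0 — dragged up by the single highly cited paper
console.log(median); // 2.5  — what the typical paper gets
```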

Impact Factor is a self-fulfilling prophecy

Whether we’re discussing, as we did last week, the propensity for highly cited papers to gather more citations, or the fact that papers published in high-impact journals are likely to get more citations simply because of the perceived value of the journal brand, the IF creates a feedback loop: articles in high-impact journals are perceived to be better, leading to more citations, which raises the prestige of the journal, and so on.

It’s worth noting that in Anurag Acharya’s keynote at the ALPSP conference about a month ago, he talked about research he had done into changing citation patterns. The article is on arXiv, here. Acharya et al. showed that the fraction of top-cited articles in non-elite journals is steadily rising. Acharya’s central thesis is that this effect is due to the increasing availability of scholarly content. Because scholars are no longer limited to the collections in their libraries, but are able to access information both in open access journals and through scholarly sharing, they are no longer restricted to reading (and therefore citing) articles published in core collections.

Others would argue that, with a flatter search landscape through services like PubMed, Google, and arXiv, the power of the journal brand for readers (although perhaps not for authors) is steadily eroding.

It’s a journal-level metric that is misused as an article-level one

The IF was originally designed as a way to judge journals, not articles. Eugene Garfield, the scientometrician who came up with the measure, was simply trying to provide a metric to allow librarians to decide which subscription journals should be in their core collections. He never intended it to be used as a proxy measure for the quality of the articles in the journal.

You can hardly blame the IF itself for not being a good measure of research quality. Nobody said it was. Or at least, they didn’t until recently. As Hicks et al. point out in the Leiden Manifesto, the ubiquitous use of the IF as a metric for research quality only really started in the mid-1990s. So, if the metric is misused, that leads us to an obvious corollary.

It’s unfair to judge researchers on the impact factor of the journals they publish in

If we’re judging researchers poorly, we’re likely to be denying grants and tenure to people who could be making more of a contribution. However, the question is: can we blame the IF itself for that?

If the impact factor only became the ubiquitous measure of research quality in the last 20 years, does that mean that publishing in Cell, Nature or Science was previously unimportant?

We can argue about whether it has become more or less important to publish in high-impact journals in recent decades, but one senior scientist told me recently that getting ‘a really good Nature paper’ launched their career. The reality is that even before the IF became the juggernaut it is today, articles in high-prestige journals were always seen as a measure of research quality.

Impact Factor isn’t the problem

The problem isn’t the measure itself. Sure, there are issues with it from a statistical best-practice point of view, and it seems to distort the way we value research, but I think that something else is at work here. The problem is that when researchers are evaluated, the venue in which they publish is very often taken to be more important than the work itself. If we’re going to judge research and researchers fairly against one another in the future and move past the IF, that has to change.

For researchers and their outputs to be judged fairly, two things have to happen. Firstly, the trend towards article-level metrics and alternative metrics for evaluation has to continue, and it must be supported by librarians, publishers, funders and scholars themselves. The study from Google that I mentioned shows erosion of the citation advantage of the journal brand, but it’s happening quite slowly, and arguably only in terms of readership and citation, not authorship.

The second thing is more cultural and more subtle. When I speak to academics about the fact that assessment strategies are moving towards multiple measures and a broader sense of impact and value, the point is often met with suspicion. I wrote a post a while ago about confusion around the concept of excellence in academia. I think the reason for suspicion is that reviewers on assessment panels are generally senior academics whose ideas of what constitutes good work are rooted in the age of the paper journal. If this is to change, funders, librarians and scientometricians must all do more to reach out to academics, particularly those who sit on review panels. We need a clearer, more consistent message as to how assessment should be changing.

That’s what I think is wrong with the Impact Factor, or rather what our obsession with it reflects: a deeper problem. What do you think is the heart of the matter? Why are we so fixated on this overly simplistic metric? Is it really harming the advancement of knowledge? What can we do to change things? Please feel free to post a comment below. Alternatively, you can contribute to the conversation on Twitter using the hashtag #IFwhatswrong. Next week’s perspective will be a partly crowd-sourced post built from the ideas and thoughts that everybody contributes.

 

More Confusion: Do We All Agree on What Constitutes Authorship? (25 August 2015)

As I was scrolling through Twitter this morning, I came across a tweet from fellow Scholarly Kitchen chef and distinctive eyeglasses wearer, Phil Davis, that pointed to an article in Inside Higher Ed (IHE) on the thorny issue of academic credit and authorship: ‘Research reveals significant share of scholarly papers have “guest” or “ghost” authors’.

The article reports on research presented by Professor John P Walsh of the Georgia Institute of Technology and recent graduate Sahra Jabbehdari at the annual meeting of the American Sociological Association. Walsh and Jabbehdari report an apparently high incidence of both guest authors (those who didn’t make an adequate contribution to be listed) and ghost authors (those who did make a contribution but were left off the list).

One point in the article stood out for me because it shows that authorship is one of many areas of scholarly communication where many people are concerned about ethical standards, but we don’t all agree on what those standards should be. For example, the article in IHE states:

…the new research shows that 37 percent of medical papers had a guest author, with about two-thirds of those being authorships granted simply for providing data.

So the problem is that about a quarter of academic articles give authorship credit to the person who actually sat at the bench and did the work?

How shocking!

Sarcasm aside, in my experience working in both biology and physics labs (I’m an author on 21 peer-reviewed articles, six conference proceedings and one book chapter), it was considered only right and proper to include the person who gathered the data as an author on an article, irrespective of whether they were involved in the intellectual design of the experiment. To be clear, it is considered best practice for the primary author to send the manuscript to all authors for feedback and to incorporate changes until a consensus is reached. In other words, all authors have a responsibility to participate at some level in the writing, but providing data is absolutely adequate grounds for authorship.

The IHE article quotes the International Committee of Medical Journal Editors:

“substantial contributions to the conception and design” of a research project, a key role in “drafting the article or revising it,” and a role in final approval. Merely getting funding or gathering data are not sufficient, the standards say.

This made me wonder whether this is an example of how our ideas in the publishing industry (and sometimes among librarians) about how researchers do, or should, behave are out of step with how the academy itself actually works. I did a bit of googling, and I think the answer is… kind of.

If we look at the actual guidelines published by the ICMJE in 2013, they say:

1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
2. Drafting the work or revising it critically for important intellectual content; AND
3. Final approval of the version to be published; AND
4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

The acquisition of data is explicitly named as a contribution that qualifies someone for authorship. So the article in IHE is misinterpreting the ICMJE. To be fair, the IHE article does say “simply for providing data”, which could be interpreted as meaning authors who provided data but never saw the manuscript prior to submission, but I doubt that.

On the other hand, the Faculty Council of Harvard Medical School state:

Everyone who is listed as an author should have made a substantial, direct, intellectual contribution to the work. For example (in the case of a research report) they should have contributed to the conception, design, analysis and/or interpretation of data. Honorary or guest authorship is not acceptable. Acquisition of funding and provision of technical services, patients, or materials, while they may be essential to the work, are not in themselves sufficient contributions to justify authorship.

Here, the acquisition of data is not directly included as a reason for authorship, although arguably ‘provision of technical services’ might cover it. Meanwhile, the office of the provost at Yale writes that authorship should be granted to those who ‘conduct’ a component of the research. To me, that means collecting the data, but again, it’s not clear.

The IHE article quotes Walsh as saying, ‘We are in an era of high-stakes evaluation’, the implication being that this creates incentives to extend the author list. This is absolutely true. My own personal anecdote: I remember once being persuaded to ‘be more generous’ when writing my author list and to include a senior faculty member who had made no intellectual contribution to the article but owned the piece of equipment I was using. Singling out those who actually do the experiments as unworthy of authorship seems particularly unfair to me, and runs the risk of addressing the problem of growing author lists by picking on the most junior members of the research community: graduate students and postdocs.

I suggest that we as an industry need to take a step back and discuss in greater detail what should and should not constitute the right to take credit for a piece of work. Digital Science began working on these ideas some time ago when Amy Brand, now Director of the MIT Press, co-chaired the working committee for Project CRediT. Moving forward, the discussion needs to include funders, publishers, people working in scientometrics and, most importantly, the researchers themselves, so that we shape our system of incentives in a way that benefits the advancement of knowledge.

SciELO’s Contribution to the Globalization of Science (21 August 2015)

Abel L Packer, MLS. 2011–present: Director of the SciELO Program; 1999–2010: Director of the Latin American Center on Health Sciences Information / PAHO / WHO.

Rogerio Meneghini, PhD in Biochemistry, postdoctorate at Stanford University. 1998–present: Scientific Coordinator of the SciELO Program; 1965–2004: Professor, Department of Biochemistry, University of São Paulo.

Photo by John W. Schulze (CC-BY)

About two decades ago, conferences and consultations organized by the São Paulo Research Foundation (FAPESP) and the Latin American and Caribbean Center on Health Sciences Information (BIREME/PAHO/WHO) addressed the fact that only a small fraction of our nationally edited peer-reviewed journals, containing world-class science, were indexed in the ISI Citation Indexes or in PubMed.

At that time, scientific publishing was effectively split in two. On one hand were the so-called mainstream journals, published largely by commercial publishers in Europe and the US. Operating in parallel were regional publishers, often based in developing countries, which addressed non-English-speaking markets.

The 1995 article “Lost Science in the Third World” by W. Wayt Gibbs (Scientific American 273, 92–99) discussed some of the difficulties faced by regional publishers. On one level, we were trapped in a vicious cycle: if regionally relevant research wasn’t widely included in international indexes, it would be less likely to be cited, and therefore journals would not be eligible for inclusion in those indexes. Many in Latin America and the global south, however, felt that a more insidious bias was at work. As the Scientific American article suggests, many people felt that publishers from non-English-speaking countries and the developing world were being held to inclusion standards more strictly than publishers in America or Europe.

As a consequence, only a limited share of the best science published by nationally edited journals reached the international scientific community. In addition, most journals communicating research of local scope or interest had limited visibility, including in Brazil, due to the poor distribution of print issues.

SciELO addresses this situation as a combination of a citation index and an online open access publisher of selected journals, many of which are available only in Portuguese. FAPESP and BIREME, as leading organizations in their fields of expertise, created SciELO 18 years ago with a unique cooperative model particularly suited to scholarly publishing in emerging markets. SciELO helps regional publishers become more competitive in the global marketplace by working together and sharing expertise and resources. Their mandate was, and still is, to advance both scholarly communication and science in Brazil.

SciELO quickly emerged as both a publishing platform that complements the international “mainstream” journals and a novel cooperative model for academic publishing. The value of this approach was later corroborated by the “regionalization” policy of the Web of Science (WoS), which, from 2007 onwards, increased its coverage of journals from Brazil from 30 to over 130. The recent surge of interest in the Latin American market from commercial publishers wishing to partner with regional publishers is further testament to the effectiveness of SciELO’s approach to supporting the local publishing industry.

The National Commission for Scientific and Technological Research of Chile (CONICYT) adopted the SciELO publishing model shortly after it was launched in 1998, becoming a leading partner in building the network, which now encompasses 14 Ibero-American countries and South Africa. The network currently covers more than a thousand active journals, with an archive of over half a million full-text articles. The platform serves over a million COUNTER-compliant downloads per day on average, making SciELO a significant contributor to global scholarly communication.

SciELO has succeeded, to a great extent, in overcoming discrimination against “regional publishing”. The networked cooperative approach, combined with continuous improvement in journal production while keeping costs tightly managed, has contributed decisively to the globalization of indexing and publishing. It has also culturally and academically enriched the global flow of scientific information by recognizing and enhancing the critical role played by hundreds of dispersed nonprofit publishers that communicate research through a myriad of journals covering fields of both international and regional importance.

Compared to WoS and Scopus, SciELO stands out as a multilingual index and publisher, supporting journals from all disciplines. Journal selection is informed by the recommendations and advice of local scientific committees. This unique, more qualitative selection process affects the make-up of the index; for example, SciELO indexes relatively more social sciences and humanities journals than WoS or Scopus. In terms of languages, in 2014 about 70% of SciELO-indexed articles were published in Portuguese or Spanish and 38% in English. SciELO’s Brazilian publishers are making an enormous effort towards a balanced approach to multilingualism to strengthen the internationalization of their journals: in 2014, 58% of their articles were published in Portuguese, 57% in English and 16% simultaneously in both English and Portuguese (the figures sum to more than 100% because bilingual articles are counted in both languages).

The presence of SciELO makes a significant and positive difference to how the global information community thinks about future development. We are aware that there are still many challenges to be faced, and old attitudes detrimental to local research communities still remain to be overcome in some quarters.

Three Simple Ways to Improve the Tenure Process in the United States (21 July 2015)

Stacy Konkiel is a Research Metrics Consultant at Altmetric, a data science company that helps researchers discover the attention their work receives online. Since 2008, she has worked at the intersection of Open Science, research impact metrics, and academic library services with teams at Impactstory, Indiana University & PLOS.

When all you have is a hammer, everything looks like a nail.

Many institutions in the US and worldwide are changing the way they assess academics for tenure. They’re moving towards a more holistic set of impact metrics (a “basket of metrics”, if you will) and, in some cases, doing away with metrics like the journal impact factor (IF) as a sole means of evaluating their faculty’s work. In so doing, they are righting wrongs that have plagued academia for too long, particularly the misuse of citation-based metrics for evaluation.

In this post, I’ll discuss which metrics should be abandoned, which we should put into context, and what data we can add to tenure dossiers to make it easier to interpret the “real world” impacts of scientific research.

Citation-based metrics are not inherently faulty…

Using the journal IF as a means of evaluation, for example, can help librarians understand the importance of academic journals within the scholarly community (the purpose for which it was originally invented). And citations can help authors understand the scholarly attention that their journal article or book has received over time.

However, citation-based metrics offer an incomplete picture of the total impact of research. Citations can only shed light on the attention that articles and books alone receive, and only upon the attention of other scholars. These limitations can be problematic in an era when an increasing number of scholars are moving beyond the journal article to share their work (often in the form of data, software, and other scholarly outputs) and many universities and funders are asking scholars to consider the broader impacts of their research (on public policy, technology commercialization, and so on).

Moreover, some citation-based metrics are being used incorrectly. The journal IF, in particular, is a victim of this: it is often used to measure the quality of a journal article, when it (a) is a journal-level measure and therefore inappropriate for measuring article-level impact, and (b) cannot measure quality per se, but instead can help evaluators understand the far more nebulous concept of “scholarly impact”.

Luckily, a growing number of universities are changing the way research is evaluated for tenure and promotion. They are reducing their dependence on journal-level measures to understand article-level impacts, and taking into consideration the effects that research has had on the public, policymakers, and more.

These institutions are using metrics much more holistically, and I think they serve as an excellent roadmap for changes that all universities should make.

The current state of “impact” and tenure in the United States

At many universities in the US, researchers who are being assessed for tenure are required to prepare a dossier that captures their value as a researcher, instructor, and as a member of the larger disciplinary community. And at some universities with medical schools, a researcher’s clinical practice may also be considered.

Dossiers are designed to be able to easily communicate impact to both a jury of disciplinary colleagues (other department members and external colleagues within the same field), as well as university committees of faculty and administrators, some of whom may know little about the candidate’s specific research area.

There are many ways to showcase research impact, and universities typically offer dossier preparation guidelines to help faculty navigate their options. More often than not, these guidelines primarily recommend listing raw citation counts and journal IFs as a means of helping others understand the significance of a candidate’s work.

But there are better and more inclusive recommendations for the use of research impact metrics in a dossier. Here are three in particular that I think form a good starting point for anyone interested in reviewing their own university’s tenure and promotion preparation guidelines with an eye towards improvement.

  • Improvement 1: Remove journals from the equation

The journal that an article is published in cannot predict the quality of a single article. Moreover, journal titles may unfairly prejudice reviewers. (“I don’t recognize this journal’s name, so it must be rubbish.”)

So, why don’t more institutions disallow any mention of journal titles in tenure dossiers? Or make it clear that journal IFs should not be (mis)used as evidence of quality?

At the very least, instructions should require reviewers to read and evaluate the quality of an article on its own merits, rather than rely upon shortcuts like journal IFs or journal title prestige. And contextualized, article-level citation counts (which I’ll talk about more below) should only be used to inform such reviews, not replace them.

  • Improvement 2: Require context for all metrics used in documentation

Tenure dossier preparation guidelines should make it clear that raw citation counts aren’t of much use for evaluation unless they’re put into context. Such context should include how the article’s citations compare to other articles published in the same discipline and year, and possibly even the same journal. Here’s an example of how such context might work, taken from an Impactstory profile:

[Image: contextualized citation counts from an Impactstory profile]

Context can also include who has cited a paper, and in what manner (to acknowledge prior work, commend a study that’s advanced a field, and so on).

Some researchers include contextualized citation counts in their dossier’s narrative section, for example:

In 2012, I published my landmark study on causes for Acute Respiratory Distress Syndrome. The article has since been cited 392 times. (In contrast, an average epidemiology journal article published in 2012 has been cited only 68 times, according to Impactstory.org.)

How researchers might best document contextualized impact may differ from discipline to discipline and institution to institution, but context is vital if academic contributions are to be evaluated fairly. So, those creating tenure and promotion preparation guidelines should take care to include instructions for how to provide context for any and all metrics included in a dossier.
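
As a rough illustration of what “context” can mean computationally, here is a minimal sketch that turns a raw citation count into a percentile against a baseline sample of same-discipline, same-year articles; the baseline data and function names are invented:

```typescript
// Hypothetical sketch: contextualize a raw citation count against a
// baseline of counts for articles from the same discipline and
// publication year (the baseline sample below is made up).
function citationPercentile(articleCitations: number, baseline: number[]): number {
  const below = baseline.filter((c) => c < articleCitations).length;
  return Math.round((below / baseline.length) * 100);
}

// E.g. an epidemiology article from 2012 with 392 citations, against an
// invented sample of counts for other 2012 epidemiology articles:
const baseline2012 = [3, 12, 25, 40, 68, 90, 150, 210];
const pct = citationPercentile(392, baseline2012);
console.log(`Cited more than ${pct}% of sampled 2012 epidemiology articles.`);
```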

  • Improvement 3: Expand your university’s definition of impact

Citations are good for understanding the scholarly attention a journal article or book has received, but they can’t help others understand the impacts research has had on public policy, if it has introduced a new technique that’s advanced research in a field, and so on. However, altmetrics can.

Altmetrics are loosely defined as data sourced from the social web that can help you understand how often research has been discussed, read, shared, saved, and reused. Some examples of altmetrics include:

  • Downloads and views on PubMed Central,
  • Mentions in policy documents,
  • Bookmarks on Mendeley, and
  • Software forks (adaptations) on GitHub.

Altmetrics are an excellent source of supplemental impact data to include alongside citation counts. They can help evaluators understand the effects that research has had on members of the public and other non-scholarly audiences (in addition to the non-traditional effects that research has had among other scholars, like whether they’re reading, bookmarking, and adapting others’ scholarship). They can also help evaluators understand the influence of scholarly outputs other than books and journal articles (for example, whether a piece of bioinformatics data analysis software is being used by other researchers to run their own analyses). Altmetrics also often include qualitative data, so you can discover who’s saying what about an article (providing some of the all-important context that I touched upon above).
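
To show what aggregating such data might look like in practice, here is a minimal sketch of a record covering the kinds of sources listed above; the field names are invented for illustration and are not any vendor’s schema:

```typescript
// Hypothetical sketch: a record aggregating altmetric counts from the
// kinds of sources listed earlier. All field names are illustrative.
interface AltmetricsRecord {
  doi: string;
  pmcDownloads: number;      // downloads and views on PubMed Central
  policyMentions: number;    // mentions in policy documents
  mendeleyBookmarks: number; // bookmarks on Mendeley
  githubForks: number;       // software forks (adaptations) on GitHub
  qualitative: string[];     // who said what, for context
}

// A dossier tool might render one contextual sentence per record:
function summarize(r: AltmetricsRecord): string {
  return `${r.doi}: ${r.policyMentions} policy mentions, ` +
         `${r.mendeleyBookmarks} Mendeley bookmarks, ${r.githubForks} forks.`;
}
```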

Altmetric, the company I work for, offers a bookmarklet that makes it easy to find altmetrics from a variety of sources, including an important non-scholarly source: public policy documents. The report generated includes not only a raw count of the mentions an article has received in public policy documents, but also links out to the documents that mention it.

Here’s an example of an Altmetric report for an article that’s been mentioned in several policy documents:

[Image: Altmetric report for an article mentioned in several policy documents]

Using the report data, researchers can more easily document the effect their research has had on public health policy, like so:

My 2012 landmark article on causes for Acute Respiratory Distress Syndrome has been cited in at least three public policy documents, including the World Health Organization’s report on recommendations for reducing mortality rates for children living in poverty.

Mentions in policy documents are just one of the ways scientists can showcase the many flavors of impact their work has had. Others include: has their article been recommended by experts on Faculty of 1000? Has their research software been widely adopted by others in their discipline? Are their peers discussing their recent articles on their research blogs? The possibilities for understanding the broader dissemination and applications of research are many.

An increasing number of universities and departments (including the University of Colorado Denver Medical School and IUPUI) are beginning to offer guidance on using (and interpreting) altmetrics data in tenure dossiers. Any instructions for using altmetrics should be careful to recommend including context (both “who’s saying what” about research, as well as how much attention a work has received in comparison to other research in a discipline) and steer candidates away from using numerical indicators like raw counts or the Altmetric score (which we created as a means of helping people identify where there is altmetrics data to explore, not to rate the quality of articles).

And of course, researchers themselves need not wait for guidelines to include supplementary indicators like altmetrics in their tenure dossiers to help paint a more complete picture of the impact their work has had. Researcher-oriented tools like Impactstory and the Altmetric bookmarklet, described above, can help them discover where their work is making a difference, and provide the contextualized data to help them share those impacts with others.

It’s time for the tenure and promotion process to get smarter and more nuanced. Let’s start by using relevant data to understand real world impacts, and manage how we use traditional data like citation counts to put scholarly impact into a better context.

Do you have ideas for how we could improve the use of metrics in tenure and promotion? Leave them in the comments below or share them with me (@skonkiel) via Twitter.

Confusion in the Ivory Tower: Do We All Agree on What ‘Excellence’ Means? (14 July 2015)

First of all, an apology for the lack of a perspectives post last week. I had intended to write something while on vacation in Austin, Texas visiting family but the call of happy hour at Chuy’s and Antone’s ended up being just too loud. Those of you who know Austin know what I’m talking about.

While I was away, I had some time to think about some recent discussions I’ve had on the subject of research assessment. If you’re expecting a great epiphany in the next 600 words or so, I’m afraid you’re going to be disappointed. The only conclusion I’ve been able to come up with is that it’s all a bit confusing. Put simply, various stakeholders seem to have different perspectives on how research assessment works currently and how it should work in the future. In order to move forward, we must first identify and then address a number of misunderstandings.

As I mentioned in my post of two weeks ago, it’s tempting for publishers to think that the reason scholarly communication hasn’t changed more quickly is that our ultimate customers, researchers and academics, simply don’t want it to. Many say that researchers are interested solely in traditional high-impact articles. On the other hand, researchers seem frustrated with their dependence on that one narrow aspect of scientific communication, and unsure quite whose fault it is.

My post on researcher frustration, mentioned above, got some attention from the open science community on social media. One of the tweets that caught my eye was from Michael Markie (@MMMarksman), associate publisher at F1000.

So perhaps, if it’s not researchers or publishers who are the gatekeepers to change here, maybe it’s the funders? Well, if you look at the history of funder mandates, funders have been quite progressive in areas like open science and research assessment, so I don’t think we’re really waiting for them to pave the way; they already are. At least, they are in terms of policy.

With all the talk about funders lately, another driver of academic behaviour seems to be receiving rather less attention: namely, the hiring, tenure and grant committees that are populated by senior academics themselves. We’ve touched on this topic on the Perspectives blog before, but I think it deserves more attention because it seems to be a major source of confusion. Going back a couple of years, Michael Eisen responded to criticisms of his call to boycott high-impact subscription journals by writing that, in his opinion:

The widely held notion that high-impact publications determine who gets academic jobs, grants and tenure is wrong.

Eisen is certainly correct that the view is widely held by academics; I’m just not sure how wrong it really is. In a recent perspective on behalf of the FENS-Kavli Network of Excellence, Dr Tara Spires-Jones wrote about the challenges at early and mid-career created by the pressure to publish in high-impact journals, particularly in order to impress grant review panels. Which raises another question: if funders are saying that they want to assess performance differently, are the reviewers hearing that message? My current working hypothesis is that they’re not.

As part of one of the conversations I cited in my previous post, a senior tenured academic told me that he needed to design his research programs so that his students and postdocs could get high-impact articles; to do otherwise would be unfair to their career progression. Almost in the same breath, however, he told me that when he went up before the tenure committee a few years ago, it was the letters of support from international colleagues that were the key factor. So which is it that matters for progression: high-impact-factor publications or the respect of one’s peers?

Part of the answer is that priorities change as researchers move through their careers. Traditionally, a high-impact article in Cell, Nature or Science can launch a researcher’s career. As a result, many senior faculty members tell their early- and mid-career mentees that while high-impact papers are no longer so important to their own progression as senior researchers, they are the only sure-fire route to academic success at earlier stages. The question I’d like to know the answer to is: is this still true today, or are younger researchers being given advice that’s out of date?

I don’t know the answer, because tenure requirements, like snowflakes, are all different, and not as transparent in practice as we’re told they are in theory. Take, for example, this advice piece from 2010 in the Chronicle of Higher Education, which is fairly typical of the kind of answer you get when talking to people involved in tenure assessment. The article talks a lot about the organizational aspects of the committee and clearly emphasizes letters of support, but there’s no definition of what constitutes high achievement or how it is measured.

The sense I get from senior academics and administrators is that they think of ‘high quality research’ or ‘teaching achievements’ as self-evident: as if you can’t really explain what those terms mean, but you know it when you see it. The perception amongst more junior academics is that these are code words for publishing lots of high-impact papers. The reality is that research quality might be defined in a number of ways, as this pathways-to-impact page from the EPSRC neatly explains. An example of a good start is this page from the National Institutes of Health, showing their criteria for tenure for intramural researchers. The first bullet point doesn’t fully define research quality, but it does at least state that it should include scientific rationale and experimental design.

I believe that academia itself needs to do a better job understanding its own goals and ideals when it comes to research excellence. At the same time, those involved in setting policy for research assessment need to do more to inform and educate both senior and more junior academics about how they want quality to be assessed. Without this vital step, academics will continue to make the same assumptions about what is considered valuable, while progressive assessment policies will fail to have their full effect.

The PDF Puzzle: Will We Ever Really Move On? (23 June 2015)

Leather-bound copy of the PDF 1.5 specification.

Over the past few years, some publishers, librarians and researchers have been trying to develop approaches to supersede the dominant format in online scholarly publishing: the PDF. Adobe began developing the PDF in 1991, at a time when the model for information exchange was one of a singular point of publication, with no interactivity. The PDF is based on a pre-digital paradigm of publishing and, many would argue, is anachronistic.

There’s a lot wrong with the PDF as a format for publishing. It’s flat and horribly two-dimensional. While it’s possible to implement limited interactive features using embedded JavaScript, the format wasn’t originally intended to cover such use cases, and even simple implementations can be challenging compared with far more complex functionality in HTML5. PDF is also painfully static. As a digital version of a published page, PDFs weren’t designed to be editable. While it’s possible to do a little bit of manual touching up and annotation, real changes require the document to be recompiled, adding time and cost to publishing production workflows.

So if they are so horrible, why do the majority of researchers still click on the PDF links when they visit article pages? Is it purely inertia after all these years? Are readers still so accustomed to PDFs that they’ve simply never tried the highly functional, feature-rich HTML environments? I suspect not.

Perhaps, as this Quora question suggests, one of the attractions of the PDF is its linear reading environment and lack of distracting colourful adverts. There’s undoubtedly something to that, but as Marlo Harris explained at the ALPSP Disruption seminar that I moderated last year, Wiley responded to this sort of end-user feedback by creating the Anywhere Article. Despite the clean, linear environment and plenty of white space with no adverts, Wiley received a surprising number of inquiries from people asking where the PDF download link was.

The part of the PDF puzzle that we haven’t yet been able to address is one of its defining features, so obvious that we sometimes overlook it. The clue is in the name: it’s portable. PDFs are downloaded as self-contained, cross-platform, printable documents, which means you can keep them on your hard drive, email them, put them in a repository, or do whatever else you want with them. Downloading a PDF gives a sense of ownership that cannot be replicated with HTML. At least, that’s what people think.

Some people argue that printability is the unique selling point of the PDF. It’s true that there’s a familiarity to taking a printed article and a highlighter pen with you as you travel, commute or sit in a coffee shop, but the way people consume information is changing rapidly as a result of the widespread adoption of tablet computing. Today, it’s very common to see people using iPads and Android tablets in previously work-hostile environments like underground trains, city buses, or economy class on United Airlines. While I don’t think the desire for printability will go away quickly, we’re in the middle of a steady shift away from paper and towards mobile computing.

The real problem is connectivity. It’s impossible to read the HTML version of an article when you’re not connected to the internet. Some publishers have looked to solve this problem by creating proprietary journal apps, but in many fields, particularly in science, researchers often read from dozens of journals during their literature searches and don’t want to install dozens of apps in order to do so.

The PDF suffers no such problem because it lives locally on the device and isn’t tied to a particular publisher.

WWGD: What would Google do?

I was recently speaking with Carissa Gilman of the American Cancer Society about the issue of offline access, and she told me that her new favourite app is Pocket. Pocket is an app with Chrome and Safari extensions that lets you easily store web pages on your computer or mobile device so that you can load them back up and read them when you’re not online. The Opera and Dolphin mobile browsers have similar functionality, but their workflows are a little clunky and hidden, so the feature has never really gained traction.

At the same time, Google are increasingly making their web apps offline-friendly. Gmail has an offline mode, and Google Docs added offline editing a short time ago. The offline components of these web apps not only let you work without connectivity; they also take the frustration out of moving in and out of mobile data coverage while travelling, or working in a hotel room or conference centre with unreliable wifi. Web apps that sit inside the browser, store information locally, and sync automatically when they can, blur the line between online and offline computing, allowing the user to move back and forth in a consistent environment without even thinking about it.
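To make the mechanism concrete, here’s a minimal sketch of the cache-then-network pattern that this kind of offline-friendly web app relies on, written as a browser service worker. The cache name and file paths are invented for illustration, and this is a generic sketch of the technique, not a description of how Gmail or Google Docs actually implement offline support.

```typescript
// sw.ts – a minimal offline-first service worker (illustrative sketch).
// CACHE and the pre-cached paths below are hypothetical names.
const CACHE = 'article-cache-v1';

self.addEventListener('install', (event: any) => {
  // Pre-cache the reader's shell so the app can start with no connection.
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(['/', '/reader.js', '/reader.css'])
    )
  );
});

self.addEventListener('fetch', (event: any) => {
  if (event.request.method !== 'GET') return; // only cache idempotent reads
  // Serve from the local cache first; otherwise fetch from the network
  // and store a copy so the content is available offline next time.
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```

Once a worker like this is registered, it silently intercepts every request the reading app makes, so an article opened while online stays readable on an underground train without the reader doing anything at all.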

What will the successor to the PDF look like?

If we finally want to go beyond the PDF, we have to stop looking at all the things we don’t like about it and look more carefully at why it’s still around. The successor to the PDF will work interchangeably online and offline. It may well look like a PDF and be printable, but it will also have web-enabled features such as clickable references, metrics, embedded supplements, datasets and multimedia content. By replicating the look and feel of the PDF, and making it truly portable through cloud-based, offline-friendly reading and reference management web apps, we can finally offer researchers something unequivocally better than the static, two-dimensional PDF.
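If it helps to picture that successor, here is one hypothetical shape such a ‘portable article’ bundle might take. Every field name below is an assumption made purely for illustration; no such standard exists.

```typescript
// A hypothetical "portable article" bundle – invented field names, not a real format.
interface Reference {
  text: string; // the formatted citation, readable offline
  doi?: string; // enables click-through to the cited work when online
}

interface Attachment {
  name: string;
  mimeType: string;
  url: string; // fetched once and cached alongside the article body
}

interface PortableArticle {
  doi: string; // persistent identifier for the article itself
  title: string;
  authors: string[];
  bodyHtml: string; // renders like a typeset page and prints cleanly
  references: Reference[]; // clickable rather than flat text
  supplements: Attachment[]; // datasets, multimedia, embedded extras
  metrics?: {
    citations: number;
    altmetricScore: number; // refreshed whenever the reader syncs
  };
}
```

Because everything the reader needs travels in one self-contained unit, a bundle like this could be cached by a service worker of the kind sketched above, emailed, or deposited in a repository, while still behaving like a living web page whenever a connection is available.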

*Image credit: Thanks to Ralph Giles. CC BY-SA.

The post The PDF Puzzle: Will We Ever Really Move On? appeared first on Digital Science.

In Defence of Electronic Journals: Why Scientists Process Information Differently to Law Students https://www.digital-science.com/blog/2015/05/in-defence-of-electronic-journals-why-scientists-process-information-differently-to-law-students/ Tue, 12 May 2015 10:02:23 +0000

These books are so heavy and I only want one sentence…

Last Saturday, the latest issue of Against the Grain dropped through my letterbox, having made the long journey from the editor’s desk in South Carolina to my office in Edinburgh. The theme of April’s issue is ‘disappearing print stacks’ and there are several great articles that make for an interesting read.

Andrea Twiss-Brooks, co-director of the Science Libraries Division at the University of Chicago, has written on the use of robotic storage to maximize shelving capacity, and her article reminds me of one of my favorite pieces of Victorian futurism: Charles Cutter’s 1883 vision of the Buffalo Public Library as it might be in 1983. Cutter predicted that you would be able to order a book from the stacks with a ‘keyboard’ – and he certainly wasn’t far off. Fast forward to today, and the University of Chicago is one of a growing number of libraries doing almost exactly that, albeit with robots rather than human runners.

An interesting aspect of this transformation, one that could be easily overlooked, is the effort that went into selecting which books would be stored robotically. Twiss-Brooks explains that collections are selected not only by which ones would have the least effect on research and teaching (presumably collections that supported courses that are no longer taught, or research that is no longer active at the institution), but also, to quote from the article:

“that selections could be easily explained to library users, as well as providing a large volume of material that could be quickly identified and processed in a timely fashion.”

This nuanced approach to selecting which service is appropriate for which content underlines the importance of understanding the various factors that affect researchers’ needs, their usage patterns, and their relationship to specific content.

Research librarian Audrey Powers from the University of South Florida explores this concept in her article, ‘No Books, but Everything Else’. Powers discusses some of the benefits and risks of the movement towards replacing physical collections with other types of services, from careers guidance to coffee shops. She also references a number of peer-reviewed studies that investigate the differences in reading experience and comprehension between physical volumes and ebooks. One article she cites is by Anne Mangen of the National Center for Reading Education and Research at the University of Stavanger in Norway. Mangen compared reading comprehension in students who read linear texts in both electronic and paper form, finding that students who read on paper retained information better. As Powers rightly points out, the idea that paper is superior for learning is supported by some good science, and there’s evidence that students simply prefer physical textbooks to their electronic equivalents.

Just as there’s a danger of overlooking the details of the selection criteria that the University of Chicago applied when building their automated storage and retrieval system, there’s a detail in this argument that’s important to note. In all of these studies, at least as far as I can tell, researchers compared recall effectiveness for linear texts. This is, of course, a vitally important aspect of scholarly reading, particularly in the humanities, law and similar subjects, and particularly for students in those areas. As a former scientist, however, I’d like to urge caution when extending those findings to scientific journals.

When I first began reading scientific journal articles, I did try to read from title to conclusion, but I soon learned better. As a physicist, and even more so as a biologist, I hardly ever read linearly, and how I read an article depended on why I’d pulled it up in the first place. If, for example, I was looking at a paper to keep up to date with my field, I’d read the abstract (or at least part of it). If it was interesting, I’d read the conclusion, then perhaps look at the figures and maybe read the results section. The article had to be really good and highly relevant for me to do anything more than scan the discussion and introduction sections. If, on the other hand, I was reading the article because I was following a chain of information from a review article, or even another research paper, that reference would generally point to a specific result or finding, which I would zero in on and then, more often than not, put the paper to one side.

In other words, I (and almost all of the scientists I’ve discussed this with over the last 15 years) read scientific articles in an extremely non-linear, and rapid, way. I’m sure some would say that my reading patterns weren’t very scholarly, and perhaps they weren’t, but I think they’re representative. Scientists treat information in a very workmanlike fashion, extracting the facts they need and then moving on. This style of reading is massively accelerated by electronic journals and the technologies that go along with them: instant access to content from anywhere on or off campus, hyperlinks that speed up chasing references, and the ability to interrogate content electronically. I personally would hate to go back to the old days of photocopying one article at a time, stuffing the copies in a folder and taking them home for the weekend, only to realize that I’d missed one important piece of information.

I’m not writing this to disagree with Powers’ excellent article in Against the Grain; in fact, quite the opposite. Powers argues that it’s important to thoroughly research and understand both what library users want and how they process information, so that their real needs can be met effectively. I completely agree. In many situations, physical media offer the best way for researchers to absorb information, an idea that appeals to those of us who value the physicality of reading. It’s for exactly that reason that I offer my own note of caution: the reading patterns and information needs of scholars vary according to discipline, type of content, career stage and activity. In many cases, particularly in science, electronic collections don’t represent a trade-off between space and utility; they’re simply the best solution all round.

The post In Defence of Electronic Journals: Why Scientists Process Information Differently to Law Students appeared first on Digital Science.

Nearing the Top of the Mountain: How Mid-Career Researchers See the Publishing Landscape https://www.digital-science.com/blog/2015/05/nearing-the-top-of-the-mountain-how-mid-career-researchers-see-the-publishing-landscape/ Tue, 05 May 2015 13:27:51 +0000

Nearing the top of a perilous journey – Mount Hua, China

Recently, I’ve been having a number of conversations with mid-career researchers about their scientific communication needs, what they want from publishers, and how they’d like to see things change.

Mid-career researchers have titles like Assistant Professor or, in some regions, might be called Lecturers or, confusingly enough, Fellows or Readers. Basically, they’re the group of academics who are no longer students or postdocs but aren’t yet full, tenured professors. While they’re not under the sorts of pressures that I wrote about last year in my blog posts on the Scholarly Kitchen, mid-career researchers certainly have their own particular sets of concerns and needs.

Achieving tenure feels like making the final cut on the pathway from student to fully established researcher for many academics, and it’s not hard to see why. While tenure in some places arguably offers less protection than it used to, it’s still perceived by many as the ultimate in job security. In contrast to postdocs, mid-career researchers are expected to stand on their own. This can be a great opportunity for those with good ideas, as they no longer have to struggle with the perception that somebody else is the intellectual driving force behind their work. Of course, it also means that expectations are extremely high. Tenure and permanent positions in academia are not granted lightly; to be successful, researchers have to show that their work is of the highest calibre, that they are international leaders in their field, and that they are substantially advancing the interests of their institution.

In talking to this group, what struck me most were the differences between mid-career and postdoctoral researchers, and how those differences shape their scientific communication needs. Very few of them talk much about open access, and there’s much less interest in disrupting the publishing industry. What there is a lot of concern about is getting their work published in high-impact journals, and the necessity of doing so in order to win funding and tenure. At the recent ALPSP ‘Access All Areas’ seminar at the London Book Fair, Stephen Hill of HEFCE explained that assessment panels for the REF were explicitly told not to use the impact factor as a proxy for research quality. The researchers I’ve spoken to, however, are sceptical about how well that guideline was applied. Internationally, the situation is starker. I recently spoke to somebody who told me, on condition of anonymity, that during a European grant selection panel on which he served, the assessment of a researcher’s track record consisted almost entirely of counting how many Nature and Science papers they had authored.

Many mid-career researchers feel a sense of frustration: they need to publish regularly in highly selective, high-impact publications, but those publications generally focus on articles with broad cross-disciplinary appeal, which makes it hard to get accepted when work is important and good science but doesn’t have a sexy headline. To be fair, the highest-impact publications are all, for historical reasons, subscription journals, and as such have always positioned themselves as a service to readers rather than to authors. In a way, many researchers are expecting high-impact subscription journals to serve a purpose that isn’t aligned with their mission.

How should publishers respond?

Many publishers are already trying to ease the pressure on researchers who need to establish impact. Despite the apparent lack of enthusiasm for open access amongst mid-career researchers, I think that OA, or at least the associated author-pays business model, is helping matters. By shifting the economic power towards the author, publishers are more directly incentivised to cater to authors’ communication needs. A good example of this effect in action is the excellent work that PLOS, Scientific Data and some society journals are doing to help researchers deposit their data (a practice which increases citation rates). I’m currently organising a pre-meeting seminar for the SSP annual conference in May that will explore this issue. Other examples of good work include cascading peer review, which feeds pre-reviewed articles into second-tier journals such as Acta Neuropathologica Communications. Cascading reduces the risk of losing time by submitting to high-impact journals only to be rejected, freeing researchers up to do more productive things. Add to that increasing support for altmetrics, blogs, and even videos of surgical techniques (don’t click that link during lunch), and you can see how the industry is helping.

There is, however, more that can be done to align researchers’ expectations with publisher practices, and to be fair, the burden shouldn’t lie entirely with publishers. The academic community has a role to play, particularly internationally, in shifting its own attitude towards what is meant by impact, particularly when participating in grant review boards and when reviewing for journals.

Despite some great work by publishers to innovate, many academics feel that publishers are a conservative force preventing change. To me, this indicates that there isn’t enough good dialogue taking place. Publishers must reach out to academics more, to better communicate why they’re taking the approaches they’re taking and to learn more about what’s needed. In turn, more researchers need to participate in the conversation about how scholarly communication must evolve to better fit the needs of researchers.

The post Nearing the Top of the Mountain: How Mid-Career Researchers See the Publishing Landscape appeared first on Digital Science.
