impact Archives – Digital Science | https://www.digital-science.com/tags/impact/

Impact and Reputation | https://www.digital-science.com/challenge/impact-reputation/ | 9 December 2020
We want to help to shape a world in which academics are evaluated on the substance and quality of their own research.


Advancing Research Impact and Reputation

There are many factors involved in generating quality research outputs. They are the product of long hours spent securing funding, challenging the norm, conducting novel work, collaborating, and publishing and disseminating final findings so that others can build on them. For many researchers, the end goal is to help society overcome some of the biggest challenges it faces. But how can we measure the impact of research on society? And how can we use the insights we gain to inform future decision-making?

‘Impact’ has become something of a buzzword in recent times. Researchers are continuously seeking new ways to quantify their effect on society, often to satisfy the requirements of funders or research impact exercises, such as the REF in the UK. However, is it really necessary for all research to demonstrate a tangible impact? Are we in danger of stifling groundbreaking research that could underpin the impactful discoveries of tomorrow?

Our goal is to help the community answer these questions. At Digital Science, we are working to:

  • Enable the identification of any areas of potential impact that have previously gone unnoticed
  • Determine areas in which research topics can be united to create greater impact
  • Help researchers establish the need for impact in their research planning, and the best route to achieving that impact

Defining Impact

We want to help to shape a research culture in which all parties can claim the credit and recognition that they deserve. We want to contribute to building a societal culture in which expertise is valued.

New Challenges Need a New Approach

We want to help to shape a world in which academics are valued not by the name or impact factor of the journals in which they’ve published, but on the substance and quality of their own research. We believe that our software solutions, and the insights gained from analysing the data they contain, can help our community create that world.

Beyond Benchmarking

Resources

  • Measuring In Context
  • Building Context for Search and Evaluation
  • Real-Time Bibliometrics
  • Gathering Benchmarking Insights and Tracking Impact

REF 2021 | https://www.digital-science.com/challenge/ref2021/ | 31 July 2020
Preparations for the imminent REF 2021 require a great deal of time and attention. Our data and tools can help support your REF 2021 submissions and more.


Data and tools to support REF 2021

The UK’s Research Excellence Framework (REF) requires an enormous investment of time and effort from higher education institutions (HEIs) in preparing for the submission of their research for national assessment. Some tasks and requirements are clearly defined, while others require HEIs to think outside the box.

We support UK HEIs

Our diverse yet specialised portfolio companies can support UK HEIs by collating evidence for impact and excellence.

An overview of how we can help with your REF preparations

REF2 Research Outputs
  • Symplectic: Elements’ dedicated assessment module enables you to manage your UoA submission and comply with all mandatory REF data and justification statements. This includes identification, proposal, review, selection, attribution, and Open Access compliance.
  • Dimensions: Compare your institutionally held data with Dimensions’ world view of your staff’s outputs and research in context to retrieve a fuller picture of your output portfolio, including OA status and bibliometrics.
  • Altmetric: From social media to policy citations, Altmetric’s insights can help you discover additional context to help inform your REF2 selections.

REF3 Impact case studies
  • Symplectic: Elements’ dedicated Impact module enables you to gather impact narratives, evidence, links, and metadata to help inform case study selection and submission.
  • Dimensions: Gain insights into your UoA’s research portfolio, in context: grants, patents, and clinical trial links show your unit’s research impact beyond citation counts to give you a fuller picture.
  • Altmetric: Identify your research outputs’ links to policy documents, patents and online activity such as news and blogs to help evidence and corroborate impact case studies.

REF5 Research Environment
  • Symplectic: Collect, collate and report on professional activities in your UoA to inform your submission strategy and populate your REF5a and REF5b Research Environment statements.
  • Dimensions: Gain insights into the context of your UoA’s research (including patents, citations, grants, funders, collaborations, Open Access) to inform your submission strategy and populate your REF5a and REF5b Research Environment statements.
  • Altmetric: Offering a comprehensive overview of research impact indicators and links to policy documents, patents and online media, Altmetric’s insights can help enrich and underpin your REF5a and REF5b Research Environment statements.

Plan Ahead With Our Webinars

Boosting your REF 2021 submission

Watch our webinar to learn how Dimensions’ new UoA categorisation can support your REF submission and provide you with new insights.

Plan ahead for REF 2021

This insightful 60-minute webinar will give you more information about how our tools can help you with your submission for REF 2021.

Research in the context of SDGs

Watch our webinar to learn how Dimensions’ new UoA categorisation can support your REF submission and provide you with new insights.


For more information, get in touch

Juergen Wastl | Director of Academic Relations and Consultancy

Additional Reading

The nature, scale and beneficiaries of research impact: an initial analysis of Research Excellence Framework (REF) 2014 impact case studies – King’s College London and Digital Science
This report is based on an analysis of the 6,679 non-redacted impact case studies that were submitted to the 2014 REF. Using a mix of text-mining approaches and qualitative analysis, the nature, scale and beneficiaries of the non-academic impact of research are described.

The Ascent of Open Access – Digital Science
An analysis of the Open Access landscape since the turn of the millennium. It compares the leading countries for research outputs as well as Open Access collaboration trends.

The Diversity of UK Research – Digital Science
The economic and societal impact of university research seen through a total of 6,975 case studies. This report contains visualisations of the knowledge networks underpinning the impact of UK university research.

The REF guidance isn’t trying to catch you out – Catriona Firth, Head of REF Policy at Research England

Full list of REF abbreviations – for the full list, please see page 121 of the Guidance on submissions document


Measuring In Context | https://www.digital-science.com/resource/measuring-in-context/ | 10 June 2020
Rebecca Pool speaks to Daniel Hook, Chief Executive of Digital Science, about China’s move away from a single-point metrics-focused evaluation system.


Measuring in context

In this piece published in Research Information, Rebecca Pool speaks to Daniel Hook, Chief Executive of Digital Science, about China’s move away from a single-point metrics-focused evaluation system.

‘We are seeing unsettled times for metrics in China,’ he says. ‘The government has effectively [asked] each institution to locally define the metrics that are important to it, and that it would like to work on, and so create a new norm for China from the ground up.’


Digital Science is not a fan of single-point metrics and rankings; as a result, it invested in the non-traditional bibliometrics company Altmetric in 2012 and introduced the Dimensions database in 2017. Dimensions links many types of data, including Altmetric data, awarded grants, patents and, more recently, datasets, with a view to moving research evaluation practices beyond basic indicators.

Dimensions Releases Datasets as a New Content Source for All Users | https://www.digital-science.com/blog/2020/01/dimensions-releases-datasets-as-a-new-content-source-for-all-users/ | 28 January 2020
Digital Science’s Dimensions database will now integrate more than 1.4m datasets as a new content type. The datasets will be available to all users.


Digital Science’s Dimensions database will now integrate more than 1.4m datasets as a new content type. The datasets will be available to all users – including those using the free version of Dimensions. 

Data will be sourced from figshare.com and will include datasets uploaded to Figshare, as well as datasets from other repositories such as Dryad, Zenodo and Pangea, and from Figshare-hosted repositories including the NIH’s. Datasets are defined as items shared on repositories which are categorised as datasets – this excludes, for example, preprints, posters, images, and software. The datasets will be updated daily, and more repositories will be added following the initial release. Datasets join the content types already linked in Dimensions: grants, publications, citations, alternative metrics, clinical trials and patents.
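As a rough illustration of that inclusion rule (this is not Dimensions’ actual ingestion pipeline; the record structure and the "item_type" field below are hypothetical stand-ins), filtering a repository feed down to dataset items might look like this:

```python
# Illustrative sketch only - not Dimensions' ingestion code. The record
# structure and the "item_type" field are hypothetical stand-ins.

EXCLUDED_TYPES = {"preprint", "poster", "image", "software"}

def is_dataset(record: dict) -> bool:
    """Keep only items the repository itself categorises as datasets."""
    item_type = record.get("item_type", "").lower()
    if item_type in EXCLUDED_TYPES:  # preprints, posters, images and software are skipped
        return False
    return item_type == "dataset"

feed = [
    {"title": "Malaria incidence observations, 2010-2018", "item_type": "dataset"},
    {"title": "Conference poster", "item_type": "poster"},
    {"title": "Analysis scripts", "item_type": "software"},
]
datasets = [record for record in feed if is_dataset(record)]
print(len(datasets))  # -> 1
```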

Christian Herzog, CEO of Dimensions, said:

“Since the foundation of the Research Data Alliance in 2013, the acceptance of datasets as first-class research outputs has accelerated: many institutions, publishers and funders promote the publication of research data, and the use of this data. While datasets have already been displayed on the publication detail pages in Dimensions, we are now making them discoverable as a content type in its own right – but integrated in the general context with grants, publications, clinical trials, patents and policy documents.”

The addition of datasets will allow institutions to discover and analyse trends in publicly available data at an institutional level, and will make even more linked data available in one platform rather than across disconnected databases.

“Datasets are an important research output which many of our users are interested in. Researchers can find datasets for reuse, funders will be able to analyse the impact of grants, and it will also be beneficial to organisations interested in making their data more accessible, and to publishers looking at where datasets get deposited and which publications have associated datasets.”

Mark Hahnel, CEO and founder, Figshare, added:

“Ingestion of datasets into Dimensions demonstrates Digital Science’s commitment to the elevation of data to a first class output, a first step on what we see to be a long and complex, but worthwhile endeavour. Open data should and will be the norm in academic research.”

Read the news on the Dimensions blog.

What Can We Learn From Ten Million US Patents? | https://www.digital-science.com/blog/2018/06/what-can-we-learn-from-ten-million-us-patents/ | 15 June 2018
Patents are one of the richest sources of technical information available; we need to capture and utilize the knowledge in this exponentially growing knowledge base.

This post was originally featured on IFI’s blog.

The ten millionth US Patent will be granted on Tuesday, June 19, 2018. Is this just another number, or a true milestone for intellectual property? It depends who you ask.


According to the United States Patent and Trademark Office (USPTO), “The issuance of patent 10 million is an exceptional milestone. It is a timely and relevant opportunity to promote the importance of innovation, the ubiquity of intellectual property, and the history of America’s patent system.”

The office is marking the occasion with some special projects including:

  • A new patent cover design
  • A ‘10 million patents’ microsite featuring an interesting timeline of granted patents
  • The Ten for 10m project, encouraging media outlets to write stories on important inventions in their respective industries

On the other hand, some feel there’s nothing to celebrate because the US patent system is in dire need of reform. The US also no longer leads the world in the number of patents granted each year – last year China granted 420,000 patents, compared with 320,000 granted in the US.

One thing that’s for certain is that the rate of innovation and invention has continued to accelerate. It took four years to move from patent 8 million to patent 9 million (2011–2015) and only three years to move from patent 9 million to patent 10 million (2015–2018).

Since patents are one of the richest sources of technical information available, it makes sense to capture and utilize the knowledge in this exponentially growing knowledge base. The USPTO states that 80% of the information contained in patents can’t be found anywhere else, and other experts believe the figure is closer to 90%. Industries including life sciences, medical devices, and high tech are mining the data to learn about advances in their fields as well as to avoid infringing on technology that has already been patented. Investment professionals leverage patent information to monitor the R&D activity of technology-focused companies.

To be successful in this endeavor, it’s essential to use the highest quality patent data available. With patent data growing rapidly, information management has gotten harder. At IFI CLAIMS®, we understand that major decisions and millions of dollars hinge on reliable information about intellectual property. That’s why we have specialists who monitor the quality of incoming data from the millions of global records added to CLAIMS® Direct each month.

Everyone can agree that patents are a key indicator of the innovation that leads to economic success. And ten million US patents is a lot!

Unraveling the Engagement and Impact of Academic Research | https://www.digital-science.com/resource/unraveling-the-engagement-and-impact-of-academic-research/ | 16 May 2018
Our report finds new evidence that highlights differences in the primary audiences engaging with malaria and Alzheimer’s disease research.




Our report finds new evidence that highlights differences in the primary audiences engaging with malaria and Alzheimer’s disease research.

The study, which was conducted by the consultancy team at Digital Science, concluded that policymakers made up the primary community engaging with malaria research, whilst practitioners and mainstream news outlets were most prominent for Alzheimer’s disease research.

Interdisciplinary Research: Methodologies for Identification and Assessment | https://www.digital-science.com/resource/methodologies-for-identification-and-assessment/ | 17 November 2016
The results highlight issues around the responsible use of ‘metrics’.



Interdisciplinary Research - Methodologies for Identification and Assessment

The objective of the study behind this report was to compare the consistency of indicators of ‘interdisciplinarity’ and to identify a preferred methodology. The outcomes reveal that the choice of data, methodology and indicators can produce seriously inconsistent results, despite a common set of disciplines and countries.

This raises questions about how interdisciplinarity is identified and assessed. It reveals a disconnect between the research metadata analysts typically use and the research activity they assume they have analysed. The results highlight issues around the responsible use of ‘metrics’ and the importance of analysts clarifying the link between any quantitative proxy indicator and the assumed policy target.

Evolved Metrics for the New Age of Research: Digital Science Webinar Summary | https://www.digital-science.com/blog/2016/10/evolved-metrics-new-age-research-digital-science-webinar-summary/ | 19 October 2016
As part of a continuing series, we recently broadcast a Digital Science webinar on Evolved Metrics for the New Age of Research. The aim of these webinars is to provide the very latest perspectives on key topics in scholarly communication. The webinar focused on a number of issues surrounding the assessment of research impact. Topics […]


As part of a continuing series, we recently broadcast a Digital Science webinar on Evolved Metrics for the New Age of Research. The aim of these webinars is to provide the very latest perspectives on key topics in scholarly communication.

The webinar focused on a number of issues surrounding the assessment of research impact. Topics covered included: NIH’s new metric to evaluate funded medical and academic research – the Relative Citation Ratio (RCR); new attempts to assess research impact; the importance of research evaluation and measurement to the scientific research community and much more.

Laura Wheeler (@laurawheelers), Community Manager at Digital Science, started the webinar by giving a brief overview of the esteemed panel and their backgrounds before handing over to Steve Leicht, from ÜberResearch, who moderated and questioned the panel.

We were delighted to welcome Dr George Santangelo, Director at the Office of Portfolio Analysis, National Institutes of Health (NIH), to start the conversation. George started with a bold statement:

“I think, at this point, there is general agreement about journal level metrics that they’re inadequate as a way of assessing anything about individual articles… No one metric is going to adequately represent the value we obtain when we make investments in biomedical research”

He then talked about the history of the NIH, citing the San Francisco Declaration on Research Assessment (DORA) as a turning point in recognizing that journal-level metrics are inadequate.

George looked at the limitations of bibliometrics commonly used to measure the value of a publication or compare groups of publications:

  • Publication Counts: field-dependent, use-independent
  • Impact Factor: journal-level, not article-level
  • Citation Rates: field- and journal-dependent
  • h-index: field-dependent and time-dependent

“…everything published in journals with a high impact factor (>28), taken together, accounts for <11% of the most influential papers.”

It was explained that it’s clear we are missing a lot by focusing on a small number of high-profile journals with high impact factors. So, what’s the alternative? The Relative Citation Ratio (RCR)!

For more information on the RCR, see a recent paper George co-authored, ‘Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level’, published in PLOS Biology.

George made an important point about RCR values:

“RCR values are normalized, benchmarked citation rates that measure not the quality, importance, or impact of the work, but the influence of each article relative to what is expected given the scope of its scientific topic. The scientific topic is defined by an article’s co-citation network, which is a highly resolved and dynamic determination of the corresponding target audience.”

After information about RCR was first released, there was some very positive feedback!


“If you can fit the directions to download RCR values in a tweet then obviously it’s a pretty simple process to use the website – this was an important goal!”

Euan Adie from Altmetric was up next – George made sure to include the Altmetric score of his 2015 paper on RCR!

At this point, it’s worthwhile noting the steps needed to calculate the RCR:

Step 1: Use the co-citation network of each article to calculate its field citation rate.

Step 2: Benchmarking against a group of peers generates an expected citation rate.
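As a rough sketch of those two steps (toy numbers throughout – this is not the NIH’s iCite implementation, and the co-citation rates and benchmark ratios below are invented):

```python
# Toy sketch of the two-step RCR calculation described above; not iCite.

def field_citation_rate(co_cited_journal_rates):
    """Step 1: average the citation rates found in the article's
    co-citation network to estimate its field citation rate."""
    return sum(co_cited_journal_rates) / len(co_cited_journal_rates)

def expected_citation_rate(field_rate, benchmark_ratios):
    """Step 2: benchmark the field rate against a peer group whose
    article-to-field ratios define what counts as 'expected'."""
    benchmark = sum(benchmark_ratios) / len(benchmark_ratios)
    return field_rate * benchmark

def relative_citation_ratio(citations_per_year, field_rate, benchmark_ratios):
    """RCR = observed citation rate / expected citation rate."""
    return citations_per_year / expected_citation_rate(field_rate, benchmark_ratios)

# An article cited 6 times a year, whose co-citation network points to
# journals cited 4, 5 and 6 times a year on average:
fcr = field_citation_rate([4.0, 5.0, 6.0])                            # 5.0
print(round(relative_citation_ratio(6.0, fcr, [0.9, 1.0, 1.1]), 2))   # 1.2
```

An RCR of 1.2 in this toy example reads as roughly 20% more citations than expected for the article’s co-citation-defined field.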

If you’re interested in viewing publicly available RCRs then have a look at this website: iCite.od.nih.gov.

George finished his illuminating presentation with a demonstration of his tools in action!

“We’re extremely pleased with the usability of our tools and the positive feedback we’ve received!”


In terms of diverse metrics, George made sure to mention that the creation of the RCR was never intended to be the end of his team’s investment of time and resources in developing metrics to assess the outputs of NIH investments. He highlighted how they are committed to developing and using diversified metrics:

“Diversified use of metrics can contribute to research assessment: using citations to track bench-to-bedside translation”

Steve Leicht thanked George for his time and expertise:

“As a member of the community, and as a general fan of the breadth and evolution of research metrics, the thing that I appreciate the most in your approach is the fact that your team was very open in your process of discovery – you put out the preprint, actively reached out to the bibliometrics and research communities for ideas and recommendations, and you continue to make your work transparent and open, so thank you!”

Next up, we had Euan Adie, Founder and CEO of Altmetric, talking about ‘The changing metrics landscape: new outputs, new data’. Euan mentioned that he was primarily focusing on material from Altmetric, the things he’s obviously most familiar with, but that great work is being done elsewhere by Impactstory and Plum Analytics.

“It’s not just us doing it, but Altmetric.com is what I’ll be talking about today!”

Euan reminisced about his background in Bioinformatics as a software developer!

“Traditionally the way research is recognized is with citations, right? And they’re good at measuring scholarly use, but what if your focus is less on being cited in other journals but more on being cited in places like this…”

Euan showed a selection of policy documents as examples.


Euan made a vital point: what all these policy documents have in common with journal articles is a reference list. They cite the evidence that has gone into producing the report – the same applies to a field like education.

“You’re influencing the next generation of researchers”

Don’t forget the social influence of your work! Think of politicians like Barack Obama citing your work to win a political debate! None of this is captured in traditional citation data.

Important: we need to consider the broader impacts of the research.

“We’re facing what has been termed ‘The evaluation gap’. On the one side, we’re saying ‘researchers should be doing great work, publishing in journals, producing data sets and writing software, and we should be recognizing the broader impact and quality of their work’ and on the other side is what’s actually available now; what tools and processes can we use to make these assessments. I’m interested in how we can fill this gap!”


Euan then talked about how Altmetric goes about answering this question.


Euan explained that, if you imagine a scale of 0 to 100 for each category of attention, assuming higher numbers are better is not always appropriate. Sometimes a piece of work gets lots of attention and it’s a bad thing; for example, Andrew Wakefield’s paper suggesting a link between the MMR vaccine and autism. Conversely, a more positive example would be Obama’s paper on healthcare in the Journal of the American Medical Association (JAMA), where the attention reflected close scrutiny with a positive outcome.

“Isn’t it cool that POTUS was subject to peer review!”

Altmetric gathers qualitative data to help users determine how a piece of research is being received and what broader impacts it might go on to have.

“It’s more about using data that we can collect qualitatively and quantitatively. Quality is very difficult to metricize. Altmetric works in the following way: for each scholarly output that gets produced – data sets, software, posters, books, articles – a real-time report is produced and continually updated.”
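Those per-output reports are also exposed programmatically; as a rough sketch (using Altmetric’s public v1 details endpoint with a placeholder DOI – check the current API terms, and treat specific response fields as an assumption), fetching one looks something like this:

```python
# Rough sketch: fetch the continually updated Altmetric report for one output
# via the public v1 API. The DOI is a placeholder; field availability may vary.
import json
import urllib.error
import urllib.request

doi = "10.1000/example-doi"  # hypothetical DOI
url = f"https://api.altmetric.com/v1/doi/{doi}"

try:
    with urllib.request.urlopen(url) as response:
        report = json.load(response)
    print("Attention score:", report.get("score"))
    print("Fields returned:", sorted(report.keys()))
except urllib.error.HTTPError as err:
    # A 404 usually just means no attention has been tracked for this DOI yet.
    print("No Altmetric report found (HTTP", err.code, ")")
```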


Euan went on to mention all the unique ways in which Altmetric measures activity.

Speaking to the complementary nature of Altmetrics and more traditional bibliometrics, citations in particular, Euan noted:

“It’s like peanut butter and jelly, you can have one without the other but ideally you want them both together!”

Importantly, altmetrics do not just apply to journal articles; they can be used to track the engagement and attention surrounding books and other research outputs too. Euan referred to Altmetric’s recent partnership with the Open Syllabus Project:

“Essentially, integrating the Open Syllabus data lets us show the author or publisher of a book where their work is being used in academic reading lists.”


Euan wrapped up his presentation and invited everyone to try out Altmetric via their free browser bookmarklet.

“I think it’s fascinating to see how you continue to engage new pieces of information to measure our engagement and impact” – Steve Leicht

Our final panelist was Mike Taylor, Head of Metrics Development, Digital Science, and he started by thanking our keen audience:

“It’s really gratifying to see so many actively involved people following our webinar today”

Mike looked at how we are going to be improving the use of RCR at Digital Science over the next few years:

“A humanities article may take as much as eight years to reach peak citation, a computer science article or a physics letter probably peaks within the first year… for those of us who work in the field, we are aware of these variations, but generally speaking people outside the field don’t understand these dramatic variations. RCR works to normalize – to iron out – those variations, so you can compare articles from different subject areas”

The RCR gives a clear indication of whether an article is performing well against its peers, independent of field citation rate:

  • 1.0 = as expected, > 1.0 is better
  • Sophisticated normalization means you can compare articles from different subject areas
  • Highly open metric – formulation, data and license
  • The statistical characteristics permit robust analysis

Digital Science is making a big commitment to the RCR. RCR is being rolled out across Symplectic, Dimensions, Figshare, Altmetric and ReadCube.

“It’s a metric you’re going to see a lot of”

Digital Science’s platforms answer particular use cases. Our customers’ varied needs shape our metrics development programme.

“There’s no single metric that can be used for all cases! Unfortunately, we live in a time where the impact factor has dominated people’s thinking in this space. We need to be proud and assertive when saying different metrics answer different cases”

Mike went on to mention the wider context of metrics, saying that as the community moves towards open science, expectations on evaluation and metrics will change. As a consequence, more and more people, including non-specialists, will be making use of metrics and creating reports.

RCR is a great combination: it’s easy to understand, provides a degree of normalization, builds a future pathway and is open. Altmetrics show a way to understand the wider context.

“With RCR, we are starting to see a breakdown of reliance on journal metrics in terms of understanding the importance of an article metric. Most normalized metrics only look at the subject areas of the article’s journal”

With RCR, an article is compared against others in its co-citation network, not those in the same journal!

“As an article accrues citations, the network increases in diversity, size and stability, and the RCR value matures – it reflects how an article is actually being used!”

Mike talked about network theory, its relevance, and his predictions for the future. Interestingly enough, Google’s search engine and its PageRank algorithm were inspired by Eugene Garfield’s work on bibliometrics! New technological developments such as cloud computing, graph databases and new mathematical techniques are opening up new fields. Mike mentioned a recent example of the application of network theory to the role of dragons in creation myths: Scientists Trace Society’s Myths to Primordial Origins.

Mike also talked about some fascinating work on network analyses by Lauren Cadwallader, presented at the 3:AM Altmetrics Conference in Romania, which he had just attended: ‘Particular patterns of altmetric behavior may indicate high probability of policy impact’.

Mike concluded his fascinating presentation with a strong message:

“I think one of the things we’re going to see over the next few years is an increasing importance in qualitative, descriptive narratives. We have to go beyond the number and recognise that researchers are humans with ambitions wanting to describe their work in a wider context… It’s very hard to do that just in numbers.”

The webinar ended with a lively Q&A debate spearheaded by Steve Leicht; great questions provoked great responses! Using #DSwebinar, our audience was able to interact with our panel, throwing their opinions into the mix. An example of a question put to George can be found below:

Are there plans for the NIH to continue updating the data on iCite in the long term? How is RCR being used at NIH?

“Yes, we’re committed to continuing to support the website. We’ve just added data from 2015, and the coverage includes all papers in PubMed. We’re working with other agencies in the US to expand that beyond just PubMed but, for now, it’s just all papers in PubMed, which includes most of the papers for NIH to be evaluating. We don’t use these values in funding decisions; we use them to track the progress of science – to track and compare fields, and to establish new funding mechanisms. It allows us to look in the rear view and establish a specific area or hurdle that needs to be overcome, or focus on a particular area of research like microbiome research… That’s how we will use this kind of metric. It’s worth pointing out, this does not get at quality, or the importance of the work, or the impact of the work using citation data; it does, however, tell us the influence – a word I like, because it covers a lot of the debate about the use of citations. One must, however, take influence with a grain of salt and not use it as a proxy for quality, which has been done mistakenly in the past with regard to the impact factor.”

If you feel you still have something to say – we’re all ears! Tweet us @digitalsci using #DSwebinar.

View Webinar Here

Researchers, Metrics and Research Assessment | https://www.digital-science.com/blog/2016/09/notes-alpsp-2016-researchers-metrics-research-assessment/ | 30 September 2016
Summer is over – it’s official. You can tell because the weather has changed, and also because the ALPSP annual awards dinner and conference was last week (or perhaps two weeks ago by the time I finish writing this post). For me, ALPSP kicks off the fall conference season and provides a great opportunity to […]

Summer is over – it’s official. You can tell because the weather has changed, and also because the ALPSP annual awards dinner and conference was last week (or perhaps two weeks ago by the time I finish writing this post). For me, ALPSP kicks off the fall conference season and provides a great opportunity to gauge the mood of the industry after everybody has had a chance to clear their head during the summer break.

This year, two particular sessions stood out for me. The first of these was moderated by Isabel Thompson of Oxford University Press and was on the subject of academic engagement and what it means today.

In her introduction, Thompson quoted an anonymous ex-researcher who currently works in publishing as saying that researchers think of publishers as sitting somewhere on a ‘spectrum that ranges from pure evil on one side, to a necessary evil on the other!’ – to my surprise, Isabel later confessed she was quoting me. Admittedly, I can’t remember making that joke, although it does sound like something I’d say (I hasten to add that I don’t personally think publishers are evil at all).

The take home message, from Isabel Thompson:

“…he gets the impression that researchers think about publishers as sitting somewhere on a spectrum that ranges from pure evil on one side, to a necessary evil on the other!”

To drive home that point, the opening speaker, Philippa Matthews, an academic medic from Oxford University, summed up many of the complaints that researchers have around the process of publishing. It is telling that most of the complaints she brought up were familiar: lengthy review processes, onerous submission requirements and editors who don’t screen manuscripts properly before review. Even though we’ve heard these complaints before, it’s worth being reminded of their importance.

It wasn’t all negative: Matthews praised open science platforms like F1000. She also reported on her own positive experiences in getting a non-standard output published – in her case, a live, interactive database of functional biological data. In that vein, she called for greater flexibility on the part of publishers with respect to non-standard publication types. In the same session, Emma Wilson, Director of Publishing at the Royal Society of Chemistry, gave an excellent presentation describing how the RSC communicates and engages with its editorial boards as a way to stay in touch with its community – they also consider young researchers to be a valuable asset.

Another stand-out session was ‘Beyond Article Level Metrics’, moderated by Melinda Kenneway. Ben Johnson of HEFCE was the first to speak. He gave a high-level summary of HEFCE’s Metric Tide report, asking to what extent funders should be using metrics to aid, or even replace, qualitative assessment of research outputs. Jennifer Lin of Crossref also spoke in favour of the metricization of research assessment; she pointed out that metrics have the potential to reduce conflict and quantify decision making. Finally, Claire Donovan, a Reader in Science and Technology Studies at Brunel, suggested that metrics have the potential to replace narrative in research assessment.

Obviously, this is a complex issue. Digital Science’s consulting group has warned in the past that the overuse of metrics can inhibit proper decision making because people tend to alter behaviour to fit the metrics.

A slide from Ben Johnson on the responsible use of metrics

You’ll be pleased to read that all who spoke at the ALPSP session called for the responsible and appropriate use of metrics. Donovan reminded us of Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.

Without delving too deep into the debate, what these discussions show is that various stakeholders in scholarly communication, including institutions and funders, are taking the measurement of research outputs increasingly seriously. This is an important signal for publishers because it points to a new area of value that they can provide.

The lifeblood of any publisher is the community that they serve. Publishing has always been about helping researchers communicate and disseminate their work. While that hasn’t changed, the mechanisms have altered drastically since the popularisation of the internet. Today, a new research communication infrastructure is being developed, which, in part, is being used to underpin new research evaluation frameworks. Those frameworks are being employed by funders and institutions alike to make strategic investment decisions – such as who to hire and what to fund.

Projects like ORCID and Crossref mark an important turning point – as decision makers in academia increasingly move towards metrics, or at least automatic tracking of research outputs and impact, through mechanisms like current research information systems (CRIS), publishers will find that it’s increasingly important to participate. By providing metadata and coordinating with institutions and funders, publishers will be able to help make sure that the communities they serve get the credit (and continued financial support) they deserve for the work that they do.

You can see a video of your humble narrator (jump to 36 mins) saying something similar during a panel discussion at ALPSP about the future of digital publishing!

Relative Citation Ratio (RCR) – A Leap Forward in Research Metrics | https://www.digital-science.com/blog/2016/08/relative-citation-ratio-rcr-leap-forward-research-metrics/ | 24 August 2016
There is no perfect metric. There is no number or score which fully encapsulates the value, impact, or importance of a piece of research. While this statement might appear obvious, research evaluation and measurement are a fact of life for the scientific research community. The administrative work in faculty recruitment, promotion, and tenure are coupled […]


There is no perfect metric.  There is no number or score which fully encapsulates the value, impact, or importance of a piece of research.

While this statement might appear obvious, research evaluation and measurement are a fact of life for the scientific research community. The administrative work in faculty recruitment, promotion, and tenure is coupled with activity reporting and institutional benchmarking – and these measures are increasingly central to a Research Office’s existence. The two most commonly used measures of research output are focused on funding and publications.

Funding is central to many disciplines: it is a fairly simple measure – do you have it, and how much?  How do you compare with your peers? What percentage of your salary have you managed to cover through grants? If you have lab space at your institution, what is your grant-generated-dollars-per-square-foot-of-lab-space ratio?

“historical highlights for achievement were associated with volume (how much are you publishing), prestige (which journals published your work), and citations (who is referencing your work).”

But publications present many more nuanced challenges. In particular, the historical highlights for achievement were associated with volume (how much are you publishing), prestige (which journals published your work), and citations (who is referencing your work). These three measures all have cautionary tales of note. While the volume of publications is important, associated factors such as the area of science and the use of co-authorship and collaboration may all need to be considered. Even so, the number of publications itself is relatively straightforward – similar to funding. Quality measures like “where are you publishing?” and “who is citing your work?” present unique challenges.

“Where have you published?”:

  • “Are you publishing in high-quality journals?” is often confused with “are your articles having a high impact?”
  • Journals are most commonly rated/ranked based on Journal Impact Factor (JIF)
  • While JIF is an interesting measure of the quality of an overall journal, it tells us little about the influence of an individual article
  • Research has shown that many papers considered to be “breakthroughs” in the field are published in journals with modest or moderate JIFs
  • If you use journal metrics to appraise an article, how do you distinguish a great paper in a mid-tier journal versus a mediocre paper which was accepted by a top-tier journal?

“How often are you cited?”

  • Many variables also make it challenging to compare results.
  • Some fields, like biology, tend to cite heavily (25+ citations on average).
  • Other fields, like physics, cite much less (under 10 on average).
  • Therefore a paper with nine citations in physics might be outperforming a biology paper with 18 when the number of citations inside the field is taken into account.
  • Citations also vary by year – the longer a paper has been in existence, the longer it has had a chance to be cited.
  • A 2014 paper with five citations might be preferable to a similar 2005 publication with six. The 2005 publication might have more citations, but it has only gained one more citation in the additional nine years.
  • So how do you compare different papers, published at different times and across different fields, when you are a Research Dean or Provost at a top institution? A rough numerical sketch follows this list.
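As a back-of-the-envelope illustration of those two adjustments (treating the approximate field averages quoted above as stand-in values, and using 2016 as the reference year):

```python
# Back-of-the-envelope illustration only, using the approximate field
# averages quoted above as stand-in values.

FIELD_AVERAGE_CITATIONS = {"biology": 25, "physics": 10}

def field_normalised(citations: int, field: str) -> float:
    """Citations relative to the field average - 1.0 means 'typical'."""
    return citations / FIELD_AVERAGE_CITATIONS[field]

print(round(field_normalised(9, "physics"), 2))   # 0.9
print(round(field_normalised(18, "biology"), 2))  # 0.72 - the physics paper leads

def citations_per_year(citations: int, published: int, reference_year: int = 2016) -> float:
    """A crude age adjustment: citations accrued per year since publication."""
    return citations / max(reference_year - published, 1)

print(round(citations_per_year(5, 2014), 2))  # 2.5 per year
print(round(citations_per_year(6, 2005), 2))  # ~0.55 per year
```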

A group at the National Institutes of Health (NIH) in the United States has been working on the answer. Building on new approaches in the field of bibliometrics, the Office of Portfolio Analysis at the NIH – led by team members Bruce Ian Hutchins, Xin Yuan, James M Anderson, and George M Santangelo – has helped to develop and present the Relative Citation Ratio (RCR). The RCR works to normalize a paper to its field(s) and the year it was published, so that you can come close to making an “apples to oranges” comparison of different papers in different fields and at different times. The RCR was first described in a paper posted on bioRxiv. You can explore the RCR for any list of articles with PubMed IDs on iCite, a tool that the NIH launched to enable individual researchers to evaluate the results of their articles.

The way that the RCR achieves this is by dividing a paper’s actual citation count by an expected citation count, giving you the kind of observed-over-expected ratio with which those who have worked on quality-improvement initiatives will be comfortable. The RCR presents as a decimal number: a ‘1’ indicates that an article is performing as expected. A ratio higher than 1 is better – in the case of the RCR, it means the article has gathered more citations than its peers from the same year and subject area.


How is the RCR calculated?

Earlier attempts at taking into account the field of an article usually took the subject area of the article’s journal, and calculated a relative citation rate against all matching articles, e.g. ‘medicine’. While this is a reasonable approach for very well-defined journals with discrete subject areas, it fails to provide a reliable metric for articles that draw on different subject areas, or that are of interest to people outside a narrowly defined topic.

“Instead of solely relying on the publishing journal’s subject area, the RCR takes into account all the articles that are cited with it.”

Instead of solely relying on the publishing journal’s subject area, the RCR takes into account all the articles that are cited with it. This provides a much more nuanced view of the subject area of the article, as the average reflects a more diverse blend of citation behaviour.
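As a rough sketch of that idea (toy data only; this is not the published RCR algorithm or iCite’s implementation), the expected rate comes from the articles co-cited with the paper rather than from its journal’s subject category, and the final RCR is simply observed over expected:

```python
# Toy sketch of a co-citation-based expected rate; not the published RCR
# algorithm or iCite's implementation - all values below are invented.

# Articles that appear alongside our paper in other papers' reference lists,
# each carrying the average citation rate of the venue it was published in.
co_cited_articles = [
    {"id": "pmid:111", "journal_citation_rate": 4.2},
    {"id": "pmid:222", "journal_citation_rate": 5.8},
    {"id": "pmid:333", "journal_citation_rate": 6.0},
]

def field_citation_rate(co_cited):
    """The paper's 'field' is defined by what it is co-cited with,
    not by its journal's subject category."""
    rates = [article["journal_citation_rate"] for article in co_cited]
    return sum(rates) / len(rates)

expected = field_citation_rate(co_cited_articles)  # ~5.33 citations/year
observed = 8.0                                     # our paper's citations/year
print(round(observed / expected, 2))               # RCR of roughly 1.5
```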

Our team at ÜberResearch is working to test RCR with representatives and ambassadors from a group of our partner programs. These programs include publishers, research funders, and academic research institutions. While no one metric is perfect, there is general agreement that RCR is a big leap forward in comparing publications’ citation rates across disciplines. Based on our results, Digital Science is building RCR into many of our products. RCR values already appear in the ReadCube viewer – touching over 15M unique users a year – and have been rolled out across the Dimensions database, with over 25M publication abstracts and over $1T in historical funding awards.

Image of Dimensions

The workflow advantages of having a common value for comparing publications are enormous.

It is worth noting that the RCR continues to be developed and improved.  The team at NIH are openly soliciting ideas and feedback to make further improvements to the calculation and approach, and the Digital Science bibliometrics team are engaging with the innovative ideas that the RCR has brought to the metrics world.

Since the Digital Science announcement of adopting RCR, we have seen multiple publishers, funders, and academic partners begin to weave RCR into their own evaluative processes and workflows.  While every approach to computing metrics has its pros and cons, what really differentiates RCR is the combination of significant improvement in approach, cross-disciplinary application, and rapid market adoption.

As I started this post: there is no perfect metric… However, the RCR is a leap forward in a positive direction. With consistent improvement over current approaches, increased article-level measurement, wide market adoption, and the support of the NIH, we believe RCR is poised to become a leading indicator in the research metrics landscape.
