how we support industry Archives - Digital Science
https://www.digital-science.com/tags/industry/

Scismic and Objective, Skills-focused, AI-driven Recruitment in STEM
https://www.digital-science.com/blog/2024/07/scismic-and-objective-skills-focused-ai-driven-recruitment-in-stem/
Mon, 22 Jul 2024 08:10:09 +0000

Find out how Scismic is using AI responsibly, helping to remove biases in datasets to ensure fairer and more ethical recruitment programs.

The post Scismic and Objective, Skills-focused, AI-driven Recruitment in STEM appeared first on Digital Science.


To AI or not AI?

The use of AI technologies has always been susceptible to charges of potential bias due to the skewed datasets on which large language models have been trained. But surely firms are making sure those biases have been ironed out, right? Sadly, when it comes to AI and recruitment, not all applications of the technology are the same, so firms need to tread carefully. In other words: if you don’t understand it, don’t use it.

Since the launch of ChatGPT at the end of 2022, it has been difficult to read a newspaper, blog or magazine without some reference to the strange magic of AI. It has enthused and concerned people in equal measure, with recruiters being no different. For every gain in being able to understand and work with huge amounts of information, there appear to be corresponding concerns around data bias and inappropriate uses.

Scismic is part of the larger company Digital Science, and both have been developing AI-focused solutions for many years. From that experience comes an understanding that responsible development and implementation of AI is crucial not just because it is ‘the right thing to do’, but because it ensures better solutions are created for customers, who in turn can trust Digital Science and Scismic as partners during a period of such rapid change and uncertainty.

AI in focus

The potential benefits of using AI in recruitment are quite clear. By using Generative AI such as ChatGPT, large amounts of data can be scanned and interpreted quickly and easily, potentially saving time and money during screening. The screening process may also be improved by easily picking up keywords and phrases in applications, while communications about the hiring process can be improved using AI-powered automated tools.

But, of course, there is a downside. Using AI too much seems to take the ‘human’ out of Human Resources, and AI itself is only as good as the data it has been trained on. A major issue with AI in recruitment has been highlighted by the recent brief issued by the US Equal Employment Opportunity Commission (EEOC), which supported an individual who claimed that one vendor’s AI-based hiring tool discriminated against them and others. The EEOC has since brought cases against the use of such technology, suggesting that vendors as well as employers can be held responsible for the misuse of AI-based tools.

When should we use AI?

In general, if you don’t understand it, do not use it. Problems arise for vendors and recruiters alike when it comes to the adoption of AI tools at scale. While huge datasets offer the advantages set out above, they also introduce biases over and above the human biases that employers and employees have been dealing with for years. Indeed, rather than extol the virtues of using AI, it is perhaps more instructive to explain how NOT to use this powerful new technology.

As a responsible and ethical developer of AI-based recruitment solutions, colleagues at Scismic were surprised to see a slide like the one below at a recent event. While it was designed to show employers the advantages of AI-based recruitment technology, it actually highlights the dangers of ‘layering’ AI systems on top of each other. This means the client company loses even more visibility into whom the system is selecting and how, increasing the risk of bias, of missing good candidates and, ultimately, of legal challenge.

In this scenario, with so many technologies layered onto each other throughout the workflow, it is almost impossible to understand how the candidate pipeline was developed, where candidates were excluded, and at which points bias has caused further bias in the selection process!

While the list of AI tools used in the process is impressive, what is less impressive from a recruitment perspective is the layer upon layer of potential biases these tools might introduce to the recruitment process.

Scismic offers a different approach: AI is used to REMOVE biases in datasets, so that the advantages of automated processes are protected by mitigating measures, ensuring a fairer and more ethical recruitment program for employers.

Positive Discrimination?

Scismic’s technology focuses on objective units of qualifications – skills. We use AI to reduce the bias introduced by the terminology used to describe skills. This gives us two ways in which we reduce evaluation bias:

  1. Blinded candidate matching technology that relies on objective units of qualifications – skills
  2. Removing the bias of candidates’ terminology in describing their skill sets.
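To make the two approaches above concrete, here is a minimal sketch of blinded, skills-based matching. This is an illustrative example only, not Scismic’s actual implementation: the synonym table and the scoring rule are invented for demonstration.

```python
# Illustrative sketch of blinded, skills-based candidate matching.
# The synonym map and scoring rule are hypothetical, not Scismic's real system.

# Map variant terminology onto canonical skill labels, so that wording
# differences between candidates do not affect the match (approach 2 above).
SKILL_SYNONYMS = {
    "qpcr": "quantitative pcr",
    "real-time pcr": "quantitative pcr",
    "facs": "flow cytometry",
}

def normalize_skills(raw_skills):
    """Collapse synonyms to canonical labels to reduce terminology bias."""
    return {SKILL_SYNONYMS.get(s.strip().lower(), s.strip().lower()) for s in raw_skills}

def match_score(candidate_skills, required_skills):
    """Score a candidate purely on skill overlap: no name, age, or photo (approach 1)."""
    cand = normalize_skills(candidate_skills)
    req = normalize_skills(required_skills)
    return len(cand & req) / len(req) if req else 0.0

# Two candidates describing the same skills in different words get the same score.
score_a = match_score(["qPCR", "FACS"], ["quantitative PCR", "flow cytometry"])
score_b = match_score(["real-time PCR", "flow cytometry"], ["quantitative PCR", "flow cytometry"])
```

Because the score depends only on normalized skill overlap, the two candidates above are indistinguishable to the matcher, which is the point of blinding.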

What type of AI is being used?

To help explain how Scismic does this, we can split AI into subjective (or Generative) AI, like ChatGPT, and objective AI. Subjective AI is, broadly, a contextual system that makes assumptions about what to provide the user based on the user’s past interactions and its own ability to use context. This system can work well for human interactions (such as chatbots), which is what it was designed for.

However, when applied to decision-making about people and hiring (already an area fraught with difficulty), subjective and contextual systems can simply reinforce existing bias or generate new bias. For example, if a company integrates a GenAI product into its Applicant Tracking System (ATS) and the system identifies that most of the people in the system share a particular characteristic, then the system will assume that is what the company wants. Clearly, if the company is actually trying to broaden its hiring pool, this can have a very negative effect, and one that can also be challenged in court.

Objective AI works differently: it does not look at the context around the instruction given, only for the core components it was asked for. This means it makes no assumptions while accumulating the initial core results (data), but can provide further objective details on the dataset. In many ways it is a ‘cleaner’ system, and because it is focused and transparent it is the better choice for removing unintended bias.

AI is a tool and, as with so many jobs that require tools, the question is often: what is the best tool to use? In short, we recommend that in a hiring process the answer is a tool that produces better results with less bias.

Case by case

To show how well some cases can turn out when using ‘objective AI’ responsibly and astutely, here are three case studies that illustrate how to arrive at some genuinely positive outcomes:

  1. The right AI: With one customer, Scismic was hired to introduce a more diverse pool of talent, as the company was 80% white male, and those white males were hiring more white males to join them. After introducing Scismic’s recruitment solution, the percentage of diverse applicants across the first five roles advertised rose from 48% to 76%.
  2. The right approach: One individual who had been unlucky in finding a new role in the life sciences for a very long time finally found a job through Scismic. The reason? He was 60 years old. With an AI-based hiring process, his profile may well have been ignored as an outlier due to his age if a firm typically hired younger people. However, by removing this bias he finally overcame ageism – whether it had been AI- or human-induced – and found a fulfilling role with a very grateful employer.
  3. The right interview: Another potential hire being helped by Scismic is neurodivergent, and as a result struggles to succeed in interviews. An AI-based scan of this person’s track record might see a string of failed interviews and therefore point them towards different roles or levels of responsibility. But the lack of success is not necessarily down to their ability, and human intervention is far more likely to facilitate a positive outcome than using AI as a shortcut that misdiagnoses the issue.

When not to use AI?

One aspect highlighted in these case studies is that while AI can be important, knowing when NOT to use it can be equally important, as is understanding that it is not a panacea for all recruitment problems. For instance, it is not appropriate to use AI when you or your team don’t understand what the AI intervention is doing to your applicant pipeline and selection process.

Help in understanding when and when not to use AI can be found in a good deal of new research, which shows how AI is perhaps best used as a partner in recruitment rather than something in charge of the whole or even part of the process. This idea – known by some as ‘co-intelligence’ – requires a good deal of work and development on the human side, and key to this is having the right structures in place for AI and people to work in harmony. 

For example, market data shows that in the life sciences and medical services, employee turnover is over 20%, and in part this is due to not having the right structures and processes in place during recruitment. Using AI in the wrong way can increase bias and lead to hiring the wrong people, thus increasing this churn. However, using AI in a structured and fair way can perhaps start to reverse this trend.

In addition, reducing bias in the recruitment process is not all about whether to use or not use AI – sometimes it is about ensuring the human element is optimized. For instance, recent research shows that properly structured interviews can reduce bias in recruitment and lead to much more positive outcomes. 

With recruitment comes responsibility

It is clear that AI offers huge opportunities in the recruitment space for employees and employers alike, but this comes with significant caveats. For both recruiters and vendors, the focus in developing new solutions has to be on how they can be produced and implemented responsibly, ethically and fairly. This should be the minimum demand of employers, and is certainly the minimum expectation of employees. The vision of workplaces becoming fairer through the adoption of ethically developed AI solutions is not only a tempting one, it is one that is within everyone’s grasp. But it can only be achieved if the progress of recent decades in the implementation of fairer HR practices is not lost in the gold rush of chasing AI. As a general rule, recruiters and talent partners should understand these components of the technologies they are using:

  1. What is the nature of the dataset the AI model has learnt from? 
  2. Where are the potential biases and how has the vendor mitigated these risks?
  3. How is the model making the decision to exclude a candidate from the pipeline? And do you agree with that premise?


Understanding the steps involved in creating this structure can be instructive – and will be the focus of our next article, ‘Implementing Structured Talent Acquisition Processes to Reduce Bias in your Candidate Evaluation’. In the meantime, you can contact Peter Craig-Cooper at Peter@scismic.com to learn more about our solutions.

See also our announcement: STEM skills-based economy focus for Scismic’s new Chief Commercial Officer

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.

Digital Science and Artificial Intelligence
https://www.digital-science.com/resource/digital-science-and-artificial-intelligence/
Wed, 28 Feb 2024 10:58:24 +0000

Digital Science supports your journey towards AI adoption using our technical and analytical capabilities.

The post Digital Science and Artificial Intelligence appeared first on Digital Science.


AI-powered solutions to transform your research

At Digital Science, we recognize that the journey toward AI adoption is as unique as the organizations and individuals we support. From bench researchers to medical affairs professionals to research offices, our approach is grounded in collaboration and deep understanding.

Since 2013, we’ve been investing in advanced AI methodologies, expanding our technical and analytical capabilities, and assembling a global team of AI experts. To us, AI isn’t a one-size-fits-all solution; it encapsulates a range of both new and existing capabilities and approaches that, when thoughtfully applied, can significantly enhance capabilities and streamline workflows. Our commitment continues to be focused on working closely with our partners and deeply understanding their unique challenges and aspirations, to deliver innovative and responsible AI capabilities that enhance human intelligence, drive progress, and unlock the full potential of the research community.

Our Capabilities

For the last decade, we have focused on machine learning innovations with Dimensions.ai, investment in Writefull, and the development of different LLMs. Building on this AI lineage, 2024 will see a continuous flow of new releases, starting with Dimensions Research GPT Enterprise and Dimensions Research GPT.

Dimensions in ChatGPT

Available via OpenAI’s GPT Store, the new products aim to provide users looking to use ChatGPT for research-related questions with generative answers they can trust – grounded in scientific evidence from Digital Science’s Dimensions database.

Key features of Dimensions Research GPT Enterprise – available to Dimensions customers with a ChatGPT Enterprise licence – include: 

  • Answers to research queries with publication data, clinical trials, patents and grant information
  • Set up in the client’s private environment and only available to the client’s end users
  • Notifications each time content generated is based on Dimensions data, with references and citation details
  • Possible for clients to have custom features (following prior discussion with Dimensions).

For Dimensions Research GPT, answers to research queries are linked to tens of millions of Open Access publications, and access to the solution is free to anyone with a Plus or Enterprise subscription to OpenAI’s GPT Store.

Next-generation search experience

Dimensions has introduced a new summarization feature to support users in their discovery process for publications, grants, patents and clinical trials. AI-driven summarization capabilities have been integrated into the Dimensions web application, enabling all users to accelerate the identification of the most relevant content for their research questions. Short, concise summaries are now available for every record in a given search result list with a single click, providing AI-generated insights quickly. The Dimensions team used feedback from members of the research community – including academic institutions, industry, publishers, government, and funders – to develop the feature.

Smarter searching in Dimensions

Other AI solutions will follow shortly from Digital Science, all of which seek to surface AI capabilities to support users with specific, relevant functionalities where AI in particular can offer improved results. Just as importantly, they have been developed with a grounding in reliability and responsibility so that users can trust them as they do with all our other products. 

Connecting your Data

The Dimensions Knowledge Graph, powered by metaphactory, aims to help customers harness the synergy of global research knowledge and their internal data, and to enable AI-powered applications and business decisions.

AI-Powered Writing Support

Writefull uses big data and Artificial Intelligence to boost academic writing. With language models trained on millions of journal articles, it provides the best automated language feedback to date, leading the next generation of research writing help.

Deeper Understanding of Scholarly Papers

Available within ReadCube Enterprise Literature Management & Papers Reference Management, our beta AI Assistant is designed to enhance research efficiency by providing real-time, in-depth analysis, summarization, and contextual understanding of scholarly articles within a researcher’s literature library.

Our latest AI insights

An experienced partner in AI

The history of AI at Digital Science

AI & Digital Science

How does Digital Science use AI? We ask ChatGPT

Fast forward: a new approach for AI and research
https://www.digital-science.com/blog/2024/02/fast-forward-a-new-approach-for-ai-and-research/
Wed, 28 Feb 2024 10:09:04 +0000

We look at the new Dimensions Research GPT solutions, combining the scientific evidence base of Dimensions with ChatGPT’s pre-eminent Generative AI.

The post Fast forward: a new approach for AI and research appeared first on Digital Science.

With the launch of Dimensions Research GPT and Dimensions Research GPT Enterprise, researchers the world over now have access to a solution far more powerful than could have been believed just a few years ago. Simon Linacre takes a look at a new solution that combines the scientific evidence base of Dimensions with the pre-eminent Generative AI from ChatGPT.


For many researchers, the ongoing hype around recent developments in Generative AI (GAI) has left them feeling nonplussed, with so many new, unknown solutions to choose from. Added to well-reported questions over hallucinations and responsibly developed AI, the advantages that GAI could offer have been offset by these concerns.

In response, Digital Science has developed its first custom GPT solution, which combines powerful data from Dimensions with ChatGPT’s advanced AI platform: Dimensions Research GPT and Dimensions Research GPT Enterprise.

Dimensions Research GPT’s answers to research queries make use of data from tens of millions of Open Access publications, and access is free to anyone via OpenAI’s GPT Store. Dimensions Research GPT Enterprise provides results underpinned by all publications, grants, clinical trials and patents found within Dimensions, and is available to anyone with an organization-wide Dimensions subscription who also has a ChatGPT Enterprise account. Organizations keen to tailor Dimensions Research GPT Enterprise to better meet the needs of specific use cases are also invited to work with our team of experts to define and implement these.

These innovative new research solutions from Dimensions enable users of ChatGPT to discover more precise answers and generative summaries by grounding the GAI response in scientific data – data that comes from millions of publications in Dimensions – delivered through ChatGPT’s increasingly familiar conversational interface.

These new solutions have been launched to enable researchers – indeed anyone with an interest in scientific research – to find trusted answers to their questions quickly and easily through a combination of ChatGPT’s infrastructure and Dimensions’ well-regarded research specific capabilities. These new innovations accelerate information discovery, and represent the first of many use cases grounded in AI to come from Digital Science in 2024.

How do they work?

Dimensions Research GPT and Dimensions Research GPT Enterprise are based on Dimensions, the world’s largest collection of linked research data, and supply answers to queries entered by users in OpenAI’s ChatGPT interface. Users can prompt ChatGPT with natural-language questions and see AI-generated responses, with a notification each time content is based on Dimensions data, and with references shown to the source. These references take the form of clickable links, which take users directly to the Dimensions platform, where they can see pages with further details on the source records and continue their discovery journey.
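The grounding pattern described above – answer a question, then attach the Dimensions source records the answer is based on – can be sketched roughly as follows. The function names, record fields and output format here are hypothetical; the real products run as custom GPTs on OpenAI’s platform, not as this code.

```python
# Rough sketch of a retrieval-grounded answer flow, as described in the text.
# Function names, record fields and the answer template are all hypothetical.

def search_dimensions(query):
    """Stand-in for a Dimensions search; returns matching source records."""
    # In the real product this would query publications, grants,
    # clinical trials and patents held in Dimensions.
    return [
        {"title": "Example study on CRISPR delivery",
         "id": "pub.1234567890",
         "url": "https://app.dimensions.ai/details/publication/pub.1234567890"},
    ]

def grounded_answer(question):
    """Answer a question and attach the source records the answer draws on."""
    sources = search_dimensions(question)
    # A real system would pass `sources` to the language model as context;
    # here we only show the notification-plus-references shape of the output.
    answer = f"Answer to {question!r}, grounded in {len(sources)} Dimensions record(s)."
    references = [f"[{i + 1}] {s['title']} - {s['url']}" for i, s in enumerate(sources)]
    return answer, references

answer, refs = grounded_answer("How is CRISPR delivered to cells?")
```

The clickable reference links in the sketch correspond to the links described above, which lead back to the source record pages on the Dimensions platform.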

Key features of Dimensions Research GPT Enterprise include: 

  • Answers to research queries with publication data, clinical trials, patents and grant information
  • Set up in the client’s private environment and only available to client’s end users
  • Notifications each time content generated is based on Dimensions data, with references and citation details.

Sample image of a query being run on Dimensions Research GPT (public).

What are the benefits to researchers?

The main benefit for users is that they can find scientifically grounded, inherently improved information on research topics of interest with little time and effort, thanks to the combination of ChatGPT’s interface and Dimensions’ highly regarded research-specific capabilities. This will save researchers significant time while also giving them peace of mind through easy access to source materials. However, there are a number of additional benefits for all users of this new innovation:

  • Dimensions AI solutions make ChatGPT research-specific, grounding answers in facts and providing the user with references to the relevant documents
  • They call on millions of publications to provide information specific and relevant to the query, reducing the risk of hallucination in the generative AI answer while providing an easy route to information validation
  • They can help overcome the challenges of the sheer volume of content available, the time-consuming tasks required in research workflows, and the need for trustworthy AI products.

What’s next with AI and research?

The launch of Dimensions Research GPT and Dimensions Research GPT Enterprise represents Digital Science’s broader commitment to open science and responsible development of AI tools. 

These new products are just the latest developments from Digital Science companies that harness the power of AI. In 2023, Dimensions launched a beta version of an AI Assistant, while ReadCube also released a beta version of its AI Assistant last year. Digital Science finished 2023 by completing its acquisition of AI-based academic language service Writefull. And 2024 is likely to see many more AI developments – with some arriving very soon! Dimensions Research GPT and Dimensions Research GPT Enterprise, alongside all Digital Science’s current and future developments with AI, exemplify our commitment to responsible innovation and bringing powerful research solutions to as large an audience as possible. If you haven’t tested ChatGPT yet as part of your research activities, why not give it a go today?

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

AI: To Buy or Not to Buy
https://www.digital-science.com/blog/2023/11/ai-to-buy-or-not-to-buy/
Thu, 30 Nov 2023 11:32:51 +0000

What AI capabilities is GE HealthCare bringing into the medical technology company? Here’s what the patent data tells us.

The post AI: To Buy or Not to Buy appeared first on Digital Science.

Shortly after General Electric spun off its HealthCare division, the newly independent company started buying AI technology. To share some strategic insights, Digital Science’s IFI CLAIMS Patent Services has taken a look at the target companies’ patents to see what capabilities they bring into the medical technology company.

The phrase ‘patently obvious’ is used in many contexts, from political exchanges to newspaper op-ed columns. Curiously, it is rarely used in the realm of actual patents, but in the case of General Electric’s (GE) HealthCare division, its use seems entirely appropriate.

In early 2023, GE made the decision to spin off GE HealthCare, and immediately following the move the new entity started its M&A strategy by acquiring two companies of its own: Caption Health and IMACTIS. At this early stage, is it possible to infer whether these were sound investments? Six months on, there is still a way to go before full-year financial results and other financial data are posted. However, Digital Science company IFI CLAIMS Patent Services – a global patent database provider for application developers, data scientists, and product managers – can gain insights by looking into the patents the newly enlarged GE HealthCare now holds.

Patents = Strategic Insights

It should be ‘patently obvious’, but checking companies’ patents can be a part of any due diligence process before an investment decision is made. Not only does this help to understand risk and technology overlaps, it can also be used to determine where R&D efforts are currently focused in the target acquisition, and in turn to set the strategy for the newly merged entity. Analyzing a company’s patent holdings in the midst of M&A dealings provides insights such as:

  • The strategic direction of companies (e.g., the extent to which they are making strides in AI)
  • Unique takes on M&A transactions, as it is possible to determine – based on companies’ technologies – whether core competencies overlap with those of the acquiring company
  • Whether a company’s core competencies are enhanced by the acquisitions it has made
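The overlap question in the list above is, at its core, a set comparison between two patent portfolios. The toy sketch below illustrates the idea; the concept labels are invented for demonstration and do not come from IFI’s actual data.

```python
# Toy illustration of a technology-overlap check between an acquirer's and a
# target's patent portfolios. Concept labels are invented for demonstration.

acquirer_concepts = {"ultrasound imaging", "image reconstruction", "patient monitoring"}
target_concepts = {"ultrasound imaging", "deep learning", "image quality assessment"}

# Shared concepts suggest the target reinforces existing core competencies;
# concepts unique to the target show what new capabilities the deal adds.
overlap = acquirer_concepts & target_concepts
new_capabilities = target_concepts - acquirer_concepts
```

In practice a patent analyst would work from thousands of classified patent records rather than a handful of labels, but the reasoning (overlap versus net-new capability) is the same.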

IFI’s latest acquisition report takes a look at GE HealthCare’s acquisitions of IMACTIS and Caption Health’s patented technologies to determine the innovative direction of the company.

‘A good fit’

So what insights can be gleaned from patent data about GE HealthCare and its nascent M&A strategy? According to the report, the acquisitions of Caption Health and IMACTIS were a ‘good fit’ for GE HealthCare. Both acquisitions point towards GE HealthCare’s continued growth in terms of both AI and the application of AI to its existing core technologies. Specifically:

  • IMACTIS is a healthcare technology company that offers, among other things, 3D virtual imaging for surgical navigation
  • Caption Health focuses on providing AI capabilities and image data generation to ultrasound technologies

You can see from the chart below that GE HealthCare competes with a number of major companies in establishing AI-related patents, which surged in 2019-2020 before dipping in 2021. As such, the acquisitions in the early part of 2023 of companies that are focused on technology and AI in particular seem to be a good strategic move, especially given the furore around AI technology since late 2022.

Competitive landscape for AI patent applications. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm

What the data says

The report concludes that both Caption Health and IMACTIS make sense for GE HealthCare for several reasons. In the current competitive climate, Caption Health adds necessary AI capabilities while IMACTIS adds new dimensions to the suite of patents it has with 3D virtual images. So overall, it’s a gold star for GE HealthCare when it comes to enhancing its patent – and future commercial – strategy. Isn’t that obvious?

Top patented concepts by Caption Health. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm
Top patented concepts by IMACTIS. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm

Three key takeaways

1. Digital Science’s IFI CLAIMS Patent Services – a global patent database provider for application developers, data scientists, and product managers – can help customers gain insights by looking into the patents held by firms, such as newly enlarged GE HealthCare.

2. IFI’s latest acquisition report takes a look at GE HealthCare’s acquisitions of IMACTIS and Caption Health’s patented technologies to determine the innovative direction of the company – the report concludes that both Caption Health and IMACTIS make sense for GE HealthCare for a number of reasons.

3. Checking companies’ patents should be part of any due diligence process before a corporate investment decision is made, especially in the pharmaceuticals sector.

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Heading to Sci Foo!
https://www.digital-science.com/blog/2023/07/heading-to-sci-foo/
Wed, 12 Jul 2023 05:14:56 +0000

The Digital Science team is heading off to San Francisco, California for the annual Science Foo Camp (Sci Foo)!

The post Heading to Sci Foo! appeared first on Digital Science.


The Digital Science team is getting ready to head off to San Francisco for the annual Science Foo Camp. This is a remarkable gathering of scientists, thinkers, technologists, creators and communicators, who come together over three days in mid-July.

‘Sci Foo’, as it’s affectionately known, is unlike any other science conference. Hosted by ‘X’ (formerly “Google X”), it is an ‘unconference’ with no fixed agenda, and is co-organized by Google, O’Reilly Media, Digital Science and Nature.

Sci Foo 2022
Attendees at Sci Foo 2022, pictured at X (from left): Amarjit Myers, Cat Allman, Marsee Henon, Adam Flaherty and Suze Kundu. Photo: Amarjit Myers.

Since the first event in 2006, Sci Foo has aimed to do things differently, and nearly two decades later it retains that original spirit and continues to attract some of the most prolific players on the world stage. Indeed, the British astrophysicist Lord Martin Rees has called Sci Foo the ‘Woodstock of the Mind’.

Forging an environment of openness and collaboration, attendees are encouraged to connect and share ideas with those around them. The schedule includes the always popular lightning talks but discourages keynotes and corporate overviews – and is dominated by unconference sessions that are proposed and organised by the attendees themselves. This format allows for unparalleled diversity of disciplines and thinking, with a rich seam of discussion, debate and insights running through the event. Conversations are encouraged to continue over mealtimes and into the evening.

As one of the organizers, Digital Science is especially excited for Sci Foo 2023, which will bring together around 250 attendees. We have also provided travel support to a number of early-career scientists from South Africa, Ecuador, Brunei and other countries, and we are looking forward to the energy they will bring to what promises to be a fantastic Sci Foo.

We would also like to thank our co-organizers, including Tim O’Reilly and Marsee Henon from O’Reilly Media; Raiya Kind and Laurie Wu from Google; Magdalena Skipper of Springer Nature; and Sci Foo veteran Cat Allman, as well as the many volunteers from across all these organisations – it would not be possible without them.

If you want to know more about Sci Foo 2023 including who’s there and what’s trending, please look out for online chat about the event via the official hashtag #SciFoo and discussion on Twitter and LinkedIn from the Digital Science team.

About the Author

Amarjit Myers, Head of Strategic Events | Digital Science

The post Heading to Sci Foo! appeared first on Digital Science.

Why is it so difficult to understand the benefits of research infrastructure? https://www.digital-science.com/blog/2022/11/benefits-of-research-infrastructure/ Tue, 15 Nov 2022 08:28:24 +0000 https://www.digital-science.com/?p=59617 Adopting these five priority persistent identifiers (PIDs) would lead to universally better research strategy decisions.

The post Why is it so difficult to understand the benefits of research infrastructure? appeared first on Digital Science.


Persistent identifiers – or PIDs – are long-lasting references to digital resources. In other words, a PID is a unique label for an entity: a person, place, or thing. PIDs work by redirecting the user to the online resource, even if the location of that resource changes. They also carry associated metadata, which contains information about the entity and provides links to other PIDs. For example, many scholars already populate their ORCID records, linking themselves to their research outputs through Crossref and DataCite DOIs. As the PID ecosystem matures to include PIDs for grants (Crossref grant IDs), projects (RAiD), and organisations (ROR), the connections between PIDs form a graph that describes the research landscape. In this post, Phill Jones talks about the work that the MoreBrains cooperative has been doing to show the value of a connected PID-based infrastructure.
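To make that graph of links concrete, here is a minimal Python sketch that pulls the linked PIDs out of a single metadata record. The record is abbreviated and invented, loosely in the style of a Crossref REST API response – real records carry many more fields, and the DOIs and ORCID iD shown are placeholders, not real identifiers.

```python
import json

# Abbreviated, invented record loosely in the style of a Crossref REST API
# response. The DOIs and ORCID iD are placeholders for illustration only.
record = json.loads("""
{
  "DOI": "10.1234/example.5678",
  "author": [
    {"given": "Ada", "family": "Lovelace",
     "ORCID": "https://orcid.org/0000-0002-1825-0097"}
  ],
  "funder": [
    {"name": "Example Research Council", "DOI": "10.13039/000000000"}
  ]
}
""")

def linked_pids(rec):
    """Collect the PIDs this record points to: author ORCID iDs and funder DOIs."""
    return {
        "orcids": [a["ORCID"] for a in rec.get("author", []) if "ORCID" in a],
        "funder_dois": [f["DOI"] for f in rec.get("funder", []) if "DOI" in f],
    }

print(linked_pids(record))
```

Followed across millions of records, these author and funder links are exactly the edges that turn individual PIDs into a graph describing the research landscape.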

Over the past year or so, we at MoreBrains have been working with a number of national-level research-supporting organisations to develop national persistent identifier (PID) strategies: Jisc in the UK; the Australian Research Data Commons (ARDC) and Australian Access Federation (AAF) in Australia; and the Canadian Research Knowledge Network (CRKN), Digital Research Alliance of Canada (DRAC), and Canadian Persistent Identifier Advisory Committee (CPIDAC) in Canada. In all three cases, we’ve been investigating the value of developing PID-based research infrastructures, using data from various sources, including Dimensions, to quantify that value. In our most recent analysis, we found that investing in five priority PIDs could save the Australian research sector as much as 38,000 person-days of work per year, equivalent to $24 million (AUD), purely in direct time savings from no longer rekeying information into institutional research management systems.

Investing in infrastructure makes a lot of sense, whether you’re building roads, railways, or research infrastructure. But wise investors also want evidence that their investment is worthwhile – that the infrastructure is needed, that it will be used, and, ideally, that there will be a return of some kind on their investment. Sometimes, all of this is easy to measure; sometimes, it’s not.

In the case of PID infrastructure, there has long been a sense that investment would be worthwhile. In 2018, in his advice to the UK government, Adam Tickell recommended:

Jisc to lead on selecting and promoting a range of unique identifiers, including ORCID, in collaboration with sector leaders and relevant partner organisations

More recently, in Australia, the Minister for Education, Jason Clare, wrote a letter of expectations to the Australian Research Council in which he stated:

Streamlining the processes undertaken during National Competitive Grant Program funding rounds must be a high priority for the ARC… I ask that the ARC identify ways to minimise administrative burden on researchers

In the same letter, Minister Clare even suggested that preparations for the 2023 ERA be discontinued until a plan to make the process easier has been developed. While he didn’t explicitly mention PIDs in the letter, organisations like ARDC, AAF, and ARC see persistent identifiers as a big part of the solution to this problem.

A problem of chickens and eggs?

With all the modern information technology available to us, it seems strange that, in 2022, we’re still hearing calls to develop basic research management infrastructure. Why hasn’t it already been developed? Part of the problem is that very little work has been done to quantify the value of research infrastructure in general, or PID-based infrastructure in particular. Organisations like Crossref, DataCite, and ORCID are clear success stories but, aside from a few notable exceptions, not much has been done to make the benefits of investment clear at a policy level – until now.

PID-optimised research lifecycle
Figure 1. The PID-optimised research lifecycle (Source: https://resources.morebrains.coop/pidcycle/).

It’s very difficult to analyse the costs and benefits of PID adoption without being able to easily measure what’s happening in the scholarly ecosystem. So, in the analyses we were commissioned to do, we asked questions like:

  • How many research grants were awarded to institutions within a given country?
  • How many articles have been published based on work funded by those grants?
  • What proportion of researchers within a given country have ORCID IDs?
  • How many research projects are active at any given time?

All these questions proved challenging to answer because, fundamentally, it’s extremely difficult to quantify the scale of research activity and the connections between research entities in the absence of universally adopted PIDs. In other words, we need a well-developed network of PIDs in order to easily quantify the benefits of investing in PIDs in the first place (see Figure 1)!

Luckily, the story doesn’t end there. Thanks to data donated by Digital Science, and other organisations including ORCID, Crossref, Jisc, ARDC, AAF, and several research institutions in the UK, Canada, and Australia, we were able to piece together estimates for many of our calculations.

Take, for example, the Digital Science Dimensions database, which provided us with the data we needed for our Australian and UK use cases. It uses advanced computation and sophisticated machine learning approaches to build a graph of research entities like people, grants, publications, outputs, institutions, etc. While other similar graphs exist, some of which are open and free to use – for example, the DataCite PID graph (accessed through DataCite Commons), OpenAlex, and the Research Graph Foundation – the Dimensions graph is the most complete and accessible so far. It enabled us to estimate total research activity in both the UK and Australia.

However, all our estimates are… estimates, because they involve making an automated best guess of the connections between research entities, where those connections are not already explicit. If the metadata associated with PIDs were complete and freely available in central PID registries, we could easily and accurately answer questions like ‘How many active researchers are there in a given country?’ or ‘How many research articles were based on funding from a specific funder or grant program?’

The five priority PIDs

As a starting point towards making these types of questions easy to answer, we recommend that policy-makers work with funders, institutions, publishers, PID organisations, and other key stakeholders around the world to support the adoption of five priority PIDs:

  • DOIs for funding grants
  • DOIs for outputs (eg publications, datasets, etc)
  • ORCIDs for people
  • RAiDs for projects
  • ROR for research-performing organisations

We prioritised these PIDs based on research done in 2019, sponsored by Jisc and in response to the Tickell report, to identify the key PIDs needed to support open access workflows in institutions. Since then, thousands of hours of research and validation across a range of countries and research ecosystems have verified that these PIDs are critical not just for open access but also for improving research workflows in general.

Going beyond administrative time savings

In our work, we have focused on direct savings from a reduction in administrative burden because those benefits are the most easily quantifiable; they’re easiest for both researchers and research administrators to relate to, and they align with established policy aims. However, the actual benefits of investing in PID-based infrastructure are likely far greater.

Evidence given to the UK House of Commons Science and Technology Committee in 2017 stated that every £1 spent on Research and Innovation in the UK results in a total benefit of £7 to the UK economy. The same is likely to be true for other countries, so the benefit to national industrial strategies of increased efficiency in research are potentially huge.

Going a step further, the universal adoption of the five priority PIDs would also enable institutions, companies, funders, and governments to make much better research strategy decisions. At the moment, bibliometric and scientometric analyses to support research strategy decisions are expensive and time-consuming; they rely on piecing together information based on incomplete evidence. By using PIDs for entities like grants, outputs, people, projects, and institutions, and ensuring that the associated metadata links to other PIDs, it’s possible to answer strategically relevant questions by simply extracting and combining data from PID registries.
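As a toy illustration of that last point, the sketch below holds a handful of invented ‘registry’ records in memory and answers a strategic question – how many outputs a grant funded, and by how many distinct people – with simple lookups. All identifiers are made up, and a real analysis would query registry APIs rather than a local dictionary.

```python
# Toy in-memory stand-in for PID registry data. All DOIs and ORCID iDs
# below are invented for illustration; nothing here calls a real registry.
outputs = {
    "10.99999/article.1": {"grant": "10.99999/grant.1", "orcid": "0000-0002-0000-0001"},
    "10.99999/article.2": {"grant": "10.99999/grant.1", "orcid": "0000-0002-0000-0002"},
    "10.99999/article.3": {"grant": "10.99999/grant.2", "orcid": "0000-0002-0000-0001"},
}

def outputs_funded_by(grant_doi):
    """Answer 'how many articles were based on funding from this grant?'"""
    return [doi for doi, meta in outputs.items() if meta["grant"] == grant_doi]

def researchers_on_grant(grant_doi):
    """Distinct ORCID iDs attached to a grant's outputs."""
    return {outputs[doi]["orcid"] for doi in outputs_funded_by(grant_doi)}

print(len(outputs_funded_by("10.99999/grant.1")))    # linked outputs
print(len(researchers_on_grant("10.99999/grant.1"))) # distinct people
```

When every grant, output and person carries a PID whose metadata links to the others, this kind of question becomes a lookup rather than an expensive bibliometric reconstruction.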

Final thoughts

According to UNESCO, global spending on R&D has reached US$1.7 trillion per year, and with commitments from countries to address the UN sustainable development goals, that figure is set to increase. Given the size of that investment and the urgency of the problems we face, building and maintaining the research infrastructure makes sound sense. It will enable us to track, account for, and make good strategic decisions about how that money is being spent.

Phill Jones

About the Author

Phill Jones, Co-founder, Digital and Technology | MoreBrains Cooperative

Phill is a product innovator, business strategist, and highly qualified research scientist. He is a co-founder of the MoreBrains Cooperative, a consultancy working at the forefront of scholarly infrastructure and research dissemination. Phill has been the CTO at Emerald Publishing, Director of Publishing Innovation at Digital Science, and the Editorial Director at JoVE. In a previous career, he was a biophysicist at Harvard Medical School; he holds a PhD in Physics from Imperial College London.

The MoreBrains Cooperative is a team of consultants that specialise in and share the values of open research with a focus on scholarly communications, and research information management, policy, and infrastructures. They work with funders, national research supporting organisations, institutions, publishers and startups. Examples of their open reports can be found here: morebrains.coop/repository

Demonstrating Real Impact: SDG Reporting for Institutions https://www.digital-science.com/blog/2022/11/demonstrating-real-impact-sdg-reporting-for-institutions/ Wed, 09 Nov 2022 14:20:11 +0000 https://www.digital-science.com/?p=59572 Institutions can now track which of their research outputs, publications, and activities connect to the SDGs thanks to a new label scheme.

The post Demonstrating Real Impact: SDG Reporting for Institutions appeared first on Digital Science.


For nearly three decades the UN has been bringing together countries from around the globe for climate summits on how to address the growing climate crisis. Last year’s Conference of the Parties (COP) in Glasgow (delayed by a year due to the pandemic) took major steps toward addressing the crisis, but failed to deliver the national commitments required to limit global warming to 1.5°C, as laid out in the Paris Agreement.

After a year of extreme weather events, from record heatwaves to disastrous flooding, this year’s COP27 in Sharm el-Sheikh, Egypt, will be crucial as the world seeks to take steps together toward mitigating and preventing the worst impacts of climate change. 

A UN Climate Change ‘Global Innovation Hub’ (UGIH) will be held during COP27, accessible digitally for the first time to enable greater collaboration, and is set to “ratchet up the scale and effectiveness of innovation in tackling climate change and help deliver on the UN’s Sustainable Development Goals”. The UGIH aims to accelerate action across both the Paris Agreement and the 2030 Agenda. 

The United Nations’ Sustainable Development Goals (SDGs) are designed to be a blueprint for achieving a better and more sustainable future for all by addressing the global challenges we face. The SDGs are at the centre of the UN’s 2030 Agenda for Sustainable Development and represent an urgent call for action by all countries – both developed and developing – in global partnership. They recognise that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests.

Tracking and reporting on SDGs in Elements 

Since the SDGs were first introduced, there has been growing interest in tracking, analysing and showcasing the ways in which researchers are contributing to achieving these goals, and in demonstrating global research impact at an institutional level. This can be seen clearly in the increasing number of institutions participating in the Times Higher Education (THE) Impact Rankings, which in 2022 showed a 23% increase on 2021. THE Impact Rankings are the only global performance tables that assess universities against the SDGs, and currently show participation across 110 countries and regions.

This year we introduced simple but powerful functionality into Elements, allowing institutions to track which research outputs, publications, and activities connect back to the 17 SDGs via use of a new label scheme. SDG labels can be applied to any items captured in Elements (eg. publications, grants, professional & teaching activities and records of impact). 

Labels can be applied manually, in bulk via the Elements API, or automatically through our Dimensions integration. Dimensions uses machine learning to automatically analyse publications and grants and map them to relevant SDGs. Dimensions has mapped SDG labels to over 12.9 million publications and hundreds of thousands of grants, with more records being analysed and mapped all the time. These labels are now automatically harvested into Elements together with other metadata on Dimensions records. Those licensed to use Dimensions as a data source can further benefit from having SDG labels harvested and applied to records automatically.

Read our Digital Science report on Contextualising Sustainable Development Research.

Once collected, SDG data can be used for powerful reporting purposes, whether at an individual, school, or institutional level. We have introduced stock dashboards to support initial reporting on SDG labels. These tools can help research institutions demonstrate which individuals, schools or groups are focused most on specific SDGs, analyse gaps and areas of further necessary investment, and even demonstrate return on investment for funding.

Sustainable Development Goals (SDGs) labels on publications

Labels can also be applied to user profiles and surfaced in public profiles within the Discovery Module add-on to Elements, helping external researchers, members of the press and other stakeholders identify specialists working toward particular sustainability goals (see examples of public profiles showcasing SDG labelling at Oklahoma State University or Lincoln University). This can help drive discoverability of research, open up opportunities for greater collaboration and innovation, and support the public understanding and availability of science by connecting the media to knowledgeable scientific sources. 

Users can search and filter by specific SDGs they are interested in to find researchers specialising in that field, while the researchers themselves can showcase their work within their own profiles. 

Applying the SDG framework to Elements facilitates and supports both internal and external collaboration and innovation, advancing global efforts to achieve the 2030 Agenda for Sustainable Development.

SDG Case Study: Carnegie Mellon University 

Carnegie Mellon University (CMU) is a private, global research university which stands among the world’s most renowned educational institutions. CMU acquired Elements in 2017 and now uses the platform across a wide range of use cases, including “service tracking, faculty annual reviews, publications and monitoring, public directory, custom reporting, data visualization and analysis, data feeds to external websites, open access research and scholarship, data migration from historical systems, researcher identity management, and mapping faculty research to Sustainable Development Goals”. Read more.

During 2021, the University Libraries worked alongside the Provost Office’s Sustainability Initiative to conduct the Sustainable Development Goal mapping with a set of early adopters.

A recent news post on the Carnegie Mellon libraries blog on their ongoing expansion of Elements across campus explains how Director of Sustainability Initiatives Alexandra Hiniker utilised Elements to support faculty in thinking critically about how their work aligns with the 2030 Agenda.

“One thing I’ve heard consistently from students, faculty, staff, and external partners that I work with here in Pittsburgh, across the country, and around the world, is that they want to know what our CMU community is doing on the range of sustainable development goals that cover everything from poverty and hunger, to good health and wellbeing, peaceful, just and strong institutions, reducing inequalities, and of course, climate action,” explains Hiniker in a recent video interview published by the university. “There’s so much great work going on across CMU but it’s hard to pull out all of that information, and share it with all of these different people who are interested in collaboration.

“As part of my role linking students, staff, and faculty across the campus to sustainability efforts, I heard from them that the most important thing was to connect to different parts of the university to which they usually didn’t have access,” Hiniker explained. “Elements is a way for people to quickly access information about what researchers are doing, so that they can help contribute to finding solutions to some of the world’s greatest challenges.”

Elements is now providing a centralized space for CMU’s campus researchers to record which SDGs are associated with their research outputs and other academic activities. The Libraries’ Elements reporting and data visualization team worked with the Sustainability Initiatives Office to build reporting dashboards which surface data on how faculty initiatives and research across campus are supporting specific SDGs. 

You can hear more from Hiniker directly in this short interview:

Find out more or get support

Elements can help you track and report on how your researchers are contributing towards the United Nations Sustainable Development Goals as we all work towards a better and more sustainable future for all. Not only does this make participation in the THE Impact Rankings far simpler, it also helps you demonstrate your commitment to global progress to researchers and faculty, prospective students, funders, and other key stakeholders. If you’d like to learn more about Elements, or if you’re a current client who’d like more information on how to integrate Dimensions as a data source or surface SDG labels in public profiles, please get in touch.

The Digital Science Consultancy team can also produce tailored analysis for non-profits, governments, funders, research institutions and STEM publishers to inform strategy to meet organisational goals. We can help you relate the influence and impact that your organisation has to research on the UN’s Sustainable Development Goals (SDGs).

Natalie Guest

About the Author

Natalie Guest, Marketing Director | Symplectic

Natalie Guest works in pursuit of the advancement of knowledge by delivering flexible research solutions that help universities, institutions and funding organisations achieve their research goals. She has 10 years’ experience in B2B technology marketing, focusing predominantly on the scholarly publishing, research and information management sector.

The Changing Landscape of Open Access Compliance https://www.digital-science.com/blog/2022/10/the-changing-landscape-of-open-access-compliance/ Tue, 25 Oct 2022 13:31:06 +0000 https://www.digital-science.com/?p=59427 Global shifts in Open Access - and variable policies - require a nuanced and flexible approach to supporting OA compliance.

The post The Changing Landscape of Open Access Compliance appeared first on Digital Science.


Globally, the past decade has seen publishing shift from 70% of all outputs being closed access to 54% now being open access. In recent years, the COVID-19 pandemic has changed science publishing and necessitated a huge acceleration in the transition to Open Access (OA) models, driven by a need for speed in publishing and an accompanying growth in preprints. The even more recent memo from the White House’s Office of Science and Technology Policy (OSTP) will see this trend advance rapidly in the United States, with not only federally funded publications but also their associated datasets required to be made publicly available without embargo.

In this blog post, Symplectic‘s Tzu-I Liao examines global shifts in approach to Open Access, and discusses how Symplectic plans to continue to evolve Elements’ functionality to build in more flexibility and support for multiple OA pathways. 

Tracking global trends – and differences – in the OA landscape

Figure 1: Open access policies adopted between 2005 and 2022 by institutions, according to ROARMAP.

Figure 1 shows a dramatic tenfold increase in OA policies adopted by institutions between 2005 and 2022, according to ROARMAP. The number of policies adopted by funders increased from 19 in 2005 to 142 in 2022. We are excited to see the growing effort to make research more open – but on the other hand, as research organisations are required to account for how funded projects are kept compliant and managed efficiently, we are now faced with a much bigger range of requirements and criteria to monitor within Symplectic Elements.

Interestingly, amidst the changing landscape of OA requirements, the latest updates to government funder policies suggest a global trend towards similar requirements. For example, UK Research and Innovation (UKRI), the UK government body that directs research and innovation funding, updated its OA policy in 2020, mandating that all funded research outputs be made open immediately upon publication, without any embargo. Comparably, in August 2022 the OSTP within the US government issued guidance for federal agencies to set up their own OA policies to make funded research available to the public immediately once published, permitting no delay.

On the other side of the world, the National Health and Medical Research Council (NHMRC), the main authority of the Australian Government responsible for medical research, also revised its policy in September 2022, listing these same criteria for compliance. These funders also underline the importance of making research data open; both UKRI and the NHMRC prescribe the reuse licence to be applied to outputs. While it is reassuring to see major funders moving in similar directions, the OA landscape in front of us is still full of uncertainties.

While many policies point to similar criteria, many more variables now need to be taken into consideration when calculating compliance. Table 1 below shows the differences in policy requirements between three major funders in different regions: UKRI and NIH both now require publications to be made available immediately, while the ARC retains the option of an embargo; and while UKRI and the ARC pay close attention to metadata and research data, these are not yet specified in the NIH policy.

UKRI (UK)
  • Articles: Version of Record immediately available upon publication, or Accepted Manuscript deposited in a repository
  • Metadata: Must be used on deposit platforms
  • Data: Must include a data accessibility statement

NIH (USA)
  • Articles: Must be submitted to NIHMS upon acceptance and made publicly available on PubMed Central
  • Metadata: Not specified
  • Data: Not specified

ARC (Australia)
  • Articles: Should be open access within 12 months of publication
  • Metadata: Must be public within 3 months of publication
  • Data: Open access is encouraged

Table 1: Policy requirements of three major funders in different regions.

On top of the divergent paths for making research output ‘open’ or ‘publicly available’ (which are not always clearly defined), many policies also mention requirements about metadata and/or research data. However, clearer guidance on these areas is yet to be published. More policies now encourage the adoption of the Gold OA pathway, but hybrid models and transformative deals make monitoring increasingly complicated. Some funders specify that outputs and metadata must be deposited on platforms meeting certain requirements, although there is no comprehensive list of such platforms.

Monitoring open access with Elements so far

Symplectic Elements already helps institutions support and streamline open access workflows. With our flexible repository tools, administrators can customise harvest and deposit processes for institutional repositories and use them as data sources to maintain an accurate representation of outputs while minimising duplication and effort.

At the same time, our OA Monitor module focuses on supporting oversight of the institution’s green OA activities. This involves keeping an eye on very detailed metadata about different types of publications with different funding sources and potentially multiple authors of different statuses. The module supports defining an OA policy: from which groups of users and publications are included, to detailed instructions about what compliance means. You can exclude inactive users or publications with embargo requests, and we offer a sophisticated algorithm to check compliance based on deposit deadlines, embargo periods, deposit file versions, etc. With the policy defined, users can set up prompts for researchers at various steps of their workflows to deposit full text to the institutional repository for publications covered by the policy. The goal is to provide users with actionable information about how they can increase the proportion of deposited works.
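To illustrate the flavour of such a rule-based check, here is a hypothetical sketch of a compliance function. This is not Symplectic’s actual algorithm – the function, parameters and thresholds are all invented, and a real policy engine would also weigh file versions, licences and exceptions.

```python
from datetime import date

# Hypothetical compliance check: both thresholds are policy parameters,
# invented here for illustration.
def is_compliant(published, deposited, embargo_ends,
                 deposit_window_days=90, max_embargo_days=0):
    """Return True if a deposit satisfies a simple zero/limited-embargo policy."""
    if deposited is None:
        return False                                  # never deposited
    if (deposited - published).days > deposit_window_days:
        return False                                  # deposited too late
    if embargo_ends is not None and \
       (embargo_ends - published).days > max_embargo_days:
        return False                                  # embargo exceeds policy
    return True

# A zero-embargo policy in the spirit of the updated UKRI mandate:
print(is_compliant(date(2022, 6, 1), date(2022, 6, 10), None))              # True
print(is_compliant(date(2022, 6, 1), date(2022, 6, 10), date(2023, 6, 1)))  # False
```

Each funder policy then becomes a different set of parameters rather than a different workflow, which is one way a single tool can track many mandates at once.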

How OA changes impact upon Elements

As all these different mandates emerge, there are more criteria and more specifications, at different levels, that need attention. Our current approach of focusing on monitoring the Green OA pathway may leave more gaps going forward.

For example, the updated UKRI policy no longer allows any embargo for the Green OA pathway – all outputs should be made open by the time of publication. There are no longer grace periods (as for REF submissions) or embargoes that could serve as a buffer zone for compliance. This could mean that the deposit monitoring workflow needs to start much earlier in the publishing cycle. Alternatively, institutions might see more researchers taking the Gold OA pathway, and thus need to monitor such activities more rigorously.

In other words, the sets of data points to capture and/or curate in order to fulfil the institutional needs in the OA monitor are now different.

More and more institutions feel the need to set up their own policies: partly responding to the funder mandates, partly trying to simplify the workflows required for widening needs. More stakeholders of various levels and focuses need to be brought onboard, with wider impacts on more parts of an organisation. 

The work entailed in supporting researchers and departments to be as compliant as possible has also changed significantly. Perhaps you want to help researchers choose an open journal to publish in, prompt them to deposit the right version of the work with the right licence together with the research dataset, include a statement about data availability, or handle support with Article Processing Charges (APCs) or copyright retention.

All these lead to more complex and institution- or even funder-specific monitoring workflows. We see institutions develop very specific reporting needs around these processes, some of which can be fulfilled partly by custom reports. These reports work well for specific needs, but are often less flexible and require more maintenance effort. While our OA Monitor and reports cater very well to the current scope of monitoring, there are some limitations because of the single-policy framework. Not all OA Monitor concepts, like first deposit date or reuse licences, are in the Reporting database – which means we are not always utilising everything we already have curated. 

Responding to OA changes with Elements

As the landscape of Open Access continues to evolve, with national and even regional disparities and a growing proliferation of pathways to OA, supporting OA monitoring and reporting within Elements necessitates greater flexibility and ongoing attention to global mandates. Here are some of the changes in Elements functionality that have either been made, are in progress, or are planned in our upcoming roadmap: 

Extending support for repository integrations

Repository integrations remain an important part of OA monitoring, and we continue to support streamlining deposit workflows. For the majority of institutional and funder policies, the Green OA pathway remains the central part of the workflow. Organisations using Elements can continue with the established data verification and curation workflows, and use our powerful Repository Tools 2 (RT2) integration with repositories to customise harvesting, depositing and automated updates. These functionalities are now offered to clients using DSpace (fully supported from Elements v.6.9), Eprints, Hyrax, and Figshare for Institutions (fully supported from Elements v.6.9). 

Capturing more OA-related metadata from existing data sources

We have expanded the range of metadata available to provide organisations with a fuller picture of their OA activities and OA publishing patterns. 

One disruptive side-step in the workflow for many institutions is needing to go out of Elements to check whether an article has been published in a fully OA journal. Elements now allows institutions to capture this information manually or from integrated sources (eg. Dimensions, WoS & DOAJ) to flag potential Gold OA status (available from v.6.9).

New inclusion of relevant data points will assist in tracking key funder requirements. For example, the Wellcome Trust and NIH require deposits of publications to PubMed Central/EPMC. From v.6.10, Elements captures file-level metadata from EPMC, offering additional information on full-text deposits, relevant dates and licences that can be integrated into your verification workflow. Another example is the improved control over deposit versions and reuse licences in recent releases, which will make it easier for researchers to remember to select the CC BY licence for the Author's Accepted Manuscript and make that deposit compliant with the UKRI policy.

Next steps in Elements 

Following on from extensive user research in this space, we plan to build upon our existing rich feature set to offer additional open access monitoring functionality, helping institutions understand the many different pathways their researchers use to make their research openly available. We also plan to evolve our OA policy compliance capabilities to remain in step with changes in key funder policies.

The future of the OA landscape

Building on that user research, we continue to engage with clients in an ongoing series of user-led workshops which aim to craft guidance on the more nebulous areas of open access, such as data availability statements and rights retention policies.

We will continue to closely monitor this shifting landscape, proactively working to create functionality to fit incoming mandates across geographies and working closely with the Elements community to identify ways to support OA engagement, compliance and reporting. 

If you’d like to engage with us on any of the areas raised in this blog, please get in touch.

Tzu-i Liao

About the Author

Tzu-i Liao, Jr. Product Manager | Symplectic

The post The Changing Landscape of Open Access Compliance appeared first on Digital Science.

]]>
Gathering a sense of community to showcase universities’ research capabilities https://www.digital-science.com/blog/2022/10/gathering-sense-of-community-to-showcase-research-capabilities/ Thu, 20 Oct 2022 15:16:58 +0000 https://www.digital-science.com/?p=59378 This NZ university community is growing stronger thanks to a new showcase of its research equipment, capabilities and expertise.

The post Gathering a sense of community to showcase universities’ research capabilities appeared first on Digital Science.

]]>

A New Zealand university has become a leader in demonstrating its research expertise, equipment and facilities – and it’s building a stronger research community along the way.

In the Māori language, the word "rāpoi" means to "cluster" or to "gather together". It just so happens that a high-performance computing (HPC) cluster at Te Herenga Waka — Victoria University of Wellington (Te Herenga Waka) is named the Rāpoi HPC, and – perhaps by a happy coincidence – it has become one of the star attractions of the university's new showcase of research equipment, capabilities and expertise.

Over the past couple of years, Te Herenga Waka in the nation's North Island capital of Wellington has been working closely with Symplectic and its own research community to "gather together" what is now among the most impressive – and publicly searchable – collections of research and consultancy resources at any university in the world.

Human resources (expertise) and infrastructure resources (equipment, facilities and services) are now all discoverable at the Wellington university’s portal, which is powered by Symplectic: https://people.wgtn.ac.nz/

Te Herenga Waka — Victoria University of Wellington’s Hunter Building.

Sheila Law, Research Information Systems Administrator with Te Herenga Waka’s Research Office, says having earlier built a showcase of the university’s academic expertise through new, searchable staff profiles, the next challenge was to tackle its infrastructure.

“The aim was to create a searchable catalogue of specialist equipment, to showcase our state-of-the-art capabilities, which could be used to support external engagement and facilitate long-lasting collaborations with other researchers,” Sheila says.

But until work began in late 2021, there was no central location or asset register of equipment to refer to. “All the information was held in institutes and schools and centres, with no overview of the resources that we had.”

Kate Byrne, VP Product Management with Symplectic, says: “We’ve found from working with organisations around the world that although many of them have data of this kind, it is very fragmented. And so one of the things we’ve been looking at most is to help some of our clients to start thinking about the early stages of that journey, because it’s very easy to look at our tools and go: isn’t this shiny, it would be great to do that. But actually stepping back and looking at where that data is going to come from, and how you’re going to work with your community to enable it, is part of the challenge,” Kate says.

Wellington’s university was up to the challenge. The small project team worked closely with school managers and technical managers, and with the assistance of internal champions, they gathered all the data needed, arranged for photos to be taken, and identified four categories under which equipment could be clustered: software, instruments, databases, and services.

While the data gathering and entry was painstaking, the result is that Wellington’s university now has a central register with 300 pieces of equipment on show both internally and externally. The portal is notable for its data completeness and quality, and provides images to help demonstrate its equipment and facilities. 

Te Herenga Waka now has a central register with 300 pieces of equipment on show both internally and externally.

Sheila says the university’s staff responded to the finished product straight away. “As soon as we launched it, we started to get interest from areas where they were seeing the potential in it. 

“There was one – the Rāpoi – it’s a sort of computer data hub, which is available for anybody across the university to use. It was great to get that on there. And as soon as that was available, the staff that were using it wanted to link to it from their personal profiles. So that was nice to see – that was our first proof that people are liking this, and now they can see how useful it is, that they can actually show the full capability of their expertise and the resources they have available to them.” 

The Rāpoi High Performance Computing Cluster is one of the key attractions of Te Herenga Waka’s new showcase of research equipment, capabilities and expertise.

Since its launch earlier this year, the Wellington university’s equipment website has also gathered interest from staff in New Zealand government departments, who are currently working on a solution for a National Research System. 

Symplectic has been with Te Herenga Waka every step of the way. “How Wellington’s university has approached this challenge and developed such a successful outcome should become a showcase example to other institutions. It’s been a pleasure working with them,” Kate says. 

Sheila Law, Research Information Systems Administrator with Te Herenga Waka’s Research Office.

Sheila says the University’s staff have been critical to the project’s success. “One of our key learnings is how we engage with staff to ensure they become the experts in using the system, and managing the categories and resources that they need to optimise the curation of research activities. 

“By providing ongoing support, we increase the understanding and build confidence in using the system, and we also seek and receive feedback from those users to understand how they’re using the system, and also how they would wish to use it.” 

The concept of “gathering together” – of information and the research community – is ongoing: 

“We continue to look at ways in which we can enhance our profiles further, to create a one-stop ecosystem of interconnected research activities. We continue to explore and test new functionality as it’s made available, to see how we can reduce manual effort, to curate a rich and versatile research ecosystem, to build our reporting capability, and to help researchers expand their research opportunities and find new collaborators.” 

For more information about Symplectic’s Discovery module and Public Profiles: www.symplectic.co.uk/public-profiles


]]>
How can central research facilities expand their role in the science community? https://www.digital-science.com/blog/2022/07/central-research-facilities-expand-in-the-science-community/ Wed, 27 Jul 2022 10:16:19 +0000 https://www.digital-science.com/?p=58557 Governments and research consortia can reap great benefits for the community and industry through large, shared research facilities.

The post How can central research facilities expand their role in the science community? appeared first on Digital Science.

]]>

Governments and research consortia can reap great benefits for the community and industry through large, shared research facilities and infrastructure.

What happens when experiments are too big and too expensive for a single university to run?

Some research efforts need to be conducted at a huge scale, drawing on multiple partner institutions; it’s not always feasible or advisable for one institution to be the sole focus of that work. Instead, large scientific instruments and experimental infrastructure are built and maintained at central facilities.

These advanced research tools range from underground labs at the bottom of mines, to free-electron lasers and particle accelerators – such as the Large Hadron Collider at CERN, Switzerland.

The Large Hadron Collider (LHC) is the world’s largest particle accelerator. It’s a prime example of a facility that fosters international scientific collaboration. Photograph: Dominguez, Daniel; Brice, Maximilien. Credit: CERN.

Due to their scale and cost, these facilities tend to be built and managed by government agencies or public research funders and made available to the national or international research communities. Such research facilities represent a major, long-term investment in a region’s research and innovation capability – from initial conception, to designing, building, maintaining, and upgrading facilities and their equipment, the commitment required from governments and their agencies can span decades of public expenditure.

For example, by the time Diamond – the UK’s national synchrotron light source – started accelerating electrons, it was one of the British Government’s largest capital expenditures on research and development in 40 years, costing £260 million (more than US $300 million in today’s currency). From the time of the initial report in the 1990s recommending its construction to the date of its first user beam in 2007, the synchrotron had taken 14 years to come to fruition.

Diamond Light Source is the United Kingdom’s national synchrotron. It was the largest science facility to be built in the UK for 40 years.

Why do governments and funders invest so much in central research facilities?

The discoveries made at central facilities would be impossible elsewhere. Experiments at such facilities employ highly specialised technical equipment and foster collaboration between leading experts, attracting national and international scientific talent. This combination of technical excellence with scientific expertise leads to novel results, the creation of new knowledge and innovation, and pushes the frontiers of many research fields. From an academic standpoint, it ensures high-impact publications and insights that can have a ripple effect through science and education for decades to come. For governments, industry and the community, such discoveries can lead to great economic and social outcomes.

How do researchers get to run experiments at central facilities?

Many facilities operate a user program; the “users” are often researchers based at universities or government agencies, visiting the facility for a few days or weeks to perform experiments that further their research. While at the facility they work with a second group of researchers and technical crew, employed by the facility to run the scientific instruments. The users and in-house researchers collaborate to run experiments and publish the results as co-authors.

How can Dimensions be used to evaluate user facilities?  

Dimensions is the world’s largest database of research information, including individual researchers and their institutions, their research grants, publications, datasets, patents – even clinical trials as a linked dataset. Using Dimensions, it’s possible to delve into the connections between researchers, including those working at central research facilities.

Here we explore the collaboration patterns of users and in-house researchers at ISIS Neutron and Muon Source in the UK. At ISIS, beams of neutrons or muons are used to study materials at the atomic scale – from cracks in wind turbine blades to the structures of viruses and the inner workings of lithium batteries.  

We’ve made this example a bit harder for ourselves because ISIS is not currently indexed in Dimensions as a research organisation; instead, in-house scientists appear affiliated with the Rutherford Appleton Laboratory (RAL), where ISIS is based. Therefore, to find articles that include experimental results from ISIS, we can devise a search string within Dimensions that returns articles mentioning ISIS in the full text. This search will, for example, look into each article’s methods section, where co-authors indicate that data was collected at ISIS. Sounds complex? Not really, because the information available to Dimensions is so deep that we’re able to conduct this search with relative ease.
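For readers curious what such a search string looks like: the Dimensions Analytics API exposes a query language that can search publication full text (the `full_data` field group). The sketch below simply constructs a query of that shape; the exact syntax and returned fields should be checked against the current API documentation.

```python
# Sketch of the kind of full-text query described above. The Dimensions
# Analytics API query language supports searching publication full text
# via "in full_data"; exact syntax should be verified against the current
# API documentation before use.

def full_text_query(phrase):
    """Build a Dimensions-style query for publications mentioning an exact phrase."""
    return (
        f'search publications in full_data for "\\"{phrase}\\"" '
        "return publications[id + title + year]"
    )

query = full_text_query("ISIS Neutron and Muon Source")
print(query)
```

The escaped inner quotes request an exact-phrase match rather than matching the individual words.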

Who gets to use these facilities, and how can Dimensions help?

Comparative analyses are always useful. Here (Figure 1) we’ve conducted an analysis comparing the use of ISIS Neutron and Muon Source by researchers at the 24 Russell Group universities (effectively the UK’s “Ivy League”) with that by all other UK universities. We can see which institutions’ researchers get to use ISIS by comparing the number of articles that mention the facility.

Figure 1: Number of articles that mention ISIS, authored by researchers from Russell Group universities versus non-Russell Group universities. Source: Dimensions.

We see that a Russell Group researcher is nearly three times more likely to be a co-author on a publication mentioning ISIS than a non-Russell Group researcher.
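The arithmetic behind a "times more likely" comparison is a simple normalisation: each group's count of ISIS-mentioning articles divided by the size of the group. The numbers below are invented purely for illustration, not taken from Figure 1:

```python
# Illustrative arithmetic behind a "times more likely" comparison:
# normalise each group's article count by its number of researchers.
# These numbers are made up for illustration only.

def rate(articles, researchers):
    """Articles mentioning the facility per researcher in the group."""
    return articles / researchers

russell = rate(articles=900, researchers=30_000)      # 0.03 per researcher
non_russell = rate(articles=500, researchers=50_000)  # 0.01 per researcher

print(round(russell / non_russell, 1))  # 3.0 — "three times more likely"
```

Comparing rates rather than raw counts matters because the two groups differ in size.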

Such an analysis could be used to identify opportunities to widen the user base of central facilities, ensuring that all eligible researchers – regardless of their home institution, country or research focus – have the opportunity to run experiments at those facilities.

Figure 2: Collaboration networks of publications that mention ISIS. Organisation (nodes) linked by co-authored articles that mention ISIS (lines with thickness proportional to number of articles). Source: Dimensions, using VOSviewer.

In this network diagram (Figure 2) we see which research organisations are collaborating with ISIS. Co-authorship is used as a proxy for collaboration; if researchers from two different organisations have co-authored an article, then that is considered a collaboration between their two organisations.
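The co-authorship rule in the paragraph above can be sketched directly: each article contributes one link between every pair of distinct author organisations, and edge weights count co-authored articles. The organisation lists here are placeholders, not data from Figure 2:

```python
from itertools import combinations
from collections import Counter

# Sketch of the co-authorship-as-collaboration rule described above:
# each article links every pair of distinct author organisations, and
# edge weights count co-authored articles. Input data is illustrative.

def collaboration_edges(articles):
    """Map (org_a, org_b) pairs to the number of co-authored articles."""
    edges = Counter()
    for orgs in articles:
        # sorted() gives each pair a canonical order; set() drops duplicates
        for a, b in combinations(sorted(set(orgs)), 2):
            edges[(a, b)] += 1
    return edges

articles = [
    ["RAL", "University of Oxford"],
    ["RAL", "University of Oxford", "Institut Laue-Langevin"],
    ["RAL", "Institut Laue-Langevin"],
]

edges = collaboration_edges(articles)
print(edges[("RAL", "University of Oxford")])  # 2
```

A tool such as VOSviewer then renders these weighted edges as the network diagram, with line thickness proportional to the article count.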

As we might expect, articles that mention ISIS are most often co-authored by teams of researchers from RAL and UK universities, usually from the Russell Group. However, this network diagram reveals a second set of collaborations between in-house researchers at ISIS (affiliated to RAL) and researchers at other neutron facilities across Europe and the US.

Similar analyses could be prepared showing collaborations between individual users, disciplines or countries. Dimensions’ full-text search means that analysis could also focus on a whole national laboratory site, such as RAL or Argonne National Laboratory, or on a single machine or facility.

What else can Dimensions show?

There are a host of questions that Dimensions can support governments and funders in answering, including:

  • Which research areas or emerging technologies are facility experiments contributing to?
    Value: To anticipate the current and future needs of the research community and inform beamline commissioning and future strategy.
  • Comparing facilities – do existing or future infrastructure plans copy, compete or complement similar facilities? Are there existing collaborations or opportunities to foster new collaborations?
    Value: To avoid duplication and needless expense, and maximise strategic expenditure.
  • Do participants of facilities’ training programs become facility users?
    Value: To see where training and professional development programs are demonstrating the most effectiveness, leading to return on investment.
  • Find experiment proposal reviewers with relevant expertise.
    Value: Identifying cross-collaboration opportunities and widening the pool of facility users.
  • Are funding agency award holders using facilities?
    Value: Identifying appropriate use of public expenditure and return on investment.

About Dimensions

Part of Digital Science, Dimensions is a modern, innovative, linked research data infrastructure and tool, re-imagining discovery and access to research: grants, publications, citations, clinical trials, patents and policy documents in one place. www.dimensions.ai 

Talk with our team about how Dimensions can support your research.

Alex Sinclair

About the Author

Alex Sinclair is a Senior Analyst in the Dimensions Government and Funder team.


]]>