Mark Hahnel Articles - TL;DR - Digital Science
https://www.digital-science.com/tldr/people/mark-hahnel/

From Gold to Diamond: Is Equitable Open Access Still a Mirage?
https://www.digital-science.com/tldr/article/from-gold-to-diamond-is-equitable-open-access-still-a-mirage/
Wed, 30 Apr 2025 09:45:00 +0000

Mark Hahnel once wrote that Open Access is an inevitability. But what is the data telling us? In this new post, Mark shares the latest numbers and his insights on the current state of play for OA.

The post From Gold to Diamond: Is Equitable Open Access Still a Mirage? appeared first on Digital Science.

A few years ago, I wrote that Open Access is an inevitability. And in many ways, the data supports that view—at least at a glance.

Gold Open Access—the model where authors (or their funders) pay article processing charges to make work freely available—has grown steadily over the past decade. But if you look closely, the growth is slowing. And what’s taking its place isn’t the altruistic, community-powered model many hoped for. Instead, Hybrid Open Access is filling the gap: a model where paywalled journals charge extra to make select articles open.

Diamond Open Access—where publishing is free for both authors and readers, supported by institutions rather than APCs—has been making headlines. But here’s the uncomfortable truth: we still don’t have enough data to know whether Diamond OA is actually growing or simply being talked about more.

A System Stuck in the Middle

Digging into Dimensions data layered on top of OpenAlex, a clear pattern emerges. Gold OA’s momentum is slowing. But instead of researchers turning to Green OA (self-archiving in repositories) or Diamond OA, many are choosing Hybrid OA instead.

Why?

Because researchers chase visibility, reputation, and prestige. That often means publishing in high-impact journals—many of which are owned by legacy publishers who now offer hybrid models. These options give authors a way to comply with funder mandates without sacrificing perceived academic clout.

Institutions and libraries, meanwhile, are under pressure to show open access progress. Hybrid OA, while expensive, is a politically safe way to do that. It’s the administrative equivalent of checking a box—even if it means paying twice.

Transformative agreements like Read and Publish deals have only accelerated this trend, redirecting subscription budgets to cover OA fees—effectively normalizing hybrid publishing in many disciplines.

A New Vision Emerges

But while the hybrid tide rises, a quiet revolution is underway.

In 2022, a coalition of organizations—Science Europe, cOAlition S, OPERAS, and ANR—launched a bold Action Plan to support Diamond OA. Their goal: to build a truly equitable, community-driven publishing ecosystem, where knowledge is a public good and the costs are shouldered collectively—not by individual researchers.

This vision took shape through the DIAMAS project and culminated in the creation of the Diamond OA Standard (DOAS). Think of DOAS as a blueprint: a framework to help Diamond journals measure, improve, and sustain quality.

It’s built on seven pillars:

  • Legal ownership, mission, and governance
  • Open science practices
  • Editorial management and research integrity
  • Technical service efficiency
  • Visibility and impact
  • Equity, diversity, inclusion, and multilingualism
  • Continuous improvement

Together, these components aim to professionalize Diamond OA without compromising its values. They send a clear message: if scholarly communication is a public good, then it must be shaped and governed by the scholarly community itself.

The Missing Link: Measurable Growth

Despite this momentum, the numbers tell a more sobering story.

Early data from OpenAlex suggests that Diamond OA has plateaued in terms of publication volume. Enthusiasm and infrastructure have grown, but the publication numbers don't reflect it.

Why the disconnect? Part of the issue is visibility. At Digital Science, we would love a better way to track Diamond Open Access growth. DOAJ lists 1,369 journals as being “without fees”, but there are many edge cases, and the landscape is far from black and white.

It is also true that many Diamond journals operate with limited marketing, uncertain technical infrastructure, and fragmented funding. And despite the ideals, many researchers still don’t see them as viable options for career advancement.

What Comes Next?

There are reasons for optimism. The DIAMAS project isn’t just advocating for Diamond OA—it’s building the scaffolding. Its service portal offers templates, best practices, and technical guidance to help journals align with DOAS standards and professionalize their operations. The European Diamond Capacity Hub (EDCH) serves as a coordination center for Diamond OA stakeholders in Europe. Launched alongside the EDCH, the ALMASI Project focuses on understanding non-profit OA publishing in Africa, Latin America, and Europe.

The pieces are falling into place for equitable open access solutions. What's needed now is adoption and measurement. Funders and institutions should include equitable OA in their promotion criteria. Researchers must see it as a credible home for their work. If anyone knows of faster ways to get clean data for Diamond OA, please reach out.

The path forward exists. But like any path, it only becomes clear by walking it. If you are working in this space and have this data, we would love to disseminate it through our tools at Digital Science.

The Perpetual Research Cycle: AI’s Journey Through Data, Papers, and Knowledge
https://www.digital-science.com/tldr/article/the-perpetual-research-cycle-ais-journey-through-data-papers-and-knowledge/
Fri, 21 Mar 2025 09:45:00 +0000

A "collaborative synergy" between AI and researchers "will define the next era of scientific progress", writes Mark Hahnel. How will AI enhance human intellect?

The post The Perpetual Research Cycle: AI’s Journey Through Data, Papers, and Knowledge appeared first on Digital Science.

Academics hypothesize, generate data, make sense of it, and then communicate it. If AI can help to generate, mine, and refine knowledge faster than human researchers, what does the future of academia look like? The answer lies not in replacing human intellect but in enhancing it, creating a collaborative synergy between AI and human researchers that will define the next era of scientific progress. I've been experimenting with ChatGPT, Google Gemini and Claude.ai to see how well each does at creating academic papers from datasets.

AI can also serve as a tool to aid humans in data extraction from many papers. Consider a scenario where AI synthesizes information from hundreds of studies to create a refined dataset. That dataset then feeds back into the system, sparking new research papers.

This cycle—dataset to paper, paper to knowledge extraction, knowledge to new datasets—propels an accelerating loop of discovery. Instead of a linear research pipeline, AI enables a continuous, self-improving knowledge ecosystem.

From data to papers

I looked for interesting datasets on Figshare. The criteria were (a) that I knew they would be reusable, as they had been cited several times, and (b) that the files were relatively small (<100MB), so as not to hit the upload limits of the common AI tools.

This one fit the bill:

Rivers, American (2019). American Rivers Dam Removal Database. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5234068.v12

From there I asked Claude 3.7 Sonnet: “Based on the attached files, can you create a full-length academic paper with an abstract, methods, results, discussion and references?”, followed by: “Can you convert the whole paper to LaTeX so I can copy and paste it into Overleaf?”

The resulting paper needs a little tweaking to the layout of the results and graphs, but otherwise the model did a great job.
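As a rough illustration of the screening step, here is a minimal Python sketch. The URL template and the `size` field (in bytes) follow Figshare's public API v2, but treat the specifics, and the mocked file metadata, as assumptions rather than a tested client:

```python
# Sketch of the dataset-screening step: keep only files small enough
# (<100 MB) to upload to common AI chat tools.
# URL template and "size" field follow Figshare's public API v2 (assumed).

FIGSHARE_ARTICLE_URL = "https://api.figshare.com/v2/articles/{article_id}"
MAX_BYTES = 100 * 1024 * 1024  # the rough ~100 MB limit mentioned above

def article_url(article_id: int) -> str:
    """Build the metadata URL for a Figshare article (e.g. 5234068)."""
    return FIGSHARE_ARTICLE_URL.format(article_id=article_id)

def usable_files(files, max_bytes=MAX_BYTES):
    """Filter a Figshare 'files' list down to entries under the size cap."""
    return [f for f in files if f.get("size", 0) < max_bytes]

# Mocked metadata; a real call would fetch
# requests.get(article_url(5234068)).json()["files"]
files = [
    {"name": "dam_removals.csv", "size": 3_200_000},   # ~3 MB: usable
    {"name": "raw_imagery.zip", "size": 450_000_000},  # ~450 MB: too big
]
small = usable_files(files)  # only the CSV survives the cut
```

A real script would then download each surviving file's `download_url` before handing it to the chat tool.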

Papers to new data/knowledge

A single paper is just the beginning. The real challenge is synthesizing knowledge from the ever-growing volume of research, and this is where specialized knowledge extraction tools such as ReadCube become crucial. ReadCube helps researchers manage and discover scholarly literature, but its real power lies in its knowledge extraction capabilities: think of it as a powerful filter, sifting through countless pages to extract the nuggets of wisdom.

Tools like ReadCube can then analyze vast collections of papers, uncovering patterns and relationships that human researchers might miss. This process involves:

  • Text and citation mining: AI can analyze papers to identify emerging trends, inconsistencies, or knowledge gaps.
  • Automatic synthesis: AI can compare findings across thousands of studies, synthesizing insights into new, high-level conclusions.
  • Hypothesis generation: By recognizing correlations between disparate research areas, AI can propose new research directions.

The Flywheel Effect: How the Cycle Accelerates

The true magic happens when this extracted knowledge becomes the input for the next iteration. Each cycle follows this pattern:

  1. Raw data is processed by AI to generate initial research outputs
  2. Knowledge extraction tools mine these outputs for higher-order insights
  3. These insights form a new, refined dataset
  4. AI processes this refined dataset, generating more precise analyses
  5. The cycle continues, with each rotation producing more valuable knowledge
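As a purely illustrative sketch, the five steps above can be expressed as a loop whose output feeds its next input. Every function here is a hypothetical stand-in using simple arithmetic, not real AI tooling:

```python
# Toy sketch of the five-step flywheel; all functions are stand-ins.

def generate_outputs(dataset):
    # Step 1: raw data -> initial research outputs.
    mean = sum(dataset) / len(dataset)
    return [f"finding: mean={mean:.2f}"]

def extract_insights(outputs):
    # Step 2: mine outputs for higher-order insights.
    return [o.replace("finding", "insight") for o in outputs]

def refine_dataset(dataset, insights):
    # Step 3: insights shape a new, refined dataset
    # (here: keep only above-average points).
    mean = sum(dataset) / len(dataset)
    refined = [x for x in dataset if x >= mean]
    return refined or dataset

def flywheel(raw_data, turns=3):
    # Steps 4-5: each rotation reprocesses the refined data.
    dataset = list(raw_data)
    all_insights = []
    for _ in range(turns):
        outputs = generate_outputs(dataset)
        all_insights.extend(extract_insights(outputs))
        dataset = refine_dataset(dataset, all_insights)
    return all_insights, dataset

insights, refined = flywheel([1, 2, 3, 4, 5], turns=2)
```

Each turn discards the "noise" identified by the previous turn, which is the sense in which later iterations work on more refined inputs.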


With each turn of this flywheel, the insights become more refined, more interconnected, and more actionable. The initial analyses might focus on direct correlations in the data, while later iterations can explore complex causal relationships, predict future trends, or suggest optimal intervention strategies.

This AI-driven, data-to-knowledge cycle represents a paradigm shift in research. Imagine the possibilities in fields like medicine, climate science, and economics. We’re moving towards a future where AI and human researchers work in synergy, pushing the boundaries of discovery. Rather than replacing researchers, AI acts as a force multiplier, enabling deeper, faster knowledge generation.

Presenting: Research Transformation: Change in the era of AI, open and impact
https://www.digital-science.com/tldr/article/presenting-research-transformation-change-in-ai-open-and-impact/
Mon, 28 Oct 2024 09:45:00 +0000

Mark Hahnel and Simon Porter introduce Digital Science's new report as part of our ongoing investigation into Research Transformation: Change in the era of AI, open and impact.

The post Presenting: Research Transformation: Change in the era of AI, open and impact appeared first on Digital Science.

Research Transformation: Change in the era of AI, open and impact.

As part of our ongoing investigation into Research Transformation, we are delighted to present a new report, Research Transformation: Change in the era of AI, open and impact.

Within the report, we sought to understand from our academic research community how research transformation is experienced across different roles and responsibilities. The report, which is a mixture of surveys and interviews across libraries, research offices, leadership and faculty, reflects transformations in the way we collaborate, assess, communicate, and conduct research.

The positions we hold towards these areas are not the same as those we held a decade or even five years ago. Each of these perspectives represents a shift in the way we perceive ourselves and the roles we play in the community. Although there is concern about the impact AI will have on our community, our ability to adapt and change is reflected strongly across all areas of research, including open access, metrics, collaboration and research security. That such a diverse community can continually adapt to change bodes well for our ability to respond to future challenges.

Key findings from the report:

  • Open research is transforming research, but barriers remain
  • Research metrics are evolving to emphasize holistic impact and inclusivity
  • AI’s transformative potential is huge, but bureaucracy and skill gaps threaten progress
  • Collaboration is booming, but concerns over funding and security are increasing
  • Security and risk management need a strategic and cultural overhaul

We do these kinds of surveys to understand where the research community is moving and how we can tweak and adapt our approach as a company. We were very grateful to the great minds who helped us out with a deep dive into what has affected their roles and will affect their roles going forward. Metrics, Open Research and AI are very aligned with the tools that we provide for academics, and the strategy we have to make research more inclusive, transparent and trustworthy.

Welcome to… Research Transformation!
https://www.digital-science.com/tldr/article/welcome-to-research-transformation/
Mon, 21 Oct 2024 13:15:18 +0000

Transformation via and within research is a constant in our lives. But with AI, we now stand at a point where research (and many other aspects of our working lives) will be transformed in a monumental way. As such, we are taking this moment to reflect on the activity of Research Transformation itself, and to celebrate the art of change. Our campaign will show how research data can be transformed into actionable insights, how the changing role of research is affecting those in both academia and industry, and explore innovative ways to make research more open, inclusive and collaborative for all – especially those beyond the walls of academia.

The post Welcome to… Research Transformation!  appeared first on Digital Science.


Open research is transforming the way research findings are discovered, shared and reproduced. As part of our commitment to the Open Principles and research transformation, we are looking into how open research is transforming roles, approaches, policies and, most importantly, mindsets for everyone across the research landscape. See our inspiring transformational stories so far.

Academia is at a pivotal juncture. It has often been criticized as slow to change, but external pressures from an increasingly complex world are forcing rapid change in the sector. To understand more about how the research world is transforming, what’s influencing change, and how roles are impacted, we reached out to the research community through a global survey and in-depth interviews.

Research Transformation stories so far…

  • Academic Survey Report Pre-registration
  • State of Open Data 2024 – Special Edition
  • Will 2025 be a turning point for Open Access?

How has innovation shaped Open Research? What does the future hold – especially with the impact of AI? Here’s Dan Valen speaking about Figshare’s key role, with innovation helping to transform the research landscape.

Digital Science has always understood its role as a community partner – working towards open research together. Here are some of the ways we have helped to transform research over the last 14 years.

In our first piece, Simon Porter and Mark Hahnel introduce the topic and detail the three areas the campaign will focus on.

  • Making data more usable
  • Opening up channels & the flow of information
  • Transforming data through innovation & AI
  • Maintaining trust & integrity
  • Seeing both perspectives
  • What success looks like for knowledge transfer
  • Evolving roles and the role of people in bridging gaps
  • Research Transformation White Paper
  • How have roles changed:
    • In Academia?
    • In Publishing?
    • In Industry?
  • State of AI Report
  • How are we using AI in our research workflows?

Research Transformation

The way we interact with information can amplify our ability to make connections, and in doing so transforms how we understand the world. Supercharged by the AI moment we are in, the steady march of digital transformation in society over the last three decades is primed for rapid evolution. What is true for society is doubly so for research. Alongside ground-breaking research and discoveries comes the constant invitation to adapt to new knowledge and abilities. Combine the research sector's general imperative to innovate with the rapidly evolving capabilities of generative AI, and it is safe to say that expectations are high. Taking effective advantage of new possibilities as they arise, however, requires successful coordination within society and systems.

There is an art to transformation, and understanding the mechanisms of transformation places us in the best position to take advantage of the opportunities ahead.

In this series, we specifically seek to explore Research Transformation with an eye to adapting what we already know to the present AI moment. Transformation in Research is not just about digital systems, but it is also about people and organisations – crossing boundaries from research to industry, emerging new research sectors, creating new narratives and adapting to the possibilities that change brings.

At Digital Science, we have always sought to be an integral part of research transformation, aiming to provide products that enable the research sector to evolve research practice – from collaboration and discovery through to analytics and administration. Our ability to serve clients from research institutions to funders, publishers, and industry has placed us in a unique position to facilitate change across the sector, not simply within silos, but between them. In this series, we will be drawing on our own experiences of research transformation, as well as inviting perspectives from the broader community. As we proceed we hope to show that Research Transformation isn’t just about careful planning, but requires a sense of playfulness – a willingness to explore new technology, a commitment to a broader vision for better research, as well as an ability to build new bridges between communities.

1. The story of research data transformation

In the first of three themes, we will cover Research Transformation from the perspective of the data and metadata of research. How do changes to the metadata of research transform our ability to make an impact, and to see the research community through new lenses? How does technology enable these changes to occur? Starting almost from the beginning, we will look at how transitions in publishing practice have made the diversity of the research workforce visible. We will also trace the evolving structure of a researcher's papers, from the critical use of identifiers, to the adoption of the CRediT ontology, through to the use of trust markers (including ethics statements, data and code availability statements, and conflict of interest statements). The evolving consensus on the structured and semi-structured nature of research articles changes not only the way we discover, read and trust individual research papers, but also transforms our ability to measure and manage research itself.

Our focus will not only be reflective, but will also look forward to the emerging challenges and opportunities that generative AI offers. We will ask deep questions about how research should make its way into large language models. We will also explore the new field of Forensic Scientometrics, which has arisen in response to the dramatic increase in bad-faith science enabled in part by generative AI, and the new research administration collaborations this implies – both with research institutions and across publishing. And we will offer more playful, experimental investigations. For example, a series on ‘prompt engineering for librarians’ draws on the original pioneering spirit of the 1970s MEDLARS analysts to explore the possibilities that tools such as those from OpenAI can offer.

2. The story of connection

Lifting up from the data, we note that a critical part of our experience of research transformation has been the ability to experience and connect with research from shifting perspectives. In this second theme, we aim to celebrate the art of making connections: from the personal transformations required to move from research institutions to industry, through to the art of building research platforms that support multiple sectors. We also cover familiar topics from new angles. For instance, how do the FAIR data principles benefit the pharmaceutical industry? How do we build effective research collaborations with emerging research sectors in Africa?

3. The story of research innovation

In our third theme, we will explore Research Transformation from the perspective of innovation, and how it has influenced the way research is conducted. Culminating in a Research Transformation White Paper, we will explore how roles have changed in academia, publishing, and industry. Within this broader context, we ask: how are we using AI in our research workflows, and how do we think we will be using it in years to come?

Of course, many of us in the Digital Science community have been engaging with different aspects of research transformation over many years. If you are keen to explore our thinking to date, one place that you might like to start is at our Research Transformation collection on Figshare. Here we have collated what we think are some of our most impactful contributions to Research Transformation so far. We are very much looking forward to reflecting on research transformation throughout the year. If you are interested in contributing, or just generally finding out more, why not get in touch?

The TL;DR on… ERROR
https://www.digital-science.com/tldr/article/tldr-error/
Wed, 25 Sep 2024 17:02:11 +0000

The post The TL;DR on… ERROR appeared first on Digital Science.

We love a good deep dive into the awkward challenges and innovative solutions transforming the world of academia and industry. In this article and in the full video interview, we’re discussing an interesting new initiative that’s been making waves in the research community: ERROR.

Inspired by bug bounty programs in the tech industry, ERROR offers financial rewards to those who identify and report errors in academic research. ERROR has the potential to revolutionize how we approach, among other things, research integrity and open research by incentivizing the thorough scrutiny of published research information and enhancing transparency.

I sat down with two other members of the TL;DR team, VP of Research Integrity Leslie McIntosh and VP of Open Research Mark Hahnel, to shed light on how ERROR can bolster trust and credibility in scientific findings, and explore how this initiative aligns with the principles of open research – and how all these things can drive a culture of collaboration and accountability. We also discussed the impact that ERROR could have on the research community and beyond.

ERROR is a brand-new initiative created to tackle errors in research publications through incentivized checking. The TL;DR team sat down to discuss what this means for the research community through the lenses of research integrity and open research. Watch the whole conversation on our YouTube channel: https://youtu.be/du6pEulN85o

Leslie’s perspective on ERROR

Leslie’s initial thoughts about ERROR were cautious, recognizing its potential to strengthen research integrity but also raising concerns about unintended consequences.

She noted that errors are an inherent part of the scientific process, and over-standardization might risk losing the exploratory nature of discovery. Drawing parallels to the food industry’s pursuit of efficiency leading to uniformity and loss of nutrients, Leslie suggested that aiming for perfection in science could overlook the value of learning from mistakes. She warned that emphasizing error correction too rigidly might diminish the broader mission of science – discovery and understanding.

Leslie: “Errors are part of science and part of the discovery… are we going so deep into science and saying that everything has to be perfect, that we’re losing the greater meaning of what it is to search for truth or discovery [or] understand that there’s learning in the errors that we have?”

Leslie also linked this discussion to open research. While open science encourages interpretation and influence from diverse participants, the public’s misunderstanding of scientific errors could weaponize these mistakes, undermining trust in research. She stressed that errors are an integral, even exciting, part of the scientific method and should be embraced rather than hidden.

Mark’s perspective on ERROR

Mark’s initial thoughts were more optimistic, especially within the context of open research.

Mark: “…one of the benefits of open research is we can move further faster and remove any barriers to building on top of the research that’s gone beforehand. And the most important thing you need is trust, [which] is more important than speed of publication, or how open it is, [or] the cost-effectiveness of the dissemination of that research.”

Mark also shared his excitement about innovation in the way we do research. He was particularly excited about ERROR’s approach to addressing the problem of peer review, as the initiative offers a new way of tackling longstanding issues in academia by bringing in more participants to scrutinize research.

He thought the introduction of financial incentives to encourage error reporting could lead to a more reliable research landscape.

“I think the payment for the work is the most interesting part for me, because when we look at academia and perverse incentives in general, I’m excited that academics who are often not paid for their work are being paid for their work in academic publishing.”

However, Mark’s optimism was not entirely without wariness. He shared Leslie’s caution about the incentives, warning of potential unintended outcomes. Financial rewards might encourage individuals to prioritize finding errors for profit rather than for the advancement of science, raising ethical concerns.

Ethical concerns with incentivization

Leslie expressed reservations about the terminology of “bounty hunters”, which she felt criminalizes those who make honest mistakes in science. She emphasized that errors are often unintentional.

Leslie: “It just makes me cringe… People who make honest errors are not criminals. That is part of science. So I really think that ethically when we are using a term like bounty hunters, it connotes a feeling of criminalization. And I think there are some ethical concerns there with doing that.”

Leslie’s ethical concerns extended to the global research ecosystem, noting that ERROR could disproportionately benefit well-funded researchers from the Global North, leaving under-resourced researchers at a disadvantage. She urged for more inclusive oversight and diversity in the initiative’s leadership to prevent inequities.

She also agreed with Mark about the importance of rewarding researchers for their contributions. Many researchers do unpaid labor in academia, and compensating them for their efforts could be a significant positive change.

Challenges of integrating ERROR with open research

ERROR is a promising initiative, but I wanted to hear about the challenges in integrating a system like this alongside existing open research practices, especially when open research itself is such a broad, global and culturally diverse endeavor.

Both Leslie and Mark emphasized the importance of ensuring that the system includes various research approaches from around the world.

Mark: “I for one think all peer review should be paid and that’s something that is relatively controversial in the conversations I have. What does it mean for financial incentivization in countries where the economics is so disparate?”

Mark extended this concept of inclusion to the application of artificial intelligence (AI), machine learning (ML) and large language models (LLMs) in research, noting that training these technologies requires access to diverse and accurate data. He warned that if certain research communities are excluded, their knowledge may not be reflected in the datasets used to build future AI research tools.

“What about the people who do not have access to this and therefore their content doesn’t get included in the large language models, and doesn’t go on to form new knowledge?”

He also expressed excitement about the potential for ERROR to enhance research integrity in AI and ML development. He highlighted the need for robust and diverse data, emphasizing that machines need both accurate and erroneous data to learn effectively. This approach could ultimately improve the quality of research content, making it more trustworthy for both human and machine use.

Improving research tools and integrity

Given the challenges within research and the current limitations of tools like ERROR, I asked Leslie what she would like to see in the development of these and other research tools, especially within the area of research integrity. She took the opportunity to reflect on the joy of errors and failure in science.

Leslie: “If you go back to Alexander Fleming’s paper on penicillin and read that, it is a story. It is a story of the errors that he had… And those errors were part of or are part of that seminal paper. It’s incredible, so why not celebrate the errors and put those as part of the paper, talk about [how] ‘we tried this, and you know what, the refrigerator went out during this time, and what we learned from the refrigerator going out is that the bug still grew’, or whatever it was.

“You need those errors in order to learn from the errors, meaning you need those captured, so that you can learn what is and what is not contributing to that overall goal and why it isn’t. So we actually need more of the information of how things went wrong.”

I also asked Mark what improvements he would like to see from tools like ERROR from the open research perspective. He emphasized the need for better metadata in research publishing, especially in the context of open data. Drawing parallels to the open-source software world, where detailed documentation helps others build on existing work, he suggested that improving how researchers describe their data could enhance collaboration.

Mark also feels that the development of a tool like ERROR highlights other challenges in the way we are currently publishing research, such as deeper issues with peer review, or incentives for scholarly publishing.

Mark: “…the incentive structure of only publishing novel research in certain journals builds into that idea that you’re not going to publish your null data, because it’s not novel and the incentive structure isn’t there. So as I said, could talk for hours about why I’m excited about it, but I think the ERROR review team have a lot of things to unpack.”

Future of research integrity and open research

What do Leslie and Mark want the research community to take away from this discussion on error reporting and its impact on research integrity and open research?

Leslie wants to shine a light on science communication and its role in helping the public to understand what ERROR represents, and how it fits into the scientific ecosystem.

Leslie: “…one of the ways in which science is being weaponized is to say peer review is dead. You start breaking apart one of the scaffolds of trust that we have within science… So I think that the science communicators here are very important in the narrative of what this is, what it isn’t, and what science is.”

Both Leslie and Mark agreed that while ERROR presents exciting possibilities, scaling the initiative remains a challenge. Mark raised questions about how ERROR could expand beyond its current scope, with only 250 papers reviewed over four years and each successful error detection earning a financial reward. Considering the millions of papers published annually, it is unclear how ERROR can be scaled globally and become a sustainable solution.

Mark: “…my biggest concern about this is, how does it scale? A thousand francs a pop, it’s 250 papers. There [were] two million papers [published] last year. Who’s going to pay for that? How do you make this global? How do you make this all-encompassing?”

Conclusion

It is clear from our discussion that ERROR represents a significant step forward: an experiment in enhancing both research integrity and open research through an incentivised bug-hunting system.

Leslie has highlighted how the initiative can act as a robust safeguard, ensuring that research findings are more thoroughly vetted and reliable, but she reminds us that we need to be inclusive in this approach. Mark has emphasized the potential of a tool like this to make publication processes more efficient – and even, finally, to reward researchers for the additional work they do – but he wonders how it can scale up to foster a more transparent, collaborative research environment in line with the ethos of open research.

Leslie and Mark’s comments are certainly timely, given that the theme of Digital Science’s 2024 Catalyst Grant program is innovation for research integrity. You can find out more about how different segments of research can and should be contributing to this space by reading our TL;DR article on it here.

We look forward to exploring more innovations and initiatives that are going to shape – or shatter – the future of academia, so if you’d like to suggest a topic we should be discussing, please let us know.

The post The TL;DR on… ERROR appeared first on Digital Science.

]]>
The next serendipitous paradigm shift for drug discovery https://www.digital-science.com/tldr/article/the-next-serendipitous-paradigm-shift-for-drug-discovery/ Thu, 08 Aug 2024 10:20:59 +0000 https://www.digital-science.com/?post_type=tldr_article&p=72854 AI, federated learning and vast swathes of research data available at our fingertips represent paradigm shifts for drug discovery. Our VP Open Research, Dr Mark Hahnel, discusses the serendipity and the science.

The post The next serendipitous paradigm shift for drug discovery appeared first on Digital Science.

]]>
If we were living in a simulation, then in order for humanity to continue its drive towards longer, happier lives, something drastic should happen every now and then: a serendipitous paradigm shift at the most desperate time. The next paradigm shift is AI. AI may be the technological Shangri-La we were crying out for to stop the heating of the planet and, ultimately, the end of humanity. The same may be true of drug discovery: the way in which we find and create new drugs may be about to transform forever.

Drug discovery has come a long way. It started with natural remedies and saw landmark serendipitous discoveries like penicillin in 1928. The mid-20th century introduced rational drug design, targeting specific biological mechanisms. Advances in genomics, high-throughput screening, and computational methods have further accelerated drug development, transforming modern medicine. However, despite these advances, fewer than 10% of drug candidates succeed in clinical trials (Thomas, D. et al. Clinical Development Success Rates and Contributing Factors 2011–2020 (BIO, QLS & Informa, 2021)). Challenges like pharmacokinetics and the complexity of diseases hamper progress. While we no longer fear smallpox or polio and have effective treatments for bacterial infections and Hepatitis C, today’s most damaging diseases are complex and hard to treat due to our limited understanding of their mechanisms.

Nature 627, S2-S5 (2024) https://doi.org/10.1038/d41586-024-00753-x

Cue paradigm shift. DeepMind’s AlphaFold has revolutionized biology by accurately predicting protein structures, a task crucial for understanding biological functions and disease mechanisms. The economics of DeepMind’s approach also produce some mind-blowing figures. The estimated replacement cost of the current Protein Data Bank archival contents (the dataset from which the AlphaFold models were built) exceeds US$20 billion (assuming an average cost of US$100,000 for regenerating each of the >200,000 experimental structures). AlphaFold has subsequently generated a database of more than 200 million structures. Some back-of-the-envelope maths suggests that producing these with the original experimental methods would have cost around US$20,000,000,000,000.
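The back-of-the-envelope maths is easy to check, assuming the US$100,000-per-structure figure quoted above:

```python
# Sanity check of the back-of-the-envelope figures above, assuming the
# stated average of US$100,000 per experimentally determined structure.
COST_PER_STRUCTURE = 100_000          # US$, assumption from the text

pdb_structures = 200_000              # >200,000 experimental PDB structures
alphafold_structures = 200_000_000    # >200 million AlphaFold predictions

pdb_replacement_cost = pdb_structures * COST_PER_STRUCTURE
alphafold_equivalent_cost = alphafold_structures * COST_PER_STRUCTURE

print(f"PDB replacement cost:      ${pdb_replacement_cost:,}")       # $20,000,000,000
print(f"AlphaFold equivalent cost: ${alphafold_equivalent_cost:,}")  # $20,000,000,000,000
```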

Number of protein structures in AlphaFold. Credit: DeepMind

Of course, there are many simultaneous attempts to move the research needle using AI. A team from AI pharma startup Insilico Medicine, working with researchers at the University of Toronto, took 21 days to create 30,000 designs for molecules that target a protein linked with fibrosis (tissue scarring). They synthesized six of these molecules in the lab and then tested two in cells; the most promising one was tested in mice. The researchers concluded it was potent against the protein and showed “drug-like” qualities. All in all, the process took just 46 days. Scottish spinout Exscientia has developed a clinical pipeline for AI-designed drug candidates.

Not only does the platform generate highly optimized molecules that meet the multiple pharmacology criteria required to enter a compound into a clinical trial, it does so on revolutionary timescales, cutting the industry-average timeline from 4.5 years to just 12 to 15 months. These companies have the technical know-how to build the models, and most likely some internal data with which to train them. But they need more.

The Power of Existing Data

Platforms like the Dimensions Knowledge Graph, powered by metaphactory, demonstrate the potential of structured data. With over 32 billion statements, it delivers insights derived from global research and public datasets. Connecting internal knowledge with such vast external data provides a trustworthy, explainable layer for AI algorithms, enhancing their application across the pharma value chain.

Knowledge democratization bridges the gaps in the pharma value chain. Credit: metaphacts

AI is not all there is to be excited about in drug discovery. A further technological, serendipitous paradigm shift could amplify the results of AI alone. Once trained, machine-learning models can be updated as and when more data become available. With ‘federated learning’, separate parties update a shared model without sharing the underlying data: instead of sending data to a central server, each party sends only its model updates (e.g. weight changes). This allows organizations to collaborate – pooling the benefit of diverse datasets while maintaining privacy – and further reduces the time and cost of the drug discovery process by improving predictive models without leaking privately held company datasets. Public data can augment the local datasets held in corporate R&D departments, enriching the training process; public data with similar characteristics help build more comprehensive models. This is why we need more, better-described open academic research data.
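The mechanics described above can be sketched in a few lines. This is a toy illustration of the federated-averaging idea – the one-parameter linear model and all names are invented for illustration; real systems use dedicated frameworks and far richer models:

```python
# Minimal federated-averaging sketch: each party trains locally on private
# data and shares only model weights with the server, never the data itself.

def local_update(weights, data, lr=0.1):
    """One pass of gradient steps for a 1-D linear model y = w*x on private data."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)**2
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """Clients update the shared model locally; the server averages the results."""
    client_weights = [local_update(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Two "companies" with private datasets, both drawn from y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(1.5, 4.5), (0.5, 1.5)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # → 3.0, recovered without any raw data leaving a client
```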

Pharmaceutical companies should be engaging further both with open academic data aggregators – to help improve metadata quality – and with highly curated linked datasets like those supported by the Dimensions Knowledge Graph and metaphactory. The limiting factor is not AI capability; it is the amount of high-quality, well-described data that can be incorporated into the models. They need to:

  1. Acquire: Gather data from diverse sources, including internal and external datasets. Make use of federated learning.
  2. Enhance: Enrich data with metadata and standardized formats to improve utility and interoperability.
  3. Analyse: Use new models to identify patterns, trends and drug candidates.
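As a rough illustration of the ‘Enhance’ step, records from different silos can be mapped onto one standardised set of fields. The field names and aliases here are hypothetical, not any real metadata standard:

```python
# Toy metadata-enrichment step: map heterogeneous source records onto a
# standardised schema so datasets from different silos become interoperable.
# Field names and aliases are illustrative only.

REQUIRED_FIELDS = ("title", "license", "doi", "format")

def enhance(record, defaults=None):
    """Return a record with standardised keys, filling gaps from defaults."""
    defaults = defaults or {}
    aliases = {"name": "title", "identifier": "doi", "licence": "license"}
    clean = {}
    for key, value in record.items():
        clean[aliases.get(key.lower(), key.lower())] = value
    for field in REQUIRED_FIELDS:          # fill known gaps from defaults
        if field not in clean and field in defaults:
            clean[field] = defaults[field]
    clean["missing_fields"] = [f for f in REQUIRED_FIELDS if f not in clean]
    return clean

raw = {"Name": "Assay results", "Identifier": "10.1234/abc", "Format": "csv"}
print(enhance(raw, defaults={"license": "CC-BY-4.0"}))
```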

You may be thinking that this isn’t a serendipitous leap – that it is the fruition of decades of research bringing us to a point where these technologies can be applied. You may be right. Either way, the timing of these paradigm-changing tools does feel serendipitous. Without AI and federated learning, we could not tackle today’s complex diseases so efficiently. There is a long way to go, but by continuing to curate and build on top of academic data, we can push the boundaries of what’s possible in modern medicine.

This is part of a Digital Science series on research transformation. Learn about how we’re tracking transformation here.


]]>
Evolving roles in academia https://www.digital-science.com/tldr/article/evolving-roles-in-academia/ Mon, 24 Jun 2024 14:12:03 +0000 https://www.digital-science.com/?post_type=tldr_article&p=72302 There are many different roles in a university, not just researchers. If you have one of these roles, we’d love to hear from you. How did you get there? How is it changing? As part of our commitment to the academic community, we have created a global survey, designed to assess the evolving roles in academia, the challenges on the horizon, and the impact of technology and other external forces on these changes.

The post Evolving roles in academia appeared first on Digital Science.

]]>

At Digital Science, our vision is to create a future where a trusted and collaborative research ecosystem drives progress for all. But academia is a village. There are many different roles in a university, not just researchers. If you have one of these roles, we’d love to hear from you. How did you get there? How is it changing? The skillsets of librarians today are vastly different to the skillsets needed pre-web. University Deans are the equivalent of CEOs of enormous companies.

As part of our commitment to the academic community, we have created a global survey, designed to assess the evolving roles in academia, the challenges on the horizon, and the impact of technology and other external forces on these changes.

The survey is aimed at non-research academic staff. Who we would love to hear from is listed below, and you can find the survey here. We would love to hear your thoughts, and we really appreciate your time.

Academia is at a pivotal juncture. The traditional roles and structures within universities and research institutions are transforming rapidly. Factors such as technological advancements, funding dynamics, and societal expectations are driving this change. To understand these shifts better and anticipate future trends, gathering insights directly from those at the heart of academia is crucial.

Our survey aims to capture a comprehensive picture of these evolving roles. We are interested in learning about your experiences, perspectives, and predictions. By contributing your voice, you can help us identify the key areas that need attention and innovation.

What to expect from the survey

The survey will take approximately 15 minutes to complete. It covers a range of topics, including:

  • Current Roles and Responsibilities: How have your roles and responsibilities evolved in recent years? What new roles have emerged, and what traditional roles are becoming obsolete?
  • Challenges and Opportunities: What are the most significant challenges you face in your role? What opportunities do you see for growth and development?
  • Impact of Technology: How is technology affecting your work? Which tools and platforms are you using, and how effective are they?
  • Future Predictions: What changes do you foresee in the next five to ten years? How do you think your role and the broader academic landscape will evolve?

Your responses will be anonymized and aggregated to ensure privacy and confidentiality. The findings from this survey will be shared with the academic community, providing valuable insights that can inform policy, practice, and innovation.

How to participate

Participating in our survey is easy and can be done at your convenience. Click here to start the survey. Your input is invaluable, and we greatly appreciate the time and effort you take to share your experiences and insights.

Thank you for considering our invitation. We look forward to your participation and to sharing the insights we gain with the academic community. Together, we can drive progress and create a future where research thrives.


]]>
TL;DR Shorts: George Dyson on Fostering Innovation https://www.digital-science.com/tldr/article/tldr-shorts-george-dyson-on-fostering-innovation/ Tue, 28 May 2024 10:42:01 +0000 https://www.digital-science.com/?post_type=tldr_article&p=71862 Today's TL;DR Shorts episode with George Dyson is all about how we can foster innovation and nurture it when lightning strikes. George believes that research group sizes reach a critical mass beyond which fast innovation is hampered by bureaucracy. George feels that more can be done to support research transformation by other actors in the research landscape, including governments and funders.

The post TL;DR Shorts: George Dyson on Fostering Innovation appeared first on Digital Science.

]]>
Happy TL;DR Tuesday! Today’s TL;DR Shorts is from George Dyson. George is an American historian of technology and science. He is widely recognized for his work in the history of computing and his exploration of the intersection between science, technology, and society. He has bridged the gap between practical engineering and academic scholarship.

George talks of his interest in innovation and how it works smoothly and productively in small groups before bureaucracy can take hold. However, once group sizes reach a critical mass of around 50, the progress of innovators is hampered by processes and admin. This echoes the thoughts of Clayton M. Christensen in his seminal book “The Innovator’s Dilemma”. It seems that research projects, big business and even grant funding bodies struggle with capturing innovation when lightning seems to strike randomly.

George Dyson discusses the challenges of group size and the availability of governmental support for novel innovations – watch this and other videos on Digital Science’s YouTube channel: https://youtu.be/nAJzx0jSOFM

 “You don’t know where innovation is going to be, you just want to be ready to support it when it happens – and recognise it, and that is a very difficult problem.”

Subscribe now to be notified of each weekly release of the latest TL;DR Short, and watch the entire series here

If you’d like to suggest future contributors for our series or suggest some topics you’d like us to cover, drop Suze a message on one of our social media channels and use the hashtag #TLDRShorts.


]]>
Catalysing Change, Embracing Open Research – meet Dr Niamh O’Connor https://www.digital-science.com/tldr/article/meet-dr-niamh-oconnor/ Tue, 07 May 2024 10:30:00 +0000 https://www.digital-science.com/?post_type=tldr_article&p=71730 In this month's Speaker Series episode, Dr Suze Kundu meets Dr Niamh O’Connor, Chief Publishing Officer at PLOS. Niamh and Suze chat about their journeys from chemistry to supporting the research community, and how we can accelerate research culture towards more open ways of working.

The post Catalysing Change, Embracing Open Research – meet Dr Niamh O’Connor appeared first on Digital Science.

]]>
It is time for another TL;DR Tuesday and, as it’s the first Tuesday of the month, we’re excited to share another Speaker Series interview. The series is an opportunity for us to meet and have conversations with people who are advocating for different and often better ways of doing research, and this episode’s guest is no exception. So, grab your drink and snack of choice and settle in to hear a half-hour chat with Dr Niamh O’Connor, a chemist, a sci-fi book nerd, and an open research expert in scholarly communications and publishing.

The recent announcement of the Barcelona Declaration has once again put the focus firmly on the importance of open research – a development that Digital Science has welcomed. Academic publishers are among those making progress on the open research journey.

As Chief Publishing Officer at PLOS, Niamh provides business leadership for the entire PLOS portfolio, ensuring that all outputs have a strong value proposition and advance PLOS’s vision and mission. A pivotal figure in academic publishing, Niamh talks with Dr Suze Kundu about the evolving landscape of scientific research and the push towards open science, including her journey from the early days of advocating for public access to research, to tackling current challenges like making science more inclusive and accessible.

Dr Suze Kundu sits down with Dr Niamh O’Connor to talk about the challenges, successes and opportunities in open research. See the full interview here: https://youtu.be/DWiFXAb1tMo

Academic Adjacent Research Roles

Though Niamh and Suze both started their careers as chemists, they are just two of the many thousands of people whose career paths have taken them out of academic research and into academic-adjacent roles. Such roles are vital in supporting and enabling research as we know it. However, before moving into such roles, Niamh and Suze – like many thousands of others – knew very little about this range of alternative academic careers that are crucial to supporting the research ecosystem.

Niamh tells us how important it is for researchers to be exposed to this range of different careers at the graduate level. Without good people who understand the challenges and opportunities of all segments of the profession, the whole endeavour of research could be under threat. Niamh also discusses the persistent systemic barriers that hinder equal participation in science and other research. This includes the disparities in accessing research information, which particularly affects those outside of major academic institutions.

Addressing Systemic Barriers

The academic publishing industry isn’t very diverse, and progress towards better societal representation is slow. It used to be the case that there were very few women in leadership roles; this is changing, and that progress should be acknowledged and celebrated. Inequality is a systemic issue, and access to a career in research is still very skewed, not least in socio-economic terms – there are many financial barriers to navigate when pursuing a career in research.

Niamh calls for a shift in academic publishing to reflect a more diverse workforce and audience, emphasising that representation matters not just at the board level but in every aspect of academic engagement. Niamh shares one particularly persistent sentiment that she often heard from members of the community: “There is nobody like me. I don’t see myself on your board, therefore I feel that my research isn’t welcome.” It is hard to recruit into and retain people in a profession that upholds a culture within which they feel they don’t belong.

Whilst Niamh acknowledges that societal representation and diversity have improved within the research profession during her career, the systemic barriers to entry for curious minds around the globe mean that we still have a long way to go, not only in improving the scope of career paths available, but also in enabling individuals from different backgrounds to enter and thrive within the research and academic publishing industries. Niamh makes the point that although we have seen better representation of underserved populations at the top of the academic publishing career hierarchy, in some pockets of our community there still seems to be a limit as to how much diversity is too much diversity.

Advocating for Open Access

Niamh embarked on her career driven by the belief that publicly funded research should be accessible to all. Her efforts in the early days of PLOS were not just about providing access to research information, but also about maintaining the quality and integrity of scientific experiments, setting a precedent that would challenge the status quo of academic publishing. Niamh reflects on how the dissemination of academic content has moved over time from an open model to a more restricted publishing model.

PLOS has been a leader in trying to pull us back to an open model, but has encountered unease about cultural change in academia. Niamh believes that the incentive system really does stifle those who want to drive change in research practices, and she sees the inertia within the established systems of scholarly publishing as a significant challenge. The reluctance to abandon old practices is pervasive, and she stresses the importance of creating incentives for adopting new, open methods of publishing and research evaluation – whilst not forgetting the importance of local contexts in understanding and removing barriers to open science. This is a change that she believes would significantly democratise science, making it a more intuitive and accessible venture for everyone.

The Future of Open Research

Niamh sees technology as a crucial ally in advancing open research. She wants to see us move away from the “version of record” concept to a more dynamic and updated format of scholarly communication. The culture within which we work does not lead researchers to work in a way that is collaborative. Open research and technology can help move us further, faster. We often try to use technology to duplicate the analogue version we have, but it also limits us in how we think research should be disseminated. We should look at how we version and release knowledge to the world. Technology has done so much already. It will do so much more. Niamh believes that the changes needed to drive technological change are societal. Business models and incentive systems need to be tweaked and redirected.

Through her leadership and insight, Dr Niamh O’Connor continues to champion the transformation of scientific research into a more open, equitable, and community-focused endeavour. Her commitment to breaking down barriers and envisioning a more inclusive future provides a hopeful outlook for the next generation of scientists and researchers. She reminds us not to lose faith, as change is happening, albeit a little more slowly than we would like to see and feel. However, as long as the academic publishing industry has leaders like Niamh, there is a lot to be hopeful for.

Check out our Speaker Series playlist on YouTube which includes chats with some of our previous speakers, as well as our TL;DR Shorts playlist with short, snappy insights from a range of experts on the topics that matter to the research community.

With thanks to Huw James from Science Story Lab for filming and co-producing this interview. Thanks also to our hosts, Locke at East Side Gallery, Berlin, Germany, and to Niamh for her time.


]]>
Open Access: Mo money, mo problems https://www.digital-science.com/tldr/article/open-access-mo-money-mo-problems/ Wed, 24 Apr 2024 10:31:41 +0000 https://www.digital-science.com/?post_type=tldr_article&p=71227 Can we transform the Open Access Pathway? And who is already showing us how it could be done? Mark Hahnel explores the options - and consequences.

The post Open Access: Mo money, mo problems appeared first on Digital Science.

]]>
At the start of my PhD in Stem Cell Biology, I was not aware of Open Access, despite Open Access journals having existed since the late 1980s. arXiv came along in the early ’90s, followed by PubMed Central and the first commercial OA journal, BioMed Central, in the late ’90s. PLOS launched in 2001, but it wasn’t until PLOS ONE that the conversation sparked in my lab. I was introduced to Open Access publishing not because of the ideal of access for all, but because PLOS ONE does not use the perceived importance of a paper as a criterion for acceptance or rejection – and therefore our lab could publish lots of our research there. Perfect in the publish-or-perish world of academia.

So it is truly remarkable that just a decade later, the majority of publications were Open Access. The idea that research is akin to a large ship that is slow to change course appears to be outdated.

Number of Open Access Publications per year

Credit: Dimensions.ai

More recently, eLife has taken a huge leap in reviewing its model of publication, becoming a hybrid pre- and post-peer-review publishing house.

“We will publish every paper we send out for review as a Reviewed Preprint, a journal-style paper containing the authors’ manuscript, the eLife assessment, and the individual public peer reviews.” 

This is pretty much a professionalised version of Stevan Harnad’s Subversive Proposal, which is now nearly 30 years old and called for preprints on FTP servers as a way to create global Open Access. The difference is that the time is right. eLife has the advantage of online-only infrastructure and consumers; the tools that allow eLife to thrive were not available 30 years ago. So, 30 years on, was Harnad’s proposal right, and does eLife finally have the answer?

Trust comes first 

Recently, I have experimented with visualising optimal academic dissemination. I soon realised my scoring system below was flawed – as trust in research trumps the content being open, quickly disseminated or cost effective. What it did highlight is that newer types of content, such as data or code publishing, benefit from not having legacy workflows, sustainability models or the concept of prestige. The cost and complexity to make data publishing ‘trusted’ is orders of magnitude less than to make traditional paper publication fast, open or cost effective.

Credit: Mark Hahnel.
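One way to express the point that trust trumps openness, speed and cost is to gate a score on trust rather than weighting all criteria equally. This is a hypothetical re-scoring to illustrate the idea, not the original system from the figure:

```python
# Hypothetical dissemination scoring: openness, speed and cost-effectiveness
# only count once a trust threshold is met -- trust gates the score rather
# than being just another equally weighted criterion.

def dissemination_score(trust, openness, speed, cost_effectiveness,
                        trust_threshold=0.5):
    """Score a dissemination channel on a 0-1 scale; all inputs are 0-1."""
    if trust < trust_threshold:
        return 0.0  # untrusted output scores nothing, however open or fast
    return trust * (openness + speed + cost_effectiveness) / 3

# Fast, open, cheap -- but untrusted -- content scores zero:
print(dissemination_score(trust=0.2, openness=1.0, speed=1.0,
                          cost_effectiveness=1.0))               # → 0.0
# A trusted, open, quickly disseminated channel scores well:
print(round(dissemination_score(trust=0.9, openness=1.0, speed=0.9,
                                cost_effectiveness=0.8), 2))     # → 0.81
```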

Herein lies the problem, though. Treating each of these issues as equal can lead to the propagation of non-trustworthy and even false research (sometimes with an agenda). Trust needs to come first. At UKSG this month, Chris Bennett of Cambridge University Press & Assessment highlighted what has happened in attempting to fix scholarly publishing by ‘solving for x’, where x is ‘open’.

Open Access publishing is complex, partly because of what it has become: a business model. If we solve for x, where x is maximising article processing charges (APCs), we see an explosion of content, much of it in the form of special issues (invited papers). I would make the case that we are currently publishing too many papers.

Credit: Hanson et al.

Gates-keepers?

So perhaps the answer is to move all of the publishing to the other side of peer review and transform the way we talk about the Preliminary Scholarly Record and the Trusted Scholarly Record. The Bill & Melinda Gates Foundation has updated its Open Access policy from January 2025 in a way that lends itself to this transformation:

  • “Requiring preprints and encouraging preprint review to make research publicly available when it’s ready. While researchers and authors can continue to publish in their journal of choice, preprints will help prioritize access to the research itself as opposed to access to a particular journal.
  • Discontinuing publishing fees, such as APCs. By discontinuing to support these fees, we can work to address inequities in current publishing models and reinvest the funds elsewhere.
  • We will work to support an Open Access system and infrastructure that ensures articles and data are readily available to a wider range of audiences.”

In my opinion, the Gates Foundation is taking us in the right direction; eLife is already there. The problem lies in the murky middle. Preprints shouldn’t be treated as academic fact by the general public and news media. Researchers can make their own decisions on whose research in their field they trust, and why, as has been the case with arXiv for years.

If we compare the path we have been on with regard to Open Access with some of the newer experimentation, a proposed transformative pathway emerges. Currently, we do not have a preprint for every paper. The Scholarly Record is made up of preprints, and author-instigated and publisher-instigated peer-reviewed publications. Not all peer-reviewed publications are Open Access.

Credit: Mark Hahnel.

If all papers were published as preprints before being submitted for peer review, unchecked research being picked up by news media or conspiracy theorists could become a problem. Therefore, community-wide standards around the presentation of, and language describing, papers pre- and post-peer review should be established. We should have a ‘Preliminary Scholarly Record’ and a ‘Trusted Scholarly Record’. This author argues that a transformation of the Open Access pathway would bring many benefits compared to the existing model. This is very much in line with the proposed “Plan U”: “If all funding agencies were to mandate posting of preprints by grantees—an approach we term Plan U (for “universal”)—free access to the world’s scientific output for everyone would be achieved with minimal effort” (Sever et al.).

Credit: Mark Hahnel.

A continuation of the existing Open Access Pathway | Transformation of the Open Access Pathway
Pay to open | Open in the preprint/green form by default
Major cost is at the publication level | Major cost is at the peer review level
Indeterminate innovation in the speed of research dissemination | Research disseminated by default, but with the caveat that misleading, opinionated and unfactual research is published fast
Large corpus of trustworthy research | Large corpus of trustworthy research
Ambiguity for the general public as to what is rigorous, peer reviewed research | Clear delineation between preliminary, unchecked research and what is rigorous, peer reviewed research

Automatic for the people

The transformative approach also opens up other areas in which to focus innovation. I want to fix peer review. This is the area that I think can have the most impact on academic publishing in the next 20 years that is not already being aggressively pursued (e.g. data publishing). By establishing transparent and open quality checking of academic content, both humans and machines will be able to distinguish genuine research from fake, embellished or exaggerated findings. Automating trust markers can contribute to this bigger picture.

Society will advance at a greater rate if academic research is published faster (perhaps even if fewer papers were published, faster). Every current model falls down at the point of peer review. Peer review is largely unpaid, and its incentive structures mean that senior researchers often pass the work on to those less experienced; the benefits to the reviewer themselves are a grey area, if they exist at all. As such, peer review is slow and a burden. This means that we cannot achieve the Shangri-La of “trusted, fast academic publishing” without overhauling peer review. My desire for the next transformation in Open Access is to solve peer review. You could argue that this will just make things even more complex. However, going back to my Optimal Academic Publishing Model, I argue that it is far more cost-effective and less complex to add trust to the existing open and quickly disseminated preprints than the alternative: to reverse-engineer openness and fast dissemination into the closed-access publishing model.

Open Access has been transformative. That transformation cannot stop here. The job is half done.


References

Hanson MA, Barreiro PG, Crosetto P, Brockington D (2023) The strain on scientific publishing. arXiv https://doi.org/10.48550/arXiv.2309.15884

Sever R, Eisen M, Inglis J (2019) Plan U: Universal access to scientific and medical research via funder preprint mandates. PLoS Biol 17(6): e3000273. https://doi.org/10.1371/journal.pbio.3000273

The post Open Access: Mo money, mo problems appeared first on Digital Science.