Artificial Intelligence (AI) Articles - TL;DR - Digital Science
https://www.digital-science.com/tldr/articles/topics/ai/

The Perpetual Research Cycle: AI's Journey Through Data, Papers, and Knowledge
https://www.digital-science.com/tldr/article/the-perpetual-research-cycle-ais-journey-through-data-papers-and-knowledge/ | Fri, 21 Mar 2025
A "collaborative synergy" between AI and researchers "will define the next era of scientific progress", writes Mark Hahnel. How will AI enhance human intellect?

Academics hypothesize, generate data, make sense of it and then communicate it. If AI can help to generate, mine, and refine knowledge faster than human researchers, what does the future of academia look like? The answer lies not in replacing human intellect but in enhancing it, creating a collaborative synergy between AI and human researchers that will define the next era of scientific progress. I've been playing around with ChatGPT, Google Gemini and Claude.ai to see how well they each do at creating academic papers from datasets.

AI can also serve as a tool to aid humans in data extraction from many papers. Consider a scenario where AI synthesizes information from hundreds of studies to create a refined dataset. That dataset then feeds back into the system, sparking new research papers.

This cycle—dataset to paper, paper to knowledge extraction, knowledge to new datasets—propels an accelerating loop of discovery. Instead of a linear research pipeline, AI enables a continuous, self-improving knowledge ecosystem.

From data to papers

I looked for interesting datasets on Figshare. The criteria were: a) that I knew they would be reusable, as they had been cited several times; and b) that the files were relatively small (<100MB), so as not to hit the limits of the common AI tools.

This one fit the bill:

Rivers, American (2019). American Rivers Dam Removal Database. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5234068.v12

From there I asked Claude 3.7 Sonnet: "Based on the attached files, can you create a full length academic paper with an abstract, methods results, discussion and references", followed by "Can you convert the whole paper to latex so I can copy and paste it into Overleaf?"

The resulting paper needs a little tweaking to the layout of the results and graphs but, other than that, Claude has done a great job.
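For anyone who wants to reproduce this workflow programmatically rather than through the chat interface, here is a minimal sketch. It is illustrative only and was not part of the experiment above: it assumes Figshare's public v2 REST API and the Anthropic Python SDK, derives the article ID from the DOI cited earlier, and uses a model identifier and file-size handling that may need adjusting.

```python
# Minimal sketch: pull a small Figshare dataset and ask Claude to draft a paper from it.
# Assumptions (not from the original post): the Figshare v2 REST API endpoints, the
# `anthropic` Python SDK, and the model ID below may all need adjusting. Requires
# ANTHROPIC_API_KEY in the environment.
import requests
import anthropic

ARTICLE_ID = 5234068  # read off the DOI 10.6084/m9.figshare.5234068 cited above

# 1. Fetch the article metadata and download its (small) files as text.
meta = requests.get(f"https://api.figshare.com/v2/articles/{ARTICLE_ID}", timeout=30).json()
file_texts = []
for f in meta.get("files", []):
    if f["size"] < 100 * 1024 * 1024:  # stay under the ~100MB limit mentioned above
        content = requests.get(f["download_url"], timeout=60).text
        file_texts.append(f"### {f['name']}\n{content[:200_000]}")  # truncate for context limits

# 2. Ask Claude for a full draft, using the same prompt as in the chat interface.
client = anthropic.Anthropic()
prompt = (
    "Based on the attached files, can you create a full length academic paper with an "
    "abstract, methods results, discussion and references\n\n" + "\n\n".join(file_texts)
)
draft = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model ID for Claude 3.7 Sonnet
    max_tokens=8000,
    messages=[{"role": "user", "content": prompt}],
)
paper = draft.content[0].text

# 3. Follow up with the second prompt to get a LaTeX version ready for Overleaf.
latex = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=8000,
    messages=[
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": paper},
        {"role": "user", "content": "Can you convert the whole paper to latex so I can copy and paste it into Overleaf?"},
    ],
)
print(latex.content[0].text)
```

Run it once per dataset; the printed LaTeX can be pasted straight into an Overleaf project, after which the same manual tweaks to the results layout and figures described above still apply.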

Papers to new data/knowledge

A single paper is just the beginning. The real challenge is synthesizing knowledge from the ever-growing volume of research, and this is where specialized knowledge extraction tools become crucial. How do we effectively mine this knowledge? This is where ReadCube shines: it helps researchers manage and discover scholarly literature, but its real power lies in its knowledge extraction capabilities. Imagine ReadCube as a powerful filter, sifting through countless pages to extract the nuggets of wisdom.

Tools like ReadCube can then analyze vast collections of papers, uncovering patterns and relationships that human researchers might miss. This process involves:

  • Text and citation mining: AI can analyze papers to identify emerging trends, inconsistencies, or knowledge gaps (see the toy sketch after this list).
  • Automatic synthesis: AI can compare findings across thousands of studies, synthesizing insights into new, high-level conclusions.
  • Hypothesis generation: By recognizing correlations between disparate research areas, AI can propose new research directions.
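To make the first of those bullets concrete, here is a deliberately tiny, hypothetical sketch of text mining with TF-IDF over a few invented abstracts. It is not how ReadCube actually works; it just illustrates the basic idea of surfacing salient terms from a corpus.

```python
# Toy illustration only (not ReadCube's method): rank terms across a small set of
# made-up abstracts by TF-IDF weight as a crude signal of recurring topics.
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "Dam removal restored upstream fish passage and sediment transport within two years.",
    "We analyse outcomes of over 1,700 dam removals recorded across the United States.",
    "Machine learning models predict river temperature recovery after barrier removal.",
]

vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vec.fit_transform(abstracts)

# Sum each term's weight over the corpus and print the top few.
scores = tfidf.sum(axis=0).A1
terms = vec.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda pair: -pair[1])[:5]:
    print(f"{term}: {score:.2f}")
```

Real tools add citation graphs, entity linking and large language models on top of this kind of signal, but the underlying move is the same: turning unstructured text into structures that can be compared at scale.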

The Flywheel Effect: How the Cycle Accelerates

The true magic happens when this extracted knowledge becomes the input for the next iteration. Each cycle follows this pattern:

  1. Raw data is processed by AI to generate initial research outputs
  2. Knowledge extraction tools mine these outputs for higher-order insights
  3. These insights form a new, refined dataset
  4. AI processes this refined dataset, generating more precise analyses
  5. The cycle continues, with each rotation producing more valuable knowledge


With each turn of this flywheel, the insights become more refined, more interconnected, and more actionable. The initial analyses might focus on direct correlations in the data, while later iterations can explore complex causal relationships, predict future trends, or suggest optimal intervention strategies.

This AI-driven, data-to-knowledge cycle represents a paradigm shift in research. Imagine the possibilities in fields like medicine, climate science, and economics. We're moving towards a future where AI and human researchers work in synergy, pushing the boundaries of discovery. Rather than replacing researchers, AI acts as a force multiplier, enabling deeper, faster knowledge generation.

TL;DR Shorts: Dr Danny Hillis on the Evolution of AI
https://www.digital-science.com/tldr/article/tldr-shorts-dr-danny-hillis-on-the-evolution-of-ai/ | Tue, 28 Jan 2025

Welcome to week 17 of January 2025, the month that seems never to end – however, I have been reliably informed that this IS, in fact, the LAST week of the month. Since time appears to be standing still, we thought we’d reward you with something special! TL;DR Tuesdays are famed for our TL;DR Shorts, but Dr Danny Hillis, founder of Applied Invention, becomes only the second contributor in a year to be awarded an exclusive TL;DR Long – and our longest non-Speaker Series offering so far. To explain why he had so many thoughts, all I need to say is Artificial Intelligence. In this episode, Danny chats about the history of AI, from working with the field’s founding fathers to predictions that have come true, and what we can really expect from AI in the coming years.

Dr Danny Hillis talks about the history and future of artificial intelligence. Check out the video on the Digital Science YouTube channel: https://youtu.be/xH6-DUBKKEM

Although AI feels like a recent tech development, Danny reminds us that it has a long-established history. Danny worked alongside the likes of Marvin Minsky and Claude Shannon – no, they're not Bugsy Malone characters, but two of the team members who established the field of artificial intelligence. Working with them, Danny and the crew discovered that what they thought would be easy was much harder than expected, while what they were wary of was much easier to achieve. Pattern recognisers were developed with little effort, but creating a computer that could beat a human at chess was much harder.

It turned out that the main barriers to success were a lack of data and, the most limiting factor of all, a lack of computational power. But that’s OK because Danny’s PhD focused on what would be required to build the biggest computer. He discussed his Thinking Machines in our Speaker Series chat which we shared last month.

Danny notes that today's AI researchers are working on algorithms that are very close to the ones the team imagined back at the start of this area of research; however, he reminds us that we are still a long way off machines that can replace humans. While well-trained machines can carry out specific tasks well, they are missing the critical-thinking part of intelligence, however good they are becoming at mimicking it. This is evidenced in numerous case studies of AIs that hallucinate, creating solutions that look and sound right because the machine has recognised patterns and attempts to apply those rules, but that, without real meaning or understanding, are factually incorrect. Danny tells the story of how his granddaughter can recognise patterns in visiting contractors and in moments sound like an expert, but scratch the surface and there is no real knowledge of the area with which to make logical decisions. I too am reminded of the time I accidentally found myself co-piloting an island-hopper propellor plane across Belize, having curiously followed the actions of the pilot for the first two stops – but we'll save that story for another time. The year is young, and we've got lots more to chat about, and many more stories to share.

Danny reflects that, while to experts it doesn’t feel like AI has moved on much since the development of supercomputational power, there is a change coming, as evidenced by the ever-increasing rate of development in the area. The difference this time around is funding, which is attracting the smartest minds in their droves, catalysing this progress by exploring the intuitive aspects of this technology.

To make this technology truly good, Danny firmly believes that a source of truth is required. One of his interests is building a knowledge graph of the provenance of information, which he expanded on in last month's Speaker Series. This would go some way to building technology that is as robust and trustworthy as possible, while attempting to eliminate the biases and questionable knowledge that can bed into the foundations, creating points of future weakness and instability.

The great thing about building good technology is that it in turn starts to iteratively learn and teach itself, generating more knowledge, even about that knowledge itself. These are exciting times for AI, but public and research community engagement remains vital to ensure that developments do not double down on historically discriminatory narratives or unscientific knowledge that have no place in today’s society.

Subscribe now to be notified of each weekly release of the latest TL;DR Short, and catch up with the entire series here.

If you’d like to suggest future contributors for our series or suggest some topics you’d like us to cover, drop Suze a message on one of our social media channels and use the hashtag #TLDRShorts.

The 12 Days of DSmas
https://www.digital-science.com/tldr/article/12-days-of-dsmas-2024/ | Mon, 23 Dec 2024

Every Muppets fan knows that Christmas is all about being revisited by people you’ve previously encountered. So from 25th December to 5th January we’ll be sharing our 12 Days of DSmas. Check back daily as we share a Speaker Series chat each and every day. Happy Holidays from the Digital Science Thought Leadership Team!

And if you just can’t wait, you can catch up on our entire 2024 Speaker Series season on-demand:

Merry Dr Chris Van Tulleken-mas! We chatted with Chris online about research integrity, impact, openness, and investigative research. Catch his interview here, and don’t forget to watch his Xmas Lectures on BBC for The Royal Institution this year!

As a Nobel laureate and former president of The Royal Society, Professor Venki Ramakrishnan has long played a role in shaping a more innovative, inclusive and impactful research culture, which we chatted about during his live Speaker Series lecture at the Ri. We went to Cambridge, UK to hear his thoughts on curiosity, competition and collaboration.

As Chief Publishing Officer at PLOS, Niamh provides business leadership for the entire PLOS portfolio to advance PLOS’s vision and mission. In this episode Niamh talks about the evolving landscape of scientific research and the push towards open science, including her journey from the early days of advocating for public access to research, to tackling current challenges like making science more inclusive and accessible.

Building communities is hard, but Alice Meadows has worked hard to make it look effortless. Here she is in Boston, MA, USA, telling us about the power of persistent identifiers.

It’s New Year’s Eve, and a time to reflect on the past and make plans for the months ahead. When we visited the Max Planck Institute in Berlin, Germany, we added to the echoes of amazing research conversations resonating around their iconic library when we chatted about the history, philosophy and future of research with Dr Maria Avxentevskaya and Dr Ben Johnson.

Happy New Year! We caught up with pro-skater Rodney Mullen at his home in Los Angeles, USA to hear his thoughts on why we need diverse minds to innovate in all walks – and ollies – of life. And, since it’s the new year and you’re probably feeling a little “sleep deprived”, you can also follow this up with his live Speaker Series lecture at the Ri.

If you’ve been eating as much cheese as this author, dearest gentle reader, you too will be experiencing a fascinatingly slippery grasp on reality – which brings us to Day 9’s featured speaker. “Is Maths Real?” was the question that Dr Eugenia Cheng posed in her live Speaker Series lecture at the Ri. I caught up with her ahead of her lecture in the iconic Faraday lecture theatre in London, UK to talk about why we need to break down barriers of knowledge in research, and reunite STEM and the humanities for impactful change.

2024 was a wild ride for global politics, and research is not immune to its changes. I caught up with Professor Jenny Reardon in Cambridge, UK, to learn more about how we can work with politics, and not against it, to provide solutions for everyone across the world, and where red tape remains to be overcome.

Our final Speaker Series guest of 2024 was Dr Danny Hillis. We visited the Applied Invention offices in Cambridge, MA, USA, where innovator, inventor, and Imagineer Danny shared his thoughts on how we can use novel technology to combat novel challenges in mis- and disinformation and make the most meaningful impact from data.

Catch up on our entire 2024 Speaker Series season on-demand and watch this space for our 2025 series featuring more impactful innovators from across the research landscape. Happy Holidays, and Happy New Year!

Technology and Truth – meet Dr Danny Hillis
https://www.digital-science.com/tldr/article/technology-and-truth-meet-dr-danny-hillis/ | Tue, 03 Dec 2024

For me, the countdown to Christmas means three things – excitement, new gadgets, and an almost daily dose of Disney in one form or another. So, since it's December, let's tick off a hat-trick of all three of those boxes with this special extra-long conversation with Dr Danny Hillis – inventor, innovator, and the founder of Applied Invention. Want to avoid that annual appraisal or procrastinate around your 2025 planning? Grab a drink and a chocolate orange, and spend 50 minutes hearing from the legend who developed the knowledge graph that provides the foundations for Google Search.

Danny chats with Suze about technology and truth. See the full interview here: https://youtu.be/cI8kQ_GNV50

Danny is a renowned computer scientist, engineer, and entrepreneur. As the co-founder of Applied Invention, and before that of Thinking Machines, Danny is also a pioneer in parallel computing, which helped shape the trajectory of technology and its applications across diverse fields. His CV includes a long list of outputs and achievements, including designing the Connection Machine, a supercomputer that was way ahead of its time, and his work on the Long Now Foundation’s 10,000-Year Clock, an example of how art and science can promote long-term thinking. In this chat, Danny and I discuss the exciting inventions that thrive at the intersections of technology, research, and innovation, and what we can expect regarding the future of science and its role in solving humanity’s greatest challenges for everyone.

The goal of Thinking Machines was to build computers with a lot of power that could develop AI, large language models, and so on. According to Amdahl's Law – which says that the speedup you can gain from adding processors is capped by whatever fraction of the work must still run serially, so a problem doesn't simply become easier to solve when you apply more processing power – many people presumed that the problem couldn't be overcome, and parallel computers attracted little to no interest. However, Danny wouldn't let it go. He understood that brains are parallel computers – they may consist of slow components, but they still do things quickly. Danny wanted to build a machine that proved this rule wrong and set about building something capable of doing so. He proved that Amdahl's Law can be broken, and made some very fast machines in the process. These machines were responsible for creating the first-ever global climate models and also three-dimensional seismic models of the Earth. Whichever problem you brought to the supercomputer, it found a way to apply its power usefully. More importantly, the supercomputer and its varied associated projects were a magnet for very smart people who liked solving a range of problems – including someone called Richard Feynman, who apparently got very excited about physics.
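For reference, the standard statement of Amdahl's Law (the formula itself is not quoted in the interview, so this is added context) is that if a fraction p of a task can be parallelised across N processors, the overall speedup is

```latex
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

so even with unlimited processors, the serial fraction 1 - p caps the speedup. One way to read Danny's claim that the law "can be broken" is that, for the right problems, p sits so close to 1 that the ceiling stops mattering.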

Because of its varied applications and capabilities, a very interdisciplinary community of users developed around the Connection Machine. Most other research endeavours reward specialists with deep knowledge of one small area, but the supercomputer set the tone for Thinking Machines as a place where being a generalist pays off.

I was thrilled to learn in our chat that one of my favourite authors, Neal Stephenson, was also inspired by Danny's invention. In the original idea for his first published sci-fi book, Snow Crash was a computer program; however, he pivoted slightly after he tried and failed to program the Connection Machine. Sergey Brin was another member of the Connection Machine's research community – he already had an idea for search engines, which turned into Google. By using this new and powerful technology, these researchers were able to stretch what they could do just beyond the realms of what they were already doing – enough to fuel further exploration.

Knowledge graphs for humanity

Danny realised that having a computer-readable graph representation of general knowledge – people, places, things, schedules, etc – would become important. It would be a tool to help computers navigate questions posed to them. His company Metaweb therefore set about building a knowledge graph and, as predicted, people started using it. It became a critical piece of technology that enabled search engines, especially once Google bought it. That same technology became the Google Knowledge Graph, and its legacy is seen in Google Maps, for example. The connected, related information presented in response to a search query is coming out of the knowledge graph even now.

Data is useless unless you can make sense of it. By using knowledge graphs, information can be associated through connection and meaning, presenting results that don't contain the words in your keyword search query but are closely related to it in meaning and semantics. This is a sentiment that Sebastian Schmidt echoed in our chat back in May. Danny says that the graph now holds hundreds of billions of relationships between entities, which means a huge amount of connected information. However, despite its continued widespread use, Danny feels that the project was still a failure in some senses – he wanted to create this resource for the world. Instead, every company makes its own knowledge graph, which feels like a waste of human effort and also biases progress towards wealthier companies.

Danny also believes there is scope to make knowledge graphs even better and more fit for purpose in today's society. He says that there is not a rich enough representation of the provenance of the information they include, and humanity is facing a crisis over what it can and can't believe. There is so much incorrect information out there, and AI is supercharging the capacity to create what sounds like plausible truth but instead contributes to misinformation and disinformation. One tool to help people determine the truth in information could be a graph of public assertions of knowledge. Including a range of cultural lenses through which to view this knowledge means some of it may be contradictory, but by also recording how each piece of knowledge came to be created, we give people the power to decide what to believe. In an age where truth is a complicated concept, there are more tools available to us than ever before to help us remain as scientific as possible.
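As a toy illustration of the kind of structure Danny describes – my own sketch, not Metaweb's or Google's actual design – a knowledge graph can store each assertion as an edge that carries provenance metadata, so contradictory claims can coexist alongside a record of where each came from:

```python
# Toy sketch (illustrative only): a knowledge graph in which every assertion is an
# edge carrying provenance, so readers can see not just the claim but its origin.
import networkx as nx

kg = nx.MultiDiGraph()

# Two assertions about the same entity, from different (hypothetical) sources.
kg.add_edge("Connection Machine", "Thinking Machines Corp.", relation="built_by",
            source="interview with Danny Hillis, 2024", asserted_by="Digital Science TL;DR")
kg.add_edge("Connection Machine", "MIT", relation="originated_at",
            source="1985 PhD thesis", asserted_by="example encyclopedia entry")

def assertions_about(graph, entity):
    """Yield every claim about an entity together with its provenance."""
    for _, obj, data in graph.edges(entity, data=True):
        yield (f"{entity} --{data['relation']}--> {obj}  "
               f"[source: {data['source']}; asserted by: {data['asserted_by']}]")

for claim in assertions_about(kg, "Connection Machine"):
    print(claim)
```

Querying the graph returns every claim with its source attached, leaving the judgement about which to believe with the reader – which is precisely the role Danny sees for provenance.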

Interdisciplinarity for innovation

After his time at Thinking Machines, Danny wanted a break from computers. He had wanted to be a Disney Imagineer ever since he heard the term, and was offered a role at the company. However, he needed a new title to reflect the varied work he would be doing. Thus Danny and some of his key collaborators became the first cohort of Disney Fellows, with Danny also taking up the position of VP of Imagineering. This role opened Danny up to thinking about storytelling, art and audience engagement in a way he had never done before.

Danny says it was fun to turn make-believe into a magical reality, but he missed pushing the boundaries of truth-oriented work. So he took what he had learned from problem-solving during his time at Thinking Machines, combined it with Disney's studio approach of calling on a network of experts to solve different problems, and created Applied Invention, one of the few interdisciplinary innovation organisations around, alongside the likes of Google's moonshot factory X.

Applied Invention exists as a "company of last resort": if a problem can't be solved in any other way, Danny and his interdisciplinary team will give it a good go. He says that they do this because, rather than use the same hammer to hit the same nail, they have a range of different tools with which to hit the nail instead, potentially giving rise to a different solution. For the team at Applied Invention, each project they take on has to fulfil three criteria – someone has to be excited by the project, someone else has to determine that it won't lose the company too much money, and finally, someone determines whether the team will be able to create a solution better than anyone else could come up with. If all three criteria are met, the project is live.

When I asked Danny how this way of working differs from academia, he said that while a PhD trains you to do something well with deep knowledge of a very small area, organisations like Applied Invention work across all disciplines. Taking a more generalist approach means that each person's knowledge may be broad rather than expert-level, but the team knows many other experts with that depth of knowledge whom they can bring in to work with them. Danny says that academics may still have that experimental curiosity, but because of how research is rewarded in academia, most researchers are unable to pursue such blue-skies ideas unless or until they have a free pass in the form of a Nobel Prize or other such ticket to academic adventure.

Art and science

Danny reflects on the inner beauty of the workings of technology and the outer beauty of how it looks and what it can do. Danny believes that scientists have long understood that you need to care about how something looks. If we go back to Faraday’s time in the lab (Michael Faraday, and not my cat Faraday, creative though he also is) we are reminded that even then people knew that societal solutions had to look good in order to improve their chances of being adopted. This is something that Danny learned more about during his time at Disney. Inventions and interventions succeed and fail based on whether or not they have the right story behind them. There is scope to use art for more public engagement with science too – in fact one of Danny’s other projects, the Long Now Clock, is a piece of technology designed to remind people of the human story and our place in this universe. 

Science is a new way of deciding what to believe

As our conversation came to a close, Danny revisited the idea of using technology to determine the best truth in an age of mis- and disinformation, when you don't know what to believe, whether you work within research or are impacted by it. While linking information has gone a long way towards democratising access to all of human knowledge, we are also muddying the waters by allowing the technology we have developed to further contribute to mistruths. However, where there is challenge, there is opportunity, and Danny encourages researchers to embrace the opportunity to build better infrastructures that give people the tools to determine what to believe. Danny believes this will have the single biggest impact on the human condition. He says that knowledge is the best thing humans have created. And, while human-made technology can degrade it, we can also use those very tools to strengthen the robustness and integrity of this knowledge and rescue ourselves from the alternative-truth crisis we now face.

You can watch the full interview with Danny on our YouTube channel, and check out our Speaker Series playlist on YouTube which includes chats with some of our previous speakers, as well as our TL;DR Shorts playlist with short, snappy insights from a range of experts on the topics that matter to the research community.

With thanks to Huw James from Science Story Lab for filming and co-producing this interview. Filmed at the Applied Invention offices in Cambridge, Massachusetts, USA in April 2024.

Deep Minds: Reflections from the AI for Science Forum
https://www.digital-science.com/tldr/article/ai-for-science-forum/ | Thu, 28 Nov 2024

Last week, I attended the AI for Science Forum, a gathering of incredible minds from across disciplines, each sharing perspectives on how AI is transforming research and impacting society. Organised by Google DeepMind and The Royal Society, the event brought together invited guests from across all segments of the research community to share their experiences and expertise, while also giving opportunities for attendees to meaningfully discuss how we can best wrangle this novel technology to increase the impact and reach of research with the resources currently at our disposal.

The Power of AI in Research

Fresh-faced after the previous evening's reception at The Royal Society, James Manyika, Senior Vice President at Google Alphabet, set the tone for the conference with a powerful opening session on the tangible benefits of AI. From AlphaFold's protein-folding revolution to AI's role in flood forecasting in Bangladesh, which has already impacted millions of people, his talk was a staggering reminder that AI is already being used in innovative and impactful ways beyond those we are more familiar with. He also discussed AI-enabled solutions through the lens of public health research, reminding attendees of advancements including diabetic retinopathy detection in low-resource settings and the groundbreaking atmospheric simulations helping to give agricultural workers advance warning of what interventions may be required. This reminded me of our recent TL;DR Shorts episode with Dr Danny Hillis of Applied Invention, who talked about the potential impact that automated research could have in helping us help non-traditional researchers. But James's outlook for AI wasn't all rosy, as he underscored the limitations of this tool and emphasised the need for responsible approaches and equitable access to AI-powered tools, echoing his colleague Dr Astro Teller's thoughts on this.

CRISPR Meets AI

Nobel Prize-winning chemist Jennifer Doudna, Professor of Biochemistry, Biophysics and Structural Biology at UC Berkeley, and James Manyika picked up on this theme as they explored the synergy between CRISPR and AI. They discussed how CRISPR's one-and-done gene therapies are accelerating in application thanks to AI's ability to identify the genetic changes that drive outcomes, democratising treatment by providing a range of more affordable therapeutic options. Beyond healthcare, they also chatted about the impressive potential impact that AI will have on climate-related research, from drought-resistant crops to better carbon storage systems. What resonated most with me was their call to reduce the barriers – financial, technical, and geographic – to accessing the outcomes of this technology, making it truly global, and reflecting the recent thought we shared from Professor Lord Martin Rees.

Collaborating Across Disciplines

The focus shifted to the future of collaboration in a panel led by Eric Topol, author and Executive Vice President of Scripps Research, and featuring Fiona Marshall, President of Biomedical Research at Novartis, Alison Noble, Oxford University Technikos Professor of Biomedical Engineering, and Vice President & Foreign Secretary, The Royal Society, and Pushmeet Kohli, Vice President of Science, Google DeepMind. From AI revealing 2.2 million new stable inorganic crystals for potential use in everything from energy to electronics, to revolutionising biomedical imaging through natural image processing, the discussion highlighted how AI forces us to rethink and redefine what collaboration and trust look like. Alison’s comment on the importance of training scientists to understand AI’s errors stood out for me. The panel emphasised the shifting hierarchies and power dynamics in research, with data scientists increasingly leading labs – a significant cultural change, given that they were often seen as collaborators and service providers who were rarely even named on papers that couldn’t have been published without their expertise.

Climate, Complexity, and Community

Thomas Friedman gave an evocative talk on "climate weirding", highlighting how we're hitting many tipping points all at once – AI's massive breakthroughs, climate change chaos, and global instability – while also pointing to AI capabilities that could help us with carbon emissions and societal disorder. His call for politics to embrace science felt especially urgent amid discussions of mass migration and global conflict. His optimism about how AI can solve big problems, like making farming more efficient or cutting healthcare costs, came with a warning that we also need solid ethics and politics to go hand in hand with these developments to keep things on track, something that our recent Speaker Series guest Professor Jenny Reardon touched upon.

The next panel discussion, on building research infrastructure, echoed these sentiments. Chaired by Paul Hofheinz, President and Co-Founder of the Lisbon Council, it featured Asmeret Asefaw Berhe, Professor of Soil Biogeochemistry and Falasco Chair in Earth Sciences, University of California, Merced; Bosun Tijani, Minister of Communications, Innovation & Digital Economy, The Federal Republic of Nigeria; and Fabian J. Theis, Director of the Institute of Computational Biology and Professor at TUM Mathematics & Life Sciences. Both Paul and Asmeret stressed the need for equity and inclusion to be at the forefront of people's agendas as they develop solutions using AI, to ensure that advancements don't deepen the digital divide. Bosun Tijani's discussion of Africa's talent acceleration programs was inspiring – and proof that we can nurture talent globally if we commit to the cause. We recently heard from Joy Owango about how important it is to build infrastructure that ensures the persistence and visibility of research contributions from all across the globe, and how impactful this has already been in Africa and other parts of the Global South.

This theme continued as Lila Ibrahim, Chief Operating Officer at Google DeepMind, chaired a conversation about collaborating for impact with Dame Angela McLean, UK Government Chief Scientific Adviser; Ilan Gur, CEO of the Advanced Research and Invention Agency (ARIA); and Sir Paul Nurse, Director of the Francis Crick Institute, Nobel laureate, and returning President of the Royal Society. The panellists discussed the importance of thinking big and including diverse perspectives through better community engagement in science, something that X's Dr Astro Teller talked about in a previous TL;DR Shorts episode. Dame Angela talked about how the government needs to aim higher, pushing for more thoughtful use of AI and predictive models, while Sir Paul stressed the need for mixing disciplines to boost innovation. Ilan shared his excitement about creating spaces where scientists from different fields can cross paths, sparking unexpected ideas. In a contribution recorded at Sci Foo – itself a perfect example of a catalyst for collaboration – Dr Etosha Cave echoes this sentiment and the need for interdisciplinarity in innovation. The panel also discussed building trust, with Angela and Ilan both emphasising the importance of transparency in science and technology. All panellists highlighted the role of public engagement in encouraging people to engage with and trust these cutting-edge advances.

Public Engagement and Trust

The conference ended with a final discussion featuring recent Nobel Prize winners in Chemistry Sir Demis Hassabis and John Jumper, as well as former winners Professor Jennifer Doudna and Sir Paul Nurse. Their reflections on public engagement were poignant: how do we bridge the gap between experts and the public? Sir Paul's call for deliberate public dialogue reminded me how crucial it is to address fears and misconceptions about AI before they grow into barriers. However, one issue that continually cropped up, and one I may have mentioned once or a million times in the past, is that as it stands, the framework within which we reward research success does not make space for valued and impactful public engagement, or even innovation and entrepreneurship. Mariette DiChristina had a few thoughts on this, and we'll be hearing more from her in 2025 about the value of effective communication of, and engagement with, research in the age of open research, research integrity, and novel technology such as AI.

Some Key Takeaways

  • AI isn’t just transforming research; it’s reshaping the cultures around it. We’re seeing shifts in leadership, collaboration, and the ethical frameworks underpinning research.
  • Accessibility remains a challenge. Whether it’s CRISPR or AI infrastructure, we need to ensure the benefits reach everyone, not just the privileged few.
  • Collaboration is more vital than ever. From breaking disciplinary silos to engaging the public, success hinges on our ability to connect diverse voices.

In a world increasingly shaped by AI, this conference left me both hopeful and reflective. Science thrives when it’s inclusive, transparent, and collaborative – and AI could give us a chance to embrace those ideals like never before, provided we build research methods and applications in thoughtful, considerate, trustworthy and community-minded ways. My teammates recently authored a report that, in true Digital Science style, was informed by reflections from our own research community. The report looks at the changing research landscape in the age of AI and echoes the many challenges and opportunities discussed at this conference. AI is an exciting development that is already changing the way we do research. However, we must hold each other accountable to ensure that its development and application are open to all.

With thanks to Google DeepMind and The Royal Society for hosting this event. You can watch all sessions on Google DeepMind’s YouTube channel.

TL;DR Shorts: Professor Lord Martin Rees on Artificial Intelligence
https://www.digital-science.com/tldr/article/tldr-shorts-professor-lord-martin-rees-on-artificial-intelligence/ | Tue, 19 Nov 2024

Hot on the heels of yesterday’s AI For Science Forum hosted by Google DeepMind and The Royal Society, this week’s TL;DR Shorts episode features Professor Lord Martin Rees, one of the world’s leading cosmologists and the UK’s Astronomer Royal. Martin has spent decades exploring the vast mysteries of the universe and the future of humanity. A former President of the Royal Society and Master of Trinity College, Cambridge, he is known for his work on existential risks, science policy, and our place in the cosmos. In our latest episode, Martin discusses the transformative rise of AI in science and society and shares his thoughts on how it could revolutionise research, tackle global challenges, or amplify our risks.

Professor Lord Martin Rees shares his thoughts on the rise of AI and what it means for science and society. Check out the video on the Digital Science YouTube channel: https://youtu.be/YzEXLorZOwk

Martin acknowledges that AI is an exciting development, for research and society as a whole. The attention it receives speaks to the tremendous potential of AI-enabled tools. When pondering the more worrying aspects of this novel technology, Martin says that he feels the hype is exaggerated. Rather than worry about a superintelligence taking over, Martin believes that we would be better off worrying about the downsides of things going wrong.

Martin reflects on the pace of progress in AI and how it mirrors previous transformative innovations: progress doesn't advance uniformly and exponentially; rather, it goes up quickly and then levels off. Comparing the rise of AI to space travel, Martin reminds us that there were only 12 years between Sputnik and the first Moon landing, but since then there has been very little progress in space flight. There were also 50 years between the first transatlantic flight and the development of the jumbo jet, yet since then commercial flight has barely changed. While AI is surging now, Martin reminds us that we shouldn't presume this trend will continue exponentially.

Subscribe now to be notified of each weekly release of the latest TL;DR Short, and catch up with the entire series here.

If you’d like to suggest future contributors for our series or suggest some topics you’d like us to cover, drop Suze a message on one of our social media channels and use the hashtag #TLDRShorts.

TL;DR Shorts: Dr Danny Hillis on the Automated Future of Research
https://www.digital-science.com/tldr/article/tldr-shorts-dr-danny-hillis-on-automated-research-future/ | Tue, 12 Nov 2024

New eras of technology have always enabled novel waves of research. This week’s TL;DR Tuesday contribution comes from an innovator who has witnessed and indeed driven the evolution of many such waves of novel tech. In this week’s TL;DR Shorts episode, we hear from the co-founder of Applied Invention, Dr Danny Hillis. Danny and his team tackle big ideas across science, tech, and public policy. A true pioneer in AI and parallel computing, Danny has a passion for exploring complex systems and finding creative ways to solve tough problems.

Dr Danny Hillis talks about the automated future of research. Check out the video on the Digital Science YouTube channel: https://youtu.be/nRS5uIvXH4o

Danny uses agriculture as one example of an area vital to the survival of humanity where we aren't doing enough research. Any fellow BBC Countryfile fan will know that farmers work incredibly hard tending to their agricultural land and responding to the dynamic demands placed on them by the changing climate and other factors. Though they may want to, they often don't have time to do experiments and contribute to the corpus of research information in this space.

However, if we start to collect data from the automated machinery farmers use to work the land, we can allow these "robots" to conduct a series of experiments that humans don't have the time to do.

Danny believes that in the future these machines will also contribute to planning experiments to explore such research spaces. He believes that the automated science of the future will be driven by AI – allowing humans to increase the number of experiments they can conduct, the amount of data gathered, and the number of hypotheses being tested.

Subscribe now to be notified of each weekly release of the latest TL;DR Short, and catch up with the entire series here.

If you’d like to suggest future contributors for our series or suggest some topics you’d like us to cover, drop Suze a message on one of our social media channels and use the hashtag #TLDRShorts.

Presenting: Research Transformation: Change in the era of AI, open and impact
https://www.digital-science.com/tldr/article/presenting-research-transformation-change-in-ai-open-and-impact/ | Mon, 28 Oct 2024
Mark Hahnel and Simon Porter introduce Digital Science's new report as part of our ongoing investigation into Research Transformation: Change in the era of AI, open and impact.

[Report cover image: Research Transformation: Change in the era of AI, open and impact]

As part of our ongoing investigation into Research Transformation, we are delighted to present a new report, Research Transformation: Change in the era of AI, open and impact.

Within the report, we sought to understand from our academic research community how research transformation is experienced across different roles and responsibilities. The report, which is a mixture of surveys and interviews across libraries, research offices, leadership and faculty, reflects transformations in the way we collaborate, assess, communicate, and conduct research.

The positions that we hold towards these areas are not the same as those we held a decade or even five years ago. Each of these perspectives represents a shift in the way that we perceive ourselves and the roles that we play in the community. Although there is concern about the impact that AI will have on our community, our ability to adapt and change is reflected strongly across all areas of research, including open access, metrics, collaboration and research security. That such a diverse community is able to continually adapt to change reflects well on our ability to respond to future challenges.

Key findings from the report:

  • Open research is transforming research, but barriers remain
  • Research metrics are evolving to emphasize holistic impact and inclusivity
  • AI’s transformative potential is huge, but bureaucracy and skill gaps threaten progress
  • Collaboration is booming, but concerns over funding and security are increasing
  • Security and risk management need a strategic and cultural overhaul

We do these kinds of surveys to understand where the research community is moving and how we can tweak and adapt our approach as a company. We were very grateful to the great minds who helped us out with a deep dive into what has affected their roles and will affect their roles going forward. Metrics, Open Research and AI are very aligned with the tools that we provide for academics, and the strategy we have to make research more inclusive, transparent and trustworthy.

Welcome to… Research Transformation!
https://www.digital-science.com/tldr/article/welcome-to-research-transformation/ | Mon, 21 Oct 2024
Transformation via and within research is a constant in our lives. But with AI, we now stand at a point where research (and many other aspects of our working life) will be transformed in a monumental way. As such, we are taking this moment to reflect on the activity of Research Transformation itself, and celebrating the art of change. Our campaign will show how research data can be transformed into actionable insights, how the changing role of research is affecting both those in academia and industry, and exploring innovative ways to make research more open, inclusive and collaborative, for all – especially for those beyond the walls of academia.

Open research is transforming the way research findings are discovered, shared and reproduced. As part of our commitment to the Open Principles and research transformation, we are looking into how open research is transforming roles, approaches, policies and, most importantly, mindsets for everyone across the research landscape. See our inspiring transformational stories so far.

Academia is at a pivotal juncture. It has often been criticized as slow to change, but external pressures from an increasingly complex world are forcing rapid change in the sector. To understand more about how the research world is transforming, what’s influencing change, and how roles are impacted, we reached out to the research community through a global survey and in-depth interviews.

Research Transformation stories so far…

Academic Survey Report Pre-registration

State of Open Data 2024 – Special Edition

Will 2025 be a turning point for Open Access? – Digital Science

How has innovation shaped Open Research? What does the future hold – especially with the impact of AI? Here’s Dan Valen speaking about Figshare’s key role, with innovation helping to transform the research landscape.

Digital Science has always understood its role as a community partner – working towards open research together. Here’s some ways in which we have helped to transform research over the last 14 years.

In our first piece, Simon Porter and Mark Hahnel introduce the topic and detail the three areas the campaign will focus on.

  • Making data more usable
  • Opening up channels & the flow of information
  • Transforming data through innovation & AI
  • Maintaining trust & integrity
  • Seeing both perspectives
  • What success looks like for knowledge transfer
  • Evolving roles and the role of people in bridging gaps
  • Research Transformation White Paper
  • How have roles changed:
    • In Academia?
    • In Publishing?
    • In Industry?
  • State of AI Report
  • How are we using AI in our research workflows?

Research Transformation

The way we interact with information can amplify our ability to make connections, and in doing so transforms how we understand the world. Supercharged by the AI moment that we are in, the steady march of digital transformation in society over the last three decades is primed for rapid evolution. What is true for society is doubly so for research. Alongside ground-breaking research and discoveries is the constant invitation to adapt to new knowledge and abilities. Combine the general imperative within the research sector to innovate with the rapidly evolving capabilities of generative AI, and it is safe to say that expectations are high. Taking effective advantage of new possibilities as they arise, however, requires successful coordination within society and systems.

There is an art to transformation, and understanding the mechanisms of transformation places us in the best position to take advantage of the opportunities ahead.

In this series, we specifically seek to explore Research Transformation with an eye to adapting what we already know to the present AI moment. Transformation in Research is not just about digital systems, but it is also about people and organisations – crossing boundaries from research to industry, emerging new research sectors, creating new narratives and adapting to the possibilities that change brings.

At Digital Science, we have always sought to be an integral part of research transformation, aiming to provide products that enable the research sector to evolve research practice – from collaboration and discovery through to analytics and administration. Our ability to serve clients from research institutions to funders, publishers, and industry has placed us in a unique position to facilitate change across the sector, not simply within silos, but between them. In this series, we will be drawing on our own experiences of research transformation, as well as inviting perspectives from the broader community. As we proceed we hope to show that Research Transformation isn’t just about careful planning, but requires a sense of playfulness – a willingness to explore new technology, a commitment to a broader vision for better research, as well as an ability to build new bridges between communities.

1. The story of research data transformation

In the first of three themes, we will cover Research Transformation from the perspective of the data and metadata of research. How do changes to the metadata of research transform our ability to make impact, as well as to see the research community through new lenses? How does technology enable these changes to occur? Starting almost from the beginning, we will look at how transitions in publishing practice have enabled the diversity of the research workforce to become visible. We will also trace the evolving story of the structure of a researcher's papers, from the critical use of identifiers, to adoption of the CRediT ontology, through to the use of trust markers (including ethics statements, data and code availability statements, and conflict of interest statements). The evolving consensus on the structured and semi-structured nature of research articles changes not only the way we discover, read and trust individual research papers, but also transforms our ability to measure and manage research itself.

Our focus will not only be reflective, but will also look forward to the emerging challenges and opportunities that generative AI offers. We will ask deep questions about how research should make its way into large language models. We will also explore the new field of Forensic Scientometrics, which has arisen in response to the dramatic increase in bad-faith science enabled in part by generative AI, and the new research administration collaborations that this implies – both within research institutions and across publishing. We will also offer more playful, experimental investigations. For example, a series on 'prompt engineering for librarians' draws on the original pioneering spirit of the 1970s MEDLARS analysts to explore the possibilities that tools such as those from OpenAI can offer.

2. The story of connection

Lifting up from the data, we note that a critical part of our experience of research transformation has been the ability to experience and connect with research from shifting perspectives. In this second theme exploring research transformation, we aim to celebrate the art of making connections, from the personal transformations required to make the shift from working within research institutions to industry, through to the art of building research platforms that support multiple sectors. We also cover familiar topics from new angles. For instance, how do the FAIR data principles benefit the pharmaceutical industry? How do we build effective research collaborations with emerging research sectors in Africa?

3. The story of research innovation

In our third theme, we will explore Research Transformation from the perspective of innovation, and how it has influenced the way research is conducted. Culminating in a Research Transformation White Paper, we will explore how roles have changed in academia, publishing, and industry. Within this broader context of research transformation, we ask 'How are we using AI in our research workflows?' and how do we think we will be using AI in years to come?

Of course, many of us in the Digital Science community have been engaging with different aspects of research transformation over many years. If you are keen to explore our thinking to date, one place that you might like to start is at our Research Transformation collection on Figshare. Here we have collated what we think are some of our most impactful contributions to Research Transformation so far. We are very much looking forward to reflecting on research transformation throughout the year. If you are interested in contributing, or just generally finding out more, why not get in touch?

The TL;DR on… ERROR
https://www.digital-science.com/tldr/article/tldr-error/ | Wed, 25 Sep 2024

Inspired by bug bounty programs in the tech industry, ERROR offers financial rewards to those who identify and report errors in academic research. ERROR has the potential to revolutionise how we approach, among other things, research integrity and open research by incentivising the thorough scrutiny of published research information and enhancing transparency.

Suze sat down with two other members of the TL;DR team, Leslie and Mark, to shed light on how ERROR can bolster trust and credibility in scientific findings, and explore how this initiative aligns with the principles of open research and how all these things can drive a culture of collaboration and accountability. They also discussed the impact that ERROR could have on the research community and beyond.

The post The TL;DR on… ERROR appeared first on Digital Science.

]]>
We love a good deep dive into the awkward challenges and innovative solutions transforming the world of academia and industry. In this article and in the full video interview, we’re discussing an interesting new initiative that’s been making waves in the research community: ERROR.

Inspired by bug bounty programs in the tech industry, ERROR offers financial rewards to those who identify and report errors in academic research. ERROR has the potential to revolutionize how we approach, among other things, research integrity and open research by incentivizing the thorough scrutiny of published research information and enhancing transparency.

I sat down with two other members of the TL;DR team, VP of Research Integrity Leslie McIntosh and VP of Open Research Mark Hahnel, to shed light on how ERROR can bolster trust and credibility in scientific findings, and explore how this initiative aligns with the principles of open research – and how all these things can drive a culture of collaboration and accountability. We also discussed the impact that ERROR could have on the research community and beyond.

ERROR is a brand new initiative created to tackle errors in research publications through incentivized checking. The TL;DR team sat down for a chat about what this means for the research community through the lenses of research integrity and open research. Watch the whole conversation on our YouTube channel: https://youtu.be/du6pEulN85o

Leslie’s perspective on ERROR

Leslie’s initial thoughts about ERROR were cautious, recognizing its potential to strengthen research integrity but also raising concerns about unintended consequences.

She noted that errors are an inherent part of the scientific process, and that over-standardization risks losing the exploratory nature of discovery. Drawing a parallel with the food industry, where the pursuit of efficiency has led to uniformity and a loss of nutrients, Leslie suggested that aiming for perfection in science could overlook the value of learning from mistakes. She warned that emphasizing error correction too rigidly might diminish the broader mission of science – discovery and understanding.

Leslie: “Errors are part of science and part of the discovery… are we going so deep into science and saying that everything has to be perfect, that we’re losing the greater meaning of what it is to search for truth or discovery [or] understand that there’s learning in the errors that we have?”

Leslie also linked this discussion to open research. While open science encourages interpretation and influence from diverse participants, public misunderstanding of scientific errors could allow those mistakes to be weaponized, undermining trust in research. She stressed that errors are an integral, even exciting, part of the scientific method and should be embraced rather than hidden.

Mark’s perspective on ERROR

Mark’s initial thoughts were more optimistic, especially within the context of open research.

Mark: “…one of the benefits of open research is we can move further faster and remove any barriers to building on top of the research that’s gone beforehand. And the most important thing you need is trust, [which] is more important than speed of publication, or how open it is, [or] the cost-effectiveness of the dissemination of that research.”

Mark also shared his enthusiasm for innovation in the way we do research. He was particularly excited by ERROR’s approach to addressing the problem of peer review, as the initiative offers a new way of tackling longstanding issues in academia by bringing more participants in to scrutinize research.

He thought the introduction of financial incentives to encourage error reporting could lead to a more reliable research landscape.

“I think the payment for the work is the most interesting part for me, because when we look at academia and perverse incentives in general, I’m excited that academics who are often not paid for their work are being paid for their work in academic publishing.”

However, Mark’s optimism was not entirely without wariness. He shared Leslie’s caution about the incentives, warning of potential unintended outcomes. Financial rewards might encourage individuals to prioritize finding errors for profit rather than for the advancement of science, raising ethical concerns.

Ethical concerns with incentivization

Leslie expressed reservations about the terminology of “bounty hunters”, which she felt criminalizes those who make honest mistakes in science. She emphasized that errors are often unintentional.

Leslie: “It just makes me cringe… People who make honest errors are not criminals. That is part of science. So I really think that ethically when we are using a term like bounty hunters, it connotes a feeling of criminalization. And I think there are some ethical concerns there with doing that.”

Leslie’s ethical concerns extended to the global research ecosystem, noting that ERROR could disproportionately benefit well-funded researchers from the Global North, leaving under-resourced researchers at a disadvantage. She urged for more inclusive oversight and diversity in the initiative’s leadership to prevent inequities.

She also agreed with Mark about the importance of rewarding researchers for their contributions. Many researchers do unpaid labor in academia, and compensating them for their efforts could be a significant positive change.

Challenges of integrating ERROR with open research

ERROR is a promising initiative, but I wanted to hear about the challenges in integrating a system like this alongside existing open research practices, especially when open research itself is such a broad, global and culturally diverse endeavor.

Both Leslie and Mark emphasized the importance of ensuring that the system includes various research approaches from around the world.

Mark: “I for one think all peer review should be paid and that’s something that is relatively controversial in the conversations I have. What does it mean for financial incentivization in countries where the economics is so disparate?”

Mark extended this concept of inclusion to the application of artificial intelligence (AI), machine learning (ML) and large language models (LLMs) in research, noting that training these technologies requires access to diverse and accurate data. He warned that if certain research communities are excluded, their knowledge may not be reflected in the datasets used to build future AI research tools.

“What about the people who do not have access to this and therefore their content doesn’t get included in the large language models, and doesn’t go on to form new knowledge?”

He also expressed excitement about the potential for ERROR to enhance research integrity in AI and ML development. He highlighted the need for robust and diverse data, emphasizing that machines need both accurate and erroneous data to learn effectively. This approach could ultimately improve the quality of research content, making it more trustworthy for both human and machine use.

Improving research tools and integrity

Given the challenges within research and the current limitations of tools like ERROR, I asked Leslie what she would like to see in the development of these and other research tools, especially within the area of research integrity. She took the opportunity to reflect on the joy of errors and failure in science.

Leslie: “If you go back to Alexander Fleming’s paper on penicillin and read that, it is a story. It is a story of the errors that he had… And those errors were part of or are part of that seminal paper. It’s incredible, so why not celebrate the errors and put those as part of the paper, talk about [how] ‘we tried this, and you know what, the refrigerator went out during this time, and what we learned from the refrigerator going out is that the bug still grew’, or whatever it was.

“You need those errors in order to learn from the errors, meaning you need those captured, so that you can learn what is and what is not contributing to that overall goal and why it isn’t. So we actually need more of the information of how things went wrong.”

I also asked Mark what improvements he would like to see from tools like ERROR from the open research perspective. He emphasized the need for better metadata in research publishing, especially in the context of open data. Drawing parallels to the open-source software world, where detailed documentation helps others build on existing work, he suggested that improving how researchers describe their data could enhance collaboration.
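
As a hedged illustration of what richer data description could look like in practice – the structure and field names below are hypothetical, not a specific metadata standard – a deposited dataset might carry variable-level documentation alongside the usual title and license:

    # Hypothetical sketch of dataset metadata; not a specific standard
    dataset_metadata = {
        "title": "Example field measurements, 2020-2023",
        "creator_orcid": "0000-0000-0000-0000",  # placeholder identifier
        "license": "CC-BY-4.0",
        "collection_method": "Monthly sensor readings, calibrated against a reference instrument",
        "variables": {
            "temperature_c": "Air temperature in degrees Celsius",
            "site_id": "Identifier linking each reading to a sampling site",
        },
        "related_paper_doi": "10.1234/example",  # placeholder link back to the publication
    }

The point of the sketch is simply that descriptions at the level of individual variables and methods, rather than a bare file listing, are what let someone else – human or machine – reuse the data with confidence.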

Mark also feels that the development of a tool like ERROR highlights other challenges in the way we currently publish research, such as deeper issues with peer review and the incentive structures of scholarly publishing.

Mark: “…the incentive structure of only publishing novel research in certain journals builds into that idea that you’re not going to publish your null data, because it’s not novel and the incentive structure isn’t there. So as I said, could talk for hours about why I’m excited about it, but I think the ERROR review team have a lot of things to unpack.”

Future of research integrity and open research

What do Leslie and Mark want the research community to take away from this discussion on error reporting and its impact on research integrity and open research?

Leslie wants to shine a light on science communication and its role in helping the public to understand what ERROR represents, and how it fits into the scientific ecosystem.

Leslie: “…one of the ways in which science is being weaponized is to say peer review is dead. You start breaking apart one of the scaffolds of trust that we have within science… So I think that the science communicators here are very important in the narrative of what this is, what it isn’t, and what science is.”

Both Leslie and Mark agreed that while ERROR presents exciting possibilities, scaling the initiative remains a challenge. Mark raised questions about how ERROR could expand beyond its current scope, with only 250 papers reviewed over four years and each successful error detection earning a financial reward. Considering the millions of papers published annually, it is unclear how ERROR can be scaled globally and become a sustainable solution.

Mark: “…my biggest concern about this is, how does it scale? A thousand francs a pop, it’s 250 papers. There [were] two million papers [published] last year. Who’s going to pay for that? How do you make this global? How do you make this all-encompassing?”
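
As a rough back-of-the-envelope reading of those numbers (assuming, purely for illustration, a flat 1,000-franc reward per reviewed paper – our reading of the quote rather than ERROR’s actual fee schedule):

    # Back-of-the-envelope sketch; assumes a flat 1,000 CHF reward per reviewed paper
    reward_per_paper_chf = 1_000
    papers_in_pilot = 250                    # papers ERROR reviews over roughly four years
    papers_published_per_year = 2_000_000    # approximate global annual output cited above

    pilot_cost = reward_per_paper_chf * papers_in_pilot                      # 250,000 CHF
    hypothetical_global_cost = reward_per_paper_chf * papers_published_per_year  # 2,000,000,000 CHF per year

    print(f"Pilot: {pilot_cost:,} CHF; global, per year: {hypothetical_global_cost:,} CHF")

Even on these crude assumptions, covering a single year of global output would cost on the order of two billion francs – which is exactly the scale problem Mark is pointing to.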

Conclusion

It is clear from our discussion that ERROR represents a significant step forward: an experiment in enhancing both research integrity and open research through an incentivized bug-hunting system.

Leslie highlighted how the initiative can act as a robust safeguard, ensuring that research findings are more thoroughly vetted and reliable, while reminding us that the approach needs to be inclusive. Mark emphasized the potential of a tool like this to make publication processes more efficient – and even, at last, to reward researchers for the additional work they do – but he wonders how it can scale up to foster the kind of transparent, collaborative research environment that aligns with the ethos of open research.

Leslie and Mark’s comments are certainly timely, given that the theme of Digital Science’s 2024 Catalyst Grant program is innovation for research integrity. You can find out more about how different segments of research can and should be contributing to this space in our TL;DR article on the program.

We look forward to exploring more innovations and initiatives that are going to shape – or shatter – the future of academia, so if you’d like to suggest a topic we should be discussing, please let us know.
