The TL;DR on… ERROR
https://www.digital-science.com/tldr/article/tldr-error/ | Wed, 25 Sep 2024
We love a good deep dive into the awkward challenges and innovative solutions transforming the world of academia and industry. In this article and in the full video interview, we’re discussing an interesting new initiative that’s been making waves in the research community: ERROR.

Inspired by bug bounty programs in the tech industry, ERROR offers financial rewards to those who identify and report errors in academic research. ERROR has the potential to revolutionize how we approach, among other things, research integrity and open research by incentivizing the thorough scrutiny of published research information and enhancing transparency.

I sat down with two other members of the TL;DR team, VP of Research Integrity Leslie McIntosh and VP of Open Research Mark Hahnel, to shed light on how ERROR can bolster trust and credibility in scientific findings, and explore how this initiative aligns with the principles of open research – and how all these things can drive a culture of collaboration and accountability. We also discussed the impact that ERROR could have on the research community and beyond.

ERROR is a brand new initiative created to tackle errors in research publications through incentivized checking. The TL;DR team sat down for a chat about what this means for the research community through the lenses of research integrity and open research. Watch the whole conversation on our YouTube channel: https://youtu.be/du6pEulN85o

Leslie’s perspective on ERROR

Leslie’s initial thoughts about ERROR were cautious, recognizing its potential to strengthen research integrity but also raising concerns about unintended consequences.

She noted that errors are an inherent part of the scientific process, and over-standardization might risk losing the exploratory nature of discovery. Drawing parallels to the food industry’s pursuit of efficiency leading to uniformity and loss of nutrients, Leslie suggested that aiming for perfection in science could overlook the value of learning from mistakes. She warned that emphasizing error correction too rigidly might diminish the broader mission of science – discovery and understanding.

Leslie: “Errors are part of science and part of the discovery… are we going so deep into science and saying that everything has to be perfect, that we’re losing the greater meaning of what it is to search for truth or discovery [or] understand that there’s learning in the errors that we have?”

Leslie also linked this discussion to open research. While open science encourages interpretation and influence from diverse participants, the public’s misunderstanding of scientific errors could weaponize these mistakes, undermining trust in research. She stressed that errors are an integral, even exciting, part of the scientific method and should be embraced rather than hidden.

Mark’s perspective on ERROR

Mark’s initial thoughts were more optimistic, especially within the context of open research.

Mark: “…one of the benefits of open research is we can move further faster and remove any barriers to building on top of the research that’s gone beforehand. And the most important thing you need is trust, [which] is more important than speed of publication, or how open it is, [or] the cost-effectiveness of the dissemination of that research.”

Mark also shared his enthusiasm for innovation in the way we do research. He was particularly excited about ERROR’s approach to addressing the problem of peer review, as the initiative offers a new way of tackling longstanding issues in academia by bringing in more participants to scrutinize research.

He thought the introduction of financial incentives to encourage error reporting could lead to a more reliable research landscape.

“I think the payment for the work is the most interesting part for me, because when we look at academia and perverse incentives in general, I’m excited that academics who are often not paid for their work are being paid for their work in academic publishing.”

However, Mark’s optimism was not entirely without wariness. He shared Leslie’s caution about the incentives, warning of potential unintended outcomes. Financial rewards might encourage individuals to prioritize finding errors for profit rather than for the advancement of science, raising ethical concerns.

Ethical concerns with incentivization

Leslie expressed reservations about the terminology of “bounty hunters”, which she felt criminalizes those who make honest mistakes in science. She emphasized that errors are often unintentional.

Leslie: “It just makes me cringe… People who make honest errors are not criminals. That is part of science. So I really think that ethically when we are using a term like bounty hunters, it connotes a feeling of criminalization. And I think there are some ethical concerns there with doing that.”

Leslie’s ethical concerns extended to the global research ecosystem, noting that ERROR could disproportionately benefit well-funded researchers from the Global North, leaving under-resourced researchers at a disadvantage. She urged for more inclusive oversight and diversity in the initiative’s leadership to prevent inequities.

She also agreed with Mark about the importance of rewarding researchers for their contributions. Many researchers do unpaid labor in academia, and compensating them for their efforts could be a significant positive change.

Challenges of integrating ERROR with open research

ERROR is a promising initiative, but I wanted to hear about the challenges in integrating a system like this alongside existing open research practices, especially when open research itself is such a broad, global and culturally diverse endeavor.

Both Leslie and Mark emphasized the importance of ensuring that the system includes various research approaches from around the world.

Mark: “I for one think all peer review should be paid and that’s something that is relatively controversial in the conversations I have. What does it mean for financial incentivization in countries where the economics is so disparate?”

Mark extended this concept of inclusion to the application of artificial intelligence (AI), machine learning (ML) and large language models (LLMs) in research, noting that training these technologies requires access to diverse and accurate data. He warned that if certain research communities are excluded, their knowledge may not be reflected in the datasets used to build future AI research tools.

“What about the people who do not have access to this and therefore their content doesn’t get included in the large language models, and doesn’t go on to form new knowledge?”

He also expressed excitement about the potential for ERROR to enhance research integrity in AI and ML development. He highlighted the need for robust and diverse data, emphasizing that machines need both accurate and erroneous data to learn effectively. This approach could ultimately improve the quality of research content, making it more trustworthy for both human and machine use.

Improving research tools and integrity

Given the challenges within research and the current limitations of tools like ERROR, I asked Leslie what she would like to see in the development of these and other research tools, especially within the area of research integrity. She took the opportunity to reflect on the joy of errors and failure in science.

Leslie: “If you go back to Alexander Fleming’s paper on penicillin and read that, it is a story. It is a story of the errors that he had… And those errors were part of or are part of that seminal paper. It’s incredible, so why not celebrate the errors and put those as part of the paper, talk about [how] ‘we tried this, and you know what, the refrigerator went out during this time, and what we learned from the refrigerator going out is that the bug still grew’, or whatever it was.

“You need those errors in order to learn from the errors, meaning you need those captured, so that you can learn what is and what is not contributing to that overall goal and why it isn’t. So we actually need more of the information of how things went wrong.”

I also asked Mark what improvements he would like to see from tools like ERROR from the open research perspective. He emphasized the need for better metadata in research publishing, especially in the context of open data. Drawing parallels to the open-source software world, where detailed documentation helps others build on existing work, he suggested that improving how researchers describe their data could enhance collaboration.

Mark also feels that the development of a tool like ERROR highlights other challenges in the way we are currently publishing research, such as deeper issues with peer review, or incentives for scholarly publishing.

Mark: “…the incentive structure of only publishing novel research in certain journals builds into that idea that you’re not going to publish your null data, because it’s not novel and the incentive structure isn’t there. So as I said, could talk for hours about why I’m excited about it, but I think the ERROR review team have a lot of things to unpack.”

Future of research integrity and open research

What do Leslie and Mark want the research community to take away from this discussion on error reporting and its impact on research integrity and open research?

Leslie wants to shine a light on science communication and its role in helping the public to understand what ERROR represents, and how it fits into the scientific ecosystem.

Leslie: “…one of the ways in which science is being weaponized is to say peer review is dead. You start breaking apart one of the scaffolds of trust that we have within science… So I think that the science communicators here are very important in the narrative of what this is, what it isn’t, and what science is.”

Both Leslie and Mark agreed that while ERROR presents exciting possibilities, scaling the initiative remains a challenge. Mark raised questions about how ERROR could expand beyond its current scope, with only 250 papers reviewed over four years and each successful error detection earning a financial reward. Considering the millions of papers published annually, it is unclear how ERROR can be scaled globally and become a sustainable solution.

Mark: “…my biggest concern about this is, how does it scale? A thousand francs a pop, it’s 250 papers. There [were] two million papers [published] last year. Who’s going to pay for that? How do you make this global? How do you make this all-encompassing?”
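Taking Mark’s figures at face value, the arithmetic underlines his point: at 1,000 francs per paper, extending ERROR’s model to the roughly 2,000,000 papers published last year would cost on the order of 2 billion francs a year in rewards alone, far beyond the initiative’s current scope of 250 papers over four years.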

Conclusion

It is clear from our discussion that ERROR represents a significant experiment in enhancing both research integrity and open research through an incentivized bug-hunting system.

Leslie highlighted how the initiative can act as a robust safeguard, ensuring that research findings are more thoroughly vetted and reliable, while reminding us that the approach needs to be inclusive. Mark emphasized the potential for a tool like this to make publication processes more efficient, and even, at last, to reward researchers for all the additional work they do, but he wonders how it can scale up to foster a more transparent, collaborative research environment that aligns with the ethos of open research.

Leslie and Mark’s comments are certainly timely, given that the theme of Digital Science’s 2024 Catalyst Grant program is innovation for research integrity. You can find out more about how different segments of research can and should be contributing to this space by reading our TL;DR article on it here.

We look forward to exploring more innovations and initiatives that are going to shape – or shatter – the future of academia, so if you’d like to suggest a topic we should be discussing, please let us know.

FoSci – The emerging field of Forensic Scientometrics
https://www.digital-science.com/tldr/article/forensic-scientometrics/ | Wed, 08 May 2024

Our VP Research Integrity, Dr Leslie McIntosh, explains forensic scientometrics (FoSci), the emerging field focused on inspecting and upholding the integrity of scientific research.


Major news stories have recently covered journal editors being bribed, university presidents resigning over questionable research integrity standards, and undisclosed conflicts of interest among faculty and researchers. Each day, it seems, a new story about lapses in research integrity is published, sometimes bringing to light the individuals and groups who discover the misconduct.

This work of inspecting and upholding the integrity of scientific research has long been conducted in the background of science and scholarly communication, carried out by passionate individuals driven by ethics and principle to ensure the veracity of the scientific record and to guard its downstream impacts on policy, practice, and theory.

Groups and individuals, such as RetractionWatch, have monitored, collected, classified, and written about retractions for over ten years. Many of those doing the verifying have developed detection specializations – from nefarious networks to image manipulation to tortured phrases. Additionally, many organizations have developed specific offices and infrastructure to support research integrity – such as the US Office of Research Integrity within the Department of Health and Human Services, institution-based research integrity and security offices, as well as the newly formed research integrity offices located in major publishing organizations.

Despite the growth of investigative research and methods on scientific misconduct, the discipline itself lacks a common definition and description. So, what do you call the collective work of analyzing publications, data, images, and statistics to uncover errors, misinformation, or manipulations in scientific publications? We propose calling this emerging field forensic scientometrics – FoSci for short.

“By embracing FoSci as a specialized and necessary field, we can galvanize interest, foster the development of a community of practice, and signal the importance of this crucial work.”

Dr Leslie McIntosh | VP Research Integrity | Digital Science

Why use forensics? Forensics refers to applying scientific knowledge and methods to matters of law, particularly for investigating crimes and providing evidence in legal proceedings.

First, there is an investigative nature to the work we do, even if the results do not end up in a court of law. While fraud investigations and syndicate networks fall within legal realms, the intentional manipulation of the scientific process itself does not yet have a special place in law. That doesn’t mean it shouldn’t be there.

Second, scientific publications are being used in courts of law as evidence, but the papers themselves and their veracity do not get scrutinized by expert scientometricians. A common belief is that a peer-reviewed academic paper indicates that the research has passed the scientific method stress test. Yet peer reviewers vet (or should vet) the scholarly question within the paper, not the weight of its evidence in a societal context.

Scientometrics, in this larger context, involves the quantitative analysis of scientific publications and research outputs. It encompasses the measurement and evaluation of scientific activities, such as the impact of research, patterns of collaboration among researchers, citation analysis, and the assessment of the productivity and influence of individuals, institutions, or scientific journals. Scientometrics employs statistical and mathematical methods to derive meaningful insights into the structure and dynamics of scientific knowledge, contributing to our understanding of the scientific community’s development and impact over time.

As we navigate the evolving landscape of scientific inquiry, the emergence of forensic scientometrics as a distinct field reflects a collective commitment to upholding research integrity. From the pioneers who have tirelessly exposed misconduct to the institutional changes now taking place, the journey towards a recognized field is well underway. In this era of increased scrutiny, defining and embracing forensic scientometrics – FoSci – becomes essential to strengthening trust in and around science.

Bio

Leslie D. McIntosh, PhD is the VP of Research Integrity at Digital Science and dedicates her work to improving research, reducing disinformation, and increasing trust in science.

As an academic turned entrepreneur, she founded Ripeta in 2017 to improve research quality and integrity. Now part of Digital Science, the Ripeta algorithms lead in detecting Trust Markers of research manuscripts. She works with governments, publishers, institutions, and companies around the globe to improve research and scientific decision-making. She has given hundreds of talks, including to the US-NIH, NASA, and World Congress on Research Integrity, and consulted with the US, Canadian, and European governments. Dr. McIntosh’s work was the most-read RetractionWatch post of 2022. In 2023, her influential ideas on achieving equity in research were highlighted in the Guardian and Science.

Publications and Preprints

McIntosh, Leslie and Hudson Vitale, Cynthia. 2024. Forensic Scientometrics — An emerging discipline to protect the scholarly record. arXiv. https://doi.org/10.48550/arXiv.2311.11344

Porter, Simon and McIntosh, Leslie. 2024. Identifying Fabricated Networks within Authorship-for-Sale Enterprises. arXiv. https://doi.org/10.48550/arXiv.2401.04022

McIntosh, Leslie and Hudson Vitale, Cynthia. 2023. Safeguarding scientific integrity: A case study in examining manipulation in the peer review process. Accountability in Research, pp. 1-19. https://doi.org/10.1080/08989621.2023.2292043

Blogs

McIntosh, Leslie D. (2024): FoSci – The Emerging Field of Forensic Scientometrics. The Scholarly Kitchen.

McIntosh, Leslie D. (2024): Science Misused in the Law. Digital Science TL;DR.

Science Misused in the Law
https://www.digital-science.com/tldr/article/science-misused-in-the-law/ | Wed, 07 Feb 2024

Digital Science has conducted an investigation into 11 papers used in evidence in a prominent abortion drug legal case in the United States. Here are the findings.

A Case Study of the Scientific Articles Cited in the US Mifepristone Court Case


The recent retraction of three published papers, two of which were instrumental in a prominent abortion drug legal case in the United States, highlights the implications of potentially biased research influencing critical legal decisions. This post shares details of a Digital Science investigation into 11 papers used in court evidence – including the now-retracted papers – and what that means for both science and law.

Science as evidence

As scientists investigating trust in and of science, we began to explore how science – specifically published research – is used in the law. A notable concern: if the court system utilizes scholarly articles to establish scientific truths, it relies on a system that may be vulnerable to influence by stakeholders who aim to advance their agendas rather than impartial science.

When science is incorrectly used – specifically in the case of law – there may be far-reaching consequences. Science can be misused or manipulated to add gravitas, particularly in cases that polarize society, are emotionally charged, and fall along societal and political lines. Yet ignorance of science is no excuse for misuse of science in the law.

For scholarly information to be considered credible and trustworthy science, it must meet multiple criteria, including being a representative sample of papers on a given topic and being delivered through trustworthy mechanisms. After those initial checks, the science itself must be sound. Finally, the evidence must be appropriately understood by the courts.


Mifepristone legal challenge

Our case study questions whether the scientific evidence presented in a high-profile court case in the United States (Alliance for Hippocratic Medicine, et al. vs the US Food and Drug Administration) provides an unbiased presentation of the science.

Heard in a Texas federal court earlier in 2023, the legal action was taken against the US Food and Drug Administration (FDA) and aimed at overturning FDA approval of the abortion drug mifepristone. The case will be heard before the US Supreme Court on 26 March 2024.

Eleven published research articles were offered as evidence in this court case. (For the full list, see our Methods and Data section below). The federal judge ordered a suspension of mifepristone’s FDA approval, citing scientific evidence presented to the court.

This case was interesting to us as it made US and international news in the wake of the 2022 Dobbs case in the US Supreme Court, which had overturned Roe v. Wade. Because of our interest in research integrity, we felt this court case was worthy of further investigation, in which we asked ourselves: How were scientific papers and science presented and used as evidence?

What we would have expected: The strongest research on mifepristone, published in quality journals, with rigorous peer reviews and any conflicts of interests described. What we found, however, did not meet these standards.

Retracted articles

Three of the 11 articles cited in the court case were published in one journal, Health Services Research and Managerial Epidemiology. This journal has a recognized publisher behind it, Sage, which has the ability to investigate possible manipulation of the scientific process. Six months after an initial expression of concern, the publisher retracted three of Dr Studnicki’s papers based on undisclosed conflicts of interest and unreliable research methodology. Two of those papers were cited in the Texas court case. In the retraction notice, Sage “confirmed that all but one of the article’s authors had an affiliation with one or more of Charlotte Lozier Institute, Elliot Institute, and American Association of Pro-Life Obstetricians and Gynecologists, all pro-life advocacy organizations, despite having declared they had no conflicts of interest when they submitted the article for publication or in the article itself.”

Expected results

What should we have expected from credible published research in this field?

Representative sample of papers over a given topic

Using the world’s largest linked database, Dimensions, we queried for (mifepristone OR RU-486) AND “medical abortion” over the years 2019-2022, to assess the experts, organizations, and journals in this field of study. We also queried for (mifepristone OR RU-486) AND “medication abortion” over the same years. The results varied slightly but did not change the conclusions, and are not shown below.
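Reproducing this kind of query is possible through the Dimensions Analytics API. Below is a minimal sketch using the dimcli Python client, assuming API access; the key placeholder, search source, and returned fields are illustrative rather than the exact configuration used in this study.

```python
# Minimal sketch of reproducing the Dimensions query with the dimcli client.
# Assumes Dimensions Analytics API access; the key placeholder, the search
# source ('full_data'), and the returned fields are illustrative only.
import dimcli

dimcli.login(key="YOUR_API_KEY", endpoint="https://app.dimensions.ai")
dsl = dimcli.Dsl()

query = r'''
search publications in full_data
    for "(mifepristone OR \"RU-486\") AND \"medical abortion\""
    where year in [2019:2022]
return publications[id+title+doi+year+times_cited]
'''
result = dsl.query(query)  # dsl.query_iterative(query) pages past the 1,000-record limit
print(len(result.publications), "publications retrieved")
```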

Experts

Due to the sensitivity of this research topic, we have not shown the authors’ names but summarize the findings.

Expected authors of the publications

When we query for mifepristone (or RU-486) and “medical abortion” over the years 2019-2022, we find the 25 top researchers (by number of publications, citations, citation mean, and Field Citation Ratio (FCR) for those publications). We note that none of the top 25 researchers – or their papers – were cited in the US court case on mifepristone.

Actual authors of the publications used in court

Using the data from Dimensions, we ascertained that none of the authors of publications cited in the mifepristone court case are among the top 24 in their field in the world. The authors also do not appear to be connected with the top 24 researchers in the field. Instead, the authors of four of the 11 papers presented in the mifepristone court case are all well-connected co-authors with each other. 

Organizations

Expected organizations to have affiliations on the publications

We identified 500 institutions (by number of publications and citations) publishing research in this field. We would have expected to see these institutions and their work cited in the mifepristone court case.

Figure 1: Co-authorship networks of the 500 organizations associated with mifepristone OR RU-486 and “medical abortion” from 2019-2022. Organizational co-authorship networks refer to collaboration networks between researchers across different organizations, measured through co-authored academic papers. Nodes represent organizations. Edges between nodes indicate co-authored papers between researchers from those organizations. The network structure demonstrates collaboration patterns between organizations. Source: VOSviewer image using data from Dimensions.
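The construction behind such a network is simple to sketch. Here is a self-contained example, with made-up publication records standing in for the Dimensions export; as in the figure, nodes are organizations and edge weights count co-authored papers.

```python
# Sketch: build an organizational co-authorship network from publication records.
# The records below are made-up placeholders standing in for a Dimensions export.
from itertools import combinations
from collections import Counter

publications = [
    {"id": "pub1", "orgs": {"University A", "University B", "Institute C"}},
    {"id": "pub2", "orgs": {"University A", "Institute C"}},
    {"id": "pub3", "orgs": {"University B"}},  # single-org papers add no edges
]

edges = Counter()
for pub in publications:
    # Every unordered pair of distinct affiliations on a paper is one co-authorship tie.
    for org_a, org_b in combinations(sorted(pub["orgs"]), 2):
        edges[(org_a, org_b)] += 1

for (org_a, org_b), weight in edges.most_common():
    print(f"{org_a} -- {org_b}: {weight} co-authored paper(s)")
```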

Actual organizations to have affiliations on the publications

The authors of the publications used in the mifepristone court case are primarily affiliated with the Charlotte Lozier Institute (CLI), and have co-authored publications with members of the American Association of Pro-Life Obstetricians and Gynecologists (AAPLOG), among other organizations. These authors have no connections to the highly cited organizations shown in the previous image. Only two of the 11 papers have authors from the institutions in the figure above – those from Planned Parenthood and Princeton University.

Figure 2: Organizational co-authorship networks of the 11 papers cited in the Texas mifepristone case. Source: VOSviewer image using data from Dimensions.

Journals

Expected publication journals to have been represented 

We would have expected journal representation from the top 20 journals by mean citations. ‘Citations (Mean)’ is the mean citation count for a given group of publications being analyzed (a short sketch of this computation follows the table below). Other metrics could also be used to determine the top journals in the field.

ID | Journal Name | Citations (Mean)
1 | Current Opinion in Cell Biology | 103
2 | Nature | 102
3 | Frontiers in Immunology | 82
4 | Endocrine Reviews | 74
5 | MMWR Surveillance Summaries | 58
6 | Human Reproduction Update | 46
7 | Social Science & Medicine | 44
8 | The Lancet Regional Health – Americas | 35
9 | Advances in Pediatrics | 33
10 | Stem Cell Research & Therapy | 33
11 | Human Reproduction Open | 32
12 | Journal of Women’s Health | 32
13 | Reproductive Toxicology | 31
14 | Journal of Clinical Nursing | 30
15 | The Lancet Global Health | 29
16 | JAMA Network Open | 28
17 | Pharmaceuticals | 28
18 | Journal of Health Economics | 26
19 | Drug Delivery and Translational Research | 26
20 | Social Science Research | 24
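As promised above, a minimal sketch of the ‘Citations (Mean)’ computation, with placeholder records standing in for the Dimensions query results:

```python
# Sketch: compute 'Citations (Mean)' per journal and rank journals by it.
# Placeholder records stand in for the publications returned by the query;
# 'times_cited' mirrors the Dimensions field of that name.
from collections import defaultdict

pubs = [
    {"journal": "Journal A", "times_cited": 150},
    {"journal": "Journal A", "times_cited": 56},
    {"journal": "Journal B", "times_cited": 24},
]

by_journal = defaultdict(list)
for p in pubs:
    by_journal[p["journal"]].append(p["times_cited"])

ranking = sorted(
    ((sum(c) / len(c), journal) for journal, c in by_journal.items()),
    reverse=True,
)
for rank, (mean, journal) in enumerate(ranking, start=1):
    print(f"{rank}. {journal}: citations (mean) = {mean:.0f}")
```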

Which journals were used in the case

Title (Source: Mifepristone 2023 Texas Case) | Citation Mean | Rank
Obstetrics and Gynecology | 11 | 78
Health Communication | 11 | 85
Contraception | 7 | 74
Health Services Research and Managerial Epidemiology | 3 | 225
The Linacre Quarterly | 1 | 310
Human Reproduction | 0 | 436
BMJ Evidence-Based Medicine | n/a | n/a
Issues in Law & Medicine | n/a | n/a

Science should be delivered through trustworthy mechanisms

We would have expected that authors, editors, and peer reviewers abide by the guidelines for conflicts of interest

For assessing conflict of interest, we are guided by the International Committee of Medical Journal Editors (ICMJE) definition, wherein “all participants in the peer-review and publication process – not only authors but also peer reviewers, editors, and editorial board members of journals – must consider and disclose their relationships and activities when fulfilling their roles in the process of article review and publication”. (ICMJE, n.d.).

For the journal Health Services Research and Managerial Epidemiology, a Sage journal, its policy on declaring conflicts of interest (COI) can be found here. In summary: All authors must provide a ‘Declaration of Conflicting Interests’ statement to be published with the article, disclosing any financial ties to sponsoring organizations or for-profit interests related to products discussed in the text. If no conflicts exist, the statement should say “None Declared”. Any interests that could appear as conflicts should also be disclosed to the Editor in the cover letter.

A COI could exist if those involved in writing, editing, and peer-reviewing the papers have strong political affiliations; are employees, members, leaders, or founders of organizations invested in the topic; or write papers for, or act as expert witnesses solely aligned with, such organizations. Multiple peer reviewers with diverse affiliations help balance potential biases.

We would expect a balance of scientific experts on a topic, with no conflicts of interest, or declared ones, across the authors, peer reviewers, and article editors. Where COIs exist, we would expect them to vary by person and role (e.g., the author, peer reviewers, and article editor should not all work for, or hold investments in, the same Company X). Note that the role and control of the editorial board varies by journal and publisher, ranging from some decision authority over articles to none.

In this case, we looked for activities of perceived conflicts of interest at the individual and paper level to assess a ‘risk profile’. An individual may have political affiliations but still be objective; they should nonetheless disclose them. The reason for having multiple peer reviewers is to balance potential biases through different reviews. Because none of the journals use open peer review, and we do not know who the specific article editor was for a given paper, we could not assess those items for COI. However, the retraction notice does state that “Sage became aware that a peer reviewer who evaluated the article for initial publication also was affiliated with Charlotte Lozier Institute at the time of the review.”

For further details on understanding conflicts of interest in scientific papers, see this publication: Safeguarding Scientific Integrity: Examining Conflicts of Interest in the Peer Review Process.

What we found regarding declared and undeclared conflicts of interest

Of the 11 papers used in the case, we found eight where a potential conflict of interest went undeclared (see the tally after the table below). Note that declaring conflicts of interest (also known as ‘competing interests’) in publications has become expected practice across disciplines within the past five years.

Publication Year | Journal Title | Article Title | Authors’ affiliated advocacy organization | Declared Conflict of Interest | Possible Conflict of Interest
2012 | Obstetrics and Gynecology | Extending outpatient medical abortion services through 70 days of gestational age | Planned Parenthood | No | Yes
2009 | Obstetrics and Gynecology | Immediate Complications After Medical Compared With Surgical Termination of Pregnancy | None | Yes (Pharma) | Yes
2015 | Contraception | Efficacy and safety of medical abortion using mifepristone and buccal misoprostol through 63 days | Planned Parenthood | No | Yes
2011 | Human Reproduction | Immediate adverse events after second trimester medical termination of pregnancy: results of a nationwide registry study | None | Yes (Pharma) | Yes
2020 | Health Communication | #AbortionChangesYou: A Case Study to Understand the Communicative Tensions in Women’s Medication Abortion Narratives | Anti-abortion | No | Yes
2013 | The Linacre Quarterly | The Maternal Mortality Myth in the Context of Legalized Abortion | Anti-abortion | No | Yes
2021 | Health Services Research and Managerial Epidemiology | RETRACTED: A Longitudinal Cohort Study of Emergency Room Utilization Following Mifepristone Chemical and Surgical Abortions, 1999–2015 | Anti-abortion | No | Yes
2021 | Issues in Law & Medicine | Deaths and Severe Adverse Events after the use of Mifepristone as an Abortifacient from September 2000 to February 2019 | Anti-abortion | No | Yes
2021 | Health Services Research and Managerial Epidemiology | Mifepristone Adverse Events Identified by Planned Parenthood in 2009 and 2010 Compared to Those in the FDA Adverse Event Reporting System and Those Obtained Through the Freedom of Information Act | Anti-abortion | No | Yes
2022 | Health Services Research and Managerial Epidemiology | RETRACTED: A Post Hoc Exploratory Analysis: Induced Abortion Complications Mistaken for Miscarriage in the Emergency Room are a Risk Factor for Hospitalization | Anti-abortion | No | Yes
2011 | BMJ Evidence-Based Medicine | Adolescent girls undergoing medical abortion have lower risk of haemorrhage, incomplete evacuation or surgical evacuation than women above 18 years old | None | No | None Found
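The count of eight follows directly from the table; a quick tally, encoding only the two COI columns in row order:

```python
# Sketch: tally the COI table above. Each tuple is
# (declared_conflict, possible_conflict) for one of the 11 cited papers.
rows = [
    ("No", "Yes"), ("Yes (Pharma)", "Yes"), ("No", "Yes"), ("Yes (Pharma)", "Yes"),
    ("No", "Yes"), ("No", "Yes"), ("No", "Yes"), ("No", "Yes"),
    ("No", "Yes"), ("No", "Yes"), ("No", "None Found"),
]

undeclared = sum(1 for declared, possible in rows
                 if possible == "Yes" and declared == "No")
print(undeclared)  # -> 8
```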

Discussion

Logically, scientific articles have been used as exhibits in this Texas court case because mifepristone is a chemical drug. Yet much of the scientific evidence appeared to be authored by the plaintiffs or by organizations affiliated with them.

If the scientists are the authorities on the subject and have no conflicts of interest, there might be a case for using those studies as evidence. However, we would expect extreme rigor in the science and in the ethics of those involved. The organizations listed on the scholarly papers are either part of the plaintiff group or affiliated with it. Two of the papers show evidence of undisclosed conflicts of interest and unreliable methods, according to the retraction notice. Hence, the ‘science’ that has been cited appears to have been compromised at the very least, and potentially manipulated to serve the aims of those organizations.

At the time of writing, two of the 11 papers used in evidence have been retracted – a move towards correcting the scientific record. While it is too late for the Texas court, which has already considered the retracted papers as part of its proceedings, it should not be too late for further serious questioning of the scientific publications presented in the original case. This issue highlights that more must be done to detect and prevent the manipulation or misuse of science, and conflicts of interest, in the courts.

Methods

Court Case

Alliance for Hippocratic Medicine, American Association of Pro-Life Obstetricians and Gynecologists, American College of Pediatricians, Christian Medical & Dental Associations[1], Dr. Shaun Jester, Dr. Regina Frost-Clark, Dr. Tyler Johnson[2], Dr. George Delgado[3] vs the US Food and Drug Administration (2:22-cv-00223-Z).

Data

Eleven peer-reviewed articles were offered as evidence in this court case and used in this study. ‘Exhibits’: https://doi.org/10.6084/m9.figshare.25203248. We did not examine the statements or prior court cases mentioned in the exhibits.

DOI | Title | Source title | Publisher
10.1177/23333928221103107 | A Post Hoc Exploratory Analysis: Induced Abortion Complications Mistaken for Miscarriage in the Emergency Room are a Risk Factor for Hospitalization | Health Services Research and Managerial Epidemiology | SAGE
10.1177/23333928211068919 | Mifepristone Adverse Events Identified by Planned Parenthood in 2009 and 2010 Compared to Those in the FDA Adverse Event Reporting System and Those Obtained Through the Freedom of Information Act | Health Services Research and Managerial Epidemiology | SAGE
10.1177/23333928211053965 | A Longitudinal Cohort Study of Emergency Room Utilization Following Mifepristone Chemical and Surgical Abortions, 1999–2015 | Health Services Research and Managerial Epidemiology | SAGE
(no DOI listed) | Deaths and Severe Adverse Events after the use of Mifepristone as an Abortifacient from September 2000 to February 2019 | Issues in Law & Medicine | (not listed)
10.1080/10410236.2020.1770507 | #AbortionChangesYou: A Case Study to Understand the Communicative Tensions in Women’s Medication Abortion Narratives | Health Communication | Taylor & Francis
10.1016/j.contraception.2015.01.005 | Efficacy and safety of medical abortion using mifepristone and buccal misoprostol through 63 days | Contraception | Elsevier
10.1179/2050854913y.0000000004 | The Maternal Mortality Myth in the Context of Legalized Abortion | The Linacre Quarterly | SAGE
10.1097/aog.0b013e31826c315f | Extending outpatient medical abortion services through 70 days of gestational age | Obstetrics and Gynecology | Wolters Kluwer
10.1136/ebm.2011.100064 | Adolescent girls undergoing medical abortion have lower risk of haemorrhage, incomplete evacuation or surgical evacuation than women above 18 years old | BMJ Evidence-Based Medicine | BMJ
10.1093/humrep/der016 | Immediate adverse events after second trimester medical termination of pregnancy: results of a nationwide registry study | Human Reproduction | Oxford University Press (OUP)
10.1097/aog.0b013e3181b5ccf9 | Immediate Complications After Medical Compared With Surgical Termination of Pregnancy | Obstetrics and Gynecology | Wolters Kluwer
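The DOIs above can be spot-checked against public metadata. A minimal sketch using the public Crossref REST API, with three of the listed DOIs shown for illustration:

```python
# Sketch: spot-check the exhibit list against public Crossref metadata.
# Uses the public REST API (api.crossref.org); three DOIs from the table
# above are included for illustration.
import requests

dois = [
    "10.1177/23333928221103107",
    "10.1080/10410236.2020.1770507",
    "10.1093/humrep/der016",
]

for doi in dois:
    msg = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30).json()["message"]
    title = msg.get("title", ["<no title>"])[0]
    print(f"{doi}: {msg.get('publisher')} | {title}")
```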


[1] Conducts lobbying activities

[2] https://www.indianasenaterepublicans.com/johnson; mentioned on AAPLOG: https://aaplog.org/caring-for-both-a-curbside-consult-series/

[3] Associated with CLI; AAPLOG board member

Mind the Trust Gap
https://www.digital-science.com/tldr/article/mind-the-trust-gap/ | Sun, 09 Jul 2023

Trust. Five letters, multiple meanings, immense power. Trust arrives on foot and leaves on horseback.[1] Trust is the basis for society, but foundations are fracturing in a world of growing divides.

Trust in research has never been more important in our lifetimes.

In the vast subway system of the scientific world, we must navigate through research integrity. All who create or consume science are on this journey. How do we safely traverse information and hold on to the sanctity of science? How do we mind the trust gap?

In this TL;DR theme we explore important issues surrounding trust through our blogs, podcasts, social media posts and the events Digital Science will be attending:

  • Perception: What does trust in research look like? How can trust in research be a positive force?
  • Identity: Is AI a force for good in research? Who (and what) can we trust in an AI future? What are the impacts on universities globally?
  • Communications: How do trust and peer review fit together in scholarly communications? How do we translate trust from scholarship to society?
  • Networks: Is the breakdown of trust becoming a barrier to collaboration and progress? What role do geopolitical forces play? Is there fragmentation in research that is affecting our trust in processes and publications?

Thought-provoking Articles

We’ve curated articles from our in-house experts as well as those in our community to get under the skin of trust in research and what we can all do to safeguard future research integrity.

A Conflict of Interests – Manipulating Peer Review or Research as Usual?

When are commonly held interests too overlapping for peer reviewers? Examining a case of undeclared conflicts of interest.

SDGs: A level playing field?

A new white paper on the UN SDGs shows more can be done to raise up funding and research recognition for the developing world.


Zooming in on zoonotic diseases

An analysis has revealed disparities in the research effort to combat the growing risk of animal-borne diseases amid climate change.

Reproducibility and Research Integrity top UK research agenda

Digital Science reflections on The House of Commons Science, Innovation and Technology Committee report on Reproducibility and Research Integrity.

The Lone Banana Problem in AI

The subtle biases of LLM training are difficult to detect but can manifest themselves in unexpected places. Digital Science CEO Daniel Hook calls this the ‘Lone Banana Problem’ of AI.

A different perspective on responsible AI

How a school science fair inspired a passion for science communication, a PhD in microbiology, and a valuable perspective on the current AI debate.

Building trust in research

At Digital Science our tools and services are used on a daily basis by millions of researchers and students worldwide. Trust and responsibility to our user community has always been at the core of what we do, and as technology continues to evolve we recognize our role to play in helping to build global trust in research.

We’ve been supporting this since our founding in 2010, with a specific focus in recent years on building practical applications to help, including the investment in and development of the world’s leading tools for building trust in research.

Throughout 2023 we will have a special focus on showcasing the people across Digital Science whose work has a particular relevance to trust in the community. We’ll add those interviews and insights to this post as we publish them.

You can also find us at events and webinars throughout the year, and if you’d like to know more, please get in touch ✉

Footnotes

1. From the Dutch saying Vertrouwen komt te voet en gaat te paard – “Trust arrives on foot and leaves on horseback”.

Our new avenue for interesting things
https://www.digital-science.com/tldr/article/our-new-avenue-for-interesting-things/ | Thu, 27 Apr 2023
Welcome to Digital Science TL;DR, our new avenue for interesting things!

We bring you short, sharp insights into what’s going on across the Digital Science group; both through our in-house experts and in conversation with amazing people from the community. And we’ll keep it brief!

Why TL;DR? Because we’ve all experienced the “Too long; didn’t read” feeling at times, and by explicitly calling this out we’re making sure we provide a short summary at the top of every article here. 🙂

Introducing our core team

We have a core team of five (at present!) who will be the primary authors of new content on the site, often working in collaboration with our in-house experts and those in the scientific and research community.

You can think of it like our core team acting as the lightning rods ⚡ attracting cool, exciting, and sometimes provocative content from across the Digital Science group and our wider community of partners, end users, customers and friends.

And so without further ado, please say hello to: Briony, John, Leslie, Simon and Suze!

Briony Fane

Briony Fane is Director of Researcher Engagement, Data, at Digital Science. She gained a PhD from City, University of London, and has worked both as a funded researcher and a research manager in the university sector. Briony plays a major role in investigating and contextualising data for clients and stakeholders. She identifies and documents her findings, trends and insights through the curation of customised in-depth reports. Briony has extensive knowledge of the UN Sustainable Development Goals and regularly publishes blogs on the subject, exploring and contextualising data from Dimensions.

John Hammersley

John Hammersley has always been fascinated by science, space, exploration and technology. After completing a PhD in Mathematical Physics at Durham University in 2008, he went on to help launch the world’s first driverless taxi system, now operating at London’s Heathrow Airport.

John and his co-founder John Lees-Miller then created Overleaf, the hugely popular online collaborative writing platform with over eleven million users worldwide. Building on this success, John is now championing researcher and community engagement at Digital Science.

He was named as one of The Bookseller’s Rising Stars of 2015, is a mentor and alumni of the Bethnal Green Ventures start-up accelerator in London, and in his spare time (when not looking after two little ones!) likes to dance West Coast Swing and build things out of wood!

Image credit: Alf Eaton. Prompt: “A founder of software company Overleaf, dancing out of an office and into London while fireworks explode. high res photo, slightly emotional.”

Leslie McIntosh

Leslie McIntosh is the VP of Research Integrity at Digital Science and dedicates her work to improving research and investigating and reducing mis- and disinformation in science.

As an academic turned entrepreneur, she founded Ripeta in 2017 to improve research quality and integrity. Now part of Digital Science, the Ripeta algorithms lead in detecting trust markers of research manuscripts. She works around the globe with governments, publishers, institutions, and companies to improve research and scientific decision-making. She has given hundreds of talks including to the US-NIH, NASA, and World Congress on Research Integrity, and consulted with the US, Canadian, and European governments.

Simon Porter

Simon Porter is VP of Research Futures at Digital Science. He has forged a career transforming university practices in how data about research is used, both from administrative and eResearch perspectives. As well as making key contributions to research information visualization, he is well known for his advocacy of Research Profiling Systems and their capability to create new opportunities for researchers.

Simon came to Digital Science from the University of Melbourne, where he worked for 15 years in roles spanning the Library, Research Administration, and Information Technology.

Suze Kundu

Suze Kundu (pronouns she/her) is a nanochemist and a science communicator. Suze is Director of Researcher and Community Engagement at Digital Science and a Trustee of the Royal Institution. Prior to her move to DS in 2018, Suze was an academic for six years, teaching at Imperial College London and the University of Surrey, having completed her undergraduate degree and PhD in Chemistry at University College London.

Suze is a presenter on many shows on the Discovery Channel, National Geographic and Curiosity Stream, a science expert on TV and radio, and a science writer for Forbes. Suze is also a public speaker, having performed demo lectures and scientific stand-up comedy at events all over the world, on topics ranging from Cocktail Chemistry to the Science of Superheroes.

Suze collects degrees like Pokémon, the latest being a Master’s from Imperial College London that focused on outreach initiatives and their impact on the retention of women engineering graduates within the profession.

Suze is a catmamma and in her spare time loves dance and Disney, moshing and musical theatre.

Introducing our core topics

We are focusing our content around a set of core topics which are critical not just to the research community but to the world as a whole; at Digital Science we believe research is the single most powerful transformational force for the long-term improvement of society, and our vision is a future where a trusted, frictionless, collaborative research ecosystem helps to drive progress for all.

With this vision in mind, our five core topics at launch are: Global Challenges, Research Integrity, The Future of Research, Open Research, and Community Engagement.

These topics will no doubt continue to evolve over time, but that gives us a lot to get started with! Here’s the short summary of what those topics mean to us:

Global Challenges

Most of the world’s technical and medical innovations begin with a scientific paper. It has been said that the faster science moves, the faster the world moves.

But perhaps more importantly, society increasingly looks to science for solutions to today’s most pressing social and environmental challenges. If we’re going to face up to complex health issues, an ageing population, and the digital transformation of the world, we need science and research that is faster, more trustworthy, and more transparent.

With this in mind, we explore how science and research, and its communication, is evolving to meet the needs of our rapidly changing world.

Research Integrity

Research integrity will be a dominant theme in scholarly communications over the next decade. Challenges around ChatGPT, papermills, and fake science will only get thornier and more complex. We expect all stakeholders – research institutions, publishers, journalists, funding agencies, and many others – will need to dedicate more resources to fortify trust in science.

Even faced with these challenges, taking the idea of making research better from infancy to integration is exciting. Past and present, our team has built novel and faster ways to establish trust in research. We are happy to have grown a diverse group that will continue to develop the technical pieces needed to assess trust markers.

The Future of Research

Since its inception, Digital Science has always concerned itself with the future of research tools and infrastructure, with many of our products playing a transformative role in the way research is collaborated on, organised, described and analysed. Within this topic, we explore how Digital Science capabilities can continue to contribute to research future discussions, as well as highlighting interesting developments and initiatives that capture our imagination.

Open Research

At Digital Science, we build tools that help the researchers who will change the world. Information wants to be free, and since the dawn of the web, funders have been innovating their policies to ensure that all research becomes open.

Digital Science believes that Open Research will help level the playing field and allow anyone, anywhere, to contribute to the advancement of knowledge. It also helps with other areas that pre-web academia struggled with, including reproducibility, transparency, accessibility and inclusivity.

These posts will cover the why and the how of open research, as it becomes just “research”.

Community Engagement

One of Digital Science’s founding missions was to invest in and nurture small, fledgling start-ups to transform scholarly research and communication. Those founding teams now form the heart of Digital Science, and the desire to make, build, and change things for the better is at the core of what we do.

But we’ve never done that in isolation; Digital Science is a success because it’s always worked with the community, and most of us came from the world of research in one form or another!

In these community engagement posts we highlight and showcase some of the brilliant new ideas and start-ups in the wider science, research and tech communities.

What’s up next?

That’s all for this welcome post, but stay tuned for a whole batch of launch content being written as we speak! We’ll also have regular weekly posts from the team, and would love to hear from you if you have an idea for a subject we should cover, or simply if you’d like to say hello! 

You can contact us via the button in the top bar or footer, or via the social media links for our individual authors. 

Ciao for now!  
