ripeta Archives – Digital Science | https://www.digital-science.com/tags/ripeta/ | Advancing the Research Ecosystem

A Conflict of Interests – Manipulating Peer Review or Research as Usual? | https://www.digital-science.com/blog/2023/01/a-conflict-of-interests/ | Wed, 11 Jan 2023 | When are commonly held interests too overlapping for peer reviewers? Examining a case of undeclared conflicts of interest.


In seeking to define morality and moral actions, the Catechism of the Catholic Church states in paragraph 1753 that, “A good intention (for example, that of helping one’s neighbor) does not make behaviour that is intrinsically disordered, such as lying and calumny, good or just. The end does not justify the means.”

Stephen Sammut, PhD

Science, Scientific Method, and Politics 

It is tempting to think of science in the abstract as objective and pure based on rigorous analysis of empirical evidence. Conversely, politics might often appear less structured and more chaotic, based on subjective values and driven by interest groups and compromises. However, both are human endeavours – neither science nor politics functions solely in the abstract. Both are influenced by biases that are often not evident or transparent to the external observer. The scientific method is one mechanism of checks and balances used to curtail undue, inappropriate, or political influence on science. 

The scientific method teaches researchers to be sceptical and revolves around the performance of rigorous experiments, the collection of data, and the unbiased presentation of results in a format with sufficient explanation and transparency that peers may review, question and reproduce the results. In contrast to the platonic ideal of the scientific method, scientific enterprise in practice is more complex and nuanced. It involves many scientists with complex relationships and drivers, research institutions with needs, funding agencies with stakeholders, and publishers with shareholders. All operate according to their incentives and values. And they compete for support and funding within a society shaped by a complex, dynamic, and multi-stakeholder landscape. 

Politics also operates in what often seems like a detached or parallel universe in which decisions are reached via a mix of scientific and economic evidence, the needs of the general population, and sometimes by influential interested individuals, groups, and companies. 

In reality, science and politics have always been intimately connected, and neither works in practice as they do in theory. Science is political, and although politicians and lobbyists may not use the scientific method, they use science. Science may be used politically but what is crucial is to ensure that politics and subjectivity do not interfere with the scientific method.

Peer review is a check within the framework of scientific communication, but it is not the check. It is, however, the one salient to this story.

Peer review has existed since the 1700s and provides an opportunity to validate scientific research. Having grown into an accepted norm only about 50 years ago, it ideally operates by having knowledgeable, independent experts review scientific research. Most people reading this article understand the broad workings of peer review. The peer reviewers should be independent of each other and experts in a topic covered in the paper (Fig. 1). The reviewers offer insight into the quality of the subject and the strength of the methods. In theory, all actors should be independent of one another, but in practice, this is rarely the case. ‘Peers’ means there should be some overlap among people and their knowledge – the people taking on the review must have the capacity and capability to form a thoughtful critique of a given piece of work. To that end, the editors, peer reviewers, and authors are often part of the same scientific society or even organisation (Fig. 2).

Figure 1: Peer Review Process: Independence; and Figure 2: Peer Review Process: Affiliation Overlap.

Because the peer review process varies and has not been standardised, the difference between optimising and manipulating the process may not be clear. The former is a grey area of knowing how the system works and fine-tuning the approach for professional gain. The latter means understanding how the system works and stepping over community boundaries of acceptable practice. The Committee on Publication Ethics (COPE) offers guidance on peer review, and the International Committee of Medical Journal Editors (ICMJE) states clearly: “Reviewers should declare their relationships and activities that might bias their evaluation of a manuscript and recuse themselves from the peer-review process if a conflict exists.”

See what you think in the following actual case.

Manipulation of Peer Review or Research as Usual?

We take a controversial 2022 research publication as our subject in this case study. However, the nature of the research is not critical to our discussion but rather the scholarly communications process and its integrity – specifically the character of the peer review process. We abstract crucial elements of this case and highlight the most salient and relevant issues. We look at this case without revealing the topic area, as this can be a distraction to the point at hand. 

We identified the current case not via a specific literature search (i.e., a topic-based approach) but by studying variances in trust marker signatures (e.g., hypothesis, conflict of interest, funding statements) across a range of literature, while remaining blind to the subject area. This paper fell outside a specified range of norms for several trust markers. For example, the statement of study purpose did not use the drier language typical of research in this area, which, combined with the lack of a funding statement, raised initial suspicion.
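To give a flavour of the idea, the sketch below scans article text for a handful of trust markers with simple keyword patterns and flags articles whose marker profile deviates from the corpus norm. This is a deliberately simplified illustration, not Ripeta's production algorithm: the patterns, threshold, and toy snippets are assumptions for demonstration only.

```python
import re

# Hypothetical, simplified patterns for a few trust markers; real detection
# (such as Ripeta's) uses trained models rather than keyword matching.
TRUST_MARKERS = {
    "funding_statement": re.compile(
        r"funded by|funding was provided|received no (specific )?funding", re.I),
    "coi_statement": re.compile(
        r"conflicts? of interest|competing interests?", re.I),
    "data_availability": re.compile(
        r"data (are|is) available|data availability statement", re.I),
}

def marker_profile(text):
    """Presence/absence of each trust marker in one article's text."""
    return {name: bool(pat.search(text)) for name, pat in TRUST_MARKERS.items()}

def flag_outliers(articles, min_fraction=0.5):
    """Flag articles missing markers that most of the corpus carries.

    `articles` maps an article id to its full text; `min_fraction` is an
    assumed threshold for treating a marker as 'expected' in this corpus.
    """
    profiles = {aid: marker_profile(text) for aid, text in articles.items()}
    n = len(profiles)
    expected = {m for m in TRUST_MARKERS
                if sum(p[m] for p in profiles.values()) / n >= min_fraction}
    return [aid for aid, p in profiles.items() if any(not p[m] for m in expected)]

# Toy corpus with invented snippets:
corpus = {
    "a1": "This work was funded by Grant X. The authors declare no conflict of interest. "
          "Data are available on request.",
    "a2": "The authors declare no competing interests. Data availability statement: see repository.",
    "a3": "We argue that the earlier study was politically motivated.",
}
print(flag_outliers(corpus))  # ['a3']
```

The point is not the specific patterns but the screening logic: an article missing markers that most comparable articles carry is worth a closer human look.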

Our chosen case involves three guest editors, four peer reviewers, and a single author, all of whom appear to be closely affiliated either in the community or through their professional affiliations. Three peer reviewers work directly for a single private organisation (“Organisation X”). One of the guest editors, the fourth peer reviewer, and the author are all affiliated with Organisation X. However, only one of the peer reviewers listed an affiliation with Organisation X. The other two guest editors are closely aligned with the principles of Organisation X but are leaders in similar organisations. Only one of the peer reviewers came from a traditional academic research institution; the others did not. Nuances of peer review are described elsewhere.

Generally, for proper peer review we expect reviewers to have varying but overlapping knowledge and training in related fields. For example, pairing a topic expert with a statistician on an economics paper gives overlapping fields but different areas of expertise. We also expect a balance of knowledge and affiliations across editors, peer reviewers, and the author. Affiliations may overlap in narrow fields with small or cutting-edge communities, but the case in question is not in a narrow field. The alignment of interests, however, raised a flag.

In summary, the expertise of guest editors, peer reviewers and the author appears to overlap, as do their perspectives, affiliations, and alignment of interests (Fig. 3).
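For illustration only, here is a toy sketch of how such overlaps might be surfaced programmatically from published reviewer and editor metadata. The names, roles, and affiliations below are placeholders, not the people in this case.

```python
from itertools import combinations

# Hypothetical participants and affiliations, for illustration only.
participants = {
    "Editor A":   {"role": "guest editor", "affiliations": {"Org X", "University P"}},
    "Reviewer 1": {"role": "reviewer",     "affiliations": {"Org X"}},
    "Reviewer 2": {"role": "reviewer",     "affiliations": {"Org X", "Advocacy Group Y"}},
    "Reviewer 3": {"role": "reviewer",     "affiliations": {"University Q"}},
    "Author":     {"role": "author",       "affiliations": {"University R", "Org X"}},
}

def shared_affiliations(people):
    """Yield pairs of participants in different roles who share an affiliation."""
    for (name_a, a), (name_b, b) in combinations(people.items(), 2):
        common = a["affiliations"] & b["affiliations"]
        if common and a["role"] != b["role"]:
            yield name_a, name_b, common

for name_a, name_b, common in shared_affiliations(participants):
    print(f"{name_a} and {name_b} share: {', '.join(sorted(common))}")
```

A real check would also need disambiguated organisation names (affiliations are rarely written identically twice), which is exactly why undisclosed associations are so hard to catch automatically.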

Objectively and without specific context, many questions come to mind: When would these overlaps be acceptable while maintaining a robust commitment to research integrity? What other information do you need to know to make that decision? Will the peer reviewers be able to critically and independently evaluate the science within the paper?

Figure 3: Peer Review Process: Case Study.

The Case: When are commonly held interests too overlapping for peer reviewers? 

The case mentioned above is the recently published (and now retracted) paper in Frontiers in Psychology, “The Turnaway Study: A Case of Self-Correction in Science Upended by Political Motivation and Unvetted Findings” (Coleman, 2022). This paper sought to criticise The Turnaway Study, a landmark study describing “the mental health, physical health, and socioeconomic consequences of receiving an abortion compared to carrying an unwanted pregnancy to term”. The article came to our attention through algorithms that flag articles whose trust markers appear irregular. That alert prompted a search of social media and PubPeer, where a corroborating signal was found, and indicated we should look more closely at the trust markers within the article to check that due diligence of scientific processes had been followed. Because Frontiers publishes the names of reviewers and their declared affiliations, researchers can examine those associations in the context of the peer review process and assess the potential for insularity.

Coleman’s article, retracted on 26th December 2022 and described in Retraction Watch, appeared in the journal as part of a research topic (a curated article collection, somewhat like a special issue), Fertility, Pregnancy and Mental Health – a Behavioral and Biomedical Perspective. The research topic was led by three guest editors at Frontiers, while the Coleman article itself had four peer reviewers. The four peer reviewers stated different affiliations, but three are in fact affiliated with the same anti-abortion Charlotte Lozier Institute (CLI), which states on its website that it is: “the preeminent organisation for science-based pro-life information and research.” Moreover, the editor charged with handling this article is affiliated with CLI. Most of these associations, however, were not disclosed (see table and Fig. 4).

| Name | Role | Stated Affiliation | Affiliation with Potential for Conflict of Interest | Cited by CLI* |
| --- | --- | --- | --- | --- |
| Stephen Sammut | Guest Editor | Franciscan University of Steubenville | Charlotte Lozier Institute; former member, WECARE** | 1 |
| Patrick P Yeung | Guest Editor | Saint Louis University | St Louis Guild of the Catholic Medical Association | – |
| Denis Larrivee | Guest Editor | Loyola University Chicago | International Association of Catholic Bioethics | – |
| Robin Pierucci | Reviewer | Homer Stryker MD School of Medicine, Western Michigan University | Charlotte Lozier Institute | 7 |
| Steven Braatz | Reviewer | American Association of ProLife ObGyns | Charlotte Lozier Institute | 4 |
| Tara Sander Lee | Reviewer | Charlotte Lozier Institute | Charlotte Lozier Institute | 8 |
| John Thorp | Reviewer | Carolina Asia Center, University of North Carolina at Chapel Hill | Crisis Pregnancy Center Director | 7 |
| Priscilla K. Coleman | Author | Human Development and Family Studies, Bowling Green State University | Former Director, WECARE** | 4 |
*Cited by CLI means the author wrote or was cited in blog posts or other writings published by the Charlotte Lozier Institute. Note that being cited by CLI does not indicate an endorsement from the person being cited.
**World Expert Consortium for Abortion Research and Education (WECARE).
Figure 4: Peer Review Process: Affiliations.

CLI presented an amicus brief (an expert opinion) to the US Supreme Court on 29th July 2021 in support of overturning Roe v. Wade, the decision that had asserted for the past 50 years that women in the United States have a constitutional right to an abortion. Moreover, one of the peer reviewers for the Coleman article, Robin Pierucci, MD, an associate scholar at CLI, filed a separate amicus brief on 20th July 2020 with the Life Legal Defense Foundation in the Dobbs v. Jackson Women’s Health Organization US Supreme Court case. Priscilla K. Coleman directed the World Expert Consortium for Abortion Research and Education (WECARE), of which Stephen Sammut was one of ten other members. John Thorp’s legal testimonies on abortion have previously come into question, and he has been the medical director of an anti-abortion crisis pregnancy centre for over 40 years.

Giving Air to Unethical Practices

We are passing no comment on the area of research involved here since this is a highly emotive area for many. However, this peer review process is of clear interest in research conduct and integrity viewed independently of the underlying research. Furthermore, our simple example highlights the potential for institutes, peer reviewers, or authors to translate aligned political interests into scientific influence.

A decision-making majority of editors and peer reviewers are members or affiliates of organisations with publicly stated aligned interests; this process does not meet the standard of the independent, unbiased scientific method.

Allowing this paper to be published in the scholarly record provides a sense of unwarranted legitimacy to the arguments. We hope that publishers will learn from this experience and take action.

For those responsible for the paper, including its undeclared conflicts of interest, the end goal of having a ‘peer-reviewed’ article does not justify the means used to get there.

Note: Part of this analysis was presented at the eResearch Australasia conference in Brisbane, Australia, October 2022.

Dr Leslie McIntosh

About the Author

Dr Leslie McIntosh, Vice President, Research Integrity | Digital Science

Dr Leslie McIntosh PhD, MPH is the VP of Research Integrity at Digital Science and dedicates her work to improving research and to investigating and reducing mis- and disinformation in science. An academic turned entrepreneur, she founded Ripeta in 2017 to improve research quality and integrity. Now part of Digital Science, the Ripeta algorithms lead in detecting trust markers of research manuscripts. Dr McIntosh works around the globe with governments, publishers, institutions, and companies to improve research and scientific decision-making. She served as the executive director for the Research Data Alliance (RDA) – US region and as the Director of the Center for Biomedical Informatics at Washington University School of Medicine in St. Louis. She has given hundreds of talks, including to the US NIH, NASA, and the World Congress on Research Integrity, and has consulted with the US, Canadian, and European governments.

Inside Story: How a postgrad plagiarised at least 60 papers in huge publishing scam | https://www.digital-science.com/blog/2022/09/inside-story-how-a-postgrad-plagiarised-at-least-60-papers/ | Thu, 01 Sep 2022 | Investigating a plagiarism "rabbit hole" has paid off for Ripeta CEO Leslie McIntosh.


To fall down a rabbit hole, in today’s usage, implies that while the descent is voluntary, the consequences are nightmarish, with all sorts of hazards and unintended consequences. While this differs from the original meaning in Lewis Carroll’s Alice’s Adventures in Wonderland, as this New Yorker article spells out, there are corollaries that have only been discovered in today’s online-dependent world.

So, when Ripeta co-founder and CEO Leslie McIntosh described the start of her investigation of suspicious publication activity in March 2022 as being “interested in going down this rabbit hole”, she knew that while it would satisfy her curiosity as someone who had built a company on the basis of trying to improve science, it could also lead to some painful realisations as to how research and publications can go awry.

Back Story

The rabbit hole first appeared – as they so often do – in the shape of a tweet in which an author bemoaned the fact that a journal was ignoring entreaties to act on a paper that had evidently plagiarised their own work. Over a year later, nothing had been done, and so the author had taken to social media to air their concerns. After looking into the alleged plagiarism, Dr McIntosh found that the author in question – Md. Sahab Uddin – had published well over 100 papers while at Southeast University in Bangladesh before moving more recently to the University of Hong Kong (HKU). Dr McIntosh’s suspicions were further raised by an acknowledgement in the suspect article of ‘Pharmakon Neuroscience’, which could not be verified in the GRID database maintained by Ripeta’s parent company, Digital Science.

After completing a series of integrity checks using Ripeta’s technology, including cross-referencing co-authors and funding agencies, Dr McIntosh found a number of other, similar acknowledgements of Pharmakon using the Dimensions database, which supports the full-text searching needed for this type of forensic work.
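A simplified sketch of that cross-referencing step is shown below, run over a small, made-up set of records rather than the Dimensions API: search acknowledgement text for an organisation name and group the hits by co-author to see which names recur. The records and names are invented for illustration.

```python
import re
from collections import Counter

# Invented records standing in for full-text search results; a real
# investigation would pull these from a bibliographic database such as Dimensions.
records = [
    {"doi": "10.1000/x1", "authors": ["Author A", "Author B"],
     "acknowledgements": "We thank Pharmakon Neuroscience for research support."},
    {"doi": "10.1000/x2", "authors": ["Author A", "Author C"],
     "acknowledgements": "Supported in part by Pharmakon Neuroscience."},
    {"doi": "10.1000/x3", "authors": ["Author D"],
     "acknowledgements": "No external funding was received."},
]

target = re.compile(r"pharmakon\s+neuroscience", re.I)

# Keep only records whose acknowledgements mention the organisation,
# then count how often each co-author appears across those hits.
hits = [r for r in records if target.search(r["acknowledgements"])]
coauthor_counts = Counter(name for r in hits for name in r["authors"])

print(f"{len(hits)} records acknowledge the organisation")
for name, n in coauthor_counts.most_common():
    print(f"  {name}: {n} paper(s)")
```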

Dr McIntosh explains: “The work my colleagues and I carry out is a way of protecting the academic record, and in so doing supports the increasing number of institutions, publishers, funders, and researchers who place research integrity at the heart of their approach to research.”

“As part of Ripeta’s work providing tools to highlight potential challenges to research and publication integrity principles, we will build a case study so key stakeholders can learn from this example. The work has also fed into Ripeta’s development of new solutions for research integrity.”

Analyses revealed other anomalies in Uddin’s publication record, such as:

  • A large number of articles published (130+) in a short time (from 2016-2021; source: Dimensions)
  • A high number of citations to those papers (2,000+)
  • A high number of verified reviews on Publons (300+).

Further details on Uddin’s publication record can be seen in the Appendix below.
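By way of illustration, the sketch below shows the kind of simple volume check that can raise flags like those listed above. The yearly counts mirror the appendix table below; the thresholds are arbitrary assumptions for demonstration, not Ripeta’s criteria.

```python
# Yearly publication counts (from the appendix table below) and
# illustrative, arbitrary thresholds for flagging unusual volume.
papers_per_year = {2016: 12, 2017: 17, 2018: 9, 2019: 21, 2020: 35, 2021: 34, 2022: 6}
total_citations = 1999

MAX_PAPERS_PER_YEAR = 20   # assumed ceiling for a typical early-career researcher
MAX_TOTAL_PAPERS = 100     # assumed ceiling over a short career span

flags = []
for year, n in sorted(papers_per_year.items()):
    if n > MAX_PAPERS_PER_YEAR:
        flags.append(f"{year}: {n} papers (> {MAX_PAPERS_PER_YEAR}/year)")

total = sum(papers_per_year.values())
if total > MAX_TOTAL_PAPERS:
    flags.append(f"{total} papers over {len(papers_per_year)} years (> {MAX_TOTAL_PAPERS} total)")

print(f"Total papers: {total}, total citations: {total_citations}")
for f in flags:
    print("FLAG:", f)
```

On its own, high volume proves nothing; such checks only prioritise records for the kind of human review described next.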

These and other ‘trust markers’ led Dr McIntosh to consult Digital Science colleague Simon Linacre, who, in addition to having 20 years’ experience in academic publishing, is a Trustee of COPE with expertise in deceptive publishing practices. With the findings of the investigation confirmed, it was time to approach the author himself.


No Reply

In May 2022, Linacre used several verified email addresses to contact Uddin with a number of questions about the high volume of his articles and their similarities to other papers. Uddin was contacted three times to no avail, which is common when dealing with suspicious actors in the publishing environment. It was then decided to contact his current employer, which sparked an immediate response not only from HKU but also from Uddin himself, all on the same day. Uddin apologised for not replying, saying the earlier emails had gone into his junk folder, but supplied scant details in response to the initial queries. Further queries were sent to Uddin, this time copying in HKU, before the university announced a formal investigation.

The university acted quickly. Little more than a month into its investigation, HKU found evidence of improprieties on the part of Uddin. It said he had admitted to copying 60 published papers out of a total of 180 published in the last few years – industrial-scale plagiarism made possible by using paraphrasing software to escape detection by anti-plagiarism tools. At this stage, it is still unclear how the other papers were written and published so quickly, and which papers he authored legitimately. Overall, 90% of the articles published by Uddin are indexed in PubMed, across more than 100 different journals and 10 different publishers.
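Paraphrasing tools defeat exact-match plagiarism checks because they change the surface wording of copied text. As a rough illustration of why looser similarity measures still show a signal (this is not the tooling HKU or Ripeta used, and the sentences are invented), the snippet below compares an exact-string check with a bag-of-words TF-IDF similarity using scikit-learn, assuming it is installed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

original   = "Oxidative stress plays a central role in the progression of Alzheimer's disease."
paraphrase = "The advancement of Alzheimer's disease is driven largely by oxidative stress."
unrelated  = "The committee met on Tuesday to review the annual departmental budget."

docs = [original, paraphrase, unrelated]

# Exact-match check: identical strings only -- a paraphrase sails straight through.
print("exact match:", original == paraphrase)   # False

# TF-IDF cosine similarity: shared vocabulary still leaves a trace.
vectors = TfidfVectorizer().fit_transform(docs)
sims = cosine_similarity(vectors)
print("original vs paraphrase:", round(sims[0, 1], 2))  # noticeably above zero
print("original vs unrelated: ", round(sims[0, 2], 2))  # close to zero
```

Production plagiarism detectors go much further (semantic embeddings, cross-language checks), but the principle is the same: compare meaning, not just strings.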

Soon after concerns were raised with HKU, Uddin’s ORCiD profile changed from fully populated to empty: all citations and references were deleted. In late June, HKU signalled that its investigation had concluded, and informed Digital Science that Uddin would withdraw from his PhD programme at the university, effective 1st August.

Speaking on behalf of HKU, Professor Danny Chan, Director (Education and Development of Research Integrity), said: “Upholding research integrity has always been, and will continue to be, our top priority. Any breach of trust is not only detrimental to our career, but also to the funders and the academic world. We strive to ensure our students and colleagues are putting this commitment in daily practice.”

Looking Ahead

Since the story was published in Retraction Watch on 25 August, a number of enquiries have been made about other aspects of the case. Who was the author in question? Why did they plagiarise so many articles? Could more have been done to identify what was going on earlier?

As Dr McIntosh commented on the Retraction Watch article when someone asked about Uddin’s acceptance into a PhD program at HKU: “This means the university must have access to both a database with all the publications and a means to disambiguate the author’s name. If anything, it shows the cracks in our current ecosystem and that HKU is willing to improve.”

To ponder too much on the motivations of an individual author risks overlooking the systemic issues that can drive researchers to extreme lengths to build a stacked CV. The proverbial ‘publish or perish’ culture still persists in many research environments, and it is unlikely Uddin is the first to use new tools such as paraphrasing software to pad out a publication record. The good news is that, in addition to Ripeta’s existing solutions, it is developing new tools and integrations – aided by the Uddin investigation – that should help institutions and publishers head off such problems in the future. The bad news is that the investigation into Uddin opened up several other ‘rabbit holes’ in the shape of co-authors and other articles acknowledging Pharmakon support. There is still a long way to go before Alice hits the bottom of this particular rabbit hole, and it is unlikely to end up anywhere like Wonderland.

Appendix

Md. Sahab Uddin’s publications and citations per Dimensions* (unique DOIs and times cited, by publication year)

| Publication Year | Unique DOIs | Sum of Times Cited |
| --- | --- | --- |
| 2016 | 12 | 101 |
| 2017 | 17 | 181 |
| 2018 | 9 | 176 |
| 2019 | 21 | 837 |
| 2020 | 35 | 571 |
| 2021 | 34 | 132 |
| 2022 | 6 | 1 |
| GRAND TOTAL | 134 | 1999 |

* This is the number of publications Ripeta identified with just the author’s name and the number of citations to papers during that year.

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has worked in scholarly publishing for almost 20 years. His background is in journalism, and he has been published in academic journals on the topics of bibliometrics, publication ethics and research impact.

Sci Foo returns face-to-face in 2022 | https://www.digital-science.com/blog/2022/06/sci-foo-returns-face-to-face-in-2022/ | Wed, 15 Jun 2022 | The Digital Science team is getting ready to attend the annual Science Foo Camp (Sci Foo) in San Francisco, California.

The Digital Science team is getting ready to attend the annual Science Foo Camp in San Francisco, California this weekend – and we’re excited, because for the first time since 2019 the event will be held face-to-face as well as online.

Sci Foo, as it’s known, is an “unconference” with no fixed agenda, and brings together researchers, innovators, technologists, communicators and policy makers from around the world who are doing groundbreaking work in diverse areas of science and technology. Attendance is by invitation only.

Image: A sketch by Alex Cagan of some of the Digital Science Sci Foo 2019 crew.

Since the first event in 2006, Sci Foo has aimed to do things differently. Tim O’Reilly, of O’Reilly Media, had created the Friends of O’Reilly (FOO) Camp as a format for bringing together thinkers from different fields, but it was Linda Stone who suggested that Timo Hannay (of Nature), Chris DiBona (of Google) and Tim join forces to create a camp that brought computer scientists together with researchers and technologists.

From Digital Science, I as Head of Strategic Events and our CEO Daniel Hook are co-organisers of the event, along with Cat Allman at Google, Tim O’Reilly and Marsee Henon from O’Reilly, and Magdalena Skipper from Nature. We are ably assisted by many stalwart colleagues from across all four collaborators, who freely give their own time to support the event each year.

The topics of discussion are truly wide-ranging, and include: climate, medicine and disease, machine learning, AI, food systems, astrophysics, sustainability, neuroscience, digital society, and the various health, social, political, technological and economic impacts of the pandemic. No matter what area is being discussed, this diverse group brings a unique level of insight and expertise to the discussion, often sparking new thinking and ideas that can help to drive each individual to continue their work with renewed passion.

At Sci Foo 2022, we’re looking forward to many conversations, “lightning talks” and catching up with our fellow organisers and attendees, old and new. For those unable to attend in person, there will be opportunities to join some sessions virtually. It’s our first ‘hybrid’ event, and if successful we hope to continue with this approach.

To understand more about Sci Foo, see this video from 2018 in which we asked a number of scientists what the future might hold. You can also read about past events, such as Sci Foo 2018, 2019, or the virtual Sci Foo 2021.

Video: Scientists predict the future at Sci Foo 2018.

If you’re lucky enough to be attending this year’s event, please don’t hesitate to say hi to our Sci Foo crew, including Daniel, Amarjit, Suze Kundu from Dimensions L&C, Steve Scott, Leslie McIntosh from Ripeta, and John Hammersley and Jessica Lawshe from Overleaf.

Look out for online chat about the event via the official hashtag #scifoo and discussion on Twitter and LinkedIn from the Digital Science team.

About the Author

Amarjit Myers is Head of Strategic Events at Digital Science.

Motivations of Bad Actors in Science: The Personal, The Professional, The Political | https://www.digital-science.com/blog/2022/05/motivations-of-bad-actors-in-science/ | Thu, 26 May 2022 | From lone wolves to science mercenaries, why do charlatans in science exist, what do they stand to gain, and what can be done about them?

Scientific publications can serve as key evidence for policymakers, as well as provide discussion points to inform public debate. For example, comprehensive, systematic reviews of the literature regularly influence recommendations such as medical guidelines for public health policy around major issues such as the COVID-19 pandemic. The growing number of preprints should, in theory, provide a faster, albeit less reviewed, mechanism for researchers to share their work during the pandemic. However, it also means that the means to meddle with scientific communication are that much more available. But what would motivate a person, group of people, or even an organisation to intentionally game the scientific system? Personal, professional, or political – the motivations exist within people who want fame and fortune to fast-track their ambitions, whether they use fair means or foul.

Charlatans in science are sadly not new. Persons making grandiose claims about their knowledge and outrageous cures for diseases have peppered medical history for centuries. With charismatic personalities and opportunities to influence, such individuals have professed false cures in the house of Tsar Nicholas (Rasputin) and misled ailing Londoners during an epidemic (Gustavus). Charmers playing by their own rules – gaslighting others.

Image: Dictionary definition of ‘charlatan’. “Charlatans in science are sadly not new.” Stock image.

Even in the least nefarious circumstances, lone actors can emerge to try to falsify science. Immense pressures placed on scientists to conduct research, publish results and have those results cited would tempt anyone to search for shortcuts. Researchers are humans, after all. Implement these requirements in an environment that supports gamifying just about anything, and even the most honest person could fail under the pressure that’s exerted.

In one case, where citations to someone’s work were required, a researcher created fictitious authors in plagiarised papers to cite that work – their own work, in fact. Dr Yibin Lin posted six papers and attempted to submit eight more to preprint servers (see one example here). This attempt to accelerate promotion resulted in a 10-year ban from scientific research within the US.

In other cases, the motivations can only be understood from the people themselves, for example those individuals who fake being scientists. There is a long history of people outside of science providing advice as if they were experts. Some amazing citizen scientists exist, but the signal-to-noise ratio favours chaos more than substance.

There are parallels with predatory journals and those who publish their articles in them. In her seminal article on the motivations of authors published in such journals, Tove Faber Frandsen identified two main groups – the unaware and the unethical. The former claim to be ignorant of the existence of predatory journals and innocent in succumbing to the tried and tested tactics of predatory publishers; the latter, on the other hand, exploit their existence to secure publications – sometimes to reap the incentives in place for them, sometimes to publish unsubstantiated research. Most clearly, this has occurred during the pandemic, with everything from 5G conspiracy theories to the promotion of debunked drugs and therapies appearing in fake journals.

“Even in the least nefarious circumstances, lone actors can emerge to try to falsify science.” Stock image.

As harmless as acting alone may seem in such an expansive scientific ecosystem, the consequences of a lone wolf pale in comparison to targeted attacks. Science mercenaries, well funded by and coordinated with various industries, can intentionally fracture confidence in a topic. Seitz and Singer – trained physicists later hired to cast doubt on the harms of tobacco and climate change – worked on the atomic bomb and had legitimate education and training as physicists. As described in depth in Merchants of Doubt, the two scientists would eventually testify in court as if they were experts in epidemiology, environmental science, virology, and dozens of other areas, undermining confidence in the overwhelming scientific evidence of the harm of tobacco and the impact of climate change. They are not alone: a whole industry exists to profit from undermining science. Worried that second-hand smoke may kill your industry? The answer, it seems, is to kill the research by overwhelming the regulatory agencies and polluting the scientific literature.

“In the end, the tactics of the few will overpower an ecosystem lacking a robust strategy.” Stock image.

As with other pandemics, we have seen a plethora of charlatans emerge during the COVID-19 pandemic – from the MBAs who would ‘set the record straight on COVID’ to the self-proclaimed experts on COVID and policy. To illustrate one scam: a scientific piece would be written, typically with one author holding credentials in a scientific field. The co-authors either do not exist (e.g. the Yan report) or do not have supporting credentials (e.g. research conducted by Walach). In some cases, the ‘papers’ appear in repositories with little proofing evident. In other cases, the work gets published in ostensibly peer-reviewed journals – meaning the peer review, if it happened at all, was not rigorous, as with articles published in predatory journals. Success, in the eyes of the authors, is a scientific social media outcry in which someone eventually shreds the methodology. But the authors have won. It is a misinformation strategy: i) put out bad-faith information on Topic X; ii) the methodology of Topic X is thoroughly refuted; iii) Topic X is discussed; iv) the words of Topic X are propagated. A win for the bad-faith actors.

This would be like writing an article titled: Squirrels – Wonderful Companions in the Garden. As everyone knows, evil squirrels steal tomatoes from the garden and maliciously throw acorns from trees, trying to deprive the owners of any peace. Cute con artists at best. The authors’ intent would be to spread the lie of the delightful aspects of squirrels – intentionally putting the key phrase they want propagated in the article title. So when a social media argument ensues, good scientists cite the title as-is and the bad-faith actors’ message sticks: squirrels are lovely garden mates. And the lie spreads – because we as scientists, playing by scientific rules, indulge in critiquing the methodology before deciding on the legitimacy of the source. We legitimise their argument.

An individual infiltrating published science with falsehoods still pollutes the ecosystem. But the motivation to put profit over protecting society causes harm at scale. In the end, the tactics of the few will overpower an ecosystem lacking a robust strategy.

Countering nefarious acts and actors must be coordinated throughout the scholarly community – publishers, institutions, and funding agencies, to name a few. Policies and practice must move from the current defensive, reactive position, to the offensive – taking proactive measures to prevent harmful players entering the ecosystem and promoting automated quality checks that scale with the pace of scholarly communication.

For more information about how Ripeta can help make better science easier – for publishers, funders, researchers and academic institutions – please visit the Ripeta website.

Dr Leslie McIntosh

About the Author

Dr Leslie McIntosh, CEO | Ripeta

Dr McIntosh is the founder and CEO of Ripeta, a company formed to improve scientific research quality and reproducibility. Part of Digital Science, Ripeta leads efforts in automating quality checks of research manuscripts. An academic turned entrepreneur, Dr McIntosh served as the executive director for the Research Data Alliance (RDA) – US region and as the Director of the Center for Biomedical Informatics at Washington University School of Medicine in St. Louis. Over the past several years, she has dedicated her work to improving science.

Is research reaching the reader? | https://www.digital-science.com/blog/2022/03/research-reaching-the-reader/ | Wed, 09 Mar 2022 | How research is communicated is central to the function of a healthy research community and, as such, is close to our hearts at Digital Science.


Why truly listening is so important for scholarly communications to succeed

How research is communicated is central to the functioning of a healthy research community and, as such, is close to our hearts at Digital Science. The Researcher to Reader (R2R) Conference is known to share that love, so it was great to see the return of the in-person gathering last month. It was an event where interested parties could engage with each other and take part in discussions on how research communication has changed through the pandemic and how it will continue to change in the future.

A key challenge highlighted at R2R and which we are very familiar with at Digital Science was: Do we know how research is reaching the reader? Are we ensuring research is the single most powerful transformational force for the long-term improvement of society?

Researchers have an increasingly large number of ways to consume research results, ranging from the traditional article and raw research data to video abstracts, lay summaries and even tweets. In the last few years, consumption of open research data has developed significantly – this is certainly due to open data mandates and technologies that allow for the visualisation of data in online previews (see for example, Figshare). This dizzying range of formats and venues poses new challenges for researchers – where should they look to find new research? What can they trust? 

To address these challenges, it is important to listen and act on the needs of readers. However, what feedback loops are acceptable in the context of research? Fundamentally, those of us in the scholarly communications space need to understand the challenges of living in a world where surveillance capitalism is recognised not only as an invasion of privacy but also as a threat to building trust in an increasingly online world. At the same time, we should try to build the best experience for users in the research ecosystem so that they can form trusted, collaborative and effective relationships online. This is not a transactional process, but a collaborative one.

Talks and debates at this year’s R2R were full of such collaborations, centred around reader access, open science and research data discoverability. Workshops saw deep dives into challenges such as early career researcher insights and creating a disability toolkit for scholarly publishing, the latter facilitated by Digital Science’s own Katy Alexander, Global Director Marketing Operations & Analytics.

Digital Science’s goal is to change the scientific ecosystem by challenging the way things are done and acting on the needs of its users and customers. This is happening across its portfolio and all stages of the research lifecycle – from Altmetric tracking the most influential 100 articles to Dimensions classifying research grants by Sustainable Development Goals; from Figshare’s annual State of Open Data report to new solution Ripeta’s reproducibility checks.

So, Digital Science is here to listen, to collaborate and to contribute. In 2022, the world sees continuing disruption to our way of life in the shape of the long-term impacts of the Covid-19 pandemic on our health and our society, as well as more immediately visceral issues such as the conflict in Eastern Europe. We continue to believe that research is key to issues like these and that the global research community needs to be supported with the right infrastructure to make a difference. By listening to the community and working with them to maximise research outcomes, we hope to make our contribution to a better future.

 

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has worked in scholarly publishing for almost 20 years. His background is in journalism, and he has been published in academic journals on the topics of bibliometrics, publication ethics and research impact. 

Back to school – how our tools can help your research | https://www.digital-science.com/blog/2021/09/back-to-school-2021/ | Thu, 09 Sep 2021 | As we head into a brand new term, here are some of the ways we can support you, whether you're returning to campus or working from home.

Grab your textbooks and charge up those laptops, because it’s time to get back to school!

For many members of our research community, a new term is accompanied by many other novel opportunities. Perhaps you’ll be teaching a new course. Maybe you will be starting a long-awaited new research project. Have you finally found the perfect excuse to buy that new notebook you’ve had your eye on? Of course you have. Treat yourself!

While some things have remained unchanged during this pandemic, many of our regular research, teaching and learning activities have become more challenging. We’ve navigated new virtual learning environments, tried not to break expensive, remotely accessible lab equipment, and attended conferences from home while simultaneously juggling life admin, often physically as well as metaphorically.

Image: A university campus in autumn.

At Digital Science, we offer a range of solutions that help keep your research going during these interesting times, and even when we return to that long-awaited ‘normal’. As we head into a brand new term, here are just some of the many ways that we can support you, our research community, whether you’re returning to campus or labs, or whether you’re continuing to work from home.

We’ve got a whole page dedicated to our COVID-19 initiatives, all designed to help you navigate your way through the COVID-19 crisis. Use our free search for COVID-19 related research outputs in Dimensions to discover the latest research, host your lecture slides, notes, conference outputs and research data on Figshare, write up your results with your collaborators wherever they are in the world using Overleaf, plan your next research project using Symplectic’s Research Funding Solution, find books relevant to your research using Altmetric, or prepare for your next research role with Scismic.

In more recent developments, Writefull’s new Full Edit mode helps you to proofread your scientific texts. Full Edit mode delivers the best AI-based language feedback you’ll find – it is even tailored to scientific writing, so you know that your research outputs are going to be impactfully worded.

As a researcher, being able to quickly find relevant information is crucial. Dimensions provides free access to over 120 million publications and preprints to help you find exactly what you need, quickly and easily. Furthermore, its in-built analytical tools help you gain actionable insights to guide your research in the right direction. Find out more in this introduction for researchers, or dive right in.

ReadCube’s Papers App is your all-in-one literature management tool designed to keep the clutter off your desk so you can spend more time focusing on your research. Papers allows you to read and annotate your research literature, share your papers with collaborators, and cite your fundamental research quickly and easily in your own research publications.

What about when it comes to reporting on the outputs and impact of your research and teaching? Symplectic Elements helps you showcase all of your academic achievements and activities by collecting and curating research information in an easy-to-maintain search and discovery interface. Use Elements to demonstrate impact and expertise, discover internal and external collaboration, find mentorship opportunities, and easily dive into connections across institutional networks.

These are just some of the ways we are helping support our researcher community. If you want to know more about how our tools can help you, head to our website or get in touch with us by email or on Twitter. We’d love to hear about how we’re helping you make a difference.

We wish you a great start to the new term!

Provocative Paper Titles | https://www.digital-science.com/blog/2021/08/provocative-paper-titles/ | Tue, 24 Aug 2021 | Does a disconnect between a paper’s abstract and its title indicate a potential need to inspect the article for possible trust issues?

By Dr Leslie McIntosh, Founder and CEO of Ripeta, and Dr Hilde van Zeeland, Applied Linguist at Writefull.

At Ripeta, we develop tools to automatically scan manuscripts for key scientific quality indicators and provide feedback on ways to improve research reporting. We assess, design, and disseminate practices and measures to improve the reproducibility of, and trust in science with minimal burden on scientists.

In what can often feel like a sea of dry scientific writing, provocative titles in scientific research papers stand out. Occasionally, legitimate scientists conducting good research will attempt more humorous titles. Sometimes, they even land! 

To highlight the joy of a jaunty paper title, our friends at Writefull, providers of AI-based research proofreading services, have developed a fun app to generate scientific paper titles based on article abstracts. When pondering paper titles, I wondered whether a disconnect between a paper’s abstract and its title could indicate a potential need to inspect the article for possible trust issues, and what better way to investigate it than to use their app!

And so, without further ado, or indeed statistical significance, I present three articles: 

Article 1: A (very not) scintillating title 

Human Created Title

An analysis of form and function of a research article between and within publishers and journals

Writefull’s Computer-Generated Title Possibilities
  • Research Article Heading Organization and Forms for Machine Learning and Natural Language Processing: A Case Study from a Single Institution
  • A Heading Form and Function Analysis for Machine Learning
  • Research Article Heading Form and Function Analysis Using Rhetorical Functions

Picking an enticing article we just published at Ripeta about research article headings and subheadings, I wanted to see how our paper’s title compared to those generated by a computer. Based on the alternative possibilities, the similarity of the titles reassured me that we had framed our paper as dryly as possible, in keeping with scientific naming conventions. Quite an interesting article when training machine learning algorithms to parse and categorize articles – however, definitely not click-bait.

Image: The paper “An analysis of form and function of a research article between and within publishers and journals” on an iPad screen.

Article 2: A title from an author trying to be clever (apologies Dr. Luke)

Human Created Title

Where there’s smoke there’s money: Tobacco industry campaign contributions and U.S. Congressional voting

Writefull’s Computer-Generated Title Possibilities
  • Voting Behaviors of Representatives from the Tobacco Industry Political Action Committees in the United States: A Cross-Sectional Analysis
  • The Effectiveness of Campaign Contributions for Tobacco-Related Legislators in the United States: A Cross-Sectional, Multilevel Model
  • Voting Behavior of Tobacco Industry Political Action Committees

A search in Dimensions shows over 160 articles alluding to the proverb ‘where there’s smoke’ in the title – not that uncommon; maybe even overused? From personal experience, Dr. Doug Luke enjoys using more flavourful titles for his papers and talks, to make statistics sound as interesting as it really is. The generated titles compare favourably to the part of the original title after the academic colon.

Image: A Dimensions record for the paper “Where there’s smoke there’s money: Tobacco industry campaign contributions and U.S. Congressional voting” on an iPad screen.

Article 3: A provocative title (from a retracted article)

Human Created Title

The Safety of COVID-19 Vaccinations—We Should Rethink the Policy

Writefull’s Computer-Generated Title Possibilities
  • Vaccine Safety and Risk Assessment for mRNA Vaccine COVID-19
  • Vaccination of COVID-19: A Review of the Safety of Vaccines
  • Safety Evaluation of COVID-19 Vaccines: The mRNA Vaccination versus the Number Needed for Vaccination

The problem with this title is that the authors put a recommendation into the title, which pushes at the boundaries of scientific cultural norms. In fact, the phrase ‘rethink the policy’ appears in only a handful of article titles. More troublesome is that the recommendation in the title does not logically follow from the paper, as also reflected by the auto-generated titles given by Writefull. Before even considering the paper’s fraught methods, we know that the title and the substance of the paper do not agree with each other.
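One crude way to quantify this kind of title–substance disconnect (a sketch only, not Writefull’s or Ripeta’s method) is to measure lexical similarity between a title and its abstract and flag papers where it is unusually low. The titles, abstract snippets, and threshold below are invented for illustration and assume scikit-learn is installed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def title_abstract_similarity(title, abstract):
    """Cosine similarity between TF-IDF vectors of a title and its abstract."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform([title, abstract])
    return float(cosine_similarity(matrix[0], matrix[1])[0, 0])

# Invented toy examples, not the papers discussed above:
papers = [
    ("An analysis of heading form and function in research articles",
     "We analyse the form and function of headings across journals and publishers..."),
    ("We should rethink the policy",
     "We review safety and efficacy data for a class of vaccines and model risk..."),
]

SIMILARITY_FLOOR = 0.1  # assumed threshold for flagging; would need tuning on real data
for title, abstract in papers:
    score = title_abstract_similarity(title, abstract)
    flag = "  <-- inspect" if score < SIMILARITY_FLOOR else ""
    print(f"{score:.2f}  {title}{flag}")
```

Lexical overlap is a blunt instrument (a clever pun shares no words with its abstract either), so a low score is only a prompt for human inspection, never a verdict.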

Provocative paper titles remind us that, first, scientists are able to laugh at themselves a little, and second that the title itself could have a bearing on the readership and thus the exposure of the science within. Could there be a relationship between paper titles and trust? We’d love to hear your thoughts. Tweet us @ripetaReview.

Image: The paper “The Safety of COVID-19 Vaccinations—We Should Rethink the Policy” on an iPad screen.

Want to try your hand at the title generation app? Go to the Writefull Title Generator and let us know what you found @Writefullapp and @ripetaReview.

At Ripeta we will keep exploring and automating checks to make better science easier. To learn more, head to the Ripeta website or contact us at info@ripeta.com.


Dr. Leslie McIntosh
CEO and Founder, Ripeta

Leslie is the founder and CEO of Ripeta and a researcher passionate about mentoring the next generation of data scientists. She is active in the Research Data Alliance, grew the St. Louis Machine Learning and Data Science Meetup to over 1,500 participants, and was a fellow with a San Francisco-based VC firm. She recently concluded her role as Director of the Center for Biomedical Informatics (CBMI) at Washington University in St. Louis, where she led a dynamic team of 25 individuals providing biomedical informatics services. Dr. McIntosh focuses on assessing and improving the full research cycle and making the research process reproducible.

Curious Case within Preprints: Is the Author Real? | https://www.digital-science.com/resource/curious-case-within-preprints/ | Mon, 10 May 2021 | This article highlights one case of a person acting as a scientist and placing papers on multiple preprint platforms.


Case within Preprints: Is the Author Real?

An assumption at the heart of the scientific publication process is that the author of a manuscript is a scientist. But how can we tell when that is not the case? This supplements an article in The Scholarly Kitchen highlighting one case of a person acting as a scientist and placing papers on multiple preprint platforms.

We are committed to supporting researchers on their path to more open and reproducible research.

Reproducibility by design

Reproducibility should be a natural and integral part of the research process – embedded and invisible wherever possible. We are committed to supporting researchers on their path to more open and reproducible research.

Ripeta

Ripeta aims to make better science easier by providing a quick and accurate way to assess the trustworthiness of research. Ripeta focuses on assessing the quality of the reporting and robustness of the scientific method.

Making Science Better

The report focuses on the increasing importance of failure in supporting modern research and addresses three areas: appropriate documentation and sharing of research data, clear analysis and processes, and the sharing of code.

Imposters and Impersonators in Preprints | https://www.digital-science.com/resource/imposters-and-impersonators-in-preprints/ | Wed, 17 Mar 2021 | Known challenges do exist but now is the time to build a coalition – to foster credibility and integrity into the open science ecosystem.



Imposters and Impersonators in Preprints: How do we trust authors in Open Science?


In this Scholarly Kitchen post, Leslie D. McIntosh, founder and CEO of Ripeta, tackles indicators of trust and the curious cases of imposters and impersonators in COVID preprints. Leslie walks us through an example to illustrate how open science practices have been manipulated through fake authorship.

Known challenges exist, including fake peer reviewers, paper mills, and falsified institutional affiliations. These challenges offer us all the opportunity to discuss, as a community, the checks and balances we can put in place. Now is the time to build a coalition – to foster credibility and integrity in the open science ecosystem.

Reproducibility By Design | https://www.digital-science.com/challenge/reproducibility-by-design/ | Mon, 21 Dec 2020 | Reproducibility should be a natural and integral part of the research process – embedded and invisible wherever possible.


Reproducibility by Design

Reproducibility should be a natural and integral part of the research process – embedded and invisible wherever possible. However, in recent years the research world has encountered a reproducibility crisis, whereby the methods, analytical software, and data reported have not been shared fully, openly, or accurately.

We are committed to supporting researchers on their path to more open and reproducible research through the development and implementation of technological solutions.

A Transparency SnapShot

Author awareness and compliance needed

To understand the adoption and impact of transparency guidelines, the Ripeta team analysed manuscripts from the 25 highest-impact journals published in 2019. The results indicate that even the most compliant journals failed to ensure that half of their authors included data availability statements. These results signal the need for better author awareness and improved methods for checking compliance.
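As a toy illustration of the kind of compliance check behind such an analysis (not the Ripeta team’s actual method), one can look for common data availability statement phrasings in each manuscript and compute the share per journal. The phrasings, journals, and text snippets below are invented.

```python
import re
from collections import defaultdict

# Invented example manuscripts; a real analysis runs over full-text corpora.
manuscripts = [
    {"journal": "Journal A", "text": "Data availability: all data are available at the repository."},
    {"journal": "Journal A", "text": "The datasets generated during this study are available from the corresponding author."},
    {"journal": "Journal A", "text": "We report results of a randomised trial."},
    {"journal": "Journal B", "text": "We thank the funders for their support."},
]

# Simplified phrasings that often signal a data availability statement (DAS).
DAS_PATTERN = re.compile(
    r"data availability|data (are|is) available|datasets? .* available", re.I)

per_journal = defaultdict(lambda: [0, 0])  # journal -> [with_das, total]
for m in manuscripts:
    has_das = bool(DAS_PATTERN.search(m["text"]))
    per_journal[m["journal"]][0] += has_das
    per_journal[m["journal"]][1] += 1

for journal, (with_das, total) in per_journal.items():
    print(f"{journal}: {with_das}/{total} manuscripts include a DAS ({with_das/total:.0%})")
```

Keyword checks like this overcount boilerplate and miss unconventional wording, which is why tools in this space combine pattern matching with trained classifiers and human review.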

Watch the video

Helping to solve the Reproducibility Crisis

Resources

The Anatomy of a Data Availability Statement (DAS)

Reproducibility, Replicability and Trust in Science

Trusting Science in the Time of Coronavirus

Reproducibility or Producibility? Metrics and their Masters
