peerreview Archives - Digital Science
https://www.digital-science.com/tags/peerreview/

What does New Zealand Research Look Like?
https://www.digital-science.com/resource/what-does-new-zealand-research-look-like/
Tue, 22 Dec 2020
This poster demonstrates collaboration patterns for Australasian Research Organisations.


What does New Zealand Research Look Like?

External (left) and internal (right) collaboration patterns are presented here for Australasian Research Organisations (the top 20 were selected). Researchers are coloured by the field of research in which they most commonly publish, and sized by the total number of journal articles they have published (relative to the network). To create the networks, journal articles published between 2015 and 2018 were analysed.
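
As a rough illustration of how such a network could be assembled, here is a minimal sketch in Python using the networkx library. The author names, fields and article records are invented placeholders, not the data behind the poster, and the poster's actual tooling is not described here.

    # Minimal sketch: build a co-authorship network in which each researcher is
    # coloured by their most common field and sized by their article count.
    # The article records below are illustrative placeholders only.
    from collections import Counter
    from itertools import combinations
    import networkx as nx

    # (article id, author list, field of research) for articles from 2015-2018
    articles = [
        ("a1", ["Smith", "Ngata"], "Biomedical Sciences"),
        ("a2", ["Smith", "Jones", "Ngata"], "Biomedical Sciences"),
        ("a3", ["Jones", "Clark"], "Engineering"),
    ]

    G = nx.Graph()
    field_counts = {}           # author -> Counter of fields they publish in
    article_counts = Counter()  # author -> total number of articles

    for _, authors, field in articles:
        for a in authors:
            field_counts.setdefault(a, Counter())[field] += 1
            article_counts[a] += 1
        for a, b in combinations(authors, 2):   # one edge per co-author pair
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    for a in G.nodes:
        G.nodes[a]["field"] = field_counts[a].most_common(1)[0][0]  # colour key
        G.nodes[a]["size"] = article_counts[a]                      # node size

    print(G.nodes(data=True))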

If you want to find out more, check out our interactive dashboard.

Does Creating a Rapid Reviewer Pool Improve Trust in COVID-19 Peer Review?
https://www.digital-science.com/blog/2020/09/does-creating-a-rapid-reviewer-pool-improve-trust-in-covid-19-peer-review/
Thu, 24 Sep 2020

Earlier this year, the volume of COVID-19 papers was threatening to overwhelm the capacity of the academic community. A collaboration of publishers, industry experts and a preprint site joined together to anticipate and solve this key issue.

Six months on, has the collaboration worked at addressing that need? What is the impact on peer review? Our guest authors, Dr Sarah Greaves and Jon Treadway, are here to reveal the results in the first of three blog posts.

Dr Sarah Greaves has over 20 years of experience within STM editorial and publishing. She was originally an academic researcher before joining the editorial team at Nature Cell Biology, after which she became the Publisher for Nature. Sarah launched Nature Communications and Scientific Reports and was recently the Chief Publishing Officer at Hindawi. Throughout her career, she has focused on creating innovative new products and services aimed at solving key researcher pain points whilst ensuring the academic scientist remains at the heart of any publishing decision. Sarah is involved in numerous STEM outreach initiatives, is a volunteer with InToUniversity and is chairing the C19 Rapid Review group. Sarah is also a keen tap dancer and avid supporter of Norwich City Football Club.

Jon Treadway is the Director of Great North Wood Consulting, where he helps mission-driven organisations understand their businesses and develop actionable strategy. Jon has held strategic and operational leadership roles in the public sector, digital entities positioning for growth and large commercial organisations. He was most recently Chief Operating Officer for Digital Science for four years, before which he worked in the strategy team at Holtzbrinck Publishing Group, and held senior roles in a number of the group’s entities including Nature Publishing Group and Macmillan Education. He spent four years running the largest cultural funding programme in the UK and became a Chartered Accountant (CPFA) while working at KPMG. He is a trustee of Conway Hall Ethical Society. Jon is based in South London, and will happily discuss the merits of Angela Carter or Sproatly Smith.

COVID-19 and the rise in volume of research outputs

During March, it was clear that publishers were being swamped with submissions on COVID-19, and that this was impacting multiple elements of their work. The speed of decision-making and the robustness of peer review were being affected but, critically, publishers were unable to fast-track through peer review the papers likely to make the biggest difference to the treatment or understanding of the virus. Key researchers were also being swamped with requests to review. At the same time, preprint sites saw an explosion in COVID-19 submissions and had to alter their processes to handle these.

A group of publishers led by PeerJ, The Royal Society, Hindawi, F1000 Research, PLOS and eLife joined together to try to address the key issues at the time. They specifically set out to find ways to:

  • find enough peer reviewers without always asking the same group of academics
  • provide fast turnarounds for authors
  • allow quick resubmission to an alternative title
  • allow peer reviewer reports to migrate with a paper to an alternative publication venue if it was rejected at the original journal

A wide range of partners came together to create and support the initiative

The group was endorsed by the Open Access Scholarly Publishers Association (OASPA) and supported by the Copyright Clearance Center (CCC). The group of publishers worked alongside PREreview, the preprint site, to create a letter of intent for the academic community to sign up to. The group also created a sign-up process for rapid reviewers. Reviewers who signed up committed to fast turnaround times, portable and even open peer review.

A global effort for a global challenge

The response from potential reviewers surprised all those involved – over 1,000 reviewers signed up within two weeks, with further growth continuing in subsequent months as reviewers signed up from across the globe:

Highest volume reviewer sign-ups over time by geography

Total reviewer sign-ups by geography

There are now more than 1,800 rapid reviewers. The response was global, and the pattern of those signing up appears to closely mirror the incidence of disease in different countries, as shown below. Sign-ups from the USA and India are comfortably higher than from other countries, with Italy and Brazil close behind. (Note: for transparency and clarity, COVID-19 sign-ups are shown on a logarithmic scale, while total reviewer sign-ups are not.)

Analysis of reviewer sign-ups and total COVID-19 cases to 30 June, by geography. Source: https://ourworldindata.org/covid-cases
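
To make the note about scales concrete, here is a minimal matplotlib sketch of this kind of dual-axis comparison. The country-level numbers are invented placeholders, and putting case counts on the logarithmic axis is just one reading of the note above, not a claim about how the original chart was drawn.

    # Minimal sketch of the kind of chart described above: reviewer sign-ups per
    # country on a linear axis alongside COVID-19 case counts on a logarithmic
    # axis. All numbers below are illustrative placeholders, not the real data.
    import matplotlib.pyplot as plt

    countries = ["USA", "India", "Italy", "Brazil", "UK"]
    signups = [310, 280, 190, 170, 120]                              # hypothetical
    cases = [2_600_000, 585_000, 240_000, 1_400_000, 310_000]        # hypothetical

    fig, ax1 = plt.subplots()
    ax1.bar(countries, signups, color="steelblue")
    ax1.set_ylabel("Reviewer sign-ups (linear scale)")

    ax2 = ax1.twinx()                    # second y-axis sharing the same x-axis
    ax2.plot(countries, cases, "o-", color="darkred")
    ax2.set_yscale("log")                # case counts shown on a logarithmic scale
    ax2.set_ylabel("COVID-19 cases to 30 June (log scale)")

    plt.title("Reviewer sign-ups vs COVID-19 cases by geography (illustrative)")
    plt.tight_layout()
    plt.show()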

Collaboration and its collective impact on the research community

Other publishers have now signed the letter of intent and joined the group, all inspired to try to prevent slower turnaround times and reduce peer review pressure for COVID-19 papers. The group currently has more than 20 members and has signed up publishers including Springer Nature, Cambridge University Press, LifeScience Alliance, University College London and MIT Press, alongside more preprint sites including AfricaRxiv and SSRN, and the Research on Research Institute (RoRI, which Digital Science co-founded), which will help analyse the impact of the collaboration in more detail.

It is clear the collaboration has been a success in terms of impact, but has it achieved the aims around peer review and paper transfer? The group has expanded to such an extent that it has affected many facets of research. In our next blog, in two weeks' time, we will present an analysis of how the publishing process has been affected, how publishers have used the reviewers who have signed up, and what the experience has been like so far for those reviewers.

Artificial Intelligence and Peer Review
https://www.digital-science.com/blog/2020/09/nlp-series-ai-and-peer-review/
Wed, 23 Sep 2020
Find out some of the potential applications of AI in research, from facilitating collaboration between researchers to writing papers.


Despite the fact that, for many people, it still feels like the middle of March, we have somehow made it to September and find ourselves celebrating the sixth annual Peer Review Week! This year’s theme is Trust, and what better way to celebrate than to look back on some of the amazing developments and discussions happening around peer review and natural language processing (NLP).

In April’s episode of RoRICast, the podcast produced by the Research on Research Institute that Digital Science co-founded a year ago, my co-host Adam Dinsmore and I chatted to Professor Karim Lakhani, the Charles E. Wilson Professor of Business Administration and the Dorothy and Michael Hintze Fellow at Harvard Business School. Karim is an expert in the application of artificial intelligence in research processes, from collaboration to peer review.

Karim joined us from his home in marvellous Massachusetts. Although an MIT graduate, Karim is now based across the river at Harvard Business School. His research involves analysing a range of open source systems to better understand how innovation in technology works. One of his specific research interests is in contest-driven open innovation and how, by throwing problems open to the wider world, we are often able to engage with a range of as yet unexplored solutions, owing to the different approaches a fresh perspective can bring.

Having determined that science is both a collaborative and competitive process, Karim and his team run experiments to better understand how teams are formed, and how different novel ideas are evaluated. Karim is also investigating the impact of artificial intelligence (AI) on organisations in terms of optimising scale and scope and gathering insights to help shape future business strategy.

Mirroring the experiences of Digital Science’s own Catalyst Grant judges and mentors, Karim has seen a rise in machine-learning based tech solutions at innovation contests. His latest book,  Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, includes examples of how AI is now not only having an impact on technology and innovation but also on our everyday lives. Karim’s work informs best practice in research and innovation by conducting research on research.

In this episode of RoRICast, Karim gave us some examples of how AI is not just confined to sci-fi movies and Neal Stephenson novels, though such stories give a great many examples of what is termed 'strong AI', capable of carrying out many tasks extremely efficiently. However, 'weak AI' – that is, tech created to do one narrow task very well – has already permeated our everyday lives, whether through some of the NLP solutions we have discussed in this blog series, or through something as commonplace as our voice-activated smart devices capable of playing Disney songs on demand, our email spam filters, or even our Netflix recommendations.

Karim discussed some of the potential applications of AI in research, from facilitating collaboration between researchers to writing papers. He also discussed how researchers can implement aspects of NLP within the research process that relate to peer review. For example, by using an NLP-driven tool such as Ripeta, researchers can receive recommendations on how to improve a paper prior to submission. Ripeta analyses the reproducibility and falsifiability of research, including everything from a well-reported methodology to the inclusion of data that adheres to FAIR principles.
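
For a flavour of what an automated pre-submission check can look like, here is a toy keyword-based screen for reproducibility indicators, written in Python. It is purely illustrative and is not Ripeta's method; the indicator names and regular expressions are invented for this sketch.

    # Toy illustration (not Ripeta's actual approach) of an automated
    # pre-submission check: scan manuscript text for indicators that support
    # reproducibility, such as data availability and methodological detail.
    import re

    CHECKS = {
        "data availability": r"\bdata (are|is) available\b|\bdata availability\b",
        "code availability": r"\bcode (is|are) available\b|\bgithub\.com\b",
        "sample size":       r"\bn\s*=\s*\d+\b",
        "statistical test":  r"\bt-test\b|\bANOVA\b|\bregression\b",
    }

    def reproducibility_report(text: str) -> dict:
        """Return which reproducibility indicators were found in the manuscript."""
        return {name: bool(re.search(pattern, text, re.IGNORECASE))
                for name, pattern in CHECKS.items()}

    manuscript = ("We recruited participants (n = 42) and compared groups with an "
                  "ANOVA. Data are available from the corresponding author.")
    for check, passed in reproducibility_report(manuscript).items():
        print(f"{'PASS' if passed else 'MISSING'}: {check}")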

With the rise of the open research movement, preprints have been gaining momentum as an important research output alongside the more traditional journal publications. This is particularly relevant in these current COVID-19 times, where research output is being produced at an unprecedentedly high volume, and many researchers are opting to share their work via preprints, which undergo an ongoing and dynamic review process, rather than through the more formal journal peer-review process.

A rise in preprint publication has been seen across almost all fields of research in 2020, in part due to the fact that many research areas contribute to solving the challenge of a global pandemic. This has however led to some concern over preprints, and whether they are a trustworthy research output without more formal peer review practices. It is here that a tool like Ripeta could add some level of trust, transparency, robustness and reliability to research shared via preprint even before the work is shared. The Ripeta team investigated this perceived lack of confidence in COVID-19 related preprints and found that although reporting habits in pandemic-related preprint publications demonstrated some room for improvement, overall the research being conducted and shared was sound.

The use of AI in peer review is a hot topic. There are many reasons to use AI in peer review, such as eliminating the potential conflict of interest posed by a reviewer in a very closely related field, or as a means to quickly assess a vast volume of submissions, for example during a global pandemic. However, there are limitations to the technology, and we must consider whether an AI could propagate and amplify bias within the process, simply by failing to account for the bias in the training data fed to the programme, or by failing to eliminate that bias. As Joris van Rossum explained in his article on the limitations of tech in peer review, AI that has learned from historic decisions can reinforce imbalances and propagate the impact of unconscious biases in research.
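
As a toy demonstration of how that propagation happens, here is a short Python sketch using synthetic data: when historical accept/reject decisions favoured one group at a given quality level, any model fitted to that history inherits the same skew. The groups, thresholds and numbers are all invented for illustration.

    # Minimal sketch of how historical bias propagates: a model trained on past
    # accept/reject decisions that correlated with an irrelevant attribute will
    # reproduce that correlation on new submissions. All data here is synthetic.
    import random

    random.seed(0)

    # Synthetic history: quality should drive acceptance, but past decisions also
    # favoured group "A" regardless of quality (the unconscious bias to expose).
    history = []
    for _ in range(1000):
        group = random.choice(["A", "B"])
        quality = random.random()
        accepted = quality > 0.5 or (group == "A" and quality > 0.3)
        history.append((group, quality, accepted))

    def acceptance_rate(group, lo=0.3, hi=0.5):
        """Empirical acceptance rate for mid-quality papers from one group."""
        rows = [acc for g, q, acc in history if g == group and lo <= q < hi]
        return sum(rows) / len(rows)

    print("Acceptance rate, mid-quality papers, group A:", acceptance_rate("A"))
    print("Acceptance rate, mid-quality papers, group B:", acceptance_rate("B"))
    # A naive model fitted to this history will score group A papers higher even
    # when quality is identical -- the bias in the data becomes bias in the tool.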

Karim went on to describe how AI can be built to mitigate such circumstances, and how doing so could actually break down many barriers to inclusion, providing we as a community invest the time and effort in creating good data, testing the technology and ensuring that programs work fairly and ethically; an aspect of social science research that RoRI is particularly interested in. Furthermore, complementary AI could be used in other parts of the research process to eliminate many stumbling blocks that reviewers might otherwise raise when a paper is submitted.

Using AI in peer review is just one example of open innovation to improve an aspect of research, but when can we expect to see this and other AI solutions being widely adopted as part of the research process? There is already a lot of tech around us, but within the next few years, this field will expand further as we learn more about how research works. By conducting research on research, researchers like Karim can uncover trends and connections in a range of research processes, and work towards creating tech solutions that will alleviate the burden and increase the efficiency of research.

We would like to thank Professor Karim Lakhani for giving up his time to join us for this episode of RoRICast. You can hear this whole episode of RoRICast here.


We'll be staying on the topic of peer review and pandemics by kicking off a mini-blog series tomorrow on the PREreview Project. Earlier this year a collaboration of publishers, industry experts and a preprint site (PREreview) joined together to respond to an overwhelming volume of COVID-19 papers. Using information and feedback from the parties and reviewers involved, our authors Jon Treadway and Sarah Greaves examine what happened, whether the initiative succeeded, and what the results can tell us about peer review.

Peer Review Week Presentation on the Physiome Journal project
https://www.digital-science.com/resource/peer-reveiw-week-the-physiome-journal-project/
Thu, 19 Sep 2019
This presentation documents the work done to support the Peer Review Process for the Physiome Journal.


Peer Review Week Presentation on the Physiome Journal project

This presentation documents the work done to support the Peer Review Process for the Physiome Journal.

A SpotOn Day to Celebrate Peer Review #SpotOn16
https://www.digital-science.com/blog/2016/11/spoton-day-celebrate-peer-review-spoton16/
Thu, 10 Nov 2016

Last Saturday, on a cold and crisp morning, SpotOn 2016 asked – what might peer review look like in 2030? With this theme in mind, scientists, policymakers, technologists and publishers came together to take part in a day packed full of fun, thoughtful and diverse sessions and workshops. SpotOn London brings together all three SpotOn strands – policy, outreach and tools – in a single-day event. Our hashtags were buzzing from the moment #SpotOnPoetry kicked off! Thanks to zeeba.tv, we were able to live stream this year's event.

https://twitter.com/SpotOnLondon/status/794833208634441728

Our guests were not sent to #SpotOnPoetry empty-handed – they were each given a goody bag with ballots to cast their votes in each session and biology-themed stuffed toys – fun and learning!

https://twitter.com/SpotOnLondon/status/794868525320073216

Dan Simpson and Sam Illingworth started SpotOn with a bang! Their confidence on stage and their repertoire of the works of scientific poets entertained and educated. Sam and Dan asked whether science is a poetic way of describing the universe or something cold and uncaring – and the latter was proved false!

After #SpotOnPoetry came #SpotOnHistory, spearheaded by Dr Noah Moxham, a historian from St Andrews University who gave a fascinating talk about the history of peer review. It was astonishing to learn that the concept of peer review took a long time to catch on and was even dismissed by scientists such as Einstein.

During the panel debate discussing ethical issues around peer review, it was clear that researchers were frustrated with the culture of publish or perish; equally obvious was the desire for publishers to have a transparent and efficient review process.

We were lucky enough to have two rooms at our disposal at the Wellcome Collection for this year’s SpotOn and we certainly made good use of them! #SpotOnCreative was held in the Burroughs room and we had a wonderful set of speakers giving quick presentations about new forms of technologies that are speeding up and revolutionising the peer review process. Below, you can find the full debate filmed by audience members!

By the end of the morning, our hashtag had attracted so much attention we started to trend. Considering we were up against bonfire night – that was pretty impressive!

Many people who attended and presented at SpotOn16 were technology professionals, so it was only fitting to have one of our speakers, Ivan Oransky, co-founder of Retraction Watch, join us online via webcam.

Ivan provided some staggering statistics that evoked a strong response from the audience.

After Ivan finished a detailed live Q&A with our audience, we took a lunch break and got prepared for another panel discussion focused on preprints to celebrate the 25th anniversary of arXiv – the first preprint server for scientific papers worldwide. Our panelists answered complex questions about the legacy of arXiv and also about transparency in the movement of data from its origin to eventual publication.

In today's world, artificial intelligence (AI) is no longer the stuff of science fiction writers; it is very much a significant part of developing technology. Robots usually come to mind when we think of AI, but can it be used for peer review? Perhaps more importantly – should it? The room couldn't quite agree.

https://twitter.com/G_ruber/status/794912319339954180

To finish off a wonderful day, we had a session called #SpotOnTraining where we attempted to discuss solutions to help train and even rate peer reviewers and also debated the need for an incentive for people to review in the first place. There was certainly some friction when we asked if you can actually train someone to peer review papers well – many thought expertise in a subject trumps all.

https://twitter.com/sharmanedit/status/794937271535435776

Throughout the day we had a talented scribe working away in the auditorium corner creating an illustrated summary of SpotOn's most valuable lessons! What an outstanding job he did!

Just before thanks were given to all involved, Jenna Shapiro's winning entry in the SpotOn poetry competition was read aloud by Sam Illingworth, and she won a £50 Amazon gift voucher – well done Jenna!

https://twitter.com/SpotOnLondon/status/795562276283629568

Organising an event of this size can’t be done alone – BioMed Central played a pivotal role in running SpotOn16.

Sentinels of Science Awards From Publons
https://www.digital-science.com/blog/2016/09/sentinels-science-awards-publons/
Fri, 23 Sep 2016

Digital Science was proud to be a sponsor of this year’s Sentinels of Science Awards run by Publons! These awards mark an effort to recognise the hard work that scientists from around the world have put into peer review. Congratulations to all the participants, and hats off to the overall winner Jonas Ranstam – a bag of our famous Digital Science swag is on its way!

“I’m very glad to receive this award. I have always enjoyed reviewing. I find it to be a rewarding intellectual challenge that gives insights into current research.”

Second place: Chemical Engineer Grigorios Kyriakopoulos

Third Place: Cardiologist Gaetano Santulli

In the video below, watch top scientific industry professionals – including Nobel laureate Richard J. Roberts – congratulating the top participants.

We’re coming to the end of #PeerRevWk16, and it’s been one to remember. We were extremely grateful to feature industry experts sharing their opinions about peer review in our video, and we urged people to get involved in the week’s discussions using #peerviews.

To kick the week off in style, Publons hosted a fascinating webinar with a wonderful set of panelists!

Digital Science and Publons were not alone in commenting on the importance of peer review: Lillienne Zen from Springer Nature posted a piece celebrating peer review, and an interesting Kickstarter campaign was also brought to the world's attention! All in all, a fantastic week.

Why Peer Review Recognition Matters to Universities
https://www.digital-science.com/blog/2016/02/peer-review-recognition-matters-universities/
Tue, 02 Feb 2016

This guest post was authored by Daniel Johnston, cofounder of Publons.

A decade ago a discussion on peer review would have been completely out of place at a Research Profiles Conference. There was no way for a research institution to reliably track and verify the peer review activities of their researchers, and nothing visible to profile! But the landscape has changed, and now 50,000+ researchers are using services like Publons to keep a verified record of their peer review contributions. Universities now have an opportunity to lead the charge in making use of this newly uncovered research output, to the benefit of the university, their researchers, and academic research as a whole.

The amount of time individual researchers spend on peer review varies significantly; some spend upwards of 120 hours a year peer reviewing, without much in the way of recognition by their research institution.  A university should be proud of the peer review contributions of their researchers — having researchers that are repeatedly asked to review by academic editors is great external validation of expertise and standing.  Highlighting these contributions plays an important part in showcasing the quality of an institution’s researchers.

Including peer review in research profiles opens up the ability to gather insights from peer review — for instance, which journals their researchers are reviewing for, and how their peer reviewing activity compares to other universities — and to include evidence of peer review in promotion applications.  The University of Queensland Library recently became the first to begin work on importing peer review data into their research output management system for these purposes.

Beyond the value to universities and their peer reviewing researchers, universities taking peer review contributions into account has the potential to improve the performance of the peer review process for everyone.

Peer review today is incredibly slow — while a single peer review takes about four hours, the process of organising two such reviews takes on average four months or more.

The main issues slowing down the peer review process — an indifference to review invitations, inconsistent quality of reviews, and long delays in returning reviews — are all typical characteristics of a task where people have no incentives to do it, or to do it well.  Improving those incentives, even marginally, will have an enormous flow-on effect on the advancement of science.

The more that universities acknowledge the peer review contributions of their researchers, the more the activity of peer reviewing will be respected.  Researchers will be more willing to perform prompt and comprehensive peer review when they know doing so will count for something next time they’re up for promotion.  Universities are in a unique position to incentivise a faster publication cycle, to the benefit of all stakeholders in the industry.

The research profiles landscape has changed a lot with respect to peer review in just a few short years.  It is now possible to measure peer review, and the rapid growth of services like Publons is evidence that researchers value having a profile of their reviewing contributions.  It is now up to universities to decide whether to match these developments in their own research profiles.

 

Exploring the ALPSP – Search on the Open Web, UX and Peer Review
https://www.digital-science.com/blog/2015/09/exploring-the-alpsp-search-on-the-open-web-ux-and-peer-review/
Thu, 17 Sep 2015

This post follows on from last week's, which served as a sort of kick-off for my own personal conference season.

There's a lot to see and do in the ALPS(P)

Last week, I attended the ALPSP international conference in London. Congratulations first and foremost to Audrey McCulloch and the ALPSP team for putting on a fantastic series of talks and panels, not least of which were the duelling keynote talks from Anurag Acharya, co-founder of Google Scholar, and Kuansan Wang, Director of the Internet Service Research Center at Microsoft Research. The pair, speaking in equivalent keynote slots on the first and second days of the conference respectively, outlined very different views of academic discovery on the open web.

Unsurprisingly, both speakers were asked about the degree to which they track individual user behaviour and how that information is used – interestingly, they each described very different scenarios. Perhaps a little evasively, Acharya didn't really confirm or deny whether Google is actually tracking user searches at an individual level, but he did say that the information isn't used to personalize future searches. Citing the difference between "general" search for, say, a local business, and the geographically global nature of "academic" search, Acharya suggested that personalizing Google Scholar wouldn't yield much additional value. Conversely, Wang described a very different philosophy of highly monitored, highly personalized search, through Bing and Cortana, that would adapt to individual users' needs.

During the Q&As and the two respective post-keynote coffee breaks, it seemed to me that most of the audience agreed with Wang's perspective. For example, when identifying whether a researcher is searching within their own field or outside of it, or when certain keywords have different meanings in different fields (e.g. plasma), knowing a bit about the searcher might prove useful in providing the right type of results. On the other hand, it would be remiss of me not to mention the privacy concerns that people have about this type of data gathering. I wrote a post on Scholarly Kitchen exploring that idea some time ago.

Another favourite session of mine was the first plenary on understanding the needs of researchers by studying them. I have a lot of enthusiasm for this approach. Publishers’ main clients have traditionally been libraries. Recently, however, there has been a shift in the publishing industry towards acknowledging researchers themselves as the ultimate customer. Lettie Conrad of Sage gave a fantastic account of the work that they have been doing applying User Experience (UX) techniques such as usability testing to the search and discovery process of a variety of researchers. A white paper on the subject is here. It turns out that when you sit and watch a researcher try to find something, they behave in ways that publishers and librarians don’t expect. It becomes pretty clear that the workflows we’ve designed as publishers and librarians aren’t really working for many users. Among the observations that Conrad discussed, the most worrying was the tendency for researchers to copy and paste citations out of PDFs and into Google. Conrad also reported that search generally starts on the open web with researchers moving to library-based discovery tools as a way to authenticate, once they know what they want to download.

Conrad's talk segued well into the presentation by Deidre Costello of EBSCO Information Services. As part of her talk, Costello showed a highlight reel from a series of video interviews conducted with undergraduates getting their first taste of research. Some of the responses from students were enlightening and sometimes darkly amusing. A general sense of fear and loathing of the literature was apparent, as well as a worrying lack of understanding of the purpose of librarians.

One intriguing observation that Costello made was the difference you see between how a student says they find content and how they actually do it. She noted that few say they start with Google, while most of them actually do. A possible explanation is that Google is so firmly embedded in young researchers' routines that they don't even think about the fact that they use it. You wouldn't expect somebody to tell you that they opened an internet browser, would you? It's an obvious, assumed step.

The last session I'd like to mention was from the final day, titled "Peer review: evolution, experiment and debate." The panel included fascinating presentations and turned into a great discussion during the Q&A section. Aileen Fyfe's potted history of peer review really illustrated how much the concept has evolved over the years, from an essentially editorial check for things like sedition and blasphemy, to something that, fairly recently, has come to be expected to detect scientific truth itself. One recurring theme emerged from the discussions: too much is currently being asked of the peer review process. With the mantra of "publish or perish" being truer now than it's ever been, it can be argued that publishers find themselves unwittingly in the position of administering the process that decides whose career advances and whose doesn't.

There was certainly plenty to think about at the ALPSP conference. I’m looking forward to thinking and talking about some of these ideas a bit more. The concept of personalization of academic search poses many unanswered questions about the role of search and how it shapes the research process, not to mention privacy concerns. I’m also really glad to see people doing such great work around user experience. On the other side of the coin, the overlap between research assessment, academic advancement, quality control and peer review is a big and complex topic. No doubt we’ll be hearing more about that.

Kudos to Kudos

Finally, congratulations to Kudos, who won the ALPSP award for innovation, and to JStor Daily and our portfolio company Overleaf, who were both highly commended.

 

It Takes a Community to Raise a Journal – Call for Action and Papers in Scholarly Publishing
https://www.digital-science.com/blog/2015/07/it-takes-a-community-to-raise-a-journal-call-for-action-and-papers-in-scholarly-publishing/
Tue, 07 Jul 2015

Having served as an ad hoc peer reviewer and author of several scholarly publishing articles, and being a staunch supporter and long term volunteer for the Association of Learned and Professional Society Publishers (ALPSP), I am honored now to serve on the Learned Publishing (LP) Editorial Board as an Associate Editor under the watchful stewardship of Pippa Smart, LP’s Editor in Chief.

I always enjoy peer reviewing articles, not just because the experience provides an opportunity to share your time and expertise with the scholarly community, but more importantly because it provides exposure to interesting and cutting-edge subjects within one's area of interest (global research/publishing and publishing technologies on my end). The chance to offer input and feedback, and in many cases to help improve and develop articles and scholarly thinking, is energizing and stimulating. Connecting the dots between new research and practical examples of publisher cases/industry learning, while also developing a keen understanding of what message the author is presenting, is infinitely rewarding, and in numerous cases has developed lasting relationships with authors. This experience is so valuable for expanding your knowledge base and creating new relationships. I strongly encourage all of you to consider a volunteer position within industry/member organizations such as SSP, ALPSP, STM, ISMTE, EASE, UKSG, ORCID or CSE, whether it is helping plan annual meetings, utilizing your marketing and social media skills to help promote events and articles, and/or networking with peers to increase collaboration and understanding. These experiences are mutually beneficial and worthwhile in ways that may not be realized for years to come.

Anyway, I digress. As I embark on my new volunteer assignment as Associate Editor for Learned Publishing, I'm hoping to solicit the support of colleagues and peers within the industry to help elevate Learned Publishing, a true industry centerpiece and vital resource for all in the scholarly publishing community.

With that in mind, and a rallying cry hopefully ringing out, here are the tasks that I, along with fellow industry experts and Associate Editors such as Charlie Rapple, Lettie Conrad and Judy Luther, am charged with:

  • Encourage colleagues and peers (yes that’s you) to submit articles
  • Suggest interesting articles, topics, and authors – either to me or the Editor
  • Promote the journal through personal and professional networks, including social media and at meetings (I’ll be tweeting and supporting key articles, such as the recent one on Project CRediT that Amy Brand co-authored).
  • Contribute to the journal development strategy – including suggesting changes and helping to plan new developments.

And for those not aware of Learned Publishing, here are a few quick facts:

  • Articles on all aspects of publishing, from authorship, reviewing, technology, marketing and discoverability, new initiatives, readership, data, to internationalization, and innovative articles with challenging viewpoints
  • A range of article types including research articles, case studies, industry updates and opinion pieces
  • Quarterly publication, in print and online, with fully international authorship
  • Distributed to all ALPSP members and free to view online for all SSP (Society for Scholarly Publishing) members – and available to everyone else for subscription
  • Hybrid OA – AuthorChoice available, license to publish, authors retain copyright
  • Open/blinded review – authors/reviewers select, and rapid time to publish
  • Twitter feed: @LearnedPublish

So with that in mind, please do feel free to share your thoughts, comments, and contributions … I'd personally love to see cross-stakeholder collaboration and co-authored articles that cover broad and deep emerging topic areas such as metrics, data publishing and new technologies that are important to the whole scholarly publishing ecosystem. I do believe the journal can be a showcase and testing ground for the model publication.

As the saying goes, it takes a community to raise a journal!

When Scientific Fraud Isn't Fraud: How Both Researchers and Publishers Can Help Prevent Retractions – A Guest Blog by Tara Spires-Jones
https://www.digital-science.com/blog/2015/03/when-scientific-fraud-isnt-fraud-how-both-researchers-and-publishers-can-help-prevent-retractions/
Tue, 17 Mar 2015
How researchers and publishers can help prevent scientific paper retractions

Dr Tara Spires-Jones is a Reader and Chancellor's Fellow at the University of Edinburgh working on Alzheimer disease research. She's a member of the editorial board of several scientific journals including The Journal of Neuroscience and is reluctantly entering the world of public outreach.

In the last few weeks, peer review and quality assurance in scholarly publishing (or rather their shortcomings) have come into focus again in the field of neuroscience with the retraction of a controversial paper published in the Journal of Neuroscience in 2011 by the powerhouse lab of Virginia Lee and John Trojanowski (see the Alzforum news story). The authors were investigated by their host institution, UPenn, which found that the errors in the published figures were not intentional and did not affect the conclusions of the paper. Despite this finding, the journal retracted the paper instead of allowing corrections, and further banned the senior authors from publishing in the journal for two years. This retraction highlights the wider problem of quality control of scientific papers. I have painful personal experience of this problem, which, in light of this recent controversy, I am dredging up to try and make a few points about how we can avoid publishing these types of mistakes in the future.

The most devastating feeling in my scientific career, worse than getting rejected for major grants, dropping an expensive piece of equipment, or realizing that a box of irreplaceable samples has been ruined, was being accused of fraud.  This accusation was from an anonymous blogger who had discovered errors in two papers on which I was a co-author.  We published errata for these papers to correct the mistakes, both of which we tracked back to clerical errors in preparing multiple revisions of the manuscripts.  All of the primary data and the analyses were sound, and our conclusions unaffected by the mistakes.  We work incredibly hard validating our experimental protocols, performing umpteen control experiments, blinding the data for analysis, lovingly storing all primary data on servers backed up to a different building in case one building burns down, etc.  So how did a group of such careful scientists publish not one but TWO papers containing errors in the figures, and what can we do to avoid this in future?

First, for those of you on the other side of the publishing fence, let me tell you what a Sisyphean task it is to get a paper published.  Here is the rundown of a typical publishing experience: submit to journal A, they reject without review after a month or so. Submit to journal B, they have it reviewed, send it back two months later, asking for revisions.  We do more experiments and send it back to journal B six months later.  They then reject it so we re-format and send to journal C who review and ask for more changes (with another delay of a couple of months).  We then do more experiments, revise again and journal C finally accepts the paper.  In this example, we have gone through five major revisions of the paper over the course of a year, all of which involve moving the figures around.  This figure rearrangement is where mistakes can easily creep in.  Take a look at these examples of loading controls for Western blots from different experiments.

[Image: examples of loading controls for Western blots from different experiments]

Not too different, right? Adding to the challenge, figures often have to be placed at the end of manuscripts, making it harder for both authors and reviewers to keep track. And to make matters even worse, strict and overly small figure size limits mean that we often present cropped versions of images simply to save space. Even with careful tracking of where each panel of each figure comes from, it is easy to see how clerical and formatting mistakes slip through the net.

So what are we doing about this? I recently started a new lab at the University of Edinburgh and with a clean slate, instituted several lab policies for data management.  We now use electronic lab notebook software.  I chose eCAT (now RSpace), because it’s locally supported by the University.  I have colleagues who use and like other programs such as Labguru.  The electronic notebook for the whole lab allows me to easily search for experiments and importantly, after students and postdocs move on, I can find individual experiments without wading through years of paper notebooks or spreadsheets. When preparing figures, I can link them to the raw data files.  As part of the wider push for data sharing, we are also starting to collect all of the raw data that goes into each published paper and upload it to our university data repository.  On a practical level, this means we are able  to keep better track of each figure and what data were analysed to make it, making it less likely that we will make mistakes in manuscript preparation.
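
As a concrete illustration of that kind of bookkeeping, here is a minimal Python sketch that writes a manifest linking each figure panel to its raw data files, with checksums so later mix-ups are detectable. The file names, paths and manifest format are assumptions made for this example, not the lab's actual tooling.

    # Minimal sketch (not the lab's actual tooling) of linking each figure panel
    # to the raw data files it was generated from: record a checksum of every
    # source file so any later edit or mix-up is detectable.
    import csv
    import hashlib
    from pathlib import Path

    # Hypothetical mapping of figure panels to their raw data files.
    figure_sources = {
        "Figure1A": ["raw/western_blot_exp12.tif"],
        "Figure1B": ["raw/western_blot_exp13.tif", "raw/quantification_exp13.csv"],
    }

    def sha256(path: Path) -> str:
        """Checksum a raw data file so its link to a figure can be verified later."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    with open("figure_manifest.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["figure_panel", "source_file", "sha256"])
        for panel, files in figure_sources.items():
            for f in files:
                p = Path(f)
                digest = sha256(p) if p.exists() else "MISSING"
                writer.writerow([panel, f, digest])

    print("Wrote figure_manifest.csv")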

Those are a few of the things we as scientists are doing, but the publishing industry can also play a role in improving quality control.  Two of the biggest values that publishers add to the scientific endeavour are the cultivation of peer review and selection of the best papers to publish.  Peer reviewers and editors should really be catching this type of error in figures (full disclosure here, I am also a frequent peer reviewer and editorial board member of several journals including J Neurosci). There are initiatives by publishers that are helpful such as requiring the full raw images as supplemental material. This is particularly useful when only a cropped version is presented in the full text article.  Many journals also require the uploading of raw data for some (or all in the case of PLoS) types of datasets, particularly for genomics, proteomics and  other “omics” technologies.  Giving the reviewers and editors the opportunity to view the raw versions of figures and data could be quite helpful, although I’m hating myself a little bit already for suggesting this and heaping more things to review onto my desk.

Looking only a little further into the future, the more that publishers are able to support researchers with information management tools, the more these technologies can be interconnected. One day, hopefully soon, version-controlled datasets and figures could be connected to the published article without requiring manual steps like copying and pasting, which so easily lead to these final-step errors.

Clearly mistakes will continue to happen in publishing and, as an individual scientist, I don't have all of the answers for how to reduce them. Having said that, some of the new technologies and initiatives in data sharing and data management will unquestionably help researchers keep their data and figures in order. On the publishers' side, there are lots of ways in which information handling might be improved between the bench and the article, and there is plenty of room for publishers to help on this score. My appeal to publishers is to keep in mind that researchers want high-quality data and are willing to work together to ensure we get it in publications.
