AI Archives – Digital Science
https://www.digital-science.com/tags/ai/ | Advancing the Research Ecosystem

Racing Ahead: Why US Defense Must Boost AI Investments to Compete with China
https://www.digital-science.com/resource/racing-ahead-why-us-defense-must-boost-ai-investments-to-compete-with-china/ | Fri, 05 Jul 2024
US defense agencies need to invest in AI solutions to keep up with China. Discover how Digital Science uses AI technology across its range of products.


US defense agencies need to invest heavily in AI-embedded solutions to keep pace with China, according to AI technology firm Digital Science, which emphasizes the urgent need for strategic partnerships with AI solution providers if the US is not to fall behind in the AI race.

Digital Science President Stephen Leicht highlights the rapid advancement and multifaceted uses of AI technology, underscoring the necessity for the US to act swiftly. “AI technology is advancing at a blistering pace. For US agencies, the key is to collaborate with AI providers to drive progress. Otherwise, we risk being left behind,” Leicht warns.

Leicht underlines this by pointing out a dramatic shift in global research outputs: “If I run an analysis of AI-related research outputs in 2023 or 2024 on Dimensions – whether it’s artificial intelligence, autonomous vehicles or related technologies – China consistently comes out on top. Ten years ago, this analysis always showed the US leading.”

“China is outpacing the world in AI output. US decision-makers should operate under the assumption that strategic competitors, especially China, may be doing these things better, investing more, and are currently ahead,” Leicht asserts.

For example, Fig. 1 shows global publication outputs in 2014 on artificial intelligence and autonomous vehicles according to Dimensions, with the US well out in front in the network of publications on this topic and China a distant third. Fast forward 10 years, however, and the same analysis run on 2024 data (Fig. 2) shows China with more than double the US's publications, and double the citations as well.

To assist governments, funders, and other stakeholders in the research space, Digital Science uses Dimensions – the world's largest linked research database of publications, grants, clinical trials, and patents – which also offers private or custom instances to address specific client needs such as security requirements and private data integration. The company specializes in Retrieval-Augmented Generation (RAG), building large language models (LLMs), and applying LLMs to predictive analytics across client, internal, and external data holdings. This expertise in research intelligence positions Digital Science as a crucial partner for US defense agencies. A leader in the AI space, the company launched Dimensions Research GPT in February 2024.
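Retrieval-Augmented Generation, in outline, retrieves the documents most relevant to a query and supplies them to a language model as grounding context. The sketch below is a generic, minimal illustration of that pattern, using toy bag-of-words retrieval and a stubbed model call; it is not Digital Science's implementation, and the corpus and names in it are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented three-document corpus for illustration.
CORPUS = [
    "Autonomous vehicles research output has grown rapidly in China.",
    "Large language models can summarize clinical trial records.",
    "Patent filings reveal a company's research and development focus.",
]

def retrieve(query, corpus, k=1):
    """Rank documents by similarity to the query; return the top k."""
    scored = sorted(corpus, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return scored[:k]

def answer(query, corpus):
    """Augment the prompt with retrieved context before calling a model."""
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return prompt  # a real system would send this grounded prompt to an LLM

print(answer("Which country leads in autonomous vehicles research?", CORPUS))
```

Production systems replace the bag-of-words step with learned embeddings and the stub with an actual LLM call, but the retrieve-then-augment flow is the same.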

“AI has vast applications, but it is not a one-size-fits-all solution. When responsibly applied, these technologies can significantly enhance capabilities, streamline workflows, and mobilize knowledge more effectively,” says Leicht.

“Common concerns about AI use in defense include practical issues like over-reliance, knowledge gaps, in-built biases, and potential job losses. Strategic and tactical concerns, such as the potential for deception or weaponization, are also significant. And there are real challenges in the pace of AI adoption and technological development by the US, especially as China’s research output has risen to be the global leader.”

Leicht believes autonomous systems, including AI-driven vehicles and drones, will fundamentally change military reconnaissance, surveillance, and combat operations. He adds: “Cybersecurity is another critical area. The great challenge lies in integration—our vast and powerful military infrastructure includes many legacy systems, making seamless integration a significant hurdle.”

It is clear that the US defense sector must prioritize AI investments and partnerships to remain competitive. As such, Digital Science stands ready to support these efforts with cutting-edge tools like Dimensions Research GPT and its extensive expertise in RAG, LLMs, predictive analytics, and research intelligence, driving forward the capabilities and efficiencies of tomorrow’s defense technologies.

(This is an updated version of an original article published by Executive Biz: https://executivebiz.com/2024/07/digital-science-president-stephen-leicht-dod-must-boost-ai-investments-to-compete-with-china/)

Digital Science and Artificial Intelligence
https://www.digital-science.com/resource/digital-science-and-artificial-intelligence/ | Wed, 28 Feb 2024
Digital Science supports your journey towards AI adoption using our technical and analytical capabilities.


AI-powered solutions to transform your research

At Digital Science, we recognize that the journey toward AI adoption is as unique as the organizations and individuals we support. From bench researchers to medical affairs professionals to research offices, our approach is grounded in collaboration and deep understanding.

Since 2013, we've been investing in advanced AI methodologies, expanding our technical and analytical capabilities, and assembling a global team of AI experts. To us, AI isn't a one-size-fits-all solution; it encapsulates a range of new and existing capabilities and approaches that, when thoughtfully applied, can significantly extend what our users can do and streamline their workflows. Our commitment remains to work closely with our partners, deeply understanding their unique challenges and aspirations, to deliver innovative and responsible AI capabilities that enhance human intelligence, drive progress, and unlock the full potential of the research community.

Our Capabilities

For the last decade, we have focused on machine learning innovations with Dimensions.ai, investment in Writefull, and the development of several LLMs. Building on this AI lineage, 2024 will see a continuous flow of new releases, starting with Dimensions Research GPT Enterprise and Dimensions Research GPT.

Dimensions in ChatGPT

Available via OpenAI's GPT Store, the new products aim to give users who turn to ChatGPT for research-related questions generative answers they can trust – grounded in scientific evidence from Digital Science's Dimensions database.

Key features of Dimensions Research GPT Enterprise – available to Dimensions customers with a ChatGPT Enterprise licence – include: 

  • Answers to research queries with publication data, clinical trials, patents and grant information
  • Set up in the client’s private environment and only available to client’s end users
  • Notifications each time content generated is based on Dimensions data, with references and citation details
  • Custom features available on request (following prior discussion with Dimensions).

For Dimensions Research GPT, answers to research queries are linked to tens of millions of Open Access publications, and access to the solution is free to anyone with a Plus or Enterprise subscription to OpenAI's GPT Store.

Next-generation search experience

Dimensions has introduced a new summarization feature to support users in discovering publications, grants, patents and clinical trials. AI-driven summarization is integrated into the Dimensions web application, helping all users identify the most relevant content for their research questions more quickly: short, concise summaries are now available for every record in a search result list with a single click. The Dimensions team developed the feature using feedback from members of the research community, including academic institutions, industry, publishers, government, and funders.
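Record summarization of this kind is typically LLM-based. As a generic, minimal stand-in, the sketch below does frequency-based extractive summarization – scoring each sentence by the average corpus frequency of its words and returning the top one. It illustrates the idea only; it is not the Dimensions implementation, and the sample abstract is invented.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Return the n highest-scoring sentences as an extractive summary."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)
    # Emit the chosen sentences in their original order.
    chosen = set(ranked[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)

abstract = (
    "Machine learning models classify research outputs at scale. "
    "Classification of research outputs helps funders track research topics. "
    "The weather was pleasant during the conference."
)
print(summarize(abstract))
```

A production feature would instead prompt a language model with the full record, but the goal is the same: one short summary per record, on demand.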

Smarter searching in Dimensions

Other AI solutions will follow shortly from Digital Science, all of which seek to surface AI capabilities to support users with specific, relevant functionalities where AI in particular can offer improved results. Just as importantly, they have been developed with a grounding in reliability and responsibility so that users can trust them as they do with all our other products. 

Connecting your Data

The Dimensions Knowledge Graph, powered by metaphactory, helps customers harness the synergy of global research knowledge and their internal data, enabling AI-powered applications and business decisions.

AI-Powered Writing Support

Writefull uses big data and artificial intelligence to boost academic writing. With language models trained on millions of journal articles, it provides the best automated language feedback to date, leading the next generation of research writing help.

Deeper Understanding of Scholarly Papers

Available within ReadCube Enterprise Literature Management & Papers Reference Management, our beta AI Assistant is designed to enhance research efficiency by providing real-time, in-depth analysis, summarization, and contextual understanding of scholarly articles within a researcher’s literature library.

Our latest AI insights

An experienced partner in AI

The history of AI at Digital Science

AI & Digital Science

How does Digital Science use AI? We ask ChatGPT

AI: To Buy or Not to Buy
https://www.digital-science.com/blog/2023/11/ai-to-buy-or-not-to-buy/ | Thu, 30 Nov 2023
What AI capabilities is GE HealthCare bringing into the medical technology company? Here's what the patent data tells us.

Shortly after General Electric spun off its HealthCare division, the newly independent company started buying AI technology. To share some strategic insights, Digital Science's IFI CLAIMS Patent Services has taken a look at the target companies' patents to see what capabilities they bring into the medical technology company.

The phrase ‘patently obvious’ is used in many contexts, from political exchanges to newspaper op-ed columns. Curiously, it is rarely used in the realm of actual patents, but in the case of General Electric’s (GE) HealthCare division, its use seems entirely appropriate.

In early 2023, GE made the decision to spin off GE HealthCare, and immediately following the move the new entity began its M&A strategy by acquiring two companies of its own – Caption Health and IMACTIS. At this early stage, is it possible to infer whether these were sound investments? Six months on, there is still a way to go before full-year financial results are posted. However, Digital Science company IFI CLAIMS Patent Services – a global patent database provider for application developers, data scientists, and product managers – can gain insights by looking into the patents the newly enlarged GE HealthCare now holds.

Patents = Strategic Insights

It should be 'patently obvious', but checking companies' patents can be a part of any due diligence process before an investment decision is made. Not only does this help in understanding risk and technology overlaps, it can also be used to determine where the target's R&D efforts are currently focused, and in turn to set the strategy for the newly merged entity. Analyzing a company's patent holdings in the midst of M&A dealings provides insights such as:

  • The strategic direction of companies (e.g., the extent to which they are making strides in AI)
  • Whether core competencies overlap with the acquiring company's, based on each company's technologies
  • Whether a company's core competencies are enhanced by the acquisitions it has made
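One simple way to quantify the overlap point above is to compare the sets of patent classification codes held by acquirer and target. The sketch below is a hypothetical, minimal illustration – the code sets are invented, and real due diligence would weigh many more signals than a single similarity score.

```python
def classification_overlap(acquirer_codes, target_codes):
    """Jaccard similarity between two sets of patent classification codes."""
    a, t = set(acquirer_codes), set(target_codes)
    union = a | t
    return len(a & t) / len(union) if union else 0.0

# Hypothetical classification codes for illustration only.
acquirer = {"A61B-008", "G16H-030", "G06T-007"}  # imagined: ultrasound, health informatics, imaging
target = {"G16H-030", "G06N-020", "G06T-007"}    # imagined: health informatics, machine learning, imaging

overlap = classification_overlap(acquirer, target)
new_competencies = target - acquirer
print(f"Overlap: {overlap:.2f}; competencies added: {sorted(new_competencies)}")
```

A moderate overlap with some genuinely new codes is the 'good fit' pattern: shared ground to integrate on, plus capabilities (here, machine learning) the acquirer lacked.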

IFI’s latest acquisition report takes a look at GE HealthCare’s acquisitions of IMACTIS and Caption Health’s patented technologies to determine the innovative direction of the company.

‘A good fit’

So what insights can be gleaned from patent data about GE HealthCare and its nascent M&A strategy? According to the report, the acquisition of Caption Health and IMACTIS were a ‘good fit’ for GE HealthCare. Both the acquisitions point towards GE HealthCare’s continued growth in terms of both AI and the application of AI to its existing core technologies. Specifically:

  • IMACTIS is a tech healthcare company that offers, among other things, the provision of 3D virtual imaging to surgical navigation
  • Caption Health focuses on providing AI capabilities and image data generation to ultrasound technologies

You can see from the chart below that GE HealthCare competes with a number of major companies in filing AI-related patents, which surged in 2019-2020 before dipping in 2021. As such, the acquisitions in early 2023 of companies focused on technology – and AI in particular – seem a good strategic move, especially given the furore around AI technology since late 2022.

Competitive landscape for AI patent applications. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm

What the data says

The report concludes that both Caption Health and IMACTIS make sense for GE HealthCare for several reasons. In the current competitive climate, Caption Health adds necessary AI capabilities while IMACTIS adds new dimensions to the suite of patents it has with 3D virtual images. So overall, it’s a gold star for GE HealthCare when it comes to enhancing its patent – and future commercial – strategy. Isn’t that obvious?

Top patented concepts by Caption Health. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm
Top patented concepts by IMACTIS. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm

Three key takeaways

1. Digital Science’s IFI CLAIMS Patent Services – a global patent database provider for application developers, data scientists, and product managers – can help customers gain insights by looking into the patents held by firms, such as newly enlarged GE HealthCare.

2. IFI’s latest acquisition report takes a look at GE HealthCare’s acquisitions of IMACTIS and Caption Health’s patented technologies to determine the innovative direction of the company – the report concludes that both Caption Health and IMACTIS make sense for GE HealthCare for a number of reasons.

3. Checking companies' patents should be a part of any due diligence process before a corporate investment decision is made, especially in the pharmaceuticals sector.

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.

Digital Science: An Experienced Partner in AI
https://www.digital-science.com/resource/digital-science-partner-in-ai/ | Tue, 12 Sep 2023
Digital Science has built up capabilities in artificial intelligence over the years, developing a range of tools to support researchers and research organisations.

An Experienced Partner in AI

The history of AI at Digital Science

While 2023 has been all about artificial intelligence (AI) and its future capabilities, the technology has been progressing for several years with Digital Science at the forefront as an established partner for its many communities. As part of the Digital Science #MindTheTrustGap campaign, Simon Linacre presents an overview of how Digital Science has developed its capabilities in AI.

Anyone who has been involved in academia for the past 25 years or so will remember the challenges that have followed each leap forward in technology. How were we going to cope if the Millennium Bug crashed our machines on 1 January 2000? What would Web 2.0 mean for scholarly communications? Would MOOCs bring down the university as a learning institution?

All these big questions came and went without too much fuss in the end, showing how resilient scientific communities were in solving any problems that had arisen. In 2023, the Next Big Thing is of course AI – or more specifically, Generative AI that has been released on a mostly free basis to the world – and many people involved in research are pondering how it might impact them. 

Over the years, Digital Science has aimed to support researchers and research organizations in their early adoption and implementation of AI and AI-related technologies, such as machine learning (ML), natural language processing (NLP) and, more recently, large language models (LLMs). For a review of how Digital Science has supported research communities – and a refresher on what these and other AI-related terms mean – we have collated some of the key articles we have published in the last few years to guide people through the AI maze. From universities to pharmaceutical firms, from researchers to publishers, these articles look not just at technology like NLP itself, but at its applications and implications as well.

AI across different sectors

If you are working in a large corporate setting such as a pharmaceutical firm, AI will already be familiar to you. Nevertheless, some of the advancements using AI in recent years to support scientists have been nothing short of breathtaking. It has had a major impact on cancer research and detection, which was already apparent from this article in 2018; part of this has been the creation of huge knowledge bases such as Dimensions and devising tools to interrogate them to produce new knowledge and procedures. For example, as this post by Suze Kundu highlighted, technologies such as natural language processing could help identify specific information from huge amounts of unstructured data.

When Digital Science celebrated its 10th anniversary in 2020, one thing author Katy Alexander thought would be significant was that AI would enable both an increase in accessibility and ability to understand research across disciplines and themes in a much more structured way. We are seeing this play out already with the advancement of things like the Sustainable Development Goals (SDGs), and in turn AI is playing a part in enabling collaboration among academic researchers. 

Understanding how all these factors fit together and developing solutions that enable improved capabilities for researchers is at the heart of many academic publishers, who have felt the impact of AI in recent years most keenly. Five years ago we reported – somewhat prophetically – how machines were being revolutionized by AI and what the implications might be for academic publishers as automated text generation (ATG) had clear implications for creating article content. Further implications for publishing were also discussed by Suze Kundu with a look at the potential use of NLP in peer review, and the inevitable implications that would have on trust. As we have seen in the Digital Science campaign on trust with posts such as Daniel Hook’s on the ‘Lone Banana Problem’, while there are incredible advantages to be had from using AI, there are also disadvantages if the technology isn’t engineered well or if the user doesn’t account for potential errors.

Applications

When it comes to the application of AI technology, this is where the concept of a trusted partner is particularly important. When as individuals we use technology, we do so with implicit trust in the tool we are using – trusting it will work for us, that it will find the right results and that it won’t break any rules in doing so. Scale that up to organizations using hugely powerful tools across hundreds of individuals, then the trust in the technology partner also grows along with it. 

At Digital Science, we take the responsibility of being a trusted partner for customers and users very seriously, and one way we do this is in how we invest in future solutions for the research communities we work with. You can see this with the annual program we have had, known as Catalyst Grant. This showcased new technology in this space and led to a number of long-term investments, which in turn led to some companies becoming part of the Digital Science family. They have also included support for AI technology, such as two winners that sought to disrupt their niches in the research industry as highlighted in a piece by Alex Jackson. 

In his article on NLP and startups, Digital Science Director of Portfolio Development Steve Scott outlined what a key role NLP was already playing in Digital Science solutions, as well as its increasing value in the future. Rather prophetically he said, "In the next 10 years, we will see NLP capabilities expand and be embedded in new products and services, helping researchers navigate ever-expanding data outputs and allowing new ways to extract and interpret meaningful analysis from past and present papers." Only we are seeing this much more quickly than the 10 years Steve forecast.

Natural Language Processing

A key aspect of AI technology that Digital Science has focused on over the years has been NLP – so much so that we published a series of articles curated by Suze Kundu devoted to the topic. As well as asking – and answering – important questions such as ‘what is NLP’ posed by Suze, the series also looked at how NLP could assist in academic writing, which we have seen develop further with the Writefull solution for Digital Science. 

Another focus for NLP at Digital Science has been using AI as a force for good, mitigating against its risks for some communities in the future. This includes its use in product development for companies such as Scismic, where it has led the way in responsible recruitment, as well as Suze’s highlighting what the risks can be if certain warnings are not heeded. This is true right up until the present day as these risks are balanced in current AI technology developments at Digital Science, and we work to inform people about how what we witness with AI is not magic, but a science that requires high levels of skill and mastery as Daniel Hook so vividly portrayed in his article. 

Digital Science continues to work hard to achieve this – sometimes a little too hard. Despite our best efforts, we fooled no one on 1 April 2017 with ‘news’ of a new member of the leadership team, but who knows what might happen in the future?

AI at Digital Science
https://www.digital-science.com/resource/ai-digital-science/ | Fri, 18 Aug 2023
How do Digital Science and its portfolio of companies use AI?


How does Digital Science use AI? We ask ChatGPT

The year 2023 will forever be remembered as the year that AI finally broke into the mainstream. But in the research ecosystem, what do we mean by AI, and how is it already being used? Simon Linacre does what everyone has been doing and asks AI itself for some answers.

Anyone would be forgiven for thinking that artificial intelligence (AI) had only just been invented given the huge interest that has suddenly been generated in it in 2023. However, AI actually has a long history, being first conceived of in the 1950s and gradually becoming a reality in the decades after. Nevertheless, a real step forward has been made with Generative AI, which can create seemingly flawless text or images in just seconds. But is it really that new?

Writing these words in a Google Doc, I have already accepted one or two suggestions from the predictive text function, itself a form of AI. When you realize that the words you have chosen have not been 100% your own, you can see how the lines have started to blur with the more advanced capabilities of Generative AI. However, caution must still be used, as AI is more than capable of getting things wrong – just read Digital Science's post on 'The Lone Banana Problem'. Moreover, a lack of context or key detail can lead to misrepresentation or redundancy in the content being created.

Digital Science and AI

Digital Science has been using AI and related technologies such as Machine Learning (ML) and Natural Language Processing (NLP) for several years now – indeed, Dimensions has had https://www.dimensions.ai as its URL since its launch in 2018 – and it has become a trusted partner for hundreds of organizations that depend on its AI expertise. These are the ways Digital Science and its portfolio companies use AI-related technologies in 2023:

Dimensions

  • Document classification; Researcher Disambiguation; Research Integrity Trust Markers

OntoChem

  • Image classification to decide whether an image shows a chemical compound; Entity recognition to decide whether a term describes a chemical compound

ReadCube

  • Sentiment Analysis for Mentions; Article Recommendation Engine

metaphacts

  • Knowledge Graph and graph-based reasoning

Writefull

  • Core writing assistance offering is AI-based; AI detection
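To make one of the capabilities above concrete: researcher disambiguation decides when two publication records naming "Smith, J." refer to the same person. The toy sketch below blocks records on a normalized name and then splits each block by shared affiliation – a deliberately naive stand-in for the production approach, using invented records.

```python
from collections import defaultdict

def normalize(name):
    """Reduce 'Surname, First' to 'surname, first-initial' for blocking."""
    surname, _, first = name.partition(",")
    return f"{surname.strip().lower()}, {first.strip()[:1].lower()}"

def disambiguate(records):
    """Cluster records with a matching normalized name AND a shared affiliation."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[normalize(rec["author"])].append(rec)

    clusters = []
    for recs in blocks.values():
        groups = []  # each group: (affiliations_seen, records)
        for rec in recs:
            for affs, members in groups:
                if rec["affiliation"] in affs:
                    members.append(rec)
                    affs.add(rec["affiliation"])
                    break
            else:
                groups.append(({rec["affiliation"]}, [rec]))
        clusters.extend(members for _, members in groups)
    return clusters

# Invented records for illustration.
records = [
    {"author": "Smith, Jane", "affiliation": "Univ A", "title": "Paper 1"},
    {"author": "Smith, J.", "affiliation": "Univ A", "title": "Paper 2"},
    {"author": "Smith, John", "affiliation": "Univ B", "title": "Paper 3"},
]
print([len(c) for c in disambiguate(records)])
```

The two "Univ A" records merge into one researcher while "Smith, John" at "Univ B" stays separate; real disambiguation additionally weighs co-authors, topics, ORCID iDs and more.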

Ask ChatGPT

The information above is correct because, well, I work at Digital Science and I know it to be true. It also breaks the information down by constituent parts in terms of each Digital Science company and how it uses the relevant technology. So what happens if we ask a Generative AI model how Digital Science uses AI? Well, we get the text below courtesy of OpenAI’s new ChatGPT iPhone app:

“Digital Science, as a company that provides scientific software and research solutions, utilizes AI in various ways. Here are a few examples:

  • Text and data mining: AI algorithms are employed to analyze large volumes of scientific literature, extracting valuable insights, identifying trends, and accelerating the discovery process.
  • Natural Language Processing (NLP): Digital Science uses NLP techniques to enhance the understanding and processing of scientific texts, enabling advanced search capabilities, automatic categorization, and semantic analysis.
  • Predictive analytics: AI models are trained on scientific data to predict outcomes, such as the success or impact of research projects, funding opportunities, or potential collaborations, helping researchers make informed decisions.
  • Recommendation systems: By employing machine learning algorithms, Digital Science can provide personalized recommendations to researchers, suggesting relevant articles, journals, or research resources based on their interests and previous activities.
  • Data visualization and exploration: AI-powered tools enable the creation of interactive visualizations and data exploration platforms, allowing scientists to gain deeper insights from complex datasets and facilitate data-driven decision-making.

These are just a few examples of how Digital Science incorporates AI into its offerings. The field of AI is rapidly evolving, and new applications are continually emerging to support scientific research and discovery.”
Source: ChatGPT

While all of the information presented by the AI is correct, it is broken down by technical functionality, presenting only 'examples' of how technology is used and not what it is used on. As such, it is technically accomplished but of little interest to most audiences, who would miss the context of how the technology is actually being applied.

AI vs Humans

In 2023, while the hoo-haa around Generative AI was at its peak, Digital Science launched a campaign called 'Mind the Trust Gap', which sought to underline the importance of trust in research, both among those who carry out research and the rest of us who depend on its integrity and quality. Developments in AI technology are often compared to humans as if it were some sort of competition or even battle, but it is perhaps more sensible to view it as a relationship entering a new phase, and trust is key to that relationship. As long as we use AI with the transparency and openness of the examples above, we can build a better understanding of the world around us.

Can our AI-powered solutions help you?

Artificial Intelligence and Peer Review
https://www.digital-science.com/blog/2020/09/nlp-series-ai-and-peer-review/ | Wed, 23 Sep 2020
Find out some of the potential applications of AI in research, from facilitating collaboration between researchers to writing papers.


Despite the fact that, for many people, it still feels like the middle of March, we have somehow made it to September and find ourselves celebrating the sixth annual Peer Review Week! This year’s theme is Trust, and what better way to celebrate than to look back on some of the amazing developments and discussions happening around peer review and natural language processing (NLP).

In April’s episode of RoRICast, the podcast produced by the Research on Research Institute that Digital Science co-founded a year ago, my co-host Adam Dinsmore and I chatted to Professor Karim Lakhani, the Charles E. Wilson Professor of Business Administration and the Dorothy and Michael Hintze Fellow at Harvard Business School. Karim is an expert in the application of artificial intelligence in research processes, from collaboration to peer review.

Karim joined us from his home in marvellous Massachusetts. Although an MIT graduate, Karim is now based across the river at Harvard Business School. His research involves analysing a range of open source systems to better understand how innovation in technology works. One of his specific research interests is in contest-driven open innovation and how, by throwing problems open to the wider world, we are often able to engage with a range of as yet unexplored solutions, owing to the different approaches a fresh perspective can bring.

Having determined that science is both a collaborative and competitive process, Karim and his team run experiments to better understand how teams are formed, and how different novel ideas are evaluated. Karim is also investigating the impact of artificial intelligence (AI) on organisations in terms of optimising scale and scope and gathering insights to help shape future business strategy.

Mirroring the experiences of Digital Science's own Catalyst Grant judges and mentors, Karim has seen a rise in machine-learning based tech solutions at innovation contests. His latest book, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, includes examples of how AI is now not only having an impact on technology and innovation but also on our everyday lives. Karim's work informs best practice in research and innovation by conducting research on research.

In this episode of RoRICast, Karim gave us some examples of how AI is not just confined to sci-fi movies and Neal Stephenson novels, though such stories give a great many examples of what is termed ‘strong AI’: machines with general, human-like intelligence capable of carrying out a wide range of tasks. However, ‘weak AI’ – technology created to do one narrow task very well – has already permeated our everyday lives, whether through some of the NLP solutions we have discussed in this blog series, or through something as commonplace as our voice-activated smart devices capable of playing Disney songs on demand, our email spam filters, or even our Netflix recommendations.

Karim discussed some of the potential applications of AI in research, from facilitating collaboration between researchers to writing papers. He also discussed how research can implement aspects of NLP within the research process that relate to peer review. For example, by using an NLP-driven tool such as Ripeta, researchers can receive recommendations on how to improve a paper prior to submission. Ripeta analyses the reproducibility and falsifiability of research, including everything from a well-reported methodology to the inclusion of data that adheres to FAIR principles.

With the rise of the open research movement, preprints have been gaining momentum as an important research output alongside more traditional journal publications. This is particularly relevant during the current COVID-19 pandemic, when research output is being produced at an unprecedentedly high volume and many researchers are opting to share their work via preprints, which undergo an ongoing and dynamic review process rather than the more formal journal peer-review process.

A rise in preprint publication has been seen across almost all fields of research in 2020, in part because so many research areas contribute to solving the challenges of a global pandemic. This has, however, led to some concern over whether preprints are a trustworthy research output without more formal peer review practices. It is here that a tool like Ripeta could add a level of trust, transparency, robustness and reliability to research shared via preprint, even before the work is shared. The Ripeta team investigated this perceived lack of confidence in COVID-19-related preprints and found that, although reporting habits in pandemic-related preprints showed some room for improvement, overall the research being conducted and shared was sound.

The use of AI in peer review is a hot topic. There are many reasons to use AI in peer review, such as eliminating the potential conflict of interest posed by a reviewer in a closely related field, or quickly assessing a vast volume of submissions, for example during a global pandemic. However, the technology has limitations, and we must consider whether an AI system could propagate and amplify bias within the process, simply by failing to account for, or eliminate, the bias present in the training data fed to the programme. As Joris van Rossum explained in his article on the limitations of tech in peer review, AI that has learned from historic decisions can reinforce imbalances and propagate the impact of unconscious biases in research.

Karim went on to describe how AI can be built to mitigate such circumstances and actually break down many barriers to inclusion, provided we as a community invest the time and effort in creating good data, testing the technology, and ensuring that programs work fairly and ethically; an aspect of social science research that RoRI is particularly interested in. Furthermore, complementary AI could be used in other parts of the research process to eliminate many of the stumbling blocks researchers face from reviewers when submitting a paper.

Using AI in peer review is just one example of open innovation to improve an aspect of research, but when can we expect to see this and other AI solutions being widely adopted as part of the research process? There is already a lot of tech around us, but within the next few years, this field will expand further as we learn more about how research works. By conducting research on research, researchers like Karim can uncover trends and connections in a range of research processes, and work towards creating tech solutions that will alleviate the burden and increase the efficiency of research.

We would like to thank Professor Karim Lakhani for giving up his time to join us for this episode of RoRICast. You can hear this whole episode of RoRICast here.

SEE MORE POSTS IN THIS NLP SERIES

We’ll be staying on the topic of peer review and pandemics by kicking off a mini-blog series tomorrow on the PREreview Project. Earlier this year, publishers, industry experts and the preprint review platform PREreview joined together to respond to overwhelming levels of COVID-19 papers. Using information and feedback from the parties and reviewers involved, our authors Jon Treadway and Sarah Greaves examine what happened, whether the initiative succeeded, and what the results can tell us about peer review.

The post Artificial Intelligence and Peer Review appeared first on Digital Science.

How will technology shape research in the coming decade? https://www.digital-science.com/blog/2020/04/digital-science-is-10-how-will-technology-shape-research-in-the-coming-decade/ Wed, 29 Apr 2020 11:04:10 +0000 https://www.digital-science.com/?p=33722 Making predictions is never easy – particularly with the potential for unanticipated developments, but mining Dimensions’ data can provide us with some clues.

The post How will technology shape research in the coming decade? appeared first on Digital Science.

Quick Read
  • The rising popularity of technologies such as the Internet of Things (IoT) offers interconnectivity of devices that is allowing us to gather data on an unprecedented scale
  • 3D printing technology is offering labs the opportunity to create their own equipment at a greatly reduced cost
  • For many researchers, data has already replaced the traditional theoretical hypothesis as the trigger point for a new research project
  • Funders are not only scanning grant applications for signs of societal impact; many now want to see evidence of collaboration

Author Biographies

Inez van Korlaar

Inez van Korlaar is the Marketing Director for Dimensions. She has a PhD in Medical Psychology and joined Digital Science in September 2018. Before joining Digital Science she worked in scholarly publishing in a variety of publishing, marketing, and product management roles.

Cristina Huidiu

Cristina Huidiu is a product specialist for Dimensions and Altmetric. A former librarian, she is fascinated by data stories that can be used as strategic insights by research institutions.

Dusting off our crystal ball

As Digital Science celebrates its 10th birthday this year, Suze Kundu kicked off our birthday year in style with her January post on the game changers that have reshaped research over the past 10 years. So, the Dimensions team decided to dust off their crystal ball and attempt to uncover what the next 10 years may bring. Making predictions is never easy – particularly with the potential for unanticipated developments, such as the novel coronavirus, COVID-19. But mining Dimensions’ rich and interconnected data can certainly provide us with some useful clues.

You only have to glance through the publication, clinical trial, grant and patent data in Dimensions to see that a number of factors are likely to drive change in the coming decade. Many of these factors won’t come as a surprise – they are already key features of the research landscape. These range from funding challenges to evolving impact measures, and the rise in collaboration to the growth of open science. But if there is “one theme to rule them all”, it would have to be technology. Not only is it rapidly evolving in its own right, it is also empowering change at every stage of the research lifecycle.

Understanding the technology revolution

The rising popularity of technologies such as the Internet of Things (IoT) offers an interconnectivity of devices that is allowing us to gather data on an unprecedented scale. And now, thanks to increasingly sophisticated algorithms and advances in artificial intelligence (AI), we can interrogate that data in ways that were barely imaginable a few years ago. In fact, for many, AI is the technology with the greatest potential to truly transform research and society as a whole; as a result, it has become one of the fastest-growing research fields. We can see this reflected in the rise in AI-related Dimensions publications over the past 10 years (figure 1).

Figure 1:  The publication growth for AI and image processing (a subcategory of Information and Computer Sciences) was 131.6% between 2009 and 2019. This compares to a total growth of 93.2% for all subcategories in Information and Computer Sciences over the same period.

At the same time, 3D printing technology is offering labs the opportunity to create their own equipment at a greatly reduced cost. These printers can even mimic human tissue and organs, creating the potential to reduce – and possibly eventually eliminate – animal testing. Technology is also fuelling the automation of many labor-intensive lab processes, and can perform tests with greater speed, and often greater accuracy, than the humans it replaces.

Then there’s augmented reality (AR), virtual reality (VR), blockchain… the list goes on. All of these technologies are combining to fuel an enormous shift in what we research, how those experiments are conducted and how we access their results. Let’s take a closer look.

Driving decision-making

Increasingly, data is dictating the research agenda and that is not going to change. For many researchers, data has already replaced the traditional theoretical hypothesis as the trigger point for a new research project. Data generated by the Internet of Things network is also allowing researchers to monitor and manage experiments remotely and identify when a change of direction is needed. This trend will grow and, as the decade progresses, researchers will be called upon to enhance their data skills.

Data is also used to rate institutions, which, in turn, use it to understand and showcase their performance to potential funders, employees, students and collaborators. Importantly, funders use it to identify areas where their money is needed most and which researchers and institutions are likely to spend it wisely – and that push to spend money wisely will only increase in importance in the years ahead. Traditionally, the number of articles a researcher has published, how often those articles have been cited and the “quality” of the journals in which they appeared have proved crucial factors in securing funding and career progression. But, with much research funded by taxpayers, researchers are now under pressure to demonstrate that their findings will benefit those footing the bill.

Figure 2: Active grants from 2009 to 2047 awarded by the European Commission. The top 10 disciplines awarded funding carry additional metrics, such as the overall number of grants, acknowledged resulting publications, and the impact of those publications expressed through our field-weighted metric, the Field Citation Ratio (FCR). A further layer of societal impact for the acknowledged publications is expressed through the percentage of publications with attention tracked by Altmetric.

The role of the United Nations’ Sustainable Development Goals (SDGs)

Global challenges are also feeding the demand for research with societal impact; for example, the ramifications of climate change (rising sea levels, natural calamities, refugees, food shortages, etc.) and an ageing population with its economic and health consequences. Increasingly, funding is aligned with the United Nations’ Sustainable Development Goals (SDGs). To help the community track research aligned with these goals, we’ve added a filter to Dimensions for all 17 SDGs (see figure 3).

Figure 3:  Allocation of grants by the 10 main funders to the 17 SDGs. The data, drawn from Dimensions, show that affordable clean energy has been awarded the highest volume of funding to date.

Technology is already fuelling these new ways of determining impact beyond scholarly citation, such as tracking usage and online mentions through altmetrics. The next step may be to leverage its potential to measure other aspects of research from how openly a researcher shares their data, to how easy their work is to replicate and its benefit to society.

We are also seeing new sources of funding emerge. With so many pressures on the public purse – a situation that is only likely to increase following the economic fallout of COVID-19 – philanthropists and industry are stepping up their investment in R&D. And for these groups too, technology will be in the driving seat. For example, the stated aim of the Bill & Melinda Gates Foundation’s multi-billion dollar fight against malaria is to reduce deaths through “improved data-driven decisions” and to build the systems and platforms required to achieve that goal.

Reshaping how research is conducted

Funders are not only scanning grant applications for signs of societal impact; many now want to see evidence of collaboration. These collaborations can take many forms; for example, institutional agreements spanning multiple countries, and industry/academia partnerships to achieve shared goals. This pooling of knowledge and funds brings many benefits, especially for researchers in developing regions who may not have access to the same cutting-edge technology and equipment as their counterparts in research-intensive countries. It is also supporting a move towards mega-science, with initiatives such as CERN, the world’s largest particle physics laboratory. One of the first European scientific collaborations, CERN now makes its ground-breaking technology available to research institutes all over the world.

Figure 4 uses Dimensions publications data to map the steady rise in collaborative projects over the past decade – we can expect to see this trend continue over the coming 10 years.  In fact, the urgent need to respond to COVID-19 is likely to accelerate growth – only by working together and sharing knowledge can researchers and governments secure their nations’ health and economies.

Figure 4: The percentage rise in collaborations between 2010 and 2020, based on Dimensions publications data.

And that growth in collaboration is likely to be evident across both open access and subscription publications, as figure 5 demonstrates.

Figure 5: Collaboration patterns across open access and subscription publications between 2010 and 2020, based on Dimensions publications data. 

This increase has been made possible by new tools to support sharing of large data sets, collaborative writing and online meetings; in many cases, collaborators will never even meet face to face.

Interdisciplinary science will also continue to grow in the years ahead. This rise will be driven, in part, by the complexity of the global challenges we face, which many feel can only be solved by the blurring of disciplinary borders. This too is being fuelled by technology as sophisticated algorithms and software help researchers discover and consume content outside their own fields.

Figure 6: The interdisciplinarity of artificial intelligence research

As we touched on briefly earlier in this article, life within the lab and the lecture hall will continue to evolve thanks to technology, which offers the power to conduct lessons and experiments remotely, enhance our understanding in virtual or augmented environments, and so much more. It is likely that the need to find new ways of working prompted by COVID-19 will accelerate change in this area. It has already triggered more timely and collaborative ways of sharing knowledge in a bid to accelerate discovery; for example, we are making available all COVID-19 related published articles, preprints, datasets and clinical trials from Dimensions in one file, updated daily, which is free for anyone to access.

Revolutionizing access

Open science – including Open Access publishing – has been a major trend over the past decade and its importance will only increase (see figure 7).  This move toward making science more transparent, accessible, and to share results as early as possible, is driven by many of the themes we’ve explored in this article, from a growing call for public accountability to the rise in global challenges.

Figure 7: The volume of research published each year is rising. In this graph, which draws on Dimensions publications data, we can see that the growth of Open Access articles is accelerating at a faster pace than subscription publications.

Another factor is the perceived reproducibility crisis currently facing science, prompted by the worryingly high number of published scientific studies that are difficult, or even impossible to replicate. There is a belief that by sharing data and other elements of the research process more openly, the crisis will be resolved. This has resulted in the launch of new online solutions which host all stages of the research cycle from source code to data and software designs, such as Figshare. Datasets are also now available in Dimensions as a content type.

We’ve also seen the launch of preprint servers to support early sharing of results and online channels for the negative results and sound science papers that researchers have traditionally struggled to find a home for.

Figure 8: Preprint publications grew by more than 175% between 2009 and 2019.

These new platforms mean there is more content available than ever before. Many companies are leveraging technology to filter this sea of literature and use algorithms to track and understand researchers’ preferences. We can expect this software to become ever more sophisticated in the decade ahead.
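At its simplest, such preference-based filtering can be sketched as keyword-profile matching. This is a toy illustration only; the titles and scoring scheme below are invented for the example and bear no relation to any specific product:

```python
from collections import Counter

def build_profile(read_titles):
    """Count keywords across titles the researcher has already read."""
    profile = Counter()
    for title in read_titles:
        profile.update(title.lower().split())
    return profile

def score(profile, title):
    """Score a candidate paper by keyword overlap with the profile."""
    return sum(profile[word] for word in title.lower().split())

history = ["deep learning for peer review",
           "machine learning in scholarly publishing"]
profile = build_profile(history)

candidates = ["machine learning for publishing",
              "marine biology field survey"]
# Rank candidates so the best match for this reader comes first.
ranked = sorted(candidates, key=lambda t: score(profile, t), reverse=True)
print(ranked[0])  # -> machine learning for publishing
```

Real recommender systems layer far richer signals (citations, co-readership, embeddings) on top, but the core loop of profiling past behaviour and ranking new content against it is the same.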

There are other factors that will influence how research evolves in the coming decade, from shifts in geopolitical power to developments that are impossible to anticipate today. However, what is clear is that technology will play a key role, reshaping the research landscape in ways we have yet to imagine. There will be challenges along the way, ranging from ethical questions around AI to funding and political agendas, but with the shared passion for discovery that drives us all in the research community, we at Dimensions believe the future is filled with possibility!

NLP Series: AI in Science; the Promise, the Challenge, and the Risk https://www.digital-science.com/blog/2020/04/nlp-series-ai-in-science-promise-challenge-risk/ Tue, 07 Apr 2020 18:46:22 +0000 https://www.digital-science.com/?p=33578 Dr Joris van Rossum focuses on AI in science and looks at the potential to make research better, but also the pitfalls.

The post NLP Series: AI in Science; the Promise, the Challenge, and the Risk appeared first on Digital Science.


Continuing our blog series on Natural Language Processing, Dr Joris van Rossum focuses on AI in science; the potential to make research better, but also the pitfalls that we must be wary of when creating and applying these new technologies. Joris has over 20 years of experience driving change in the publishing industry through new technologies and business models. His former roles include Director of Publishing Innovation at Elsevier and Director of Special Projects at Digital Science, a role in which he authored the Blockchain for Research report. He co-founded Peerwith in 2015, and currently serves as Research Data Director at STM, where he drives the adoption of sharing, linking and citing data in research publications.

Understanding the risks

According to Professor Thomas Malone, Director of the MIT Center for Collective Intelligence, AI should essentially be about connecting people and computers so that they collectively act more intelligently than any individual person, group or computer has ever done before. This connectivity is at the core of science and research. Science is a collective activity par excellence, connecting millions of minds in space as well as time. For hundreds of years, scientists have been collaborating and discussing their ideas and results in academic journals. Computers are increasingly important for researchers: in conducting experiments, collecting and analyzing data and, of course, in scholarly communication. Reflecting on this, it is perhaps surprising that AI does not play a bigger role in science today. Although computers are indispensable for modern scientists, the application of artificial intelligence lags behind other industries, such as social media and online search. Despite its huge potential, uptake of AI has been relatively slow. This is due in part to the nascent state of AI, but also to cultural and technological features of the scientific ecosystem. We must be aware of these in order to assess the risks of unreflectively applying artificial intelligence in science and research.

AI and NLP in healthcare

A logical source of data for intelligent machines is the corpus of scientific information that has been written down in millions of articles and books. This is the realm of Natural Language Processing (NLP). By processing and analyzing this information, computers could come to insights and conclusions that no human could ever reach individually. Relationships between fields of research could be identified, proposed theories collaborated on or rejected based on an analysis of a broad corpus of information, and new answers to problems given.

This is what IBM’s Watson has attempted in the field of healthcare. Initiated in 2011, it aims to build a question-and-answer machine based on data derived from a wealth of written sources, helping physicians in clinical decisions. IBM has initiated several efforts to develop AI-powered medical technology, but many have struggled, and some have even failed spectacularly. What this lack of success shows is that it is still very hard for AI to make sense of complex medical texts. This will therefore most certainly also apply to other types of scientific and academic information. So far, no NLP technology has been able to match human beings in comprehension and insight.

Barriers to information

Another reason for the slow uptake of NLP in science is that scientific literature is still hard to access. The dominant subscription and copyright models make it impossible to access the entire corpus of scientific information published in journals and books by machines. One of the positive side effects of the move towards Open Access would be the access to information by AI engines, although a large challenge still lies in the immaturity of NLP to deal with complex information.

More data give greater context

Despite the wealth of information captured in text, it is important to realize that the observational and experimental data underlying articles and books is potentially much more powerful for machines. In most branches of science the amount of information collected has increased with dazzling speed: think of the vast amounts of data collected in fields like astronomy, physics and biology. This data would allow AI engines to do fundamentally much more than is done today. In fact, the success that born-digital companies like Amazon and Google have had in applying AI is to a large extent due to the vast amount of data at their disposal. AI engines could create hypotheses on the genetic origin of diseases or the causes of global warming, test these hypotheses by plowing through the vast amount of data produced daily, and so arrive at better and more detailed explanations of the world.

Shifting the culture around data sharing to create better AI

A challenge here is that sharing data is not yet part of the narrative-based scholarly culture. Traditionally, information is shared and credit earned in the form of published articles and books, not in the underlying observational and experimental data.

Important reasons for data not being made available are the fear of being scooped and the lack of incentives, as the latest State of Open Data report showed. Thankfully, in recent years efforts have been made to stimulate or even mandate the sharing of research data. Although these efforts are primarily driven by the need to make science more transparent and reproducible, enhancing the opportunity for AI engines to access this data is a promising and welcome side-effect.

Like the necessary advancement of NLP techniques, making research data structurally accessible and AI-ready will take years to come to fruition. In the meantime, AI is being applied in science and research in narrower domains, assisting scientists and publishers with specific steps in their workflows. AI can build better language editing tools, as in the case of Writefull, who we will hear from in the next article in this series. Publishers can apply AI to perform technical checks, as Unsilo does; to scan submitted methods sections to assess the reproducibility of research, the way Ripeta and SciScore do; and to analyze citations, like Scite. Tools are being developed to scan images in submitted manuscripts to detect manipulation and duplication, and of course scientists benefit from generic AI applications such as search engines and speech and image recognition tools. Experiments have also been done with tools that help editors make decisions to accept or reject papers: the chance of publishing a highly cited paper is predicted based on factors including the subject area, authorship and affiliation, and the use of language. This last application exposes an essential characteristic of machine learning that should make us cautious.

Breaking barriers, not reinforcing them

Roughly speaking, in machine learning, computers learn by identifying patterns in existing data. A program goes through vast numbers of texts to determine the predominant contexts in which words occur, and uses that knowledge to determine which words are likely to follow. In the case of tools that support editors in their decision to accept or reject papers, it identifies factors that characterize successful papers, and makes predictions based on the occurrence of these factors in submitted papers. This logically implies that these patterns will be strengthened. If a word is frequently used in combination with another word, the engine subsequently suggesting this word to a user will lead to that word being used even more frequently. If an author was successful, or a particular theory or topic influential, AI will make them even more so. And if women or people from developing countries have historically published less than their male counterparts from Western countries, AI can keep them underperforming.

In other words, AI risks consolidating contemporary structures and paradigms. But as the philosopher of science Thomas Kuhn showed, real breakthroughs are characterized by breaking patterns and replacing paradigms with new ones. Think of the heliocentric worldview of Kepler, Copernicus and Galileo, Darwin’s theory of natural selection, and Einstein’s theory of relativity. Real progress in science takes place by means of the novel, the unexpected, and sometimes even the unwelcome. Humans are conservative and biased enough. We have to make sure that machines don’t make us even more so.
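The self-reinforcing dynamic described above can be illustrated with a toy next-word predictor. This is a deliberately simplified sketch; the corpus and words below are invented for the example, and real editorial-support systems are far more sophisticated. A model trained on historical text always proposes the historically dominant continuation, so every accepted suggestion entrenches the majority pattern further:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words have followed it historically."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def suggest(model, word):
    """Propose the most frequent historical continuation, if any."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

# A skewed historical corpus: one continuation dominates.
corpus = [
    "the lead author is male",
    "the lead author is male",
    "the lead author is female",
]
model = train_bigrams(corpus)

# The engine always proposes the majority pattern; every accepted
# suggestion adds to the skew of the next generation of training data.
print(suggest(model, "is"))  # -> male
```

Because the suggestion feeds back into future text, the 2:1 imbalance in the training data drifts towards 3:1, then 4:1, and so on; exactly the consolidation of existing patterns the article warns about.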

DOI: https://doi.org/10.6084/m9.figshare.12092403.v1


Launching our blog series on Natural Language Processing (NLP) https://www.digital-science.com/blog/2020/03/launching-our-blog-series-on-natural-language-processing-nlp/ Wed, 04 Mar 2020 15:25:30 +0000 https://www.digital-science.com/?p=33083 Launching our blog series on Natural Language Processing (NLP)

The post Launching our blog series on Natural Language Processing (NLP) appeared first on Digital Science.

Today we launch our blog series on Natural Language Processing, or NLP. A facet of artificial intelligence, NLP is increasingly being used in many aspects of our everyday life, and its capabilities are being implemented in research innovation to improve the efficiency of many processes.

Over the next few months, we will be releasing a series of articles looking at NLP from a range of viewpoints, showcasing what NLP is, how it is being used, what its current limitations are, and how we can use NLP in the future. If you have any burning questions about NLP in research that you would like us to find answers to, please email us or send us a tweet. As new articles are released, we will add a link to them on this page.

Our first article is an overview from Isabel Thompson, Head of Data Platform at Digital Science. Her day job is also her personal passion: understanding the interplay of emerging technologies, strategy and psychology, to better support science. Isabel is on the Board of Directors for the Society of Scholarly Publishing (SSP), and won the SSP Emerging Leader Award in 2018. She is on Twitter as @IsabelT5000

NLP is Here, it’s Now – and it’s Useful

I find Natural Language Processing (NLP) to be one of the most fascinating fields in current artificial intelligence. Take a moment to think about everywhere we use language: reading, writing, speaking, thinking – it permeates our consciousness and defines us as humans unlike anything else. Why? Because language is all about capturing and conveying complex concepts using symbols and socially agreed contracts – that is to say: language is the key means of transferring knowledge. It is therefore foundational to science.

We are now in the dawn of a new era. After years of promise and development, the latest NLP algorithms now regularly score more highly than humans on structured language analysis and comprehension tests. There are of course limitations, but these should not blind us to the possibilities. NLP is here, it’s now – and it’s useful.

NLP’s new era is already impacting our daily lives: we are seeing much more natural interactions with our computers (e.g. Alexa), better quality predictive text in our emails, and more accurate search and translation. However, this is just the tip of the iceberg. There are many applications beyond this – many areas where NLP makes the previously impossible, possible.

Perhaps most exciting for science at present is the expansion of language processing into big data techniques. Until now, the processing of language has been almost entirely dependent on the human mind – but no longer. Machines may not currently understand language in the same way that we do (and, let’s be clear, they do not), but they can analyse it and extract deep insights from it that are broader in nature and greater in scale than humans can achieve.

For example, NLP offers us the ability to run a semantic analysis on every bit of text written in the last two decades, and to get insight on it in seconds. This means we can now find relationships in corpora of text today that it would previously have taken a PhD to discover. To be able to take this approach to science is powerful, and this is but one example – given that so much of science and its infrastructure is rooted in language, NLP opens up the possibility for an enormous range of new tools to support the development of scientific knowledge and insight.
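A minimal sketch of one such analysis uses word co-occurrence counts and cosine similarity; the sentences below are invented for the example, and production systems use far richer models. Words that appear in similar contexts end up with similar vectors, so relationships surface from the corpus automatically:

```python
import math
from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of the words appearing near it."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            for j in range(max(0, i - window),
                           min(len(words), i + window + 1)):
                if i != j:
                    vectors[word][words[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = (math.sqrt(sum(x * x for x in u.values())) *
            math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

sentences = [
    "the vaccine trial reported strong immune response",
    "the drug trial reported strong immune response",
    "the telescope observed a distant galaxy",
]
vecs = cooccurrence_vectors(sentences)

# Words used in similar contexts score higher than unrelated ones.
print(cosine(vecs["vaccine"], vecs["drug"]) >
      cosine(vecs["vaccine"], vecs["telescope"]))  # -> True
```

Scaled from three sentences to millions of papers, the same idea lets a machine cluster topics and surface connections no individual reader could scan for.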

Google’s free NLP sentence parsing tool

NLP is particularly interesting for the research sector because these techniques are – by all historical comparisons – highly accessible. The big players have been making their ever-improving algorithms available to the public, ready for tweaking into specific use cases. For researchers, funding agencies, publishers, and software providers, there is therefore a great deal of opportunity to be had with, relatively speaking, little technical overhead.

Stepping back, it is worth noting that we have made such rapid advances in NLP in recent years thanks to the collaborative and open nature of AI research. Unlike in any cutting-edge scientific discipline before, we are seeing the most powerful tools open-sourced and available for massive and immediate use. This democratises the ability to build upon the work of others and to use these tools to create novel insights. This is the power of open science.

Here at Digital Science, we have been investigating and investing in NLP techniques for many years. In this blog series, we will share an overview of what NLP is, examine how its capabilities are developing, and look at specific use cases for research communication – to demonstrate that NLP is truly here. From offering researchers writing support and article summarisation, to assessing reproducibility and spotting new technology breakthroughs in patents, all the way through to detecting and reducing bias in recruitment: this new era is just getting started – and where it can go next is up to your imagination.

Look out for the next article in our series, “What is NLP?”, and follow the conversation using the hashtag #DSreports.

The post Launching our blog series on Natural Language Processing (NLP) appeared first on Digital Science.

]]>
From Artificial Intelligence and AI Scientists to Pharmaceutical Analytics – 2019 Catalyst Grant Winners https://www.digital-science.com/blog/2019/09/from-artificial-intelligence-and-ai-scientists-to-pharmaceutical-analytics-2019-catalyst-grant-winners/ Wed, 25 Sep 2019 12:08:32 +0000 https://www.digital-science.com/?p=32105 2019 Catalyst Grant Winners

The post From Artificial Intelligence and AI Scientists to Pharmaceutical Analytics – 2019 Catalyst Grant Winners appeared first on Digital Science.

]]>

BPT Analytics, Intoolab and MLprior, three projects aiming to disrupt the academic space, are the latest recipients of the Catalyst Grant award for innovative startups. The grant is our international initiative to support innovative projects and technologies, awarding up to £25,000 or $30,000 for concepts with the potential to transform scientific and academic research.

BPT Analytics is an online business intelligence tool for the pharmaceutical industry. The tool is built on top of an up-to-date database of life science companies, which tracks what they do and how they perform in the market. It follows the team’s already established and growing publishing platform BioPharmaTrend.com, which features articles from leading pharma professionals and business leaders.

Co-founder Dr. Andrii Buvailo commented:

“While there is a plethora of large-scale business intelligence platforms on the market, the majority of them are too general for such a domain-specific market as drug discovery, so they can’t grasp important nuances critical for decision making. BPT Analytics aims to eliminate as much guesswork as possible from the practice of pharmaceutical industry strategists, business developers, and decision-makers, by providing them with visualized access to systematic and constantly curated data about the most innovative industry players, trends, and opportunities.”

Intoolab is an artificial intelligence platform built for pharmaceutical companies and researchers. Its main feature, Tzager, is an AI scientific tool that scours millions of research papers, helping to find causal connections and join the dots between papers – work that would otherwise take significant time. The tool has been developed in collaboration with a number of universities worldwide, and a pilot has been completed at Aarhus University in Denmark.
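Tzager’s actual methods are not public, but the underlying idea – literature-based discovery – can be sketched in miniature: link concepts that co-occur in the same paper, then search for a chain connecting two concepts that no single paper mentions together. A toy version, with entirely hypothetical concept sets:

```python
from collections import defaultdict, deque

def build_graph(papers):
    """Link every pair of concepts mentioned together in the same paper."""
    graph = defaultdict(set)
    for concepts in papers:
        for a in concepts:
            for b in concepts:
                if a != b:
                    graph[a].add(b)
    return graph

def connect(graph, start, goal):
    """Breadth-first search for a chain of concepts linking start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical concept sets extracted from three different papers.
papers = [
    {"aspirin", "inflammation"},
    {"inflammation", "cytokines"},
    {"cytokines", "rheumatoid arthritis"},
]
graph = build_graph(papers)
# No single paper links aspirin to rheumatoid arthritis; the graph does.
```

Here no individual paper connects the first concept to the last, yet the search recovers a plausible chain through the intermediate papers – the kind of dot-joining that would otherwise demand extensive reading.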

CEO Nikos Tzagkarakis commented:

“The biggest problem in drug discovery is that there are millions of research papers with different information, but there are also millions of potential combinations of concepts that could solve specific problems. We are trying to solve the problem at its core by not just connecting information, but also creating an intelligence that understands the mechanics of ‘why’ things happen. The grant will enable us to develop our deep learning methods faster and also connects us with the valuable network of Digital Science. We are confident Tzager will become increasingly intelligent and we’re excited for the first time it will figure out an original solution in medicine and drug discovery.”

MLprior is a tool which uses AI-based analysis to predict whether a scientific paper will be accepted at a conference. The co-founders behind the product, Denis Volkhonskiy and Vladislav Ishimtsev, have both been actively researching AI with a focus on creating new models and algorithms at Skolkovo Institute of Science and Technology for the past five years. They are joined by PhD students Nikita Klyuchnikov from Skolkovo Institute of Science and Technology and Pavel Shvechikov from Higher School of Economics, who make up the four-person team.
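Predicting acceptance is, at its core, a text-classification problem. MLprior’s actual models are not described here, but a minimal Naive Bayes classifier – invented toy data, standard library only – shows the shape of the approach:

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: (tokens, label) pairs. Returns per-label word and label counts."""
    counts, labels = defaultdict(Counter), Counter()
    for tokens, label in samples:
        labels[label] += 1
        counts[label].update(tokens)
    return counts, labels

def classify(counts, labels, tokens):
    """Pick the label maximising log P(label) + sum of log P(token|label),
    with add-one smoothing over the shared vocabulary."""
    vocab = {t for c in counts.values() for t in c}
    total = sum(labels.values())
    best, best_score = None, float("-inf")
    for label in labels:
        score = math.log(labels[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            score += math.log((counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy training data: words drawn from hypothetical review snippets.
train_data = [
    ("novel method significant improvement benchmark".split(), "accept"),
    ("strong results thorough evaluation novel".split(), "accept"),
    ("unclear writing missing baselines limited".split(), "reject"),
    ("incremental limited evaluation unclear".split(), "reject"),
]
counts, labels = train(train_data)
```

Real acceptance prediction works on full paper text with far richer features, but the pipeline – count, weight, compare per-label scores – is the same in spirit.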

Denis Volkhonskiy commented:

“Our product simplifies and speeds up the process of writing scientific papers. We use artificial intelligence for analysing the text of the article and suggesting improvements. We hope to become a must-have service for every researcher. Researchers spend several months polishing scientific papers from draft to publication, checking formulas and correcting mistakes – our tool will hopefully help save a lot of time.”

Steve Scott, Director of Portfolio Development at Digital Science said:

“Once again, we would like to thank the community of researchers and entrepreneurs for sharing their ideas and passion with us. The field for this round of the Catalyst Grant was brimming with great ideas and narrowing down the entries proved a real challenge. The three winners reflect our belief that AI and machine learning solutions will offer step-changes in the way we analyse and interact with data, whether that be for business intelligence, discovery or creation. We hope the grant, and our ongoing support, will help each of them achieve their next milestone.”


]]>