innovation Archives - Digital Science
https://www.digital-science.com/tags/innovation/
Advancing the Research Ecosystem

AI: To Buy or Not to Buy
https://www.digital-science.com/blog/2023/11/ai-to-buy-or-not-to-buy/
Thu, 30 Nov 2023 11:32:51 +0000

What AI capabilities is GE HealthCare bringing into the medical technology company? Here's what the patent data tells us.

The post AI: To Buy or Not to Buy appeared first on Digital Science.

Shortly after General Electric spun off its HealthCare division, the newly independent company started buying AI technology. To share some strategic insights, Digital Science’s IFI CLAIMS Patent Services has examined the target companies’ patents to see what capabilities they bring into the medical technology company.

The phrase ‘patently obvious’ is used in many contexts, from political exchanges to newspaper op-ed columns. Curiously, it is rarely used in the realm of actual patents, but in the case of General Electric’s (GE) HealthCare division, its use seems entirely appropriate.

In early 2023, GE made the decision to spin off GE HealthCare, and immediately following the move the new entity began its M&A strategy by acquiring two companies of its own – Caption Health and IMACTIS. At this early stage, is it possible to infer whether these were sound investments? Six months on, there is still a way to go before full-year financial results are posted. However, Digital Science company IFI CLAIMS Patent Services – a global patent database provider for application developers, data scientists, and product managers – can gain insights by looking into the patents the newly enlarged GE HealthCare now holds.

Patents = Strategic Insights

It should be ‘patently obvious’, but checking companies’ patents can be a part of any due diligence process before an investment decision is made. Not only does this help in understanding risk and technology overlaps, it can also be used to determine where the target acquisition’s R&D efforts are currently focused, and in turn to set the strategy for the newly merged entity. Analyzing a company’s patent holdings in the midst of M&A dealings provides insights such as:

  • The strategic direction of companies (e.g., the extent to which they are making strides in AI)
  • A unique take on M&A transactions, since it is possible to determine – based on companies’ technologies – whether core competencies overlap with those of the acquiring company
  • Whether a company’s core competencies are enhanced by the acquisitions it has made

IFI’s latest acquisition report examines the patented technologies GE HealthCare acquired with IMACTIS and Caption Health to determine the innovative direction of the company.

‘A good fit’

So what insights can be gleaned from patent data about GE HealthCare and its nascent M&A strategy? According to the report, the acquisitions of Caption Health and IMACTIS were a ‘good fit’ for GE HealthCare. Both acquisitions point towards GE HealthCare’s continued growth in AI and in the application of AI to its existing core technologies. Specifically:

  • IMACTIS is a healthcare technology company that offers, among other things, 3D virtual imaging for surgical navigation
  • Caption Health focuses on bringing AI capabilities and image data generation to ultrasound technologies

You can see from the chart below that GE HealthCare competes with a number of major companies in establishing AI-related patents, with filings surging in 2019–2020 before dipping in 2021. As such, the early-2023 acquisitions of companies focused on technology – and AI in particular – seem a good strategic move, especially given the furor around AI technology since late 2022.

Competitive landscape for AI patent applications. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm

What the data says

The report concludes that both Caption Health and IMACTIS make sense for GE HealthCare for several reasons. In the current competitive climate, Caption Health adds necessary AI capabilities, while IMACTIS adds a new dimension to its patent portfolio with 3D virtual imaging. So overall, it’s a gold star for GE HealthCare when it comes to enhancing its patent – and future commercial – strategy. Isn’t that obvious?

Top patented concepts by Caption Health. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm
Top patented concepts by IMACTIS. Source: https://www.ificlaims.com/news/view/blog-posts/the-ifi-deal-ge-healthcare.htm

Three key takeaways

1. Digital Science’s IFI CLAIMS Patent Services – a global patent database provider for application developers, data scientists, and product managers – can help customers gain insights by looking into the patents held by firms, such as newly enlarged GE HealthCare.

2. IFI’s latest acquisition report examines the patented technologies GE HealthCare acquired with IMACTIS and Caption Health to determine the innovative direction of the company; it concludes that both acquisitions make sense for GE HealthCare for a number of reasons.

3. Checking companies’ patents should be a part of any due diligence process before any corporate investment decision is made, especially in the pharmaceuticals sector.

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.


Launching our blog series on Natural Language Processing (NLP)
https://www.digital-science.com/blog/2020/03/launching-our-blog-series-on-natural-language-processing-nlp/
Wed, 04 Mar 2020 15:25:30 +0000

The post Launching our blog series on Natural Language Processing (NLP) appeared first on Digital Science.

Today we launch our blog series on Natural Language Processing, or NLP. A facet of artificial intelligence, NLP is increasingly being used in many aspects of our everyday life, and its capabilities are being implemented in research innovation to improve the efficiency of many processes.

Over the next few months, we will be releasing a series of articles looking at NLP from a range of viewpoints, showcasing what NLP is, how it is being used, what its current limitations are, and how we can use NLP in the future. If you have any burning questions about NLP in research that you would like us to find answers to, please email us or send us a tweet. As new articles are released, we will add a link to them on this page.

Our first article is an overview from Isabel Thompson, Head of Data Platform at Digital Science. Her day job is also her personal passion: understanding the interplay of emerging technologies, strategy and psychology, to better support science. Isabel is on the Board of Directors for the Society for Scholarly Publishing (SSP), and won the SSP Emerging Leader Award in 2018. She is on Twitter as @IsabelT5000.

NLP is Here, it’s Now – and it’s Useful

I find Natural Language Processing (NLP) to be one of the most fascinating fields in current artificial intelligence. Take a moment to think about everywhere we use language: reading, writing, speaking, thinking – it permeates our consciousness and defines us as humans unlike anything else. Why? Because language is all about capturing and conveying complex concepts using symbols and socially agreed contracts – that is to say: language is the key means of transferring knowledge. It is therefore foundational to science.

We are now in the dawn of a new era. After years of promise and development, the latest NLP algorithms now regularly score more highly than humans on structured language analysis and comprehension tests. There are of course limitations, but these should not blind us to the possibilities. NLP is here, it’s now – and it’s useful.

NLP’s new era is already impacting our daily lives: we are seeing much more natural interactions with our computers (e.g. Alexa), better quality predictive text in our emails, and more accurate search and translation. However, this is just the tip of the iceberg. There are many applications beyond this – many areas where NLP makes the previously impossible, possible.

Perhaps most exciting for science at present is the expansion of language processing into big data techniques. Until now, the processing of language has been almost entirely dependent on the human mind – but no longer. Machines may not currently understand language in the same way that we do (and, let’s be clear, they do not), but they can analyse it and extract deep insights from it that are broader in nature and greater in scale than humans can achieve.

For example, NLP offers us the ability to run a semantic analysis over every piece of text written in the last two decades, and to get insight from it in seconds. This means we can now find relationships in corpora of text today that it would previously have taken a PhD to discover. To be able to take this approach to science is powerful, and this is but one example – given that so much of science and its infrastructure is rooted in language, NLP opens up the possibility for an enormous range of new tools to support the development of scientific knowledge and insight.
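To make the "relationships in text, in seconds" idea concrete, here is a toy sketch – not any specific Digital Science tool – that ranks a tiny invented corpus against a query using bag-of-words cosine similarity, one of the simplest semantic-analysis techniques; the paper titles and query are made-up examples:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A tiny invented corpus of paper titles
papers = [
    "deep learning methods for protein structure prediction",
    "neural networks predict protein folding from sequence data",
    "a survey of blockchain applications in supply chains",
]
query = "machine learning for protein structure"

# Rank papers by similarity to the query, most relevant first
ranked = sorted(papers, key=lambda p: cosine_similarity(query, p), reverse=True)
```

Production systems use far richer representations (word embeddings, transformer models), but the principle – turning text into vectors and comparing them at scale – is the same.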

Google’s free NLP sentence parsing tool

NLP is particularly interesting for the research sector because these techniques are – by all historical comparisons – highly accessible. The big players have been making their increasingly capable algorithms available to the public, ready for tuning to specific use cases. Therefore, for researchers, funding agencies, publishers, and software providers, there is a lot of opportunity to be had with relatively little technical overhead.

Stepping back, it is worth noting that we have made such extreme advances in NLP in recent years due to the collaborative and open nature of AI research. Unlike in any cutting-edge scientific discipline before, the most powerful tools are being open sourced and made available for massive and immediate use. This democratises the ability to build upon the work of others and to utilise these tools to create novel insights. This is the power of open science.

Here at Digital Science, we have been investigating and investing in NLP techniques for many years. In this blog series, we will share an overview of what NLP is, examine how its capabilities are developing, and look at specific use cases for research communication – to demonstrate that NLP is truly here. From offering researchers writing support and article summarisation, to assessing reproducibility and spotting new technology breakthroughs in patents, all the way through to the detection and reduction of bias in recruitment: this new era is just getting started – where it can go next is up to your imagination.

Look out for the next article in our series, “What is NLP?”, and follow the conversation using the hashtag #DSreports.


Blockchain for Research
https://www.digital-science.com/resource/blockchain-for-research/
Sun, 12 Nov 2017 23:06:02 +0000

This report will zoom in on the potential of blockchain to transform scholarly communication and research in general.

The post Blockchain for Research appeared first on Digital Science.


Blockchain for Research


Blockchain is a revolutionary technology with the potential to fundamentally change many industries, including banking, music and publishing. This report will zoom in on the potential of blockchain to transform scholarly communication and research in general. By describing important initiatives in this field, it will highlight how blockchain can touch many critical aspects of scholarly communication, including transparency, trust, reproducibility and credit.

Moreover, blockchain could change the role of publishers in the future, and it could have an important role in research beyond scholarly communication. The report shows that blockchain technology has the potential to solve some of the most prominent issues currently facing scholarly communication, such as those around costs, openness, and universal accessibility to scientific information.


What Do You Hear When You Buy a Researcher a Beer? Frustration!
https://www.digital-science.com/blog/2015/06/what-do-you-hear-when-you-buy-a-researcher-a-beer-frustration/
Tue, 30 Jun 2015 11:10:33 +0000

One of the objections that often comes up to reforming scholarly communication is that academics and researchers just aren’t that interested in new technologies and approaches. […]

The post What Do You Hear When You Buy a Researcher a Beer? Frustration! appeared first on Digital Science.

Henry Oldenburg. Helluva guy.

One of the objections that often comes up to reforming scholarly communication is that academics and researchers just aren’t that interested in new technologies and approaches. I tackled this idea on the Perspectives blog a couple of months ago when I elaborated on an answer that I’d given to a question from Peter Ashman during the London Book Fair. This week, I’m going to address a slightly different slant that I’ve been hearing lately on what is essentially the same objection.

Occasionally I hear academic publishers remark that when you ask researchers what they need and what is important to them, the answer is simple. Impact Factor. Now, let’s put to one side the apparent disconnect between that perception and how both funders and institutions say that they assess academic impact and instead take a step back and think about why academics say that.

If you sit down over a cup of coffee (or even better a beer) with a researcher and ask some follow-up questions, you’ll hear some things that might surprise you. Many academics that I’ve spoken to are frustrated with the current status quo, and this isn’t just the ones like me who dropped out because they weren’t getting the traction that they needed to be successful. I’m referring to successful academics at high profile institutions, many of whom are fully tenured professors.

A year or so ago, I spoke to a very successful biophysicist whom I’ve known since I was a scientist myself, while tagging along with my wife at an academic conference in Spain (I know, it’s a tough job). He told me that he’d grown frustrated by the fact that instrumentationists are rarely afforded the credit they deserve, often getting lost in the middle of the author list or, even worse, left off entirely, when their contribution was no less important or intellectually demanding than that of the person whose postdoc made the measurements. While this didn’t affect him too badly as the holder of a named chair with a long and storied career, he feared for the future of many of his most talented graduate students and postdocs.

On another occasion, a full professor at an Ivy League university explained to me how he felt constrained by the “publish or perish” mentality and the way that institutions and funders look at assessment. He remarked that at his career stage he felt he ought to be making more of a difference; that he should be pushing forward big ideas, testifying to Congress and engaging with the public. Despite having such ideas, he felt that he couldn’t pursue them because they would be judged too risky by grant panels, would have a high chance of generating negative data for which he’d receive no credit, and would not necessarily lend themselves to the formulas he knew well for getting papers into high impact journals.

When Henry Oldenburg founded the first scientific journal, he didn’t do it because Robert Hooke told him that it was what natural philosophy needed in order to transform into what we call science today. He saw a need to accelerate research communication so that scholars didn’t have to wait for their colleagues to write entire books. He was also aware of the need for quality control and so instituted the first peer-review system. He saw a need and understood that filling that need was good business. As a result, he fundamentally changed the way that new knowledge is communicated and therefore, to my mind, did as much for the advancement of human knowledge as any scholar of the Enlightenment.

My call to publishers is to be a little more like Oldenburg and listen more deeply to what researchers are saying about their communication needs. When they say that all they need is high impact publications and nothing else matters, they’re not saying that they don’t want reform or innovation. They’re saying that they feel locked into a system that negatively affects the way in which they work, act and even think. Scholarly publishing has always been central to the advancement of knowledge and particularly to the scientific method. It’s part of the fabric of the philosophy and history of science and, as custodians of it, it’s our duty to innovate so that we can better serve the researchers, scientists, humanists and academics that rely on what we do.


Publishers: Why is all your content not accessible?
https://www.digital-science.com/blog/2014/12/publishers-why-is-all-your-content-not-accessible/
Tue, 16 Dec 2014 10:00:02 +0000

Kaveh Bazargan is a physicist by training. […]

The post Publishers: Why is all your content not accessible? appeared first on Digital Science.


Kaveh Bazargan

Kaveh is a physicist by training. In 1988 he founded River Valley Technologies (www.rivervalleytechnologies.com) in London, in order to introduce computer generated illustrations to UK publishers. The main business is now typesetting for STM publishers, using the only “pure” XML-first system in the industry. In recent years River Valley has been working on cloud-based platforms for publishers, including an end-to-end XML-based authoring to publication platform.

I am privileged to know a man called John Gardner, a distinguished solid state physicist at Oregon State University. In 1988, at the age of 48, he underwent a routine eye operation that he did not react well to. Tragically, and unexpectedly, he lost his eyesight completely. Having come to terms with his new life ahead, he was keen to get back to work and to continue his research and his teaching. But the only way he could read papers was to have his wife read them aloud to him! Nothing was “accessible”. He persevered and even went on to establish a successful company to help others needing assistive technologies.

So here we are, 25 years on. Can John click the DOI link of a paper and have it read out to him automatically? Or access a Braille version? Well, certainly not if there is heavy math in it, as there would be in the case of physics papers. Many publishers are even converting their equations to static bitmap images, thus guaranteeing that they will never be accessible!

Now let’s look at another case where accessibility could be improved. In some forms of dyslexia, it is thought that the visual system interprets letters on a page differently to the average person. Most of us can easily distinguish a “p” from a “b” (or an “n” from a “u”), but it is thought that for dyslexics, the brain unconsciously rotates letters as it tries to interpret them (and ends up misinterpreting them) – a nice little video explains it. All our brains do something similar to a certain extent; think of how we all know that a table placed upside down is still a table! So people with dyslexia have to work harder than average to read text in conventional “symmetric” typefaces. Well, there are at least two typefaces designed to address this problem: OpenDyslexic and Dyslexie. And apparently these “bottom-heavy” faces are much easier for dyslexics to read.

So, all publishers: raise your hands if your content is accessible in dyslexic fonts. Hmm, don’t see any hands… I know what you are thinking – you would love to have your content fully accessible, but you just don’t have the manpower in your IT team to have the content in every possible accessible form.

Enter XML

Well, I have good news for you. Most publishers (at least journal publishers) now archive the full XML version of their content. If we think back some 10–15 years, the whole logic behind creating XML was to allow new types of formats to be produced painlessly. If the XML is accurate as well as granular, then it is not technically hard to use it to create accessible content (for the visually impaired, dyslexics, etc) automatically, and there would be no extra burden on the publisher. A third party can do it. So why is that not being done now? Well, there are technical and legal challenges.
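As a sketch of what such a third party could do – assuming a minimal, hypothetical JATS-like article fragment rather than any particular publisher’s schema – granular XML can be flattened into screen-reader-friendly plain text with a few lines of Python using only the standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical JATS-like fragment; real publisher XML is much richer.
article_xml = """<article>
  <front><article-meta><title-group>
    <article-title>On the Accessibility of Equations</article-title>
  </title-group></article-meta></front>
  <body>
    <sec><title>Introduction</title>
      <p>Equations stored as MathML, not bitmaps, can be spoken aloud.</p>
    </sec>
  </body>
</article>"""

def to_plain_text(xml_string: str) -> str:
    """Flatten article XML into plain text suitable for text-to-speech."""
    root = ET.fromstring(xml_string)
    lines = []
    title = root.findtext(".//article-title")
    if title:
        lines.append(title.strip())
    for sec in root.iter("sec"):
        heading = sec.findtext("title")
        if heading:
            lines.append(heading.strip())
        for p in sec.iter("p"):
            # itertext() gathers text across inline markup (italics, links, etc.)
            lines.append("".join(p.itertext()).strip())
    return "\n".join(lines)

print(to_plain_text(article_xml))
```

The same traversal could just as easily emit Braille-ready text, a dyslexia-friendly HTML rendering, or spoken-math markup – which is precisely the point of having accurate, granular XML in the first place.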

Technical challenges

The most basic technical challenge is that, with very few exceptions (mainly Open Access publishers), the XML is hidden from public view. This raises the question of why the XML is produced in the first place! A second potential problem is that the XML is generally not as accurate as the PDF. For instance, I just looked at the XML for a paper published four days ago by a well known (but nameless) OA publisher, where all non-standard characters have been replaced with a question mark, e.g. “Götz” is given in the XML as “G?tz” – clearly a completely useless XML file!
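For illustration, here is a small Python sketch of how that kind of corruption typically arises – forcing non-ASCII text through an ASCII encoder with replacement – along with two safe alternatives; the actual publisher’s pipeline is, of course, unknown:

```python
name = "Götz"

# The bug: pushing non-ASCII text through an ASCII encoder with replacement
# silently turns every accented character into "?".
broken = name.encode("ascii", errors="replace").decode("ascii")
# broken is now "G?tz" - the useless XML described above

# Fix 1: keep UTF-8 end to end; the character survives intact.
utf8_roundtrip = name.encode("utf-8").decode("utf-8")

# Fix 2: if output must be ASCII, escape as XML numeric character references,
# which are valid in any XML file regardless of its declared encoding.
escaped = name.encode("ascii", errors="xmlcharrefreplace").decode("ascii")
# escaped is "G&#246;tz" (U+00F6 is code point 246)
```

Either fix is a one-line change wherever the encoding is applied, which makes shipping “G?tz” in production XML all the more inexcusable.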

Legal challenge

Even when the XML is available, it is not clear whether a third party is allowed to create a new format from it. This is the case even if one has paid a subscription fee for the article. And for OA articles, the inclusion of “ND” (No Derivatives) in a Creative Commons license explicitly forbids any “derivative works”, including creating a new format.

My advice

In order to make a start at providing accessibility, here is my humble advice to publishers:

  • Publish your XML – the whole point of XML is to allow creation of new formats easily. So if I pay for an article, give me not only the HTML and the PDF, but the XML too.
  • Ensure your XML is correct. Mandate automated XML-first pagination from your suppliers to avoid embarrassing errors like that pointed out above.
  • Make sure that your licenses allow third parties to use the XML to create new formats, and even encourage them to do so.
  • If you are an OA publisher, please don’t use the ND versions of CC licenses.
