John Hammersley Articles - TL;DR - Digital Science

Building a platform to prevent procrastination…to put off writing up my PhD! (25 January 2024)

Foreword by John H: I recently spoke to Dr Elisabeth Essbaumer, co-founder of ConcentrAid, an innovative new platform helping academics convert time from unproductive meetings into time spent on useful, focused work, and to meet new people in the process!

Tell us a bit about yourself and your background, and how you came to found your own startup.

I’m an economist, I come from Germany, and I studied at the Ludwig Maximilian University of Munich. I also spent some time in Marseille, before I came to Switzerland and the University of St. Gallen (HSG) to do my PhD.

My start-up is a digital co-working platform called ConcentrAid (https://www.myconcentraid.com/). The original idea was born during lockdown, when I started working together with a colleague; it wasn’t about chatting, we’d just go online and work during the day in fixed sessions together, and we really enjoyed that way of working.

But then after the pandemic, when we had to come back to the institutes at HSG, it became very difficult to coordinate. We’d be asking each other questions such as “When are you working from home?”, “Can we work together?”, “Oh no, one hour later would be nicer for me”… and we suddenly had a very high coordination cost for arranging those types of sessions.

So this is where the idea for ConcentrAid came from – we needed an easier way to arrange co-working sessions. We didn’t work on it immediately, but the idea never really died. So we had it in our heads still, and then came the point at which we decided we should go for it, and we started with the development last year.

So that’s an interesting point – as you alluded to, we run into various problems day-to-day, but we don’t always think “I’ll go and start a business to fix it”. Was there anything in particular that made you say “Actually, yeah this is something that we should do”?

Great question! I’d never thought about taking an idea and founding a company, but at the University of St. Gallen there are a number of start-ups, it’s strong in business, and it fosters that kind of entrepreneurship. Because of all these things, you’re more likely to meet PhD students who are already involved in a commercial/start-up project and it encourages you to think “I really believe in the idea. I also see that the technical implementation is feasible. So, why shouldn’t I do it? Why not?”. So that was it, basically!

Is it just you? Who have you got working with you at the moment, and how do you split your responsibilities?

Right now the team is the founder team, Caroline and myself. We have had a bit of support from a small student group, and a bit of technical support, but it’s mainly just us!  

In terms of splitting responsibilities, I would say we try to decide things together, but when we have different opinions the person with more expertise in that specific field has the final say. For example, Caroline comes from the business side; she has already founded a startup and digital platform, and I greatly value her expertise in that. On the technological side, I would have the final say.

What is the core problem you’re trying to solve, and who are the kinds of people you think are most likely to benefit from it?

We have one clear target group, and that is academia. The problem we solve is first of all productivity, which I think is a huge topic in itself. But we also help solve the problem of building a network among researchers.

A major advantage of working remotely is that you’re very independent, and can be very flexible, in terms of your location, even across countries. But that can also be a disadvantage, in the sense that you might miss some exchange relevant to your research, some random event at your university, or some chance meeting with colleagues; and not only with people at your own university, but with researchers in similar fields worldwide, especially as conferences are also now often virtual.

With digital co-working via ConcentrAid you can still have these exchanges with other research partners, both in your particular field but also more broadly; we try to connect researchers coming from different disciplines.

This builds upon the productivity aspect, which was our original idea for the platform, where you have very focused working time and working structure that supports your productivity and focus.

We see this combination as not just important for your immediate work, but also to help broaden your network and connections for future research collaborations and employment options.

This feels especially timely at the moment, following the period of enforced remote work due to the pandemic and the fragmented approaches to returning to the office. There’s a great opportunity in this area to try and figure out better ways to work together, I can see why you find it an exciting problem to work on.

What’s your solution – how does ConcentrAid work?

We have a platform where you can book a session, and we will match you with a person who comes from a related research field, or is in your list of favorite coworkers. Once you’ve agreed a time, it’s in your calendar, someone is waiting for you and so you are less likely to postpone it.

In essence we take the obligation we all feel to not skip meetings — because of the impact on the other attendees — and use that to help you properly set aside time to get your important work done.

It’s very much about behavioral triggers, and it’s also about consistency. I know from my own experience that when I had a hard time with a paper, I’d tend to push it away and do other things I enjoyed more first, even though of course I knew that at some stage I had to deal with it. Digital co-working really helps me to stay consistent, not put things off, and see my own progress.

That’s what we offer with ConcentrAid. We call it social accountability: the other person holds you accountable to show up and go through with what you want to do.

Who do you see as your competitors? How does what you’re doing differ?

There is a US platform for co-working in general that offers a similar service and feel. Where we differ is on the networking and our focus on academia. Typically you have products which focus either on productivity (there are a lot of productivity apps) or on networking, but none that do both for academia. And even the co-working platform I mentioned we see not only as a competitor but also as proof of concept for what we’re doing for academia.

So we’re focusing on networking, we’re focusing on academia, that’s what’s different with ConcentrAid.

How far along is your platform? What stage of development are you at?

We have a working product with a very simple matching system: you can book a session and get matched with a partner. That works very well, although of course there’s always room for improvement!

Before we handed in our thesis, we tested this with groups of students.

How did you get those initial users to give it a try?

Very much word of mouth — it was our network and the network of our network, I would say. That was our first major test group, and it also gives us a nice feedback loop for things we are working on and trying to improve.

One of the great advantages when you’re working in a university or in academia in general is that people are very open when you approach them and ask them for feedback. It’s very supportive and helpful as a community, and so we could more or less directly approach almost everyone to ask for their feedback, and they were happy to talk to us!

We also used the platform a lot ourselves – for example, I was using it to finish my thesis! So we also took part in working sessions with people and could just ask them directly, as we were observing their behavior first-hand. For example, initially we had quite a few issues with the video technology for the sessions, and we saw that if the first session had problems like this, people would not come back again! So it’s important for us to make sure people have a good first impression of the platform.

Is there a particular point at which people say “aha, I get it now, I see the benefit”?

Usually I would say it takes users two sessions: the first session people are still figuring out what it is, getting used to it – it’s almost like prep for the second session. Then in the second session you see they’re much more relaxed, they know it’s not complicated, and that’s when they get a lot of work done. And so at the end of the second session, that’s when they see what they’ve achieved, and they are like “I totally get this now”!

It’s an especially strong reaction because everyone knows a situation where they’ve been procrastinating, putting off a task they don’t want to do. So to find a solution to that is really significant for them; it’s a gamechanger for how they approach their work.

I mean, I was definitely procrastinating — basically, I built this platform just to avoid working on my PhD!

That is taking procrastination to a whole new level!

Exactly! But even today — after handing in my PhD thesis, when I don’t have that same “motivation to procrastinate” anymore — I still enjoy having this working structure with the co-working time and everything. So I still keep using it, and that’s what we see in others too.

Looking to the future, what’s your business model for ConcentrAid? How do you see it working financially?

One advantage of it being a digital platform is that the scalability costs are fairly low — you have the initial development cost, you have the user acquisition cost, but the marginal costs are not that high, which is a huge advantage. So of course our model is based on that.

And we have our first paying customers!

Very nice! How did you get those first paying customers?

From our testing groups there was always a proportion who wanted to keep using it, and were happy to pay to do so. For example, PhD students in particular found it useful to help structure their day.

Interestingly, not everyone uses it in the same way though — we see people who really use it at a specific time of the day. Some use it in the morning to start their day, others have their first working session in the afternoon to overcome the usual lull after lunch, and some others use it in the evenings to help them spend a concentrated hour before finishing for the day.

We also have some heavy users who really do use ConcentrAid to manage their entire home office working day! These are our super users! And I remember when the first person paid for a subscription, that was really cool, we were so happy! 😊

What would you like to achieve in the next six months to a year? Where do you want to have got to by the end of 2024?

We want to have growth, of course, but also to have a sustainable company with a strong community. That would be my personal goal.

We have discussed scalability to different markets a great deal, but for 2024 we are focused on continuing to create a popular, useful solution for our target audience of academia. I want to create a solution that people in academia really enjoy working with, because it’s very close to our hearts. We can then look to expand into other markets off the back of that success.

That’s a great point to end on – it’s very important early on as a founder to be really focused on the needs of the user, and to care about their experience. If your early users get a sense from you that you know the problems they have, because you’ve experienced them yourself, and that you really care about solving those problems in a practical way, they will become the advocates that help you continue to grow and scale, and make ConcentrAid a success!

 —

To find out more about ConcentrAid, visit their website, and you can find Elisabeth on LinkedIn.

To find out more about how Digital Science supports new start-ups, take a look at our investment opportunities.

FuturePub Berlin – Special Edition (14 November 2023)

#FuturePub headed to Berlin for our fourth and final event of the year, during Berlin Science Week and ahead of the inspiring Falling Walls Science Summit 2023. We had a lovely venue — the Hotel Palace Berlin — located in the vibrant heart of the city between the Ku’damm and Zoological Garden, which helped set the tone for a thought-provoking evening.

Here are our two-minute video highlights (or watch on YouTube):

This special edition FuturePub was set up to encourage conversations on the future of global research, a topic close to our hearts at Digital Science. In many ways, the world now feels more divided than ever, despite the advances in technology that connect our daily lives together. This was all the more poignant given the city and the anniversary of the fall of the Berlin Wall 34 years earlier in November 1989.

Many of the people we met in Berlin this week spoke of the overall sense of optimism the fall of the Wall brought — and indeed the 90s are generally looked back on as a particularly optimistic, peaceful and hopeful time in Europe and beyond. 

Today there is still optimism — as evidenced by the remarkable and inspiring presentations, discussion and ideas on show during Falling Walls — but with perhaps less stability than we would all dearly love to see. 

We hope the conversations at FuturePub, Falling Walls, and the wider Berlin Science Week events help break down more barriers and further build connections between people — for those who were able to attend in person and those in wider communities.

Our speakers

As always with FuturePub, we aim to have an array of talks on innovative, cutting-edge topics, and this event was no different — we had a brilliant mixture of speakers sharing perspectives and starting conversations on the future of research from many different backgrounds, including researchers, librarians, publishers and open science practitioners:

Photos, links and summaries can be found below, and recordings will soon be available on Cassyni.

Niamh O’Connor – ‘Open Science is the science we need for the future we want’

Niamh O’Connor, Chief Publishing Officer at PLOS, kicked things off perfectly with a discussion of open science, and made life easy for us as hosts by doing it sans slides! Her relationship with Digital Science goes back a long way, if tangentially: she remembered Suze from a talk Suze gave at the Royal Institution back in 2014, based on Neal Stephenson’s nano-fiction book ‘The Diamond Age’. Suze will be adding that to her annual appraisal under the category of long-term impact!

Niamh’s talk reflected her goal to see research assessment (and by extension, publishing) change for the better:

My title is a quote from Audrey Azoulay, UNESCO Director General. And Open Science cannot be truly open without equitable participation in knowledge creation and sharing.

Current incentive systems, including researcher assessment systems, which are built on the event of publication of an article as the ‘unit of value’ of research contribute to perpetuating inequity and holding back transition to an Open Science ecosystem.

We need to change the system and design publishing and business models that incentivize sharing the form of output most appropriate to the research and support the advancement of usable, trustworthy knowledge and global participation.

Niamh O’Connor, Chief Publishing Officer at PLOS

This view — that the research paper is no longer a suitable vessel by which to measure and share research outputs — echoed a short hallway conversation at Falling Walls with Hemai Parthasarathy, a founding editor of PLOS and now Head of Rapid Evaluation at Google X.

There have been many attempts and initiatives to move scholarly communications beyond the paper since the turn of the century. Still, it remains the unit by which research and researchers are measured. It will be interesting to see whether the current concerns over misinformation, paper mills and trust in research, especially now in what is being termed the “age of AI”, accelerate the move to a new mechanism.

The conversation shouldn’t stop here — and it won’t! Given their shared love of all things science fiction and science fact, and their twinned mission to make research the best it can possibly be for as many people as possible, Suze also caught up with Niamh later in the week to record a one-on-one conversation for a new series of in-depth interviews with the best and most exciting movers and shakers in our community — look out for the launch of that in early 2024!

Jacob White – Expanding the funded investigator pipeline through open science

It was a bit of a surprise to read Jacob’s proposal for his FuturePub talk — not because the talk didn’t sound interesting (he’s building purpos.eco to promote open science in environmental stewardship; exactly the type of project we love to have at FuturePub!) — but because he’s based in Salt Lake City, Utah, which isn’t all that close to Berlin! 

But it all worked out perfectly! Jacob was in Berlin the weekend before for a wedding, and so FuturePub was timed just right for him to present in person – and the bride and groom even joined us! What better way to kick off your honeymoon than attending FuturePub?!

Jacob, an Informationist at Johns Hopkins University, introduced purpos.eco, a public use repository which, as mentioned above, is aiming to help promote open science in environmental stewardship and is planned for launch in the summer of next year. 

In his talk he discussed how an ORCID iD is now a requirement for NSF and NIH funding, and how purpos.eco can help familiarize underrepresented minorities in science with the ORCID system early in their education, reducing barriers to participating in funded and refereed science in the US and beyond. 

Jacob’s talk had a purposeful focus on exploring practical ways in which organizations can work with and help to upskill those people actively participating at the forefront of environmental science who might not even consider themselves to be scientists — and therefore might be unaware that it is something they could pursue as a career! 

To find out more about purpos.eco, visit the planning website or reach out to Jacob via LinkedIn.

Benjamin Johnson – Revamping Science for the Future of Energy

Ben is a theoretical chemist, or a physicist as he likes to call himself (Suze – behave!), who made the switch to the history of science eight years ago. His talk on revamping science for the future of energy — which generated probably the most discussion in the room throughout the night — briefly touched on the public acceptance of new technologies before moving on to talking about climate justice.

Ben talked about how it is tricky to consider the Global North and Global South as two separate systems of adaptation in the face of climate challenges, when the two are inextricably linked and the effects on either one are felt across the globe.

He used Hurricane Katrina, which hit New Orleans in 2005, as an example. The city’s lack of socio-economic homogeneity resulted in a range of different responses and expectations among people who were practically neighbours. Ben suggested that, rather than trying to generalise across large swathes of the population, we must engage with different publics across the many facets of demography in order to really understand the challenges to the adoption of new technologies, and so design the most appropriate and effective adoption strategies for maximum uptake and progress for all.

Ben and his colleague at the Max Planck Institute, Dr Maria Avxentevskaya, also joined us for a chat that will feature in our new series of conversations with our community launching in 2024. We were so honoured to record our chat in their amazing Research Library — a stunning place where you can almost hear the echoes of science legends past roaming the towering bookshelves.

Jo Havemann – Making Open Science resources accessible to all

The earlier presentations set the scene nicely for Jo Havemann’s presentation on open science resources, where she focused in particular on the work she’s undertaken to map the various resources in context with each other.

At Digital Science we’ve known Jo for a number of years — indeed she recently interviewed our very own Mark Hahnel and John Hammersley on her Access2Perspectives podcast — and she’s been actively working to help promote and encourage open science practices around the world in many ways, recently with AfricaArXiv, which aims to “enhance the visibility of African research, [and] enable discoverability and collaboration opportunities for African scientists on the continent as well as globally.”

Speaking on her motivations for creating the open science resources map:

“To attempt to counter the misconception that Open Science only fits a few, I decided to start mapping the resources. And, indeed, there seem to be at least two or three that researchers of any given topic in any region of the world can identify and adopt for their data and workflow processes.”

Jo Havemann, Access 2 Perspectives

The map that Jo created now contains more than 800 resources and supplementary data nodes across the spectrum of available tools, guidelines, events, and services by research discipline, including general resources that are sortable by Open Science principle, language or country.

The map is freely available online, and Jo welcomes contributions linking to new resources or updating existing ones — if you have any feedback or suggestions, please reach out to Jo via email or find her on LinkedIn.

Stephanie Dawson – Institutional OA Journals: Publishing in Context

Our final speaker of the night, Stephanie Dawson, CEO of ScienceOpen, spoke on the wider questions around open access in the publishing industry and is coincidentally also a recent guest on Jo’s podcast. 

Open access has many forms, and some of the models adopted by larger publishers have faced criticism for the barriers that high article processing charges place on researchers and their institutions (see e.g. this recent report from an OASPA workshop in March 2023). There is also a concern that researchers are still not getting the support they need to publish open data, as explored in the recent State of Open Data 2023 report.

Stephanie focused her talk on an open access model — Diamond Open Access — that has many positives: articles are free to read and to publish, and the journals and repositories are often community- or academic society-led and run as not-for-profits. And this is not a negligible fraction of research:

“A recent study found between 17,000 and 29,000 diamond OA journals currently in existence, publishing 8–9% of the total number of scholarly journal articles each year. Many of these journals are managed and published by academic institutions.”

Stephanie Dawson, CEO of ScienceOpen

Why isn’t this model more widely adopted? Stephanie set out four of the challenges these journals face:

  • Sustainable funding: they often lack the funding required to manage, curate and provide sufficient editorial oversight to the submissions they receive.
  • Reputation: this affects the potential impact that publishing in them can have on a researcher’s career.
  • Discoverability: they are less likely to be run on infrastructure that allows the content to be easily discovered and shared (e.g. via DOIs).
  • Ownership, governance and legal responsibilities: these can either be unclear or not set up to address the other challenges.

Through the work ScienceOpen is doing in this space, Stephanie is trying to help address these challenges:

“These academic-run journals should be seen and assessed within the full range of outputs from their sponsoring institution. ScienceOpen provides a framework to put institutional OA journals in context to raise their profile and reputation.”

Stephanie Dawson, CEO of ScienceOpen

Suze then wrapped up the lightning talk part of the evening, and the discussions (re)commenced! 

Thanks again to all our speakers for presenting, and we’d especially like to thank Stephanie for agreeing to present at the last minute! We’d unfortunately had a speaker drop out due to illness, and when we saw Stephanie’s registration (to attend) come through, we messaged to see if she’d also like to give a talk — and she said yes! It rounded off a brilliant mix of talks, each of which flowed nicely from the previous one, and all together they helped spark many interesting and varied conversations in the room afterwards.

The recordings will shortly be made available on Cassyni, along with the recordings from our previous events. For a notification of when they’re available, you can subscribe to FuturePub on Cassyni.

Photo Gallery

More photos from the evening can be found in the gallery here. Thanks as always to Huw James from Science Storylab for his amazing camerawork!

See you in 2024!

That’s a wrap for FuturePub in 2023 — but you can continue to find awesome content on TL;DR and our ongoing Speaker Series.

FuturePub London will return in the Spring of next year – date and venue still to be confirmed! But check out our recent FuturePub London – The AI Edition for a flavour of what’s to come! 

If you’re keen to bring FuturePub to your town or city, let us know. And if you’re interested in speaking at a future #FuturePub, please do let us know as early as possible by filling out this short proposal form!

Implications of AI for Science: Friend or Foe? An impressive opening to the Falling Walls Circle 2023 (8 November 2023)

Update: Video recording of the session now available.

Well, Falling Walls certainly lived up to expectations! It’s six years since I was originally slated to attend but had to hand presentation duties over to my cofounder due to the birth of my youngest daughter, Annabelle.

I was fortunate to be able to attend in person this year, and today started with a wonderful panel session on the “Implications of AI for Science: Friend or Foe?”, chaired by Cat Allman who has recently joined Digital Science (yay!) and featuring a brilliant array of panellists:

  • Alena Buyx, Professor of Ethics in Medicine and Health Technologies and Director of the Institute of History and Ethics in Medicine at Technical University of Munich. Alena is also active in the political and regulatory aspects of biomedical ethics; she has been a member of the German Ethics Council since 2016 and has been its chair since 2020.
  • Sudeshna Das, a Postdoctoral Fellow at Emory University, with a PhD from the Centre of Excellence in Artificial Intelligence at the Indian Institute of Technology Kharagpur. Her doctoral research concentrated on AI-driven Gender Bias Identification in Textbooks.
  • Benoit Schillings, X – The Moonshot Factory’s Chief Technology Officer, with over 30 years working in Silicon Valley holding senior technical roles at Yahoo, Nokia, Be.Inc and more. At X, Benoit oversees a portfolio of early-stage project teams that dream up, prototype and de-risk X’s next generation of moonshots.
  • Henning Schoenenberger, Vice President Content Innovation at Springer Nature, who is leading their explorations of AI in scholarly publishing. He pioneered the first machine-generated research book published at Springer Nature.
  • Bernhard Schölkopf, Director of the Max Planck Institute for Intelligent Systems since 2001. Winner of multiple awards for knowledge advancement, he has helped kickstart many educational initiatives, and in 2023 he founded the ELLIS Institute Tuebingen, where he acts as scientific director.

Cat Allman herself, now VP Open Source Research at Digital Science, was the perfect facilitator of the discussion – she has spent 10+ years with the Google Open Source Programs Office, and has been co-organizer of Science Foo Camp for 12+ years.

The panel session is part of the Falling Walls Science Summit 2023, an annual event that gathers together inspirational people from across the globe who are helping to solve some of the world’s biggest problems through their research, new ventures, or work in their local community. I saw around 50 presentations yesterday during the Pitches day, and I’ll be sharing some of the highlights in a follow-up post!

But before we go further, an important moment happened during the discussion, and Alena deserves a special mention for ensuring that Sudeshna was given the time to speak, just before the panel answered questions in the Q&A section.

Sudeshna had been unfortunately cut off due to timekeeping, and although it had been well-intentioned — to ensure the Q&A section didn’t get lost — Alena did the right thing in stepping in. Alena’s polite but firm interjection was appreciated by everyone in the room, and it’s this kind of thoughtfulness during the discussion, which was on show throughout, that made it a very enjoyable panel debate to attend.

Onto the session itself: in their opening remarks, each panellist was encouraged to state whether they felt AI was a friend or foe to science. Of course, that is a binary way to view a complex and ever-evolving topic, and the responses reflected this — they radiated a generally positive view of the potential for AI to help science, but with the caution that it’s important to focus on specific examples and situations, and to be precise both about what the AI is and what it’s intended to do.

Benoit expanded on this need to be precise by giving a couple of specific examples of how he’s been experimenting with AI, both of which fall into the broader category of AI acting as a personal assistant.

In one experiment, Benoit fed a model his reading list and asked for a personalised list of research recommendations and summaries. He was essentially taking the type of recommendation engine that many websites use to (try to) encourage us to consume more content, and applying it at a more personal level. What came across was his optimism that such a way of filtering / tailoring the literature — as an aid to a practising researcher — could help deal with the mountain of scientific content. He expects these types of systems to be common within the next few years, and it will be interesting to see who manages to create (or integrate) such a system successfully.

Whilst his first example could be seen as using an AI assistant to narrow down a broad selection of options, his second example is the reverse: when starting out on a new research topic, he often asks Bard for fifteen ideas for avenues to explore on that topic (I forget the exact phrase he used, sorry!). Although not all fifteen suggestions make sense, what comes back is usually useful for stimulating further thoughts and ideas on the topic — it’s a great way to get started, and to avoid going too deep or narrow too soon on a new project.

The conversation then moved onto the issue of AI assistants giving incorrect or nonsensical answers; Bernhard and his team are working on how future models could have some sense of causation, rather than just correlation, to try to help address this gap in current AI systems.

He gave a particular example where machine learning models had been trained to identify patients with a particular type of illness (I didn’t catch the name); when trained, the model appeared to give excellent detection rates, and appeared to be able to determine with high accuracy whether or not a given patient suffered from this illness. 

However, when it was used in a clinical setting on new patients (presumably as a first test of the system), it failed — what had gone wrong? It turned out the model had spotted that patients with a thoracic (chest) tube had the illness, but those without the tube didn’t — as once a patient is being treated for the illness, they have such a tube fitted. As all the training data was based on known patients, it had used the presence of the tube to determine whether they had the illness. But of course new, unknown patients do not have a tube fitted, and hence the model failed. If models could have some sense of causation, this type of issue might be avoided.
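
To make that failure mode concrete, here is a minimal sketch of the same shortcut effect (my own toy illustration, not code shown at the panel), where a hypothetical chest-tube feature plays the role of the treatment artefact Bernhard described:

```python
# Toy illustration (not from the panel): a classifier latches onto a
# treatment artefact that perfectly predicts the label in training data,
# then fails on new patients where that artefact is absent.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training data: diagnosed patients already have a chest tube fitted,
# so the (hypothetical) artefact is a perfect proxy for the illness.
ill = rng.integers(0, 2, n)
symptom = ill + rng.normal(0, 1.5, n)   # weak genuine signal
chest_tube = ill                        # treatment artefact, perfect proxy
X_train = np.column_stack([symptom, chest_tube])

model = LogisticRegression().fit(X_train, ill)
print("accuracy on training-style data:", model.score(X_train, ill))  # ~1.0

# New, undiagnosed patients: nobody has a tube yet, so the shortcut vanishes
# and the model is left with only the weak genuine signal.
ill_new = rng.integers(0, 2, n)
symptom_new = ill_new + rng.normal(0, 1.5, n)
X_new = np.column_stack([symptom_new, np.zeros(n)])
print("accuracy on new patients:", model.score(X_new, ill_new))  # much worse
```

The model looks flawless in training because the artefact is a perfect proxy for the label, then loses most of that accuracy as soon as the proxy disappears; this is exactly the kind of trap that a model with some sense of causation would be designed to avoid.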

This brings me onto one of the most interesting points raised during the discussion — Alena, who is a trained medical doctor, made the case that, rather than looking to AI assistants to help with potentially complex medical diagnoses, a real, tangible benefit to doctors all around the world would be for AI to help with all note-taking, paperwork, and admin tasks that eat up so much of a doctor’s time and energy.

She made the point that there are other problems with having AI / automated diagnosis machines, namely that you end up with a layering of biases. 

  • First there is the algorithmic bias, from the machine learning model and its training data. For example, in medicine there are issues with training data not being gender balanced, or being dominated by lighter skin tones, making the results less reliable when applied to a general population. 
  • And secondly, there is the automation bias — the tendency of humans to trust the answer from a machine, even when it contradicts other observations — which adds a further bias on top. This combination of biases is not good for doctors, and not good for patients!

As an aside: there was a discussion on how the term “bias” is now often used almost exclusively to refer to algorithmic bias, but there is also inductive bias, which perhaps needs a new name! 

Sudeshna, whose PhD was in identifying gender bias in textbooks, was asked to comment on the issue of biases in AIs. She emphasised that results from AI models reflect biases present in the training data, which generally reflect biases in human society. These biases can be cultural and/or driven by data quality (garbage in, garbage out), but they also stem from the data tending to come from the Global North, lacking local data from the rest of the world.

Henning gave an example where his team had seen a similar issue when testing a model on answering questions about SDGs; the answers were extracted from the literature which is predominantly focused on SDGs from a Global North perspective. Henning and I were speaking to Carl Smith in the hallway after the talk, and Carl mentioned how in psychology research this type of issue is often termed the WEIRD bias; another term I learned today!

Having local data — at different scales — is important for AI models to generate answers in context, and without that data, it’s hard to see how local nuance and variety won’t be lost. However, there’s no simple solution to this, and whilst a comment was made that improving data quality (labelling, accuracy, etc) — and training models based on high quality data — was one of the best routes to improving performance, it was acknowledged that it can’t by itself fix the issues of datasets only representing a small fraction of the world’s population.

Overall the tone of the discussion was one of cautious optimism though, and the examples given by the panellists were generally positive instances of people using this new technology to help humans do things better, or quicker, or both.

Earlier in the session, Henning had referred to a book recently published by Springer which was generated using GPT, but which crucially had three human authors/editors who took responsibility (and accountability) for the published work. 

“This book was written by three experts who, with the support of GPT, provide a comprehensive insight into the possible uses of generative AI, such as GPT, in corporate finance.”

Translated opening sentence from the book’s description

Henning made a point of highlighting how current responsible uses of AI all have “humans-in-the-loop”, emphasising that AI is helping people produce things that they might not otherwise have the time or resources to. In this specific example, the book was written in approximately two to three months and published within five — much shorter than the twelve months or more a book usually takes.

There was time towards the end for a small number of audience questions, and the first was whether we had learned (or could learn) something from the previous time a new technology was unleashed on the public via the internet and had a transformative effect on the world; namely the rise of social media, user-generated content and interaction, often dubbed Web 2.0.

It was at this point that Alena stepped in and gave Sudeshna the time to add her thoughts on the previous topic, that of how to address bias in the large language models.

Sudeshna made the very important comment that there is no fixed way in how we should look to address biases, because they aren’t static; the biases and issues we are addressing today are different from the ones of five or ten or twenty years ago. She mentioned her PhD study, on gender bias, and how today she would take a broader approach to gender classification. And so whatever methods we determine for addressing bias should reflect the fact that in ten years we will very likely see different biases, or see biases through a different lens.

Alena then gave a brilliant response to the question of whether anything was different this time vs when Facebook et al ushered in Web 2.0.

She said that back then, we’d had the unbridled optimism to say “go ahead, do brilliant things, society will benefit” to those tech companies. Whereas today, whilst we still want to say “go ahead, do brilliant things” to those companies, the difference is that we — society, government, the people — are in the room, and have a voice. And hopefully, because of that, we will do things better.

As the panel wrapped up, Bernhard made the observation that early predictions of the internet didn’t necessarily focus on the social side, and didn’t predict how social media would dominate. He suggested we view our predictions about AI in a similar way: they are likely to be wrong, and we need to keep an open mind.

Finally, Henning closed out the session with a reminder that it is possible to take practical steps, first at an individual and then at an organisational level, which set the approach across a whole industry. His example was Springer Nature’s policy of not recognizing ChatGPT as an author, which came about because they saw ChatGPT start to be listed as an author on some papers, and very quickly concluded that, because ChatGPT has no accountability, it cannot be an author. Other publishers followed suit, and the policy was effectively adopted across the industry.

It makes you wonder what other steps we could take as individuals and organisations to bring about the responsible use of AI we all hope to encourage.


Disclaimer: The write-up above is based on my recollection of the panel discussion and some very basic notes I jotted down immediately afterwards! I have focused on the points that stood out to me, and it’s not meant to be an exhaustive summary of what was discussed. I have probably grouped together things that were separate points, and may have some things slightly out of order. But I’ve strived to capture the essence, spirit and context of what was said as best I can — please do ping me if you were there and think I’ve missed something!

Double disclaimer: For completeness I should point out that — as you can probably tell — I work at Digital Science, alongside Cat. Digital Science is part of the Holtzbrinck Group, as is Springer Nature, who supported the session. But the above is entirely my own work, faults and all.

FuturePub London – The AI Edition (6 November 2023)

#FuturePub returned to London for our third event of 2023 at the amazing “Bounce” in Farringdon – a ping pong club and so much more. Indeed, as the conversations continued late into the night, with the pizza being passed around and the remaining drinks being consumed, it took us back to some of the very first FuturePubs, which had that same easy-going vibe.

But this one was on a whole new scale! We sold out of our initial 250 tickets well ahead of the event, and around 180 people attended on the night – a new record – with a much wider professional mix of attendees; government representatives, researchers, research administrators, healthcare professionals, publishers, finance & technology experts, and, of course, start-up founders!

Watch our two-minute video snapshot of the event below (requires cookies) or directly on YouTube.

We timed and themed this particular FuturePub to kick off some interesting conversations around AI and research – the good, the bad and the ugly – ahead of the UK Government’s AI Safety Summit that took place at the beautiful and historic Bletchley Park on 1st and 2nd November 2023. The Summit was a closed event and has been criticised for a lack of breadth in its participants, but luckily the AI Fringe, a programme of events that we are proud to have been featured in, took place across London and the UK all week, with a range of exciting events open to all that gave everyone the opportunity to discuss developments in AI and its impact on society.

The AI Fringe helped raise the profile of a number of AI-related events taking place in London in late 2023, and we’d like to thank them for featuring #FuturePub and helping bring some new faces to our special edition event. Cheers, AI Fringe! 🍻

The atmosphere

It’s hard to put into words the atmosphere at a FuturePub event – the excited buzz in the room, the hubbub of conversations going on all around, and the relaxed, unpressured, informal-yet-stimulating feel to the evening. So you should definitely watch the video!

Perhaps the best examples of capturing the moment in words are Ian Mulvany’s blogs, which he writes immediately after these types of events and which represent his immediate thoughts and reactions at the time.

“I attended FuturePub last night, actually, I also spoke at it too. I love these events, I’ve attended a ton of them over the years, and last night’s was in association with AI fringe, so there was a nice ad-mixture of different communities, and I got to chat to some folk that I wouldn’t otherwise have met. A really good event, many thanks to Digital Science.” 

Ian Mulvany, from his personal write up of the event[1]

And as Ian says, he was not only an attendee but also a speaker – which brings us nicely onto…

The lightning talks

At #FuturePub, the format remains largely unchanged – once everyone has arrived and had the chance to relax and grab a drink and some nibbles, we get together for the lightning talks.

The format of the talks at #FuturePub is very simple – each speaker gets five minutes to speak with or without slides, followed by five minutes of audience questions. And, if their talk runs over, it comes out of their question time! This lets us fit six amazing speakers into an hour slot and also adds a little spicy sprinkling of jeopardy to proceedings, which our speakers invariably navigate with humour, grace, and occasionally the odd swear word!

Andy Dudfield – What happens when AI meets facts?

Andy Dudfield, Head of AI at Full Fact, kicked us off with a very quick overview of what fact-checkers around the world are doing with AI. He described how he and his team are finding AI useful in helping them prioritize what they should look at to fact-check, which is more important than ever given the sheer volume of people and content they would otherwise be swamped with. His talk prompted questions on both the tech stack they’re using (which he gave a run-through of) and also what he sees as the challenges ahead.

Find out more about how Full Fact uses AI, and if you’d like to get involved, read their suggestions on how you can help.

Daniel Hook – Specificity versus synthesis: An uncertainty principle for Large Language Models?

Following Andy, we heard from our good friend and CEO of Digital Science, Daniel Hook. Daniel is one of the founders of Symplectic and has been instrumental in Digital Science’s growth over the past decade, but at heart, he still very much considers himself a theoretical physicist which, given he still holds visiting positions at Imperial College London and Washington University in St Louis, seems entirely justified! If only he told people about being a physicist once in a while… 😉

Daniel approached his talk topic through the lens of his theoretical physics background – whilst he has recently written on specific topics in generative AI, here he focused the discussion at a higher level, on Large Language Models and how they are known to hallucinate facts. He observed that:

“While there is an active debate on whether this is a bug or a feature, the fact remains that we don’t understand AI to the level where we can get LLMs to trace back to their motivation for making a specific pronouncement.

Thus, LLM providers who work in the science space are faced with a challenge – they want to leverage the new capabilities of LLMs but need somehow to create references back to the original work. Yet in forcing an LLM to work in a way that creates referenceability, one destroys its ability to create a synthesis from multiple sources, creating a fundamental trade-off between specificity and synthesis.”

Daniel Hook, speaking at #FuturePub London, October 2023

You can read more from Daniel on our TL;DR site, where we have a number of articles on AI, or visit our resources page to find out how Digital Science and Dimensions have been using and adapting AI tools in the research space.

Natasha Punia & Damien Posterino – Inclusive Innovation: Leveraging AI to Empower Marginalised Communities into Employment

Our next talk from Earlybird – an exciting start-up leveraging AI to help support employment opportunities, especially for under-represented demographics – was due to be a joint effort between Damien Posterino and Natasha (Tash) Punia (who was formerly the Head of Operations at Figshare!). However, due to a last-minute change of plan, Damien presented by himself, and did a great job in conveying Earlybird’s origin story, goals and progress to date in just five minutes.

Earlybird founder Claudine Adelemi was featured in their talk as “Our why”, and her background and experiences are worth reading up on. She recently won a #Thrive20 Award, celebrating success in business, female entrepreneurship and social impact, and Earlybird definitely looks like one to watch.

Speaking with Tash after the event, we were also struck by how much effort such a relatively small start-up is putting into helping the communities they serve — for example, they are working directly with refugees both to get feedback on their platform and to give the refugees credit for working with Earlybird. The refugees can then include this on their CVs to increase their chances of securing employment in the UK. It is rare to see such thought and consideration given to those invited to take part in user feedback sessions, and it is a testament to Tash and the team’s desire to do good. You can find the latest from Earlybird on their website.

Nikos Tzagkarakis – Hierarchical Representations: From Space to the model of Self

Nikos, Chief AI Officer at SiSaf, is well known to Digital Science – he won a Digital Science Catalyst Grant in 2019 and recently recorded an interview with Suze where he talks about the experience. He is also clearly no stranger to experimenting with AI, and throughout his talk conveyed both his passion for it and his technical expertise.

The focus of his talk was on cognition; a capability still beyond the current approaches to AI and their applications, where the big recent improvements have been primarily in perception. He argues that the current deep learning approaches lack the fundamental hierarchical representation required for cognition, but he was optimistic about the potential. He focused on three examples – Healthcare, Space Navigation and Conscious-like Agency, where cognition becomes increasingly important.

Find out more about Nikos and his work on his personal website, or reach out to him on LinkedIn.

Ian Mulvany – An open discussion on implementing LLM governance

Those who read the opening sections of this blog post will have already seen a quote from Ian, where he talked about his enjoyment of these events. Ian should know all about it – he’s been a long-time attendee, and you can even spot him in the photos from our very first FuturePub back in 2014!

Ian, who is CTO at BMJ, shared some of his insights from the approaches the BMJ is taking to AI governance in a world of large language models. He encouraged an open discussion and got things started by briefly explaining the LLM governance group they’ve created at BMJ that works on reviewing use cases and aims to support LLM use in a responsible, fair, and safe manner. He asked the audience “Are we doing it right? What are others doing?”, and in his write-up after the event noted:

“I had good feedback on BMJs current approach, and one great suggestion from the floor was to think about how to actively create the space to hear and listen to weaker voices inside the organisation.”

Ian Mulvany[1]

Ian regularly writes on a variety of topics on his personal blog. If you’d like to get in touch you can find him on LinkedIn.

Carl Miller – The shifting terrain of power in the age of AI

Our final speaker of the evening was Carl Miller, founder of the Centre for the Analysis of Social Media at Demos, who has recently recorded a new podcast series for Intelligence Squared entitled “Power Trip: The Age of AI“.

Carl opened with an observation that the sudden explosion in the use of generative AI has left a lot of us feeling like deer caught in the headlights; we’re not really sure what’s happening, where we should turn, and whether we need to get out of the way or not!

He then dived into the question of how power is changing in AI – across the tech itself, as well as in society, geopolitics, governance and humanity. His slides featured quotes from the people he’d spoken to during the podcast series, and he used those to give a very lively and emotive talk which helped bring a final burst of energy into the room just in time for all the discussions and conversations which followed the talks!

You can find Carl’s podcast on the IntelligenceSquared website, and if you’d like to know more about his recent work you can find details here or on his personal website.

Pizza and ping pong

Once the talks were finished, the pizzas came out and the conversations continued. We’d like to thank Khai, Rachael, Angela and the whole team at Bounce in Farringdon for keeping the drinks and mocktails flowing and the pizzas going. You helped fuel some amazing conversations!

As our final speaker was finishing up, the Games Gurus at Bounce were also preparing for some light-hearted competitions on the ping pong tables. I believe Michelle was in the lead on most of the games come the end of the evening, but it was a close run thing!

There was also plenty of chance for games outside the “official” competitions – including the match of the night, Figshare’s Mark Hahnel taking on Daniel Hook of Digital Science.

Mark clearly enjoys his ping pong – after a game against Daniel, he squared up against Ian Mulvany, and… well, Ian says it best himself…

“Figshare founder Mark Hahnel whipped me good at table tennis. 11 – 4. I got some good shots in, but I mean, 11 – 4.”

Ian Mulvany[1]

…but Ian didn’t let it spoil his enjoyment of what really was a fun, stimulating evening!

“It was a great event, if you get a chance you should make it to one in the future.”

Ian Mulvany[1]

Photo Gallery & Videos

A photo gallery with more shots from the night will be available soon, as will the talk recordings, but you can already watch the snapshot video here! Subscribe to the FuturePub Series on Cassyni to be notified when the talk recordings are available! Thanks to Huw James from Science Storylab for capturing the essence of the event so well.

See you next time!

We’re heading to Berlin next week to attend the Falling Walls Science Summit and will be hosting our first #FuturePub Berlin on Monday 6th November at the Hotel Palace Berlin. If you’re in town for that, we’d love to see you there! FuturePub London will return in the Spring of next year – date and venue still to be confirmed.

If you’re keen to bring FuturePub to your town or city, let us know. And if you’re interested in speaking at a future #FuturePub, please do let us know as early as possible by filling out this short proposal form!


References

[1] https://world.hey.com/ian.mulvany/some-thoughts-on-futurepub-october-2023-19e9c2c4

From open science to project management and back (27 October 2023)

Co-founders of Digital Research Academy Joyce (front right) and Heidi (front left) with the OILS Association and friends at the OILS ’23 conference.

Foreword by John Hammersley: Earlier this year I joined Jo Havemann on her podcast, Access 2 Perspectives, to chat about the founding of Overleaf, the changing academic landscape, and how it ties in with my new role at Digital Science.

Following our chat, Jo introduced me to Joyce Kao, co-founder of the Swiss-based Open Innovation in Life Sciences Association, who had also recently recorded a podcast with Jo to talk about their founder story.

Joyce is now focusing her spare time on a new initiative called the Digital Research Academy (DRA), and in this short guest post she walks us through what it is, what led her to it, and why she’s so excited by it!

Hello Open Science! – The story behind the Open Innovation in Life Sciences Association

My three favorite things when I was a research scientist were interacting with other scientists, the conferences, and planning (anything really – research, seminars, journal clubs, etc.). It was an early passion of mine to bring people together to achieve shared goals in the most harmonious way possible. Thus, when I was asked to chair the organizing committee of the inaugural Open Innovation in Life Sciences (OILS) conference in Zürich, Switzerland, I said yes in a heartbeat. I also said yes because, as a postdoctoral researcher from New York University beginning a long-term visiting collaboration at ETH Zürich, I was looking to connect quickly with the local research community. This first OILS conference committee was a mixed team of ETH Zürich and University of Zürich researchers.

The inaugural 2018 OILS conference was a new international meeting organized by early career researchers (ECRs) – PhD students and postdocs – in the life sciences, to bring academia, industry, and government together for discussions on innovation based on open science principles. The event was an amazing success in its first year, raising funding and attracting almost 200 participants from all over the world. It was a tipping point in my career journey because it introduced me to Open Science. And on the back of that first success, it had to be done again.

The Open Innovation in Life Sciences Association in 2022.

However, there were two main challenges that quickly became apparent:

1. How to pass on the conference organizing knowledge for years to come? PhD students and PostDocs move a lot and new people come and go.

2. How do we get early career researchers interested in helping with organizing the conference?

These challenges ultimately led my co-founder and me to establish the Open Innovation in Life Sciences Association. Our mission was to help ECRs learn more about Open Science. Establishing a non-profit association involves setting up governance and organizational infrastructure for decision making and knowledge preservation, and those processes and that knowledge base seemed to address the first challenge.

As for the second challenge, after a couple of rounds of recruitment we homed in on what attracted ECRs to the OILS Association. A few joined because they were Open Science supporters, but most joined because they wanted to learn about project/event management, expand their professional network, and learn how to fundraise (e.g. writing grants, cold calls/emails, etc.). Even though most came to gain these transferable skills, they all left knowing a bit more about Open Science, so the association stayed true to its original mission.

While I am no longer an active member of the OILS Association, it is still growing and doing even more exciting things beyond the annual conference. They have a strong following on LinkedIn (1,300+ followers!), and they just successfully organized their first Digital Health meets Open Science hackathon, OpSciHack, which I got to witness as a mentor/judge!

Joyce at the recent OpSciHack event.

Always Be Learning – Open Science on an international scale

After founding OILS and running the association for a while, there came a natural point to pass the baton on to the next generation of life science ECRs in Zurich. What people don’t always realize about me is that I am a trailing spouse: my spouse is an academic researcher and we go where the job is, which can be a rather nomadic lifestyle. We moved to Aachen, Germany in 2021 when my spouse started his own research group, and I needed to find a new position in the area. Given my experience with the long-term and complex process of setting up OILS and managing the different activities under the association, it was a very natural transition for me into a project management role. (As an aside, many OILS alumni become project/program managers – and/or, curiously enough, consultants.) I now work as a Senior Project Manager at the local University Hospital in the Innovation Center for Digital Medicine, dealing mostly with EU-level projects. I manage some of our larger projects and help coordinate the activities of the project managers working on other EU-level projects. My current role is not unlike the work I did for the OILS Association, from setting up operational infrastructure for an open and collaborative work environment to reading contracts and regulations about how European Funding Programmes work.

I have, for sure, learned a lot in my current role and do find the landscape of EU funding quite interesting. I do especially like finding out about the Open Science policies being built into the grant requirements (e.g. Open Access publications, Data Management Plans, etc.). There are many grassroots or bottom-up efforts for Open Science and it is very comforting to see the complementary efforts of the top-down regulations striving for the same goal of sustainable research.

Looking to the future – The Digital Research Academy

The theme of Open Science is still active in my life these days beyond the EU regulations and grant requirements. I do occasionally advise the current and amazing ECRs running the OILS Association. I am also a mentor in the OLS Open Seeds program, where I train future Open Science leaders. This program is run by OLS, a UK-based non-profit organization. 

However, I am most excited right now by a recent initiative I am helping to develop in my free time: the Digital Research Academy (DRA). The DRA is a grassroots, community-first initiative in which we are building a network of trainers who provide training to the research community. Training centers on Open Science, Data Literacy, and Research Software Engineering. With more government and higher education leaders supporting Open Science, the demand for quality Open Science training is increasing. All of our trainings are tailored to the needs of the research community.

You can read more about the DRA on our newly launched website: https://digital-research.academy/

Aside from seemingly being addicted to building operational infrastructure for social startups, I have to admit that I am also thrilled to work with Heidi Seibold, an Open Science role model I looked up to while starting the OILS Association. Heidi left a prestigious research group leader position to pursue her Open Science passion and become a (quite successful) independent consultant in the field. She is the driving force behind the Digital Research Academy.

So far it has been an amazing experience to build the DRA, and the support from our trainer community to date has been nothing short of incredible. We have launched our first Train-the-Trainer program – the training program to become a DRA trainer – and I have met so many passionate professionals from different research fields and at all career levels, all trying to improve how we do research. At the same time, requests for training from the broader community have been rolling in. All of this is incredibly promising, and I am honored and excited to support our trainer community on this journey.

Celebrating the women of TL;DR https://www.digital-science.com/tldr/article/celebrating-the-women-of-tldr/ Tue, 10 Oct 2023 14:21:39 +0000 https://www.digital-science.com/?post_type=tldr_article&p=66969 Women have been historically underrepresented in science, technology, engineering and maths (STEM), both in education and in the careers that follow, and one of the things that we strive to do at TL;DR is to amplify amazing contributors from across the breadth of the community we work with.

It’s fast approaching six months since the TL;DR site launched at the end of April, so we thought it was a great opportunity to look back and celebrate the amazing women who have been at the forefront of the conversation on TL;DR so far.

Women have been historically underrepresented in science, technology, engineering and maths (STEM), both in education and in the careers that follow (see The STEM Gap for example), and one of the things that we strive to do at TL;DR is to amplify amazing contributors from across the breadth of the community we work with.

It’s fast approaching six months since the TL;DR site launched at the end of April, so we thought it was a great opportunity to look back and celebrate the amazing women who have been at the forefront of the conversation on TL;DR so far.

Here’s a look back on their TL;DR posts in 2023 to date, from our launch back in April up until today:

Featured Articles

April

Our new avenue for interesting things

The founding team of TL;DR featured three brilliant women: Dr Briony Fane, Dr Suze Kundu, and Dr Leslie McIntosh! They’ve all contributed further articles to TL;DR throughout the year, which you can find via their profile pages 🙂

May

An image of the Cheers season 1 cast with "research twitter" written across the front in the Cheers font

Research Twitter – where everybody knows your name

A chance to discover the origin of the “FunSizeSuze” moniker in this discussion piece from Dr Suze Kundu on social media platforms and their role in helping to build research communities and break down barriers to inclusion and cross-disciplinary research.

Down the rabbit hole – exploring inequalities in funding of climate change research

Misha Kidambi is Scientific Communications Manager at Digital Science, and earlier this year wrote “A tragedy of inequalities”, highlighting disparities in the funding of climate change research. Find out how the article came about, and the most surprising thing she discovered — that STEM fields get 770% more funding than the humanities, and that only 3.8% of funding is allocated to climate research on Africa.

FuturePub is back! Here’s what happened on May the 4th

Co-hosted by Dr Suze Kundu, FuturePub is our fun, informal evening event showcasing what’s new in the world of scholarly publishing tech. Find out what happened at our first such event since the pandemic started!

June

Fruit flies and maggot brains – the magic of Soapbox Science

Soapbox Science is a novel public outreach platform for promoting women and non-binary scientists and the research they do. We caught up with Isla Watton, who is responsible for recruiting and training Soapbox Science Local Organising Teams and supporting the delivery of Soapbox Science events globally, and Hui Gong, a neuroscientist at The Francis Crick Institute and one of the London speakers for Soapbox Science 2023!

July

Dr Jessica Miles: From Michael Faraday to Microbiology to AI & beyond

This is the story of how a school science fair inspired in Dr Jessica Miles a passion for science communication, a PhD in microbiology, and a valuable perspective on the current AI debate.

Mind the Trust Gap

Trust. Five letters, multiple meanings, immense power. Trust arrives on foot and leaves on horseback. Trust is the basis for society, but foundations are fracturing in a world of growing divides. Dr Leslie McIntosh launches the first TL;DR campaign of 2023 focused on trust in research.

Another Happy Landing for FuturePub in San Francisco!

One sunny Thursday evening in July, Dr Suze Kundu and the team headed to the picturesque Presidio in San Francisco to host our popular #FuturePub event. This was only our second in the US, and first on the West Coast!

August

Vaccine Hesitancy and the Importance of Trust

With trust in research a critical issue, a small team led by Dr Briony Fane and Dr Hélène Draux takes a detailed look at a key ‘trust marker’ in research publications on vaccine hesitancy.

AI and publishing concept image

AI and Publishing: Moving forward requires looking backward

As with the rest of the world, the research sector is concerned about the impact of generative AI. Guest author Dr Danny Kingsley asks: Is AI the disruption scholarly publishing needs?

Navigating Trust in Academic Research: The Rise of Data Availability Statements – Part I

In an era of miscommunication and escalating pressures on academic researchers, the bedrock of credibility and trustworthiness in the scholarly world is under the microscope like never before. In this blog series, Ann Campbell and Dr Jingwen Mu venture into the realms of research transparency, focusing first on the rise of Data Availability Statements.

Navigating Trust concept image

September

A good time to be working on AI: an interview with Professor Sue Black

She’s an award-winning Computer Scientist, Technology Evangelist and Digital Skills Expert who led the campaign to save Bletchley Park, and earlier this summer Professor Sue Black spoke to us about her experiences of AI and her hopes for the future.

Digital Science Speaker Series

The live, in-person Digital Science Speaker Series talks are back! This year we’re partnering with the Royal Institution (the Ri) to sponsor two of their public lectures in 2023, and Dr Suze Kundu kicked things off by interviewing Dr Chris van Tulleken ahead of his talk on “Ultra-Processed People”.

Silhouette of a person looking up at an iridescent starry sky with a galaxy streaming across it and the Speaker Series logo of two conversational speech bubbles superimposed on top

A multi-dimensional approach to assessing the impact of the UN’s Sustainable Development Goals (SDGs)

In this short, informal interview, Dr Briony Fane and Dr Juergen Wastl explain some of the methods behind their work on assessing how global research ties into the UN’s Sustainable Development Goals.

October (so far!)

A tale of two pharmas – Global North and Global South

Dr Briony Fane and Ann Campbell lead this analysis on funding & collaboration, and the localisation of SDGs in the pharmaceutical industry, using a bibliometric evaluation of scientific publications.

World Mental Health Day 2023 with Digital Science

For World Mental Health Day 2023, we spoke with Danielle Feger – Global Health and Wellbeing Manager at Digital Science – about what the day means for her, for our teams and for the broader research community.

Coming up

We also have some fantastic events coming up over the next month, including:

  • Monday 30th October: #FuturePub is back for its third outing of 2023 and this time, like so many things, it has an AI flavour about it! Join us at Bounce Farringdon at 6pm on 30th October for food, drink, six amazing talks from some of the best AI wranglers in research, and perhaps a few rounds of ping pong! Register here for your free ticket. Presentations will also be recorded and shared after the event.
  • Monday 6th November: #FuturePub then heads to Berlin for Berlin Science Week and the Falling Walls Science Summit! There will be a number of us from Digital Science attending in person, and we’re hosting #FuturePub on the evening of the 6th to kickstart that week. Event registration details coming soon!
  • Thursday 16th November: The Digital Science Speaker Series continues on 16th November at the Royal Institution, where physicist Dame Athene Donald will speak about gender equality in science in a talk entitled Not Just For The Boys. This talk will also be recorded and shared after the event.

Plus we’ll be continuing to publish new articles, interviews and videos on TL;DR, so keep an eye out for new content and follow @digitalsci / Digital Science to be amongst the first to know!

Get involved

Would you like to get involved in the discussion and contribute to TL;DR? Please ping me or Suze with your idea directly, or send Digital Science a message! We’d love to hear from you 🙂

World Mental Health Day 2023 https://www.digital-science.com/tldr/article/world-mental-health-day-2023/ Tue, 10 Oct 2023 13:54:16 +0000 https://www.digital-science.com/?post_type=tldr_article&p=67052 Suze, John and Danielle chat about the link between mental health and physical health, the challenges we must overcome to destigmatise mental health, the need for awareness days, and the initiatives that Digital Science have implemented to help both our internal and external communities.

World Health Organization campaign banner for World Mental Health Day 2023. Find out more at: https://www.who.int/campaigns/world-mental-health-day/2023

10th October is World Mental Health Day. This year’s theme is “Our Minds, Our Rights”, with a focus on how good mental health should be a basic human right for everyone, no matter their situation or circumstance.

At Digital Science, we offer a range of initiatives to support healthy bodies and minds for our workforce, so that we can best serve our amazing research community and enable them to do the most groundbreaking, societally-impactful work.

John and I sat down with Danielle Feger, Digital Science’s Global Health and Wellbeing Manager, to talk about the link between mental health and physical health, the challenges we must overcome to destigmatise mental health, the need for awareness days, and the initiatives that Digital Science has implemented to help both our internal and external communities.

Here’s our chat in full – if the embedded video doesn’t show below, you can watch it directly on YouTube:

For the resources I mention at the end of the chat, you can find Dr Zoë Ayres’s book, Managing your Mental Health during your PhD: A Survival Guide, here, and Dr Petra Boynton’s book, Being Well in Academia: Ways to Feel Stronger, Safer and More Connected (Insider Guides to Success in Academia), here.

You can also learn more about World Mental Health Day, particularly this year’s theme, on the World Health Organization website.

A good time to be working on AI: an interview with Professor Sue Black https://www.digital-science.com/tldr/article/a-good-time-to-be-working-on-ai-an-interview-with-professor-sue-black/ Thu, 14 Sep 2023 07:24:43 +0000 https://www.digital-science.com/?post_type=tldr_article&p=65773 "Because of my background, I'm always interested in how technology can serve the underserved in society, and how it can empower people to live their best lives. With AI, I'm not worried about robots taking over the world. I'm more worried about people using technology to do bad things to other people, rather than the technology itself.

One of the biggest issues we've got with technology is that most people in society, particularly those who aren't in tech, think that they can't understand it. I want to help change that, on a global scale."


“Because of my background, I’m always interested in how technology can serve the underserved in society, and how it can empower people to live their best lives.

With AI, I’m not worried about robots taking over the world.
I’m more worried about people using technology to do bad things to other people, rather than the technology itself.

One of the biggest issues we’ve got with technology is that most people in society, particularly those who aren’t in tech, think that they can’t understand it. I want to help change that, on a global scale.”

Professor Sue Black, in conversation, July 2023

Foreword by John Hammersley: She’s an award-winning Computer Scientist, Technology Evangelist and Digital Skills Expert who led the campaign to save Bletchley Park, but to me, Sue Black will always be the friend with the best excuse for skipping pitch practice: “I’m speaking at the UN”. 🙂

We first met during the Bethnal Green Ventures (BGV) start-up accelerator programme in 2013, when Sue founded TechMums (then Savvify) and John Lees-Miller and I were starting out with Overleaf (then WriteLaTeX). And by happy coincidence, both start-ups won at the Nominet Internet Awards 2014!

Sue and I stayed in touch, and when she joined Durham University in 2018, she invited John and me to give talks to the students taking her Computational Thinking course, to offer a perspective on life in industry after university.

Recently I spoke to Sue about her work in AI, and how her experience advocating for underrepresented groups can help ensure both that AI is developed responsibly and that access isn’t restricted to, and controlled by, the privileged few. She’s working on a new, UN-supported educational programme, and would love for those interested in helping — in any way — to get in touch.

Early days in natural language processing

Hi Sue, it’s great to be chatting again! I regularly see you attending #techforgood events around the world, and this conversation came about in part because you mentioned you were in Geneva at an “AI for good” conference last month. How did that come about?

Lovely to be chatting to you too! I’ve become more interested in AI over the years — I first studied it at university in the 80s, and at that time I was really interested in natural language. For my degree project in 1992 I wrote a natural language interface to a database, which was so difficult back then!

“AI was interesting (in the 80s/90s). But I never believed it would get to the state where we are now at all, because back then it was so hard to even write the code just to be able to ask a basic question.”

For example, I was building a natural language interface to something as relatively well structured as a family tree, to answer questions like “who is John’s great-grandmother?”, that sort of thing. That was so difficult and took such a long time… and that was for a typed input. It wasn’t even speech, right?

So, to have got to where we are now with voice recognition and ChatGPT, it just completely blows my mind that we’ve managed to get here in… well, it’s a long time to me (since the 80s!), but at the same time it’s a very short space of time.
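To make concrete what that involves: once the English question has been parsed, answering it over a family tree reduces to simple structured lookups; the parsing was the hard part Sue describes. Here is a tiny, purely hypothetical Python sketch of that structured half. The names and data are invented for illustration and have nothing to do with Sue’s 1992 system.

# A toy family-tree "database" and query -- purely illustrative.
# The hard part in 1992 was parsing free-form English; the lookup
# itself is simple once the question has been turned into structure.
PARENTS = {
    "John": ["Mary", "Peter"],   # hypothetical people
    "Mary": ["Alice", "Tom"],
    "Alice": ["Grace", "Henry"],
}

def grandparents(person):
    # Parents of parents.
    return [gp for p in PARENTS.get(person, []) for gp in PARENTS.get(p, [])]

def great_grandparents(person):
    # Parents of grandparents.
    return [ggp for gp in grandparents(person) for ggp in PARENTS.get(gp, [])]

# "Who is John's great-grandmother?" reduces to this structured call
# (a real system would also need gender data to pick out "grandmother"):
print(great_grandparents("John"))  # -> ['Grace', 'Henry']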

Professor Sue Black and her PhD student Sarah Wyer both attended the AI for Good conference in Geneva, Switzerland in July 2023. Source: https://twitter.com/Dr_Black/status/1677400696084103168

A good time to be working on AI

One of my PhD students at Durham – Sarah Wyer – has been looking at GPT applications for a couple of years. Before ChatGPT exploded into the public sphere, we were looking at different things for her to do for her PhD. We had a conversation with one of my colleagues at Durham, Chris Wilcocks, who was all excited about the potential of… I think it was GPT-2, if I remember rightly. He was telling us all about it, and we were like “Oh my God, this is amazing, you’ve got to do your PhD in this area!” So Sarah and I became really excited about it, and we wanted to look at bias in these GPT models.

“I’ve done loads of stuff around diversity and inclusion in my career, and so we figured we’d ask ‘what if we can find any bias with GPT too?’”

We thought it might take a while – and in a sense it did – because to start with it took us several months to get access to GPT-3. You had to apply for an account and the waiting list was very long. But when we did get access it was amazing, and we started to look into whether any particular sorts of prompts generated bias.

And at that point, it didn’t take very long to find bias! The first bit of research that Sarah did was taking some simple prompts – e.g. “Men can” and “Women can” – having GPT-3 generate 10,000 outputs for each prompt, and then doing some clustering analysis on the results. We thought it might take a while to find some bias, but it only took a few seconds with these first few prompts.
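To make the shape of that experiment concrete, here’s a minimal sketch of this kind of prompt-probing pipeline. It’s an illustration only, not Sarah’s actual code: it assumes the legacy (pre-1.0) OpenAI Python completions API plus scikit-learn, and the model name, sample size and cluster count are placeholder choices.

# Hedged sketch of a prompt-probing bias experiment -- illustrative only.
# Assumes the legacy (pre-1.0) `openai` package and scikit-learn.
import openai
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

openai.api_key = "YOUR_API_KEY"  # placeholder

def sample_completions(prompt, n=100, max_tokens=20):
    # Draw n short completions for the prompt (the study used 10,000).
    response = openai.Completion.create(
        model="davinci",       # a GPT-3 base model; placeholder choice
        prompt=prompt,
        n=n,
        max_tokens=max_tokens,
        temperature=1.0,       # sample broadly to surface varied outputs
    )
    return [choice["text"].strip() for choice in response["choices"]]

def cluster_outputs(texts, k=10):
    # Group completions into k rough themes via TF-IDF + k-means.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

for prompt in ["Men can", "Women can"]:
    outputs = sample_completions(prompt)
    labels = cluster_outputs(outputs)
    # Reading a few examples per cluster surfaces recurring themes
    # (and biases) without inspecting every completion by hand.
    for cluster_id in range(3):
        examples = [t for t, l in zip(outputs, labels) if l == cluster_id][:3]
        print(prompt, "| cluster", cluster_id, "->", examples)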

You can probably guess the biases she found – for example, stuff men can do is “be superheroes” and “drink loads of alcohol”, that kind of thing. Women can… yeah, it’s very misogynistic and sexualized, that kind of stuff. Not very nice at all, and if you add in race (e.g. as “Black women can”), it gets even worse.

Sarah is taking snapshots over time and the results are improving, in that the model no longer produces some of the worst answers. But that’s also a problem: the outputs are improving because the things we really don’t want to see are now being masked, and that masking process isn’t very transparent.

So that’s been her PhD work over the last two years (part-time), and we went to the AI summit last year, and now to the AI for Good conference!

Sue Black describes this as “An incredibly inspiring kick off speech by the wonderful Doreen Bogdan, Secretary General of ITU, challenging us to use AI for Good”. Source: https://twitter.com/Dr_Black/status/1676858921099710465

A practical approach

How does it feel to be back in AI, after working on it in the 80s & 90s? Did you keep an interest going over the years, amongst all the other stuff you’ve been doing?

No, not at all – after my PhD I thought, “I don’t want to do that again!” 

I started my PhD in formal methods, as that’s where the funding was. I did that for six months and whilst it’s clearly a good approach to software development in certain scenarios — safety critical software and stuff like that — it wasn’t a good match for my brain!

I think I have a more “practical” rather than “formal” kind of brain. It doesn’t work very well in that way, so whilst I can do it, I’m not good at it. So I moved over to reverse engineering, which is more practical and to me quite exciting, but then I ended up in a really complicated area of maths which I couldn’t understand properly! I was building on a Wide Spectrum Language, which is basically a language to end all languages… one that you can translate everything into and then everything back out of.

So I thought, “That’s a great idea; that’s sort of a practical solution for lots of problems,” but it was very formal again, and even though it’s a really good idea it turned out to not be very practical… and the maths involved just did my head in! I spent three months thinking to myself, “Do I carry on with this and hope that I understand all this maths, or do I change?” I ended up deciding I wasn’t going to carry on with it, and I changed to software engineering.

I was already really interested in software measurement because, again, I thought that’s kind of practical — help for people out there adapting software systems. That finally resonated properly with me, and I did some work around reformulating an algorithm to compute ripple effect metrics. And that was my PhD.

“I never thought we would be able to do all the things that we now can with AI.”

So, yeah, nothing around AI in there at all, in the end! And I kind of thought it (AI) was never going to get anywhere because it was just too hard, but of course, I didn’t foresee the big data revolution and social media and what that might enable to happen. I’m so excited that it’s worked out the way it has. It’s just incredible. I never thought that we would be able to do the things that we can now do at all.

Returning to AI with a broader experience

Are you better equipped to work on AI now? Or is it even more daunting?

Well firstly, I suppose I’m not going to be doing anything at a technical level ever again (laughs!) — that’s not the best use of my time. 

When I was younger, writing code was definitely what I wanted to do, but I kind of feel like now, “I’ve done that, I don’t need to do that again”! And other people, who are doing it all the time, will be much quicker than me! Going back into AI now is more about how I want the world to be — it’s at a much higher level, thinking about how we can use AI to benefit people. And I guess because of my background of disadvantage and challenges, I’m always interested in how technology can serve the underserved in society in various different ways, and how it can empower people to live their best lives.

“Because of my background, I’m always interested in how technology can serve the underserved in society… and how it can empower people to live their best lives.” 

So that’s one aspect of it. And then the other one is from the opposite standpoint, which is how to mitigate bias, or make sure that people realize if there is bias within a system, how much that can impact people detrimentally. Again, usually it will be the underserved communities who are most impacted without realizing it.

A lot of what I’m interested in is how to help people across the world understand reality — as much as anyone understands reality — but enough so that they can make the right decisions for themselves and their families and people around them. That could be a refugee woman setting up a business so that she can earn enough money to support a family, or it could be someone running an AI company who hasn’t thought about how the way that they’re developing their software can have a detrimental impact on potentially millions or even billions of people around the planet.

Because the #AIforGood conference was at the UN, I was chatting to the Secretary General of the ITU about helping everybody with digital skills across the world. Some sort of global program… so I’m going to be working with them on that! So, that’s the most exciting thing that’s happened to me in the last while! 

We should worry about people, not the technology

I’m optimistic about AI, but the doom scenarios are interesting as well. Are we training and building something that will cause massive harm? Will we become too dependent on AI? Will we lose the ability to do certain things if we can get the answer immediately?

Were these sorts of questions a focus for the conference last month? What’s your view on them?

Yeah, this was discussed last week, and from my perspective there’s too much focus on the tech.

“I’m more worried about people using technology to do bad things to other people rather than the technology itself.”

Because I think the technology itself may be an issue way into the future, not immediately. Right now you can see — with stuff like Cambridge Analytica — how using information and data that people have found online can change the course of the way things go in the world… elections in different countries, Brexit, and so on. I think a lot of that is down to people misusing technology, and that’s the thing that worries me more than robots taking over the world.

“People are using data about other people to manipulate millions — or even billions — of people to behave not in their own best interests nor in humanity’s best interests. That worries me. I’m not worried about robots taking over the world.”

Helping others to not be scared of technology

That’s why we need to help educate as many people as possible, so that they can recognize these things. I think security, and understanding it, is one of the biggest issues facing society — there will always be scammers of all different sorts, and they’ll always be using the latest technology. We have to help people have the awareness to keep updating themselves on “So what’s the latest thing that I should be looking out for?” You can’t tell everybody; you need individuals to be able to find that stuff out for themselves.

It’s a first step, because I think one of the issues we’ve got with technology is that most people in society, particularly those who aren’t in tech, think that they can’t understand it. Whereas, of course, they can at a certain level; but because it’s got that kind of reputation, lots of people are scared of it and think they couldn’t ever understand it. And one of the main things I was trying to get across in the TechMums program was, “Yes, you can understand these things, you can do these things — don’t get put off by all the jargon.”

“Everyone can understand tech to a certain extent, and if they can recognise that, and not be scared of it, it can help make their lives better in terms of being able to stay safe and secure and understand what’s going on. And I guess that’s kind of like my lifelong challenge — to try and make that happen as much as possible for as many people as possible.”

The buzz around AI shining a spotlight on existing problems

It feels like the current focus on AI is shining a spotlight on some problems which already exist. For example, there was already a bias in internet search results, problems with social media, scammers, and so forth. Maybe people find it easier to think about it as technology being the problem, whereas it’s actually those that are (mis)using it. But although people may be focusing on the technology, it is at least bringing into focus how it will be used, who controls it, and…

And also who’s built it and tested it and all of that kind of stuff, from a software engineering point of view. I’ve been thinking about diversity in software teams — even though I wouldn’t have called it that — since I was doing my PhD. 

I can remember reading about the disaster with the London Ambulance Service’s computer-aided dispatch system, where people died because all sorts of things went wrong in procurement and management. A lot of it was about people not working together, not thinking, not valuing the people who had been running the manual system beforehand – the technology people thinking they knew better than the people doing the job on the ground.

I’d almost call it “uninclusion”, in the sense of those people not working inclusively with each other. It seemed to be a common problem in the ’90s, when there were a lot of instances of manual systems being computerized: outside consultants were brought in who didn’t really work with the people running the existing system, and switchovers would happen on a single day with no fallback or disaster planning. Even at the time it was obviously a ridiculous thing to do, but it seemed to be happening everywhere, with millions and millions of pounds being spent on these types of projects.

I think more than technology, it’s always been about people either not valuing other people — or other people’s opinions and knowledge — as much as they should, or not testing things properly.

Dressing the animatronics — biases in plain sight

Bringing us back to the “AI for good” conference you attended last month, was there anything particularly unexpected you came across whilst you were there?

Overall it was a great conference — really interesting people and really interesting tech on display. 

One thing does stick in my mind though: there were a number of robots at the event, of many different sorts including animals, and some of them were humanoid robots — animatronics. About five were women and one was a man, and it was quite interesting to see how the ones that had humanoid bodies (i.e. that weren’t just a talking head on their own) were dressed. The man-robot was Japanese and dressed rather like a Japanese businessman or professor — quite how you’d expect a man to be dressed. Whereas the women were just dressed… well, in all sorts of seemingly random clothes that you’d probably describe as “cheap hooker” kind of clothes.

And I was a bit like “Why? What’s going on here?” One of them had a wedding dress on it, and the impression it gave was that women are either cheap hookers or they’re getting married.

I don’t think they’d even thought about it — it didn’t seem like there was a deliberate attempt to give that impression; they’d just put some clothes on it… on her… and those are the clothes they put on them. So that was my main kind of “aha-but-not-in-a-good-way” moment at the conference.

I should reiterate that there were lots of interesting speakers about all different sorts of things, and the interactive stands were very cool. It was a really great conference to go to, and it was great for meeting people doing different sorts of things. But it’s still notable that this — the dressing of the animatronics — was one of the things that stuck out to me.

Looking ahead: A new UN digital skills programme

“Let’s put human values first and stay true to UN values” – ITU Secretary General Doreen Bogdan giving the closing speech. Source: https://twitter.com/Dr_Black/status/1677343924661153792 

You mentioned your hopes for AI and hopes that it will help people — especially disadvantaged people — be the best version of themselves. What are your hopes for the next few years? What would you hope to see happen, what do you hope to be doing, and how could people best help you?

On a personal note, I’m really excited about working with the UN on digital skills globally. I’m very excited to be working to put together a program or programs that we can tailor for different groups of people in different countries. 

So for any readers out there, please get in touch if you have any expertise around delivering digital skills programs on a large scale, or in working with disadvantaged or underserved communities in any way. I’m going to be doing a lot of fact finding — in terms of delivering a worldwide program, my experience so far has been focused on the UK, so it will be great to broaden my perspective. I’d be very interested in speaking with people at any level — right from the lowest level in terms of content all the way up to the highest levels in terms of delivery methods, agencies to work with, etc.

For example, I was introduced to the person who runs the World Food Program — I’m hopeless at remembering exact titles, but basically the UN food agency. I had a chat with him about whether there’s a way it might work where, along with food being delivered, we can help facilitate a digital skills program too.

So: any ideas, at any level, from people across the world who’ve got real experience — either positive or negative — of delivering these types of programs, so that we can work out the best way to run it. Or even experience of running other kinds of programs across the world — it doesn’t have to be a digital skills program — any experience of the best ways of engaging communities around the world, at all levels: from experience on the ground, to knowing which agencies to work with, how to bring people together, and who to bring together. All of that kind of stuff.

It sounds like an amazing project, a daunting one — maybe daunting was the word I was searching for there. It sounds like there’s quite a lot of work.

I don’t feel daunted at all, I guess I’m just feeling excited! Finally, I can have the sort of impact that I want to have on the world!


If you’ve enjoyed reading this article, and would like to get in touch with Sue to discuss the new digital skills program she’s working on, you can reach her via her website, or find her on X (Twitter) or LinkedIn.

Another Happy Landing for FuturePub in San Francisco! https://www.digital-science.com/tldr/article/another-happy-landing-for-futurepub-in-san-francisco/ Thu, 20 Jul 2023 18:17:14 +0000 https://www.digital-science.com/?post_type=tldr_article&p=64548 Our popular #FuturePub event was a big success in beautiful San Francisco! See details of the event's lightning talks, good times, and a photo gallery from the evening.


Last Thursday we headed to the picturesque Presidio in San Francisco to host our popular #FuturePub event. We’ve run over a dozen of these events to date – mainly in London – and this was our second in the US and first on the West Coast. 

For those new to FuturePub, the evenings are designed to be fun and informal – like meeting up for drinks with friends after work and at the same time giving people the opportunity to showcase the new and exciting things they’ve been working on in science and publishing tech. 

This one was a little more hectic than usual: a number of us from Digital Science were in San Francisco for Sci Foo, a pretty unique unconference being held that weekend at Google X. And because we like a challenge, we decided, at rather short notice, to host FuturePub the night before. Do or do not, as they say – so we did!

Of course, thanks to many late-night phone calls with Jocelyn from the excellent venue hire team at the Letterman Digital Arts Center, home of LucasFilm and Industrial Light & Magic, it all worked out beautifully. It was a fun, informal gathering at a great venue, with lots of conversation, smiles and some very exclusive Skywalker Ranch food and drink, because apparently we’re fancy on the West Coast! 

Here’s our summary video! You may need to accept cookies to view it below, or you can watch it directly on YouTube:

The venue

Following on from our last FuturePub, which took place at the Royal Institution — widely regarded as the home of science communication — we were very lucky to hold our first West Coast FuturePub at the Letterman Digital Arts Center (LDAC). Perched up in the Presidio Park of San Francisco, with views out towards the Golden Gate Bridge and the Palace of Fine Arts, LDAC is also home to some of the movie industry’s most groundbreaking innovators and engineers, such as LucasFilm and Industrial Light & Magic. Given their ethos of “shaking up the status quo” of how movies are made, we felt it was the perfect venue for us to discuss how technology and novel innovations are shaking up research for the better.

Scenes from the Letterman Digital Arts Center, home of LucasFilm and Industrial Light & Magic. From left to right: statue of Philo Taylor Farnsworth, inventor of the all-electronic television; a matte painting from Star Trek IV: The Voyage Home; treats at the Skywalker Ranch General Store; a bust of Boba Fett

Suze and Huw, our videographer, were taken on an exclusive tour of the campus once they had set up the Skywalker Room for FuturePub, a process which was made all the speedier thanks to the help of Juan, Teresita, Augustine and his team, and Victoria at Sessions at the Presidio. It was interesting to discover some of the engineering innovations that the teams on campus have been responsible for, and to observe first-hand matte paintings and memorabilia from some of our favourite movies. Of course, no tour of the LDAC campus would be complete without a trip to the Skywalker Ranch General Store and a pilgrimage to the Yoda Fountain.

Laden with bags full of Skywalker-related merchandise they didn’t need, Suze and Huw pay their respects to the metaphorically big man, Yoda, at his eponymous fountain at Letterman Digital Arts Center, Presidio, San Francisco

The lightning talks

We had five fantastic lightning talks covering everything from war to parrots, and much in-between! All five talks are now available to watch on Cassyni.

Up first we had Apurv Jain, Founder and CEO of MacroXStudio, showing how he and his team combine various alternative data sources – satellite, news, social, sensor and other data – to track economic activity in real time with a high degree of accuracy, in contrast to government reports, which often lag by several months.

Apurv Jain, Founder and CEO of MacroXStudio

He demonstrated how their techniques can be used to get a better, more up-to-date picture of the state of Ukraine’s economy during the ongoing war. For more on this, take a look at their recent blog post.

Apurv’s talk generated a lot of lively discussion and questions around the modelling approach, activity as a proxy for the economy, and the benefits of multiple data sources – and this led nicely to our second speaker, Suze Kundu of Digital Science (my awesome colleague and co-organizer), and her talk on “Data data data… everything, everywhere, all at once”.

Suze Kundu, Director of Researcher and Community Engagement at Digital Science

Suze gave a very visual presentation that highlighted the challenge researchers face searching the research literature, especially when it comes to topics outside their specific field. There is an ever-increasing volume of research outputs, and when researchers use traditional methods to find papers and collaborators they tend to get a very narrow selection of results back, usually all centred around their specific field. Suze demonstrated how – by incorporating work on knowledge graphs, ontology mapping, and semantic search – Dimensions provides a broader set of results, helping researchers find interesting (and potentially key) papers relevant to their research but from a field they wouldn’t have thought to look in.

Our third presenter, John Chodacki, speaking in his role as member of the Coko Foundation Advisory Board, then took a look at a related problem in publishing – once a researcher and their collaborators have written a paper, the process of publishing it can be complex, convoluted and time-consuming, both for the authors and for the journal they submit to.

John Chodacki, member of the Coko Foundation Advisory Board

He demonstrated how Kotahi, an innovative new tool from the Coko Foundation, provides a simpler “push button” interface for publishers: with a single click, journals can export JATS, HTML and beautiful PDFs – no XML wrangling, publishing vendors, or in-house technologists required.

Here’s a video from John’s talk showing how it works (you may need to accept cookies to view it below, or you can watch it directly on Vimeo):

John then handed over the mic to Richard Price, founder and CEO of Academia.edu, who gave us a look back at the history of Academia.edu, and the reasons why he started it, before looking ahead to their future plans.

Richard Price, Founder and CEO of Academia.edu

They’ve recently introduced a couple of new initiatives on the Academia.edu network: a social feed for finding and discussing interesting things in research, and a number of new, open-access journals. In his talk, Richard walked us through the example of a journal launched just two months ago. One of their key aims is to use the network of Academia.edu users to help streamline and accelerate the pace of journal publishing – it will be fascinating to see the outcome of this experimental initiative, and how it works in practice now that the first set of journals is live.

After Richard’s came the final talk of the evening, and it was very much a case of “And now for something completely different”: Irene Pepperberg shared her research on the question “Can a Grey parrot pass the ‘Marshmallow Test’?”

Irene Pepperberg, Adjunct Research Professor at Boston University and president of The Alex Foundation

Irene, Adjunct Research Professor at Boston University and president of The Alex Foundation, had flown in that evening to attend Sci Foo at the weekend, and was keen to squeeze in a trip to FuturePub too if she could! She had a lightning talk on African grey parrots ready to go and we had the perfect person to complete our line-up for the night.  

My wonderful colleague, Alison Mitchell, offered to meet Irene at San Francisco airport and travel with her to the venue. By happy coincidence, Alison also used to look after an African grey parrot called Jasper!

Irene gave us a quick overview of her work with grey parrots, and showed some lovely videos demonstrating how they use similar distraction techniques to children in order to prevent themselves from giving into temptation and eating the marshmallow!

She concluded her talk by suggesting a method to improve executive function using the parrot as a model. If you’d like to read more about her work, you can find a number of papers, talks and news articles listed on the Alex Foundation website.

Mapping universities

We also had some beautiful posters on display showcasing Simon Porter‘s work on mapping out “What does a university look like?“. Simon was on hand to answer questions and discuss how the maps are made – he’d purposefully spent the previous week preparing posters for a large number of nearby West Coast universities, which resulted in a lot of audience pride when locals spotted their alma mater.

You can read more about Simon’s work here, and view the full project – which contains maps for dozens of universities worldwide – on Figshare.

Photo Gallery

Finally (for now), here’s a gallery with some more photos from the evening, taken by our videographer Huw James. You can also view the lightning talks on Cassyni, alongside those from our previous FuturePub.

See you next time!

We’re currently planning the next FuturePub London for later in 2023 – date and venue still to be confirmed.

If you’re interested in speaking at a future #FuturePub, please let us know by filling out this short proposal form. If you’re at the American Chemical Society’s Fall conference in San Francisco (12-18 August 2023) check out Suze’s talks and panel discussion appearance!

Dr Jessica Miles: From Michael Faraday to Microbiology to AI & beyond https://www.digital-science.com/tldr/article/dr-jessica-miles-from-michael-faraday-to-microbiology-to-ai-beyond/ Wed, 05 Jul 2023 11:13:30 +0000 https://www.digital-science.com/?post_type=tldr_article&p=64093 This is the story of how a school science fair inspired a passion for science communication, a PhD in microbiology, and a valuable perspective on the current AI debate.

Dr Jessica Miles recently participated in the SSP2023 panel on AI and the Integrity of Scholarly Publishing, the writeup from which has just been published on the Scholarly Kitchen.

I caught up with Jessica to chat about how she came to be on the panel, her background in microbiology, and her thoughts on the future of scholarly publishing in an AI world. I got a Barack Obama impression for free 😊


This is the story of how a school science fair inspired a passion for science communication, a PhD in microbiology, and a valuable perspective on the current AI debate.

Dr Jessica Miles recently participated in the SSP2023 panel on AI and the Integrity of Scholarly Publishing, the writeup from which has just been published on the Scholarly Kitchen.

I caught up with Jessica to chat about how she came to be on the panel, her background in microbiology, and her thoughts on the future of scholarly publishing in an AI world. I got a Barack Obama impression for free 😊

From school science fair to SSP2023

John Hammersley: Tell us how you came to be on the SSP2023 panel on AI

Jessica Miles:

It seems so surprising, right? (JH: Not at all!!) I asked myself that same question the other day — I was at the SpringerNature office and I met somebody who works on submissions whom I’d connected with on LinkedIn. He stopped me and said: “I looked at your LinkedIn, your career, and it seems…” — he didn’t say “bizarre”, he went with “very interesting!” — “…and I’d love to chat about it.”

And it made me think — with respect to the SSP AI debate and my participation — that one of the reasons I was invited is precisely because I don’t have the usual profile of someone you might expect to be participating in an AI debate. But if that’s the case, what’s interesting enough about my career that makes my perspective noteworthy? And the conclusion I came to is that, beyond my experience in scholarly publishing, it’s quite a bit to do with my experience with (and enthusiasm for!) science communication. I’ve been interested in science communication for a long time — when I went to university, the one I picked specifically had a program in science communication.

From an early age I learned about Michael Faraday (and his role as a science communicator) and I thought that would be something really cool to do, even though I didn’t know what that was at first! I thought I maybe wanted to do science journalism, and eventually landed in research; so when I think about a lot of the different things that I’ve been interested in or done, they fit quite nicely within that space of science communication that I’ve been following for a long time.

So getting back to the SSP debate: if you think about the audience being in the scholarly technology space, not an audience of experts on AI, this idea of communicating science in a different way, but through a publishing lens, comes to mind. And so I think that’s why I was really excited to get the opportunity to do this, and why it makes sense: it’s not like I’m giving a talk at Google or to a group of AI experts; it’s really about communicating science, but at the intersection of something else, something new.

What does Mary Shelley’s Frankenstein have to do with AI?

John: Yes, I agree it’s really important that different perspectives are included in these discussions, and that we don’t reduce AI to just being about the technology — because it’s totally not just that! ChatGPT’s explosion into the public consciousness is a great example: it wasn’t so much the technology but the interaction with it that caught the imagination. So I think it’s good when forums do try to include lots of different viewpoints. But I also know what you mean in terms of not feeling qualified, because your background isn’t in the technology side of it.

Jessica:

Exactly — my PhD is in microbiology, not machine learning, and I certainly wouldn’t call myself an AI expert. So, yeah, it’s a natural question people asked of me (after the debate), and that’s the answer I came up with after some reflection! I think having that perspective informs the way I’m thinking about these technologies. For example, I think about one of the courses I took where we read Mary Shelley’s Frankenstein. The text of Frankenstein is not necessarily considered particularly difficult, and some might ask, “Why is this a college-level text?” But one of the main ideas of her work is that she’s interrogating how society thinks about new technologies — it’s a commentary on science. I was thinking about that recently because it feels like we’re in another similar moment, where we’re having to grapple with a new technology and some people are quite fearful while others are quite excited.

And some people worry that it will take away the mystery of the world if we can recapitulate human thought, human intelligence.

Where does that leave us from a philosophical perspective? We’re starting (or continuing) to ask: is anything sacred anymore? There are a lot of really interesting questions, and having a historical perspective feels like a nice way to approach this moment so that it’s not so overwhelming. It’s like, “Okay, there are historical parallels, and yes, this might buck those trends, but at least you have some kind of grounding to approach all of this.”

We spent a lot of time on Frankenstein — we must have spent several months on it — and it’s not necessarily a text you would think requires that level of rigor. But when you consider it in the context of society and everything else happening around that time — the Industrial Revolution, and the rapid pace of change it brought — there is very much an allegorical meaning behind it that belies its reputation as a simple text.

Generative AI – the new normal

John: Frankenstein is a nice example of the fear a new technology can generate. Yet we’re very good at adapting to the new normal as new things are developed – stuff that would previously have been seen as magical and amazing comes to be seen as expected, trivial, commonplace, once it’s been (repeatedly) demonstrated.

Generative AI – for text, images, and more – is quickly becoming the new normal. So I'm curious, because I didn't go to SSP this year and I haven't had a chance to look over the sessions – what did you find most interesting at SSP, either from the sessions or just from the chats you had whilst you were there?

Jessica:

One topic that particularly resonated was that of the scholarly intellectual output of humans — that the ideas aren't machine-generated, and that this manifests itself in the written text. So you can imagine, especially in the humanities, there's a very heavy focus on making sure that the text isn't machine-generated, or at least that the ideas and the text are very much those of a human being.

On the science side, however, the concern seemed to be less about whether the text itself comes from humans; what was seen as more worrying is active fraud with respect to data outputs. The opening plenary was from Elisabeth Bik, whom you might know from her work on scientific integrity and her thorough investigations of image manipulation.

John: Yes, Elisabeth Bik is a legend – I don’t know how she finds the energy in the face of ever more papers!

Jessica:

She talked about image manipulation and her efforts as a whole, and then focused on the potential implications of these new technologies – not so much on the text side, but on the image side, with respect to creating synthetic outputs. From her talk, it sounds like we're in a challenging moment: although the general consensus was that we're a little too early on with these technologies to really see the impact, everyone feels there's a brewing storm – that, with all the people who have had time to use and learn these tools, we will see nefarious actors (e.g. paper mills) start to incorporate them in ways we haven't seen before, and that we're ill-equipped as a publishing community to deal with it.

John: It's interesting you mention that. Tim Vines shared a tweet today…

Jessica: Tim was my debate partner!

John: It’s a small world indeed! I know Tim from when he founded Axios Review, back when Overleaf was also a new kid on the block 😊 He shared this tweet and it really highlighted how close we are to researchers being able to use AI to generate plausible scientific papers.

Source: https://twitter.com/TimHVines/status/1673172575139278855.

Jessica:

This feels almost fully autonomous – not fully, but with only minimal human intervention. And obviously I haven't looked at this in great detail, but it's a huge step forward from AI "simply" helping to fine-tune something a researcher has written, as we've had before with grammar tools.

Barack Obama?

John: Exactly. It feels like we're almost at the point where a researcher can ask the AI to write the paper, in the way that, say, a CEO could ask ChatGPT to "please write five paragraphs explaining why the company retreat has to be canceled this year, and make it apologetic and sound like Barack Obama."

Jessica: (in a deep voice) Folks, the company retreat…

Jessica: I’ve just been watching his new documentary on Netflix so I have his voice in my mind…

John: Okay.

Jessica: It's actually quite good — he interviews people who have different roles at the same companies and tries to get at "what is working, what is a good job?" It's very US-focused, but I thought it was quite interesting. Anyway, that was a pivot, but we can go back to AI!

John: This is a nice aside! I saw a video excerpt of him speaking recently, where he was asked for the most important career advice for young people, and I believe he said "Just learn how to get stuff done." Because there are lots of people who can always explain problems, who can talk and talk about stuff, but if you're the one who can put your hand up and say "That's all right, I'll sort that, I can handle that", it can get you a long way.

What is the publishing industry’s moat?

John: But yes, back to AI – some of the new generative image stuff is a bit crazy. Being able to use it to expand on an existing image, rather than just generate something standalone, suddenly makes it useful for a whole load of new things. And I see Google has now also released a music generator, which composes music based on your prompt. Everything seems to be happening faster, and at a bigger scale, than before, and I can see why scholarly publishing is trying to figure out how to ride this tsunami…

Jessica:

How are we going to keep up? Yeah…to borrow a phrase, “What is our moat?”

I think a lot of people are thinking about that, especially given that there's not only ChatGPT but also all of the smaller models that proficient people can now train on their own. As a scholarly publisher, you're serving a population with an over-representation of people — your core demographic — who are going to be really fluent in these models, so what can you offer them? What can you offer this group that they can't already do on their own? That's the million-dollar question.

John: Publishers would say they try to help with trust in science and research integrity – for example, through peer review and editorial best practices. But they also have an image problem: there is a tendency to chase volume, because volume generally equals more revenue, and if you're chasing volume then you're going to accept some papers that prove to be controversial. It's an interesting dynamic between volume, trust and quality.

Jessica:

The volume question is always really interesting, because there's the perspective where people assume publishers have commercial incentives to grow volume, which has some validity in an OA context of course, but there's the other viewpoint, which is that science is opening up and becoming more inclusive, and that almost by definition means publishing more papers from more authors. But restricting what people can publish…should that be up to the publishers? I do think there is a sense that publishers don't want to be in the business of telling people they can or can't publish. Because it's one thing to say at a journal level, "we don't think this paper meets the editorial bar"; it's quite another to say, "we don't think this paper ought to be published anywhere, ever", right? In fact, I think many publishers would say the opposite: "any sound paper ought to be published somewhere." We see this view borne out by the increasing investments publishers are making to support authors even before they submit to a journal, and to find another journal if a paper isn't accepted.

Another consideration mentioned at the conference is that writing papers is the way scientists communicate with each other. Dr. Bik was saying that hundreds of thousands (I forget the exact number) of papers were published on COVID research in the last three years, and she asked, "Why do we need that many papers?" Well, in retrospect it's very easy to say that, but if you were an editor working during COVID, as I once was, for which paper are you going to say "we don't actually need this one"? Everything was happening so fast, you didn't know which papers would be the crucial ones in the long run – how can you make that judgment? So I do think more gets published in response to perceived need.

To be clear: you're not going to publish anything you don't think is scientifically sound, but most of us are not going to try to set the bar at "will this be valuable three or five years later?" That is quite a different bar from "is this methodologically sound?"

And I don't know if we've gone too far from the question on AI, but with higher volume comes this need to curate – which we've needed for a long time – and as the volume increases, the need to curate increases. This curation is another value-add for publishers, but also, again, something that AI can potentially be very good at, given reasonable inputs. I want to be careful not to set up a false dichotomy of "publisher-curation" versus "AI-curation", because publishers are very much already using AI for things like summarization and recommendation engines, but there is the question of whether publishers, moving forward, continue to drive this curation.

John: One reason you write papers as an early career researcher is that you're learning how to write a paper — you're publishing some results that aren't necessarily all that ground-breaking, but it's a record of what you've done, why you say your methodology is sound, and so forth. And in doing so you learn how to write a paper. It raises the question of how much undergraduate work should be published, to give, say, third- and fourth-year students the opportunity to go through the process of getting their work published.

Jessica:

It is a fascinating question, because not only is the pedagogy piece real – early trainees, undergrads, etc. need to learn how to write – but also because publishing is about putting your ideas out there and becoming known to the community.

You can present at conferences (which is another skill), but the scale of impact from that is typically much smaller, and it can be difficult to get to the point where you would present at a conference without having published. If you aren't able to publish, then you're not able to establish yourself within the community. So publishing is critical for early career researchers to get that first step on the ladder.

The value of getting things done

John: That brings us back almost full circle to where we started – you talked about Michael Faraday as a science communicator, and that you were into science as a kid…

Jessica: I definitely was — I did science fairs at school and that’s actually how I learned about Michael Faraday. And I hadn’t really thought about the communication of science as being distinct from the research itself but for some reason the communication aspect specifically really appealed to me. I have always been quite curious and into learning and storytelling, and being able to communicate ideas through stories.

Photo: One of the authors at an aforementioned science fair, and more recently 🙂

John: As we're nearly at time, perhaps that's a good point to end on – the value of science fairs. The value of doing something for yourself and getting that experience is one thing AI is not going to replace. It might be able to create the poster for you, or even write a paper for you, but if you're the one who has to stand there and say what you've done, it's usually pretty obvious whether or not you know what you're talking about, and there's no AI substitute for that.

Jessica: Yet!


Jessica Miles holds a doctorate in Microbiology from Yale University and now leads strategic planning at Holtzbrinck Publishing Group. I (John Hammersley) am one of the co-founders of Overleaf and now lead community engagement at Digital Science.
