OAI14

The 14th OAI Workshop will take place online 10-14 November 2025.

Speakers and talks are added to the programme continuously, as soon as they are confirmed. Be sure to check back on this page for the latest information.

Attendance will be global, with librarians, IT staff, research officers, and academic researchers prominent among the attendees. More than 1,000 registrants are expected.

A number of institutions and bodies support this event. The Scientific Committee, which is in charge of the academic programme, comprises:

  • Paul Ayris (chair) – UCL (University College London)
  • Abednego Corletey – IT consultant, Ghana
  • Elena Giglia – University of Turin
  • Tiberius Ignat – SKS Knowledge Services
  • Danny Kingsley – Deakin University
  • Jenny Lam – Chinese University of Hong Kong
  • Frank Manista – Jisc
  • Nokuthula Mchunu – National Research Foundation of South Africa
  • Martyn Rittman – Crossref
  • Wouter Schallier – United Nations
  • Marco Tullney – Technische Informationsbibliothek
  • Jens Vigen – CERN
  • Kirsty Wallis – UCL (University College London)

Schedule

The content of papers and presentations/addresses delivered at OAI14 represents the views of the speakers/presenters themselves, not those of the OAI14 Scientific Committee.

  • 10 Nov
  • 11 Nov
  • 12 Nov
  • 13 Nov
  • 14 Nov

8:00-11:00 PDT | 12:00-15:00 CLT | 16:00-19:00 GMT | 17:00-20:00 CET | 02:00-05:00 (Nov. 11) AEST

Research Security and openness

Chaired by Dr Paul Ayris and Dr Tiberius Ignat

Open Science has long been celebrated for its potential to democratise knowledge, enhance transparency and reproducibility, and increase scientific impact. Yet recent global developments reveal growing tensions between the ideals of openness and the imperatives of research security. Since 2021, new recommendations and regulations have emerged worldwide, highlighting concerns about knowledge transfer, malign influence, and ethical violations associated with international scientific cooperation. The shifting geopolitical climate, marked by protectionism, restricted mobility, and disinformation, necessitates a critical reassessment of how Open Science can coexist with growing security concerns.

This session explores the evolving interplay between Research Security and Open Science, focusing on how research organisations, industry partnerships, and international collaborations can adapt. Contributors from University College London, KU Leuven Research & Development, and Radboud University Medical Centre will share insights and practical experiences addressing these challenges. Key questions include: How is Research Security implemented across different research environments? How can university-industry research maintain security without stifling openness? And what strategies can align Research Security with the core principles of Open Science?

A special emphasis will be placed on disaster diplomacy, examining how disaster-related research can simultaneously foster or hinder peace and cooperation, as well as how open data practices can both mitigate and exacerbate risks. Discussions will highlight the ethical and operational limits of openness, guided by the ethos of “as open as possible, as closed as necessary.”

The session aims to foster dialogue on integrating Research Security and Open Science in a way that preserves the benefits of international collaboration while ensuring ethical and secure research practices. By enhancing dialogue, presenting current practices, and fostering networking, we aim to support researchers and institutions in navigating this complex landscape and preparing for a future where responsible openness is both feasible and sustainable.

Research Security and openness November 10, 2025

Disaster Diplomacy: Balancing research ethics, open science, and perceived security

Laudable aspects of open science are known, accepted, and documented. Ethical limits of open science also exist, with security frequently mentioned. Disaster diplomacy, which investigates how and why disaster-related activities do and do not influence conflict and cooperation, exemplifies the need for balance among:

  • Research ethics: Some investigations and material can be counterproductive to avoiding disasters and to reaching peace.
  • Open science: Publicly communicated data and scientific analyses can be used to augment disaster risk and conflict.
  • Perceived security: International collaboration and positive action are at times impeded by keeping data on disasters, including conflict, confidential.

Accepting practical, operational limits for ethical open science, whether or not securitisation is supported, can improve the science and impact value of disaster diplomacy research, internationally and locally. These limits are espoused by the established ethoses of “responsible open science” and “as open as possible, as closed as necessary”.

6:00-9:00 PDT | 10:00-13:00 CLT | 14:00-17:00 GMT | 15:00-18:00 CET | 00:00-03:00 (Nov. 12) AEST

Open infrastructure

Chaired by Dr Martyn Rittman and Marco Tullney

Scholarship relies on being able to create, store, and share research outputs across diverse online platforms, from repositories and publisher sites to search engines and large-scale analytic tools. Behind these platforms lie the software, data models, and servers that constitute key infrastructure on which scholars rely.

Open infrastructure is made up of community-governed, non-commercial services that exist for broad benefit and are often interdependent. Large, centralised data sources and aggregators play a significant role. For infrastructure to operate well, data and information need to move around with as little friction as possible; paywalls, non-standard formats, and gaps in datasets work against these aims. In this session we will hear from several organisations about how they are meeting this challenge and providing services that work and are used by the community.

This session has contributions from open infrastructure organisations that facilitate the storage and availability of research outputs. They will discuss the challenges of achieving sustainability and the infrastructure needs of the regions in which they operate.

Open infrastructure November 11, 2025

TBD

6:00-9:00 PDT | 10:00-13:00 CLT | 14:00-17:00 GMT | 15:00-18:00 CET | 00:00-03:00 (Nov. 13) AEST

Navigating AI in Open Science

Chaired by Wouter Schallier and Jens Vigen

Navigating AI in Open Science November 12, 2025

Feynbot: Unlocking conversational access to High Energy Physics literature

Imagine having a conversation with the entire INSPIRE-HEP database: asking complex questions about particle physics research and receiving precise, contextual answers instantly. The Feynbot project aims to make this vision a reality through an advanced Retrieval Augmented Generation (RAG) system that transforms how researchers interact with scientific literature. This presentation unveils Feynbot's innovative approach, which combines cutting-edge AI with the comprehensive INSPIRE-HEP collection. Natural language queries will unlock insights buried across thousands of research papers. Whether you're exploring theoretical frameworks, investigating experimental results, or tracking research trends, Feynbot will deliver accurate, detailed responses tailored to your specific needs. We'll explore the technical architecture behind this system, reveal proven strategies for eliminating AI hallucinations, and demonstrate how Feynbot bridges the gap between researchers' needs and RAG implementation. With tools like this, we will be able to reshape research workflows, opening new pathways for scientific exploration in particle physics.
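The Retrieval Augmented Generation pattern the abstract refers to can be sketched in a few lines: embed the documents, retrieve the closest matches to a query, and ground the generator in those passages. The sketch below is a toy illustration under stated assumptions, not Feynbot's actual implementation: the corpus, the bag-of-words "embedding", and the `answer` function are hypothetical stand-ins for a dense-vector index and an LLM call.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real RAG systems use dense neural embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query; production systems
    # use an approximate-nearest-neighbour index over millions of records.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query, corpus):
    # The "augmented generation" step: retrieved passages are placed in
    # the prompt so the model answers from the literature rather than
    # from memory, which is the main defence against hallucination.
    context = retrieve(query, corpus)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    return prompt  # a real system would send this prompt to an LLM

corpus = [
    "The Higgs boson was observed at the LHC in 2012.",
    "Neutrino oscillations imply that neutrinos have mass.",
    "Open access repositories preserve research outputs.",
]
print(answer("When was the Higgs boson observed?", corpus))
```

The design point is that retrieval quality, not the generator, usually determines answer quality, which is why RAG work concentrates on the index and the embedding.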

How can researchers leverage Open Source AI with a data-centric approach

Open-source AI, particularly with open-weight models from Llama, Mistral, and DeepSeek, has shifted research focus from massive general-purpose systems toward compact, domain-specific solutions. This talk will survey the rapidly evolving open AI ecosystem, emphasizing how researchers can adopt data-centric workflows to use AI responsibly and innovatively. By curating high-quality, rights-cleared datasets and iteratively refining data quality, researchers can fine-tune open-weight models into transparent, reproducible, and sustainable tools. This approach democratizes AI and enhances methodological transparency, enabling labs of all sizes to build precise solutions tailored for embedded, privacy-sensitive, and energy-efficient AI applications.

Open Source AI: Propagation of Open Source Licenses in the Age of AI

Open source AI models have the potential to foster innovation and technological progress. Nevertheless, the definition of “open source” in the AI space is hotly debated. A related – yet no less central – issue concerns the propagating effect of copyleft (or ShareAlike) clauses embedded in training data or upstream code. Do these clauses require downstream AI models, systems, and their output to be released under the same open source terms, e.g. as copyright derivatives? This has major repercussions, as such propagation would render entire AI projects fully open. The presentation will examine this question and find that this is presently unlikely to be the case, save for training data. Building on this finding, and in order to protect the effectiveness of ShareAlike clauses, it will advocate for a new definition of copyright derivatives specific to the AI context.

1:00-4:00 PDT | 5:00-8:00 CLT | 9:00-12:00 GMT | 10:00-13:00 CET | 19:00-22:00 AEST

Commercialization and Open Science

Chaired by Dr Paul Ayris and Kirsty Wallis

This session will examine the issue of Open Science in a number of different ways. First, how does Open Science sit alongside commercial interests? How are the models of commercial companies changing to embrace Open approaches and values? We will hear from one global Information company, Clarivate, about how their approach to the delivery of content and services has taken into account the new priorities and values inherent in Open Science.

A second speaker will then present the point of view of Enterprise and Innovation from a European university. How is the new Open Science agenda aligned with the ambitions and vision of universities? How can creativity in universities flourish if/when Open Science competes against more commercial models of working? Is there a model for innovation and enterprise where the two concepts can be successfully married?

The third issue to be discussed in the session is the Diamond Open Access business model (free to read, free to publish). A speaker skilled in commercial negotiations will present a paper analysing the speed, or lack of it, in moving to 100% immediate Open Access. What is the rate of change? Is the Diamond Open Access model sustainable? Is it a game changer in the move to Open Access?

The final part of the meeting will comprise a panel session in which the speakers will be encouraged to identify a number of clear principles in the Commercialization/Open Science debate to act as guides for future activity globally. Members of the audience will be invited to submit questions via Mentimeter to enlarge the discussion.

Commercialization and Open Science November 13, 2025

TBD

5:00-8:00 PDT | 9:00-12:00 CLT | 13:00-16:00 GMT | 14:00-17:00 CET | 23:00-02:00 AEST

Research Integrity

Chaired by Dr Elena Giglia and Dr Nokuthula Mchunu

Research integrity is increasingly at stake: retractions for scientific misconduct are growing at a worrying rate, paper mills are flourishing, and reproducibility is in crisis, a situation that seems closely connected to current research assessment criteria. The widespread misuse of generative AI is compounding the issue.

In our session we shall try to frame the issue and to see how Open Science practices can help us return to the fundamentals of research: “show me”, not “trust me”.

We shall also explore the idea of “slowing down” the research process, in order to enable integrity and reproducibility.

Research Integrity November 14, 2025
14:10 - 14:30

Retractions: On The Rise, But Not Enough

In 2000, there were about 40 retractions from the scholarly literature. In 2023, there were more than 10,000. That is a dramatic increase, even accounting for the growing number of papers published per year. In this talk, I will explore the reasons for the increase, why it is good news, and why the real number should be even higher. Drawing on more than a decade and a half of experience at Retraction Watch, I will tell the stories of the sleuths who are finding problems in the literature. And I will talk about how everyone can avoid ending up in the Retraction Watch Database, or end up there for a good reason.

14:30 - 14:50

The virus model of research fraud: integrity requires inoculation, testing, and quarantine

It's generally thought that fraud is rare and that science is self-correcting, but is that true? Sadly, there is increasing evidence of widespread fraud in science, ranging from individual bad actors who build a glowing reputation on faked or manipulated data to industrial-scale fraud from so-called “paper mills”, which charge for authorship and/or citations. I'll discuss why it is urgent to tackle these problems with measures comparable to testing for a virus, contact tracing, quarantine, and vaccination. I will list some red flags that aid paper mill detection and show how open science practices are important to counteract fakery in science.

14:50 - 15:10

How the misuse of GenAI can spoil the publishing system

The recent explosion of generative AI has had a profound impact on society as a whole, particularly through the new possibilities enabled by text generation tools, with ChatGPT being the most well-known example. The research community has not been immune to this phenomenon. In this presentation, I will discuss how this technology has been used by researchers, with a particular focus on its role in questionable research practices related to scientific publishing, and the broader questions this raises about the research ecosystem as a whole.

15:20 - 15:30

Break

15:30 - 15:50

Placing Humans at the Centre of Open Research

Efforts to produce standards, guidelines, technologies, and infrastructures that may boost sharing practices in science have resulted in great advances over the last few decades. However, they have also unintentionally boosted a technocratic, object-oriented vision of open research that is easy to appropriate for “open washing” and does not do justice to the very efforts of the humans at the heart of such work. The result is an open research landscape that fosters, rather than mitigates, existing inequities and divides in the production and use of scientific research. This talk reflects on the need to keep social agency at the heart of open research practice, and on what this means for the future development of policies, tools, and services.

15:50 - 16:10

TBD

16:10 - 16:30

TBD

16:30 - 16:50

TBD

OAI14 Organizing Committee

Workshop Satellite Events

We also host watch parties at individual locations around the world for those who are unable to watch live online because of time-zone differences. At these events, topics from the recorded sessions are selected, played, and discussed locally.