Towards Ethical Collaboration Between Humans and AI in Healthcare

Seminar
Author

Julia Ive

Published

September 2, 2025

Abstract

AI and large language models (LLMs) are increasingly being deployed in healthcare, creating new opportunities to mine clinical notes, support decision-making, and collaborate with clinicians. At the same time, their use raises pressing challenges around bias, data sparsity, and privacy. In this talk, I will present a set of data-centric approaches to ethical human–AI collaboration, applied to use cases in pediatric mental health and clinical decision support. I will begin by demonstrating how LLMs can extract depressive symptoms in pediatric settings, before moving to methods for detecting and mitigating bias in clinical notes related to pediatric anxiety. I will then introduce techniques for generating synthetic mental health records to alleviate data sparsity and protect patient privacy. Next, I will present a clinical chatbot developed to reference UCLH guidelines without unsafe modifications. In a pilot study, the chatbot reduced clinician search time threefold, highlighting its potential to improve workflow and patient care. Finally, I will discuss how synthetic data can also be used to benchmark and refine such support systems.

Video

Speaker’s bio

Dr. Julia Ive is an Associate Professor in Healthcare AI at University College London. Her research focuses on responsible, data-centric, and human-centered approaches to leveraging LLMs for healthcare applications. Her work spans bias detection and mitigation in clinical text, privacy-preserving methods for sensitive data, and human–AI collaboration protocols for clinical decision support. She has previously worked at Queen Mary University of London and Imperial College London, where she pioneered techniques for medical data augmentation through synthetic text generation. Her broader expertise includes theoretical language modeling, multimodal machine learning, and reinforcement learning for text generation.