Addressing inequalities in patient-facing artificial intelligence symptom checkers: Fellowship proposal
Dr. Mel Ramasawmy
May 21, 3pm (UK), 11am (BR), 4pm (CET)
Location
Virtual (To be announced)
Abstract
Emerging artificial intelligence (AI) technologies, such as large language models (LLMs), are promoted as potential solutions to pressures in healthcare, in particular scarce capacity and the need to manage complex long-term health conditions, e.g. via AI integration into NHS 111. LLMs can be trained to take patient histories and conduct ‘intelligent triage’, providing personalised symptom assessment and advice for patients or healthcare professionals. These LLM tools require patients to be able to describe their symptoms in a predictable way. However, the way patients describe their symptoms is known to vary widely according to socio-demographic, cultural, regional, and other factors. If these less typical descriptions of symptoms are not included when LLMs are trained, there is a risk that some patients may receive inaccurate advice. While developers of these tools, as well as regulators and commissioners within the healthcare system, may try to avoid ‘overt’ bias, they may not be aware of these types of ‘covert’ bias. As an anthropologist working in a healthcare policy context, I see the introduction of large language models as subject to the same risks to equity as other innovations developed around a ‘default’ demographic group. In this talk, I will present my Fellowship proposal and invite feedback.
Speaker’s bio
Mel is a qualitative researcher based at Queen Mary University of London. She applies her background in anthropology and health policy to questions at the intersection of digital health, equity, and patient experiences of access and care. Mel is currently developing a fellowship application around covert bias in patient-facing large language models, supported by the NIHR.