
Explainability, Machine Learning, and Healthcare

Meet the people building trustworthy, equitable AI systems that solve real-world problems.

Discover how their work drives real-world impact by harnessing the power of AI to provide reliable information and improve the patient experience.

Our lab is a leader in artificial intelligence (AI) research, tackling critical challenges across healthcare, natural language processing (NLP), and computer vision. With a team of over 15 researchers, we focus on building trustworthy, equitable AI systems that solve real-world problems. Our mission is to harness vast data sources to create tools that improve decision-making, streamline workflows, and offer new insights across various domains. We apply cutting-edge techniques from NLP, computer vision, deep learning, and explainable AI to drive innovations with meaningful social impact.


In healthcare, our lab is pioneering AI solutions that help clinicians provide better, faster, and more personalised care. We focus on using AI to identify at-risk patients, improve triage processes, and optimise treatment plans. For example, one of our key projects is a system that flags high-risk patients in Accident & Emergency (A&E) departments, helping doctors prioritise urgent cases. Another tool we've developed helps reduce unnecessary blood tests for chemotherapy patients, saving resources and improving patient comfort. Additionally, we’ve built AI tools to assist dermatologists in identifying skin cancer, helping detect early-stage malignancies with greater accuracy. Our work in healthcare is driven by the belief that AI must not only be high-performing but also interpretable and equitable, ensuring that all patients benefit from its use. 

A core part of our research is creating AI systems that are transparent and explainable, especially in critical fields like healthcare. These systems provide clear, understandable explanations for their decisions, so clinicians can trust and rely on AI to support their diagnoses and treatments. For instance, the predictive models we’ve developed for chemotherapy patients not only identify those at higher risk of complications but also offer insights into why certain patients are flagged, empowering clinicians to make informed choices. This focus on trust and interpretability extends across all our work, from medical imaging to natural language processing and beyond.  
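To illustrate the kind of per-feature explanation such predictive models can offer, the sketch below trains a toy classifier on synthetic data and ranks features by permutation importance, one common way to surface why a model flags a case. The feature names and data here are hypothetical placeholders, not the lab's actual chemotherapy models or variables.

```python
# Illustrative sketch only: a toy risk model with per-feature explanations.
# Feature names and data are synthetic, not real clinical inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["neutrophil_count", "age", "temperature", "days_since_infusion"]
X = rng.normal(size=(500, len(features)))
# Synthetic label: risk is driven mainly by the first two features.
y = (1.5 * X[:, 0] + X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

In a clinical setting, a ranking like this lets a clinician see which factors most influenced a flag and judge whether the model's reasoning is plausible before acting on it.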


Beyond healthcare, our lab also plays a leading role in advancing NLP and computer vision technologies. Chenghao Xiao is working on improving information retrieval and language model pretraining. His research addresses how AI systems can efficiently and equitably retrieve relevant information from massive datasets. Chenghao’s work combines NLP with computer vision, creating models that understand language and images simultaneously. This is particularly valuable for applications involving multi-modal data, such as analysing medical images alongside clinical notes. Chenghao’s contributions have set new benchmarks in information retrieval and have the potential to transform how professionals in many fields access and interpret complex data.

Junjie Shentu focuses on applying deep generative models to a variety of tasks, including medical imaging. His work is crucial for improving how AI systems interpret complex medical images like X-rays and MRIs. Making generative models more robust and generalisable allows AI tools to be more effective in diagnosing diseases such as cancer. His innovations in augmenting medical image databases have improved the training of diagnostic models, making them more accurate and adaptable to real-world conditions. 

Ensuring that AI models generalise to new data is one of the top challenges in AI. James Stirling works on few-shot learning and compositional generalisation to address this issue, which is critical in medical settings where rare conditions and unusual cases frequently arise.

Further afield, Thomas Hudson is exploring how large language models can be used in veterinary medicine, extracting insights from discussions between vets to improve animal care. His work demonstrates how these techniques can be applied to improve decision-making in a variety of contexts.

Matthew Watson focuses on explainable AI and its applications in healthcare. He collaborates with clinicians to develop models that predict complications in chemotherapy patients, offering a balance of high performance and trustworthiness. Matthew’s work also involves using advanced NLP techniques to monitor patients for clinical deterioration, reducing unnecessary alerts and helping doctors focus on critical cases. His efforts bridge the gap between cutting-edge AI research and real-world clinical needs, making our lab’s work both practical and impactful. 

In summary, our lab’s work spans a broad spectrum of AI research, from healthcare innovations to advancements in NLP and computer vision. By combining expertise across these domains, we are developing AI tools that not only enhance clinical decision-making but also push the boundaries of what AI can achieve in other fields. Our commitment to building ethical, explainable, and equitable AI systems ensures that our technologies are not only powerful but also trustworthy and adaptable, addressing diverse challenges across healthcare and beyond. 


Meet the experts

Meet our experts in Explainability, Machine Learning, and Healthcare who are building AI systems that solve real-world problems.


Discover new faces

Explore our other 100 Faces of Science themes and discover the incredible stories of people making a real-world impact across a wide range of fields. From sustainability and quantum research to AI and energy, each theme highlights the diverse talent driving innovation at Durham University.