Exploring the Requirements of Clinicians for Explainable AI Decision Support Systems in Intensive Care

Authors: Jeffrey N. Clark, Matthew Wragg, Emily Nielsen, Miquel Perello-Nieto, Nawid Keshtmand, Michael Ambler, Shiv Sharma, Christopher P. Bourdeaux, Amberly Brigden, Raul Santos-Rodriguez

Abstract: There is a growing need to understand how digital systems can support
clinical decision-making, particularly as artificial intelligence (AI) models
become increasingly complex and less human-interpretable. This complexity
raises concerns about trustworthiness, impacting safe and effective adoption of
such technologies. An improved understanding of decision-making processes, and
of the requirements for explanations provided by decision support tools, is a
vital component of delivering effective explainable solutions. This is particularly
relevant in the data-intensive, fast-paced environments of intensive care units
(ICUs). To explore these issues, group interviews were conducted with seven ICU
clinicians, representing various roles and experience levels. Thematic analysis
revealed three core themes: (T1) ICU decision-making relies on a wide range of
factors, (T2) the complexity of patient state is challenging for shared
decision-making, and (T3) requirements and capabilities of AI decision support
systems. We include design recommendations derived from this clinical input,
providing insights to inform future AI systems for intensive care.

Source: http://arxiv.org/abs/2411.11774v1
