Explainable AI in Healthcare: Ensuring Trust and Transparency in ML Clinical Decision Systems

Authors

  • Amit Taneja, Senior Data Engineer at UMB Bank, USA

DOI:

https://doi.org/10.63282/3050-9262.IJAIDSML-V4I1P106

Keywords:

Explainable AI (XAI), Machine Learning (ML), Clinical Decision Support Systems (CDSS), Interpretability, Trust in AI, Transparency, SHAP, LIME, Counterfactual Explanations, Healthcare AI

Abstract

The incorporation of Artificial Intelligence (AI) in healthcare has revolutionised clinical practice, especially through the use of Machine Learning (ML) in clinical decision making. However, as ML models become increasingly complex, they also become more opaque, raising concerns about trust, accountability, and ethical transparency. Explainable AI (XAI) has proven critical in opening up black-box ML models, offering human-interpretable insights into the decision-making process. This paper addresses the context of XAI in healthcare and its significance in enhancing clinical safety, increasing trust, promoting regulatory compliance, and facilitating clinical adoption. It discusses the XAI techniques currently in use, including SHAP, LIME, attention mechanisms, counterfactual explanations, and rule-based systems, and compares their efficiency and relevance in healthcare applications. We also offer a step-by-step framework for incorporating XAI into healthcare ML, covering data preprocessing, model selection, and visualisation strategies. Experimental outcomes demonstrate that XAI can enhance interpretability, albeit at some expense of accuracy. Lastly, we conclude with the challenges, limitations, and future directions in the research field of explainable healthcare AI.
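To make the referenced techniques concrete, the following minimal sketch (illustrative only, not taken from the paper) shows how SHAP feature attributions might be produced for a clinical risk classifier; the feature names, synthetic data, and random-forest model are assumptions standing in for a real clinical dataset and model.

```python
# Illustrative sketch: per-patient SHAP attributions for a hypothetical clinical risk model.
# Assumes scikit-learn and the shap package are installed; features and data are synthetic.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
features = ["age", "systolic_bp", "creatinine", "hba1c"]  # hypothetical clinical variables
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Synthetic outcome: risk driven mainly by "age" and "hba1c" plus noise.
y = (X["age"] + 0.5 * X["hba1c"] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-patient, per-feature Shapley values: each value is the
# feature's additive contribution to that patient's predicted risk.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X.iloc[:5])

# Depending on the shap version, sv is either a list [class 0, class 1] or an array
# of shape (n_samples, n_features, n_classes); take the positive class either way.
pos = sv[1] if isinstance(sv, list) else sv[..., 1]

# For the first patient, rank features by the magnitude of their contribution.
patient_contribs = pd.Series(pos[0], index=features).sort_values(key=abs, ascending=False)
print(patient_contribs)
```

In a deployed clinical decision support system, per-patient attributions of this kind would typically be surfaced alongside the predicted risk score so that clinicians can judge whether the drivers of a prediction are clinically plausible.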

Published

2023-03-30

Issue

Section

Articles

How to Cite

Taneja A. Explainable AI in Healthcare: Ensuring Trust and Transparency in ML Clinical Decision Systems. IJAIDSML [Internet]. 2023 Mar. 30 [cited 2025 Sep. 29];4(1):51-9. Available from: https://ijaidsml.org/index.php/ijaidsml/article/view/197