The Influence of Explainability on Stakeholder Trust in AI-Based Credit Risk Assessment Tools
DOI:
https://doi.org/10.63282/3050-9262.IJAIDSML-V5I3P110

Keywords:
Explainable AI, credit risk assessment, stakeholder trust, SHAP, LIME, financial AI, transparency, accountability, model interpretability, trust modeling

Abstract
As artificial intelligence (AI) systems become central to financial decision-making, particularly credit risk assessment, ensuring stakeholder trust is paramount. This study investigates the role of explainability in enhancing trust among key stakeholders (loan officers, borrowers, and regulatory personnel) in AI-driven credit scoring tools. Through a mixed-methods approach involving experimental simulation, stakeholder interviews, and quantitative modeling, we demonstrate that explainable AI (XAI) significantly improves perceptions of fairness, accountability, and transparency. Our results reveal that local interpretability techniques, such as SHAP and LIME, positively influence trust levels across stakeholder categories, particularly when combined with model performance metrics. The study also presents a trust quantification model and offers a novel framework integrating explainability with regulatory compliance standards. These findings underscore the necessity of embedding explainability mechanisms into credit risk AI systems to foster trust, ensure responsible deployment, and satisfy regulatory expectations.
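For readers unfamiliar with the local interpretability techniques the abstract mentions, the sketch below shows, in broad strokes, how SHAP and LIME explanations are typically produced for a credit scoring model. It is illustrative only: the dataset, feature names, and model are assumptions standing in for the study's materials, and it relies on the open-source shap and lime Python packages rather than any tooling described in the paper.

```python
# Minimal sketch: local SHAP and LIME explanations for a single credit applicant.
# Synthetic data and illustrative feature names; not the study's actual dataset or model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical credit-risk features (illustrative names only).
feature_names = ["income", "debt_to_income", "credit_history_len",
                 "num_open_accounts", "recent_delinquencies"]

# Synthetic stand-in for an applicant dataset: label 1 = default, 0 = repaid.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: additive feature attributions for one applicant's predicted risk.
explainer = shap.TreeExplainer(model)
applicant = X[:1]
shap_values = explainer.shap_values(applicant)
for name, contribution in zip(feature_names, np.ravel(shap_values)[:5]):
    print(f"SHAP {name}: {contribution:+.3f}")

# LIME: local surrogate explanation for the same applicant.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["repaid", "default"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())
```

In a stakeholder-facing tool, the per-feature contributions printed above would typically be rendered as a ranked reason list alongside the model's score, which is the form of explanation the study evaluates for its effect on trust.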