The Trust Threshold: How Public Perception of AI Harm Moderates the Impact of FinTech Innovation on Systemic Banking Stability

Authors

  • Rajitha Gentyala, Frisco, Texas, USA

DOI:

https://doi.org/10.63282/3050-9262.IJAIDSML-V5I3P118

Keywords:

Artificial Intelligence, Banking Stability, Public Perception, Algorithmic Harm, Financial Innovation, Systemic Risk, Trust Threshold, Socio-Political Risk, AI Governance, Consumer Confidence

Abstract

The rapid integration of artificial intelligence into financial services has ushered in an era of unprecedented innovation, yet it has simultaneously introduced complex socio-technical risks that challenge conventional understandings of banking stability. While substantial research has examined the technical dimensions of AI model governance and regulatory compliance, comparatively little attention has been devoted to understanding how public perceptions of AI-related harm influence the relationship between technological innovation and systemic financial resilience. This study addresses this critical gap by investigating the moderating role of public trust and perceived socio-political harm in the innovation-stability nexus within the banking sector. Drawing upon diffusion of innovation theory and systemic risk frameworks, we develop and test a conceptual model wherein public perception of AI harm—encompassing concerns regarding algorithmic fairness, data privacy, and opaque decision-making—moderates the impact of aggressive AI adoption on long-term banking stability. The research employs a mixed-methods approach, combining longitudinal analysis of banking stability indicators across major financial institutions with survey data capturing public sentiment toward AI deployment in financial services. Preliminary findings suggest that while AI innovation initially enhances operational efficiency and profitability, these benefits are contingent upon maintaining public confidence in the fairness and integrity of automated systems. When perceptions of harm exceed a critical threshold, the stability benefits of innovation diminish significantly, potentially triggering customer attrition, regulatory intervention, and systemic contagion effects. The study draws upon foundational insights from Cao et al. (2021), who examined consumer trust dynamics in algorithmic financial advising, and extends the work of König et al. (2022), who explored the reputational contagion mechanisms linking perceived AI failures to broader institutional stability. By illuminating the psychological and sociological dimensions of AI governance, this research contributes to emerging scholarship on trustworthy AI and offers practical guidance for financial institutions seeking to balance innovation imperatives with the maintenance of public trust. The findings underscore the necessity of embedding public perception monitoring into systemic risk assessment frameworks and highlight the importance of transparent, explainable AI architectures in preserving the social license upon which banking stability ultimately depends.
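The moderation-with-threshold hypothesis described in the abstract can be written as a standard interaction specification. The following is an illustrative sketch only; the variable names, the panel indexing, and the threshold parameter τ are assumptions for exposition, not the paper's actual estimated model:

```latex
% Illustrative moderated-regression specification (assumed, not the paper's own):
% Stability_{i,t}: banking-stability indicator for institution i at time t
% AI_{i,t}:        intensity of AI/FinTech adoption
% Harm_{i,t}:      survey-based index of perceived AI harm
\[
  \mathrm{Stability}_{i,t} = \beta_0 + \beta_1\,\mathrm{AI}_{i,t}
    + \beta_2\,\mathrm{Harm}_{i,t}
    + \beta_3\,\bigl(\mathrm{AI}_{i,t} \times \mathrm{Harm}_{i,t}\bigr)
    + \varepsilon_{i,t}
\]
% The "trust threshold" \tau is the perception level at which the marginal
% effect of further adoption turns negative (taking \beta_3 < 0):
\[
  \frac{\partial\,\mathrm{Stability}_{i,t}}{\partial\,\mathrm{AI}_{i,t}}
    = \beta_1 + \beta_3\,\mathrm{Harm}_{i,t} < 0
  \quad\Longleftrightarrow\quad
  \mathrm{Harm}_{i,t} > \tau = -\,\beta_1/\beta_3
\]
```

Under this reading, β1 > 0 captures the initial efficiency and profitability gains of adoption, β3 < 0 the erosion of those gains as perceived harm rises, and τ the critical perception level beyond which additional innovation becomes destabilizing.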

References

[1] M. Aitken, M. Ng, E. Toreini, A. van Moorsel, K. P. L. Coopamootoo, and K. Elliott, "Keeping it Human: A Focus Group Study of Public Attitudes Towards AI in Banking," in Computer Security, 2020, pp. 21–38.

[2] PwC, "AI in financial services: navigating the risk - opportunity equation," Dec. 2023. [Online]. Available: https://www.pwc.co.uk/industries/financial-services/understanding-regulatory-developments/ai-in-financial-services-navigating-the-risk-opportunity-equation.html

[3] T. Schütz, C. Schröder, and C. Rennhak, "Acceptance of Automated Investment Advisory: An Experimental Study of the Relevance of Trust Attributes of a Robo-Advisor," Management International Review, vol. 63, no. 2, pp. 185–208, 2023.

[4] D. Ben David, Y. S. Resheff, and T. Tron, "Explainable AI and Adoption of Financial Algorithmic Advisors: An Experimental Study," arXiv preprint arXiv:2101.02555, 2021.

[5] S. D. Kim, G. Andreeva, and M. Rovatsou, "The Double-Edged Sword of Big Data and Information Technology for the Disadvantaged: A Cautionary Tale from Open Banking," arXiv preprint arXiv:2307.13408, 2023.

[6] P. Dave and J. Dastin, "Insight: U.S. banks deploy AI to monitor customers, workers amid tech backlash," Reuters, Apr. 19, 2021. [Online]. Available: https://www.reuters.com/technology/us-banks-deploy-ai-monitor-customers-workers-amid-tech-backlash-2021-04-19/

[7] R. Kumar, A. Koshiyama, K. da Costa, N. Kingsman, M. Tewarrie, E. Kazim, A. Roy, P. Treleaven, and Z. Lovell, "Deep learning model fragility and implications for financial stability and regulation," Bank of England Staff Working Paper No. 1,038, Sep. 2023.

[8] D. López and J. Martins, "The impact of artificial intelligence adoption on banking performance: Evidence from European banks," Journal of Banking & Finance, vol. 138, 106358, 2022.

[9] A. Alonso Robisco and J. M. Carbó, "Can machine learning models save capital for banks? Evidence from a Spanish credit portfolio," Journal of Banking & Finance, vol. 145, 106646, 2022.

[10] Banque de France, "A specific regulatory framework for global systemically important banks," Banque de France Bulletin, no. 247, article 2, Aug. 2023. [Online]. Available: https://www.banque-france.fr/en/publications-and-statistics/publications/specific-regulatory-framework-global-systemically-important-banks

[11] M. Drehmann and M. Juselius, "Evaluating early warning indicators of banking crises: Satisfactory or misleading?," International Journal of Forecasting, vol. 30, no. 3, pp. 759–780, 2014. https://doi.org/10.1016/j.ijforecast.2013.10.001

[12] N. Khatri and G. D. Brown, "Designing classification for knowledge management processes," Journal of Knowledge Management, vol. 14, no. 2, pp. 175–188, 2010.

[13] I. Irakoze, F. Nahayo, D. Ikpe, S. A. Gyamerah, and F. Viens, "Mathematical Modeling and Stability Analysis of Systemic Risk in the Banking Ecosystem," Journal of Mathematics, vol. 2023, Article ID 5628621, 2023.

[14] J. Danielsson and A. Uthemann, "Artificial intelligence and financial crises," arXiv preprint arXiv:2407.17048, 2024.

[15] A. Bin-Salem, F. Di Girolamo, and F. Petroulakis, "Depositors' perceptions and bank stability," European Central Bank Working Paper Series, No. 2897, Feb. 2024.

[16] X. Li, C. Li, and R. Liu, "Artificial intelligence adoption and digital transformation in financial services: A review and research agenda," Electronic Commerce Research and Applications, vol. 48, 101061, 2021. https://doi.org/10.1016/j.elerap.2021.101061

[17] F. Feng, S. Wang, and C. Ma, "Big data analytics for financial risk management: A survey," International Journal of Information Management, vol. 61, 102388, 2021. https://doi.org/10.1016/j.ijinfomgt.2021.102388

Published

2024-09-30

Section

Articles

How to Cite

Gentyala R. The Trust Threshold: How Public Perception of AI Harm Moderates the Impact of FinTech Innovation on Systemic Banking Stability. IJAIDSML [Internet]. 2024 Sep. 30 [cited 2026 Mar. 9];5(3):169-90. Available from: https://ijaidsml.org/index.php/ijaidsml/article/view/434