LEAIM: A Layered Enterprise Architecture Model for Scalable and Governed Integration of Artificial Intelligence in Distributed Systems

Authors

  • Samer Bahadur Yadav, Independent Researcher, United States

DOI:

https://doi.org/10.63282/3050-9262.IJAIDSML-V7I1P130

Keywords:

Layered Enterprise AI Integration Model (LEAIM), Distributed Systems, Model Serving, Artificial Intelligence Integration, Cloud Architecture, AI Governance, Scalable AI Systems

Abstract

The integration of Artificial Intelligence (AI) models into enterprise-scale distributed systems introduces architectural challenges that extend beyond model training and algorithmic optimization. While contemporary research emphasizes model accuracy and computational efficiency, system-level integration within complex enterprise environments remains insufficiently formalized. Existing approaches frequently embed AI components directly within business services or centralize inference without enforcing architectural separation, resulting in tight coupling, limited scalability, and governance vulnerabilities. This paper introduces the Layered Enterprise Artificial Intelligence Integration Model (LEAIM), a structured enterprise architecture framework for scalable and governed AI integration in distributed systems. LEAIM defines five formally separated layers (data acquisition, model lifecycle management, model serving, orchestration, and governance) and enforces explicit dependency constraints to prevent cross-layer coupling. The model incorporates scalability modeling, latency decomposition, security isolation, and lifecycle governance as primary architectural properties. Comparative evaluation demonstrates that LEAIM improves modularity, resilience, governance coverage, and fault containment relative to embedded and centralized integration patterns. By elevating AI deployment from ad-hoc implementation to formal architectural design, LEAIM contributes a reusable enterprise framework for sustainable AI system integration.
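The abstract's central architectural rule, formally separated layers with explicit dependency constraints, can be sketched programmatically. The following is a minimal illustration only: the abstract does not specify the layer ordering or the exact dependency rule, so this sketch assumes a strict ordering (data acquisition at the bottom, governance at the top) and a "depend only on the layer directly beneath" constraint as one plausible reading of "preventing cross-layer coupling".

```python
from enum import IntEnum

class Layer(IntEnum):
    # The five LEAIM layers named in the abstract; this ordering
    # is an illustrative assumption, not taken from the paper.
    DATA_ACQUISITION = 0
    MODEL_LIFECYCLE = 1
    MODEL_SERVING = 2
    ORCHESTRATION = 3
    GOVERNANCE = 4

def dependency_allowed(src: Layer, dst: Layer) -> bool:
    """Permit a dependency only on the layer directly beneath the
    source layer, so no component can reach across layers."""
    return src - dst == 1

# An orchestration component may call model serving...
assert dependency_allowed(Layer.ORCHESTRATION, Layer.MODEL_SERVING)
# ...but serving may not reach "up" into governance,
# and orchestration may not skip down to data acquisition.
assert not dependency_allowed(Layer.MODEL_SERVING, Layer.GOVERNANCE)
assert not dependency_allowed(Layer.ORCHESTRATION, Layer.DATA_ACQUISITION)
```

Encoding the constraint as an executable check (e.g., run against a dependency graph in CI) is one way such an architectural rule can be enforced rather than merely documented.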

References

[1] M. Zaharia, A. Chen, A. Davidson, et al., “Accelerating the machine learning lifecycle with MLflow,” IEEE Data Engineering Bulletin, vol. 41, no. 4, pp. 39–45, 2018.

[2] D. P. Kingma and M. Welling, “Auto-Encoding Variational Bayes,” arXiv:1312.6114, 2013.

[3] J. Dean and S. Ghemawat, “MapReduce: Simplified Data Processing on Large Clusters,” Commun. ACM, vol. 51, no. 1, pp. 107–113, 2008.

[4] B. Burns et al., “Borg, Omega, and Kubernetes,” Commun. ACM, vol. 59, no. 5, pp. 50–57, 2016.

[5] M. Fowler, Patterns of Enterprise Application Architecture. Addison-Wesley, 2002.

[6] N. Kratzke, “A Brief History of Cloud Application Architectures,” Appl. Sci., vol. 8, 2018.

[7] P. Moritz et al., “Ray: A Distributed Framework for Emerging AI Applications,” OSDI, 2018.

[8] T. White, Hadoop: The Definitive Guide. O’Reilly Media, 2015.

[9] M. Amershi et al., “Software Engineering for Machine Learning,” IEEE Software, vol. 36, no. 1, pp. 56–64, 2019.

[10] D. Sculley et al., “Hidden Technical Debt in Machine Learning Systems,” in Advances in Neural Information Processing Systems (NIPS), 2015.

[11] A. Halevy, P. Norvig, and F. Pereira, “The Unreasonable Effectiveness of Data,” IEEE Intell. Syst., 2009.

[12] B. Liu, “Serving Deep Learning Models,” IEEE Cloud Computing, 2020.

[13] T. Chen et al., “MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems,” NIPS Workshop on Machine Learning Systems, 2015.

[14] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.

[15] S. Newman, Building Microservices. O’Reilly Media, 2015.

[16] E. Breck, S. Cai, E. Nielsen, M. Salib, and D. Sculley, “The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction,” in Proceedings of the IEEE International Conference on Big Data, 2017.

[17] A. Paleyes, R.-G. Urma, and N. Lawrence, “Challenges in deploying machine learning: A survey of case studies,” ACM Computing Surveys, vol. 55, no. 5, pp. 1–29, 2022.

[18] D. Crankshaw, X. Wang, G. Zhou, et al., “Clipper: A Low-Latency Online Prediction Serving System,” in Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2017.

[19] S. Amershi, D. Weld, M. Vorvoreanu, et al., “Guidelines for Human-AI Interaction,” in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, 2019.

[20] A. Lakshmanan, S. Suresh, and S. Manohar, “MLOps: Continuous delivery and automation pipelines in machine learning,” IEEE Software, vol. 39, no. 2, pp. 64–72, 2022.

Published

2026-02-25

Issue

Section

Articles

How to Cite

Yadav SB. LEAIM: A Layered Enterprise Architecture Model for Scalable and Governed Integration of Artificial Intelligence in Distributed Systems. IJAIDSML [Internet]. 2026 Feb. 25 [cited 2026 Feb. 25];7(1):175-82. Available from: https://ijaidsml.org/index.php/ijaidsml/article/view/444