Security-Centric Artificial Intelligence: Strengthening Machine Learning Systems against Emerging Threats

Authors

  • Ishva Jitendrakumar Kanani, Independent Researcher, USA
  • Raghavendra Sridhar, Independent Researcher, USA
  • Rashi Nimesh Kumar Dhenia, Independent Researcher, USA

DOI:

https://doi.org/10.63282/3050-9262.IJAIDSML-V4I3P108

Keywords:

Machine Learning, Artificial Intelligence, Emerging Threats, Security-Centric

Abstract

As artificial intelligence and machine learning (AI/ML) technologies become pervasive across industries, their vulnerability to security threats has emerged as a critical concern. This paper explores the foundational principles, methodologies, and implications of security-centric AI, a paradigm that embeds security into every stage of the machine learning lifecycle. We examine adversarial attacks, data poisoning, model inversion, and other AI-specific threats, and highlight defensive methodologies such as adversarial training, secure federated learning, and robust data pipelines. By focusing on trustworthy and threat-resilient AI, this paper outlines a framework for building secure, scalable, and ethically aligned AI systems.
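The adversarial training mentioned above relies on generating adversarial examples during training. As an illustration only (not code from the paper), the fast gradient sign method of Goodfellow et al. [3] can be sketched on a toy logistic model; the weights, input, and epsilon below are arbitrary values chosen for demonstration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Standard logistic loss -log(sigmoid(y * w.x)) with labels y in {-1, +1}.
    return -np.log(sigmoid(y * np.dot(w, x)))

def fgsm(w, x, y, eps):
    # Fast gradient sign method: perturb the input in the direction of the
    # sign of the loss gradient with respect to x, scaled by eps.
    grad_x = -y * sigmoid(-y * np.dot(w, x)) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (arbitrary values for illustration).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
y = 1.0

x_adv = fgsm(w, x, y, eps=0.1)
# The adversarial input incurs a higher loss than the clean input;
# adversarial training would add (x_adv, y) back into the training set.
print(logistic_loss(w, x, y), logistic_loss(w, x_adv, y))
```

On deep networks the gradient is obtained by backpropagation rather than in closed form, but the principle is the same: a small, bounded perturbation aligned with the loss gradient can flip the model's prediction.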

References

[1] Carlini, N., & Wagner, D. (2017). Towards Evaluating the Robustness of Neural Networks. IEEE SP.

[2] Madry, A., et al. (2018). Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR.

[3] Goodfellow, I., et al. (2015). Explaining and Harnessing Adversarial Examples. ICLR.

[4] Papernot, N., et al. (2016). Distillation as a Defense to Adversarial Perturbations. IEEE SP.

[5] Dwork, C., & Roth, A. (2014). The Algorithmic Foundations of Differential Privacy. FnTCS.

[6] Shokri, R., et al. (2017). Membership Inference Attacks Against ML Models. IEEE SP.

[7] Bonawitz, K., et al. (2019). Towards Federated Learning at Scale. MLSys.

[8] Tramer, F., et al. (2016). Stealing Machine Learning Models via Prediction APIs. USENIX Security.

[9] Biggio, B., et al. (2013). Evasion Attacks at Test Time. ECML PKDD.

[10] Liu, Y., et al. (2018). Transferable Adversarial Examples. ICLR.

[11] Moosavi-Dezfooli, S.-M., et al. (2016). DeepFool. CVPR.

[12] Jia, R., & Liang, P. (2017). Data Poisoning in Collaborative Filtering. NeurIPS.

[13] Wang, B., et al. (2019). Neural Cleanse: Backdoor Mitigation. IEEE SP.

[14] Li, X., et al. (2021). Few-Shot Learning Adversarial Vulnerability. ACM CCS.

[15] NIST AI Risk Management Framework. (2023). https://www.nist.gov/itl/ai-risk-management-framework

[16] RobustBench. (2023). https://robustbench.github.io

[17] MITRE ATLAS™. (2023). https://atlas.mitre.org

[18] Salem, A., et al. (2019). ML-Leaks: Membership Inference. NDSS.

[19] Hitaj, B., et al. (2017). GAN-Based Information Leakage in Deep Learning. ACM CCS.

[20] Song, C., et al. (2017). Models that Remember Too Much. ACM CCS.

[21] Dhenia, R. N. K. (2020). Harnessing big data and NLP for real-time market sentiment analysis across global news and social media. International Journal of Science and Research (IJSR), 9(2), 1974–1977. https://doi.org/10.21275/MS2002135041

[22] Dhenia, R. N. K., & Kanani, I. J. (2020). Data visualization best practices: Enhancing comprehension and decision making with effective visual analytics. International Journal of Science and Research (IJSR), 9(8), 1620–1624. https://doi.org/10.21275/MS2008135218

[23] Dhenia, R. N. K. (2020). Leveraging data analytics to combat pandemics: Real-time analytics for public health response. International Journal of Science and Research (IJSR), 9(12), 1945–1947. https://doi.org/10.21275/MS2012134656

[24] Kanani, I. J. (2020). Security misconfigurations in cloud-native web applications. International Journal of Science and Research (IJSR), 9(12), 1935–1938. https://doi.org/10.21275/MS2012131513

[25] Kanani, I. J. (2020). Securing data in motion and at rest: A cryptographic framework for cloud security. International Journal of Science and Research (IJSR), 9(2), 1965–1968. https://doi.org/10.21275/MS2002133823

[26] Kanani, I. J., & Sridhar, R. (2020). Cloud-native security: Securing serverless architectures. International Journal of Science and Research (IJSR), 9(8), 1612–1615. https://doi.org/10.21275/MS2008134043

[27] Sridhar, R. (2020). Leveraging open-source reuse: Implications for software maintenance. International Journal of Science and Research (IJSR), 9(2), 1969–1973. https://doi.org/10.21275/MS2002134347

[28] Sridhar, R. (2020). Preserving architectural integrity: Addressing the erosion of software design. International Journal of Science and Research (IJSR), 9(12), 1939–1944. https://doi.org/10.21275/MS2012134218

[29] Sridhar, R., & Dhenia, R. N. K. (2020). An analytical study of NoSQL database systems for big data applications. International Journal of Science and Research (IJSR), 9(8), 1616–1619. https://doi.org/10.21275/MS200813452

Published

2023-10-30

Issue

Section

Articles

How to Cite

1. Kanani IJ, Sridhar R, Dhenia RNK. Security-Centric Artificial Intelligence: Strengthening Machine Learning Systems against Emerging Threats. IJAIDSML [Internet]. 2023 Oct. 30 [cited 2025 Sep. 15];4(3):72-5. Available from: https://ijaidsml.org/index.php/ijaidsml/article/view/201