Adversarial Machine Learning: Exploring Security Vulnerabilities in AI-Driven Systems
DOI: https://doi.org/10.63282/3050-9262.IJAIDSML-V3I1P101

Keywords: Adversarial Machine Learning, Evasion Attacks, Poisoning Attacks, Model Robustness, Adversarial Training, Defensive Distillation, Federated Learning, Explainable AI (XAI), Deep Neural Networks (DNNs), Cybersecurity in AI

Abstract
Adversarial Machine Learning (AML) has emerged as a critical area of research at the intersection of artificial intelligence (AI) and cybersecurity. As AI-driven systems become increasingly integrated into sectors such as finance, healthcare, and autonomous vehicles, the security of these systems is of paramount importance. AML studies the vulnerabilities of machine learning (ML) models to adversarial attacks, in which malicious actors manipulate input data to deceive the models. This paper provides a comprehensive overview of AML, including its theoretical foundations, types of attacks, defense mechanisms, and real-world implications. We also discuss the challenges and future directions in this rapidly evolving field, emphasizing the need for robust and secure AI systems.
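
To make the idea of manipulated inputs concrete, the sketch below shows a single-step evasion attack in the style of the fast gradient sign method (FGSM) introduced by Goodfellow et al. It is an illustrative sketch only, assuming a differentiable PyTorch classifier (model), an input batch x with pixel values in [0, 1], true labels y_true, and a small perturbation budget epsilon; these names and settings are placeholders for the example and are not taken from the paper itself.

# Illustrative sketch (not from the paper): an FGSM-style evasion attack
# against a PyTorch image classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y_true, epsilon=0.03):
    """Perturb input x so the classifier's loss on the true label increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)
    loss.backward()
    # Step in the direction of the sign of the input gradient, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()

A perturbation of this magnitude is typically imperceptible to a human observer, yet it can be enough to change the model's prediction; this is the kind of vulnerability that evasion attacks exploit and that defenses such as adversarial training aim to mitigate.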