Automated Machine Learning (AutoML): Challenges and Future Trends in AI Model Optimization

Authors

  • Dr. Elias Novák, Institute of Artificial Intelligence, Prague University of Technology, Czech Republic

DOI:

https://doi.org/10.63282/3050-9262.IJAIDSML-V1I1P102

Keywords:

AutoML, Machine Learning, Hyperparameter Tuning, Model Selection, Explainability, Scalability, Federated Learning, Reinforcement Learning, Edge Computing, Predictive Analytics

Abstract

Automated Machine Learning (AutoML) has emerged as a pivotal technology in the field of artificial intelligence, aiming to automate the end-to-end process of machine learning model development. This paper provides a comprehensive overview of AutoML, including its definition, key components, and the challenges it faces. We examine the current state of AutoML, exploring the techniques and algorithms used in automated model selection, hyperparameter tuning, and neural architecture search. Additionally, we discuss practical applications of AutoML across different industries and highlight the ethical and computational challenges that need to be addressed. Finally, we outline future trends and research directions in AutoML, emphasizing the importance of explainability, scalability, and integration with other AI technologies.
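To make the hyperparameter-tuning component mentioned in the abstract concrete, the sketch below implements a minimal random search over a two-dimensional hyperparameter space. It is an illustrative toy, not the paper's method: `validation_score` is a hypothetical stand-in for training and evaluating a real model, and the search-space bounds are arbitrary assumptions.

```python
import random

# Hypothetical stand-in for a model's validation score as a function of
# two hyperparameters (learning rate and tree depth). In a real AutoML
# system this would train a model and evaluate it on held-out data.
def validation_score(lr, depth):
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 5) ** 2

# Sample one configuration: log-uniform learning rate, integer depth.
def sample_config(rng):
    return {"lr": 10 ** rng.uniform(-4, 0), "depth": rng.randint(1, 12)}

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = validation_score(cfg["lr"], cfg["depth"])
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, best_score)
```

Random search is the simplest baseline that AutoML systems improve upon; the Bayesian-optimization approaches surveyed in the paper (e.g., [4]) replace the independent sampling step with a model of the objective that proposes promising configurations.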

References

[1] Feurer, M., Klein, A., Eggensperger, K., Springenberg, J. T., Blum, M., & Hutter, F. (2015). Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems (pp. 2962-2970).

[2] Olson, R. S., & Moore, J. H. (2019). TPOT: A tree-based pipeline optimization tool for automating machine learning. In Automated Machine Learning (pp. 151-160). Springer.

[3] Jin, H., Song, Q., & Hu, X. (2019). Auto-Keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2331-2340).

[4] Bergstra, J., Bardenet, R., Bengio, Y., & Kégl, B. (2011). Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems (pp. 2546-2554).

[5] Zoph, B., & Le, Q. V. (2017). Neural architecture search with reinforcement learning. In International Conference on Learning Representations.

[6] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2017). Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations.

[7] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

[8] Kumar, A., & Zhang, T. (2017). Sample efficient active learning of causal trees. In Advances in Neural Information Processing Systems (pp. 6470-6480).

[9] Lipton, Z. C. (2018). The mythos of model interpretability. Queue, 16(3), 31-57.

[10] McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics (pp. 1273-1282).

Published

2020-02-06

Issue

Vol. 1 No. 1 (2020)

Section

Articles

How to Cite

Novák E. Automated Machine Learning (AutoML): Challenges and Future Trends in AI Model Optimization. IJAIDSML [Internet]. 2020 Feb. 6 [cited 2025 Sep. 15];1(1):11-2. Available from: https://ijaidsml.org/index.php/ijaidsml/article/view/12