Automated machine learning: past, present and future

M Baratchi, C Wang, S Limmer, JN van Rijn… - Artificial Intelligence …, 2024 - Springer
Automated machine learning (AutoML) is a young research area aiming at making high-
performance machine learning techniques accessible to a broad set of users. This is …

A state-of-the-art review on adversarial machine learning in image classification

A Bajaj, DK Vishwakarma - Multimedia Tools and Applications, 2024 - Springer
Computer vision applications like traffic monitoring, security checks, self-driving cars,
medical imaging, etc., rely heavily on machine learning models. It raises an essential …

Adversarial robustness of neural networks from the perspective of Lipschitz calculus: A survey

MM Zühlke, D Kudenko - ACM Computing Surveys, 2024 - dl.acm.org
We survey the adversarial robustness of neural networks from the perspective of Lipschitz
calculus in a unifying fashion by expressing models, attacks and safety guarantees, that is, a …
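As a generic illustration of the Lipschitz-based certification idea this survey covers (not the survey's own formalism), the sketch below upper-bounds a ReLU MLP's global L2 Lipschitz constant by the product of its layers' spectral norms and turns the logit margin into a certified radius; the toy model, input, and margin test are assumptions made for illustration.

```python
# Generic sketch: certifying L2 robustness of a ReLU MLP via the
# product-of-spectral-norms upper bound on its Lipschitz constant.
import torch
import torch.nn as nn

model = nn.Sequential(          # toy classifier; weights assumed pre-trained
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def lipschitz_upper_bound(mlp: nn.Sequential) -> float:
    """Product of layer spectral norms; ReLU is 1-Lipschitz, so it adds nothing."""
    bound = 1.0
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

def certified_radius(x: torch.Tensor, mlp: nn.Sequential) -> float:
    """L2 radius within which the predicted class provably cannot change.

    If every logit is L-Lipschitz, any pairwise logit difference is at most
    2L-Lipschitz, so the prediction is stable for perturbations smaller than
    margin / (2L). The bound is sound but usually very loose.
    """
    logits = mlp(x).squeeze(0)
    top2 = torch.topk(logits, k=2).values
    margin = (top2[0] - top2[1]).item()
    L = lipschitz_upper_bound(mlp)
    return margin / (2.0 * L)

x = torch.randn(1, 784)  # placeholder input
print(f"certified L2 radius: {certified_radius(x, model):.4f}")
```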

Machine learning robustness: A primer

HB Braiek, F Khomh - Trustworthy AI in Medical Imaging, 2025 - Elsevier
This chapter explores the foundational concept of robustness in Machine Learning (ML) and
its integral role in establishing trustworthiness in Artificial Intelligence (AI) systems. The …

From robustness to explainability and back again

X Huang, J Marques-Silva - arXiv preprint arXiv:2306.03048, 2023 - arxiv.org
In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal
explainability offers important guarantees of rigor. However, formal explainability is hindered …

FairNNV: The Neural Network Verification Tool For Certifying Fairness

AM Tumlin, D Manzanas Lopez, P Robinette… - Proceedings of the 5th …, 2024 - dl.acm.org
Ensuring fairness in machine learning (ML) is vital, especially as these models are
increasingly used in socially critical financial decision-making processes such as credit …

Comparing differentiable logics for learning with logical constraints

T Flinkow, BA Pearlmutter, R Monahan - arXiv preprint arXiv:2407.03847, 2024 - arxiv.org
Extensive research on formal verification of machine learning systems indicates that
learning from data alone often fails to capture underlying background knowledge such as …
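As a rough, generic illustration of learning with a logical constraint via a differentiable relaxation (not the specific logics or benchmarks this paper compares), the sketch below encodes one implication constraint with the Reichenbach relaxation and adds it as a penalty to a cross-entropy loss; the constraint, model, and weighting are hypothetical.

```python
# Generic sketch of training with a fuzzy-logic relaxation of a constraint;
# the constraint and relaxation here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def implies(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Reichenbach relaxation of (a -> b): 1 - a + a*b, with a, b in [0, 1]."""
    return 1.0 - a + a * b

def constraint_loss(x: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
    """Assumed background knowledge: 'if the first feature is positive,
    predict class 1 with high confidence'. The antecedent is a soft
    indicator of x[:, 0] > 0."""
    antecedent = torch.sigmoid(10.0 * x[:, 0])
    consequent = probs[:, 1]
    return (1.0 - implies(antecedent, consequent)).mean()

def training_step(x, y, lam=0.5):
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    loss = F.cross_entropy(logits, y) + lam * constraint_loss(x, probs)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# one step on random placeholder data
x, y = torch.randn(64, 2), torch.randint(0, 2, (64,))
print(training_step(x, y))
```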

A qualitative AI security risk assessment of autonomous vehicles

K Grosse, A Alahi - Transportation Research Part C: Emerging …, 2024 - Elsevier
This paper systematically analyzes the security risks associated with artificial intelligence
(AI) components in autonomous vehicles (AVs). Given the increasing reliance on AI for …

How secure are large language models (LLMs) for navigation in urban environments?

C Wen, J Liang, S Yuan, H Huang, Y Fang - arXiv preprint arXiv …, 2024 - arxiv.org
In the field of robotics and automation, navigation systems based on Large Language
Models (LLMs) have recently shown impressive performance. However, the security aspects …

Adversarial training of deep neural networks guided by texture and structural information

Z Wang, H Wang, C Tian, Y ** - Proceedings of the 31st ACM …, 2023 - dl.acm.org
Adversarial training (AT) is one of the most effective ways for deep neural network models to
resist adversarial examples. However, there is still a significant gap between robust training …
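As a plain baseline sketch of the adversarial training this entry builds on (standard PGD adversarial training in the style of Madry et al., not the texture- and structure-guided variant the paper proposes), the code below generates L-infinity adversarial examples in an inner loop and trains on them; the budget, step size, and loader are assumptions.

```python
# Baseline sketch of PGD adversarial training; the texture/structure-guided
# variant in the paper above is not reproduced here.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient ascent on the loss under an L-infinity budget eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)                              # valid image range
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)           # inner maximization
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```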