A qualitative AI security risk assessment of autonomous vehicles

K Grosse, A Alahi - Transportation Research Part C: Emerging …, 2024 - Elsevier
This paper systematically analyzes the security risks associated with artificial intelligence
(AI) components in autonomous vehicles (AVs). Given the increasing reliance on AI for …

Backdoor threats from compromised foundation models to federated learning

X Li, S Wang, C Wu, H Zhou… - arXiv preprint arXiv …, 2023 - lixi1994.github.io
Federated learning (FL) represents a novel paradigm in machine learning, addressing
critical issues related to data privacy and security, yet it suffers from data insufficiency and …

Backdoor Attack and Defense on Deep Learning: A Survey

Y Bai, G Xing, H Wu, Z Rao, C Ma… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Deep learning, as an important branch of machine learning, has been widely applied in
computer vision, natural language processing, speech recognition, and more. However …

Unveiling backdoor risks brought by foundation models in heterogeneous federated learning

X Li, C Wu, J Wang - Pacific-Asia Conference on Knowledge Discovery …, 2024 - Springer
The foundation models (FMs) have been used to generate synthetic public datasets for the
heterogeneous federated learning (HFL) problem where each client uses a unique model …

Vulnerabilities of foundation model integrated federated learning under adversarial threats

C Wu, X Li, J Wang - arXiv preprint arXiv:2401.10375, 2024 - arxiv.org
Federated Learning (FL) addresses critical issues in machine learning related to data
privacy and security, yet it suffers from data insufficiency and imbalance under certain …

Poisoning Attacks and Defenses Against Machine Learning Classifiers

X Li - 2024 - search.proquest.com
Data Poisoning (DP) is a potent attack that leads trained classifiers to exhibit undesirable
behaviors. DP attacks present significant risks to machine learning classifiers across various …