A survey of adversarial defenses and robustness in NLP

S Goyal, S Doddapaneni, MM Khapra… - ACM Computing …, 2023 - dl.acm.org
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …

A review of formal methods applied to machine learning

C Urban, A Miné - arXiv preprint arXiv:2104.02466, 2021 - arxiv.org
We review state-of-the-art formal methods applied to the emerging field of the verification of
machine learning systems. Formal methods can provide rigorous correctness guarantees on …

A survey of safety and trustworthiness of large language models through the lens of verification and validation

X Huang, W Ruan, W Huang, G Jin, Y Dong… - Artificial Intelligence …, 2024 - Springer
Large language models (LLMs) have sparked a new wave of AI enthusiasm for their ability to
engage end-users in human-level conversations with detailed and articulate answers across …

Are formal methods applicable to machine learning and artificial intelligence?

M Krichen, A Mihoub, MY Alzahrani… - … Conference of Smart …, 2022 - ieeexplore.ieee.org
Formal approaches can provide strict correctness guarantees for the development of both
hardware and software systems. In this work, we examine state-of-the-art formal methods for …

Evaluating the robustness of neural networks: An extreme value theory approach

TW Weng, H Zhang, PY Chen, J Yi, D Su, Y Gao… - arXiv preprint arXiv …, 2018 - arxiv.org
The robustness of neural networks to adversarial examples has received great attention due
to security implications. Despite various attack approaches to crafting visually imperceptible …
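
The method behind this entry, the CLEVER score, estimates a local Lipschitz constant of the classification margin from the extreme values of sampled gradient norms, then converts it into a robustness-radius estimate. The sketch below is a minimal illustration of that idea, not the authors' reference implementation; the function and parameter names, the batch sizes, and the L_inf sampling choice are illustrative assumptions.

```python
import torch
from scipy.stats import weibull_max


def clever_score(f, x0, c, j, eps=0.5, n_batches=50, batch_size=128):
    # Hedged sketch of the CLEVER estimator: f is a model returning logits,
    # x0 a single input with a leading batch dimension, c the predicted
    # class, and j the attack target class.
    maxima = []
    for _ in range(n_batches):
        # Sample points uniformly in an L_inf ball of radius eps around x0
        # (the paper also handles L_1/L_2 balls; L_inf keeps the sketch short).
        delta = (torch.rand(batch_size, *x0.shape[1:]) * 2 - 1) * eps
        x = (x0 + delta).detach().requires_grad_(True)
        out = f(x)
        g = out[:, c] - out[:, j]                 # margin function g(x)
        grads = torch.autograd.grad(g.sum(), x)[0]
        # For L_inf perturbations the relevant gradient norm is its dual, L_1.
        maxima.append(grads.flatten(1).norm(p=1, dim=1).max().item())
    # Fit a reverse Weibull distribution to the batch maxima; its location
    # parameter (the right endpoint of the support) estimates the local
    # Lipschitz constant of g near x0.
    _, loc, _ = weibull_max.fit(maxima)
    with torch.no_grad():
        margin = (f(x0)[:, c] - f(x0)[:, j]).item()
    return margin / loc
```

A larger score suggests no perturbation smaller than roughly that radius flips class c to j, though the estimate is statistical rather than a formal certificate.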

SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE symposium on security and privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Automatic perturbation analysis for scalable certified robustness and beyond

K Xu, Z Shi, H Zhang, Y Wang… - Advances in …, 2020 - proceedings.neurips.cc
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes
provable linear bounds of output neurons given a certain amount of input perturbation, has …
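
This paper's accompanying library, auto_LiRPA, wraps an ordinary PyTorch module and computes such provable output bounds automatically. Below is a minimal usage sketch following the library's documented quick-start pattern; the toy network and the eps value are placeholders, and exact API details should be checked against the library's current documentation.

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Toy classifier standing in for any PyTorch model.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.rand(1, 784)  # a single input to certify

# Wrap the model so auto_LiRPA can trace its computation graph.
model = BoundedModule(net, torch.empty_like(x))

# Declare an L_inf perturbation of radius eps around x.
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)
bounded_x = BoundedTensor(x, ptb)

# CROWN-style backward linear relaxation yields provable lower and upper
# bounds on every output logit over the entire perturbation set.
lb, ub = model.compute_bounds(x=(bounded_x,), method="CROWN")
print("logit lower bounds:", lb)
print("logit upper bounds:", ub)
```

If the lower bound of the true class exceeds the upper bounds of all other classes, the prediction is certified robust for that perturbation radius.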

Causality-based neural network repair

B Sun, J Sun, LH Pham, J Shi - … of the 44th International Conference on …, 2022 - dl.acm.org
Neural networks have achieved notable success in a wide range of applications. Their
widespread adoption also raises concerns about their dependability and reliability. Similar to …

On recurrent neural networks for learning-based control: recent results and ideas for future developments

F Bonassi, M Farina, J Xie, R Scattolini - Journal of Process Control, 2022 - Elsevier
This paper discusses and analyzes the potential of Recurrent Neural Networks
(RNN) in control design applications. The main families of RNN are considered, namely …

Robustness verification for transformers

Z Shi, H Zhang, KW Chang, M Huang… - arXiv preprint arXiv …, 2020 - arxiv.org
Robustness verification that aims to formally certify the prediction behavior of neural
networks has become an important tool for understanding model behavior and obtaining …