A survey of adversarial defenses and robustness in NLP
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …
A review of formal methods applied to machine learning
We review state-of-the-art formal methods applied to the emerging field of the verification of
machine learning systems. Formal methods can provide rigorous correctness guarantees on …
A survey of safety and trustworthiness of large language models through the lens of verification and validation
Large language models (LLMs) have set off a new wave of AI enthusiasm for their ability to
engage end-users in human-level conversations with detailed and articulate answers across …
Are formal methods applicable to machine learning and artificial intelligence?
Formal approaches can provide strict correctness guarantees for the development of both
hardware and software systems. In this work, we examine state-of-the-art formal methods for …
Evaluating the robustness of neural networks: An extreme value theory approach
The robustness of neural networks to adversarial examples has received great attention due
to its security implications. Despite various attack approaches to crafting visually imperceptible …
SoK: Certified robustness for deep neural networks
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …
Automatic perturbation analysis for scalable certified robustness and beyond
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes
provable linear bounds of output neurons given a certain amount of input perturbation, has …
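The snippet above describes what LiRPA computes: provable linear lower and upper bounds on output neurons under bounded input perturbation. A minimal sketch of how such bounds can be obtained with the auto_LiRPA library released alongside that paper is shown below; the toy network, the epsilon value, and the zero input are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Toy two-layer classifier (an assumption for illustration; any nn.Module
# supported by the library is wrapped the same way).
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.zeros(1, 4)  # nominal input (assumed for the example)

# Wrap the model so auto_LiRPA can propagate linear relaxations through it.
bounded_model = BoundedModule(model, torch.empty_like(x))

# Define an L-infinity ball of radius eps around the nominal input.
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.1)
bounded_x = BoundedTensor(x, ptb)

# Backward-mode LiRPA (CROWN-style) returns provable lower/upper bounds on
# each output logit, valid for every input inside the perturbation ball.
lb, ub = bounded_model.compute_bounds(x=(bounded_x,), method="backward")
print(lb, ub)
```

If the lower bound of the true class's logit exceeds the upper bounds of all other logits, the prediction is certified robust within the ball.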
Causality-based neural network repair
Neural networks have achieved notable success in a wide range of applications. Their
widespread adoption also raises concerns about their dependability and reliability. Similar to …
On recurrent neural networks for learning-based control: recent results and ideas for future developments
This paper discusses and analyzes the potential of Recurrent Neural Networks
(RNNs) in control design applications. The main families of RNNs are considered, namely …
Robustness verification for transformers
Robustness verification, which aims to formally certify the prediction behavior of neural
networks, has become an important tool for understanding model behavior and obtaining …