A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

X Huang, D Kroening, W Ruan, J Sharp, Y Sun… - Computer Science …, 2020 - Elsevier
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …

A review on data-driven constitutive laws for solids

JN Fuhg, G Anantha Padmanabha, N Bouklas… - … Methods in Engineering, 2024 - Springer
This review article highlights state-of-the-art data-driven techniques to discover, encode,
surrogate, or emulate constitutive laws that describe the path-independent and path …

A survey of safety and trustworthiness of large language models through the lens of verification and validation

X Huang, W Ruan, W Huang, G Jin, Y Dong… - Artificial Intelligence …, 2024 - Springer
Large language models (LLMs) have sparked a new wave of AI for their ability to
engage end-users in human-level conversations with detailed and articulate answers across …

Simple and principled uncertainty estimation with deterministic deep learning via distance awareness

J Liu, Z Lin, S Padhy, D Tran… - Advances in neural …, 2020 - proceedings.neurips.cc
Bayesian neural networks (BNNs) and deep ensembles are principled approaches to
estimate the predictive uncertainty of a deep learning model. However, their practicality in …

The Marabou framework for verification and analysis of deep neural networks

G Katz, DA Huang, D Ibeling, K Julian… - … Aided Verification: 31st …, 2019 - Springer
Deep neural networks are revolutionizing the way complex systems are designed.
Consequently, there is a pressing need for tools and techniques for network analysis and …

Efficient and accurate estimation of Lipschitz constants for deep neural networks

M Fazlyab, A Robey, H Hassani… - Advances in neural …, 2019 - proceedings.neurips.cc
Tight estimation of the Lipschitz constant for deep neural networks (DNNs) is useful in many
applications ranging from robustness certification of classifiers to stability analysis of closed …
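The claim above can be illustrated with a simple baseline: for a feed-forward ReLU network, the product of the layers' spectral norms is a valid (though typically loose) global Lipschitz upper bound, since ReLU is 1-Lipschitz; the surveyed paper obtains tighter estimates via semidefinite programming. The weight shapes below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weights of a small ReLU network mapping R^4 -> R^1.
weights = [
    rng.standard_normal((8, 4)),  # layer 1: 4 -> 8
    rng.standard_normal((4, 8)),  # layer 2: 8 -> 4
    rng.standard_normal((1, 4)),  # layer 3: 4 -> 1
]

# Naive global Lipschitz upper bound: product of per-layer spectral norms.
# Valid because ReLU is 1-Lipschitz and the 2-norm is submultiplicative.
bound = np.prod([np.linalg.norm(W, 2) for W in weights])
print(bound)
```

By submultiplicativity, `bound` always dominates the spectral norm of the composed linear map, which is why tighter activation-aware methods are needed for useful certificates.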

Are formal methods applicable to machine learning and artificial intelligence?

M Krichen, A Mihoub, MY Alzahrani… - … Conference of Smart …, 2022 - ieeexplore.ieee.org
Formal approaches can provide strict correctness guarantees for the development of both
hardware and software systems. In this work, we examine state-of-the-art formal methods for …

Concolic testing for deep neural networks

Y Sun, M Wu, W Ruan, X Huang… - Proceedings of the 33rd …, 2018 - dl.acm.org
Concolic testing combines program execution and symbolic analysis to explore the
execution paths of a software program. In this paper, we develop the first concolic testing …

Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks

Y Tsuzuku, I Sato, M Sugiyama - Advances in neural …, 2018 - proceedings.neurips.cc
The high sensitivity of neural networks to malicious perturbations of their inputs causes security
concerns. To take a steady step towards robust classifiers, we aim to create neural network …

Abduction-based explanations for machine learning models

A Ignatiev, N Narodytska, J Marques-Silva - Proceedings of the AAAI …, 2019 - aaai.org
The growing range of applications of Machine Learning (ML) in a multitude of settings
motivates the need to compute small explanations for predictions made. Small …