Advances in adversarial attacks and defenses in computer vision: A survey
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …
Algorithms for verifying deep neural networks
Deep neural networks are widely used for nonlinear function approximation, with
applications ranging from computer vision to control. Although these networks involve the …
Certifying llm safety against adversarial prompting
Large language models (LLMs) are vulnerable to adversarial attacks that add malicious
tokens to an input prompt to bypass the safety guardrails of an LLM and cause it to produce …
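The snippet above concerns certifying LLM safety against adversarial suffixes appended to a prompt. A minimal pure-Python sketch of the suffix-mode "erase-and-check" idea (the token lists and the `filt` safety filter here are hypothetical toys, not the authors' implementation):

```python
def erase_and_check_suffix(tokens, is_harmful, max_erase=3):
    """Suffix-mode erase-and-check: declare a prompt harmful if the
    safety filter fires on the prompt itself or on any version with up
    to max_erase trailing tokens erased."""
    for i in range(min(max_erase, len(tokens) - 1) + 1):
        if is_harmful(tokens[:len(tokens) - i]):
            return True
    return False

# hypothetical filter that an adversarial suffix token manages to fool
filt = lambda t: "attack" in t and "suffix" not in t

erase_and_check_suffix(["attack", "suffix"], filt)  # True: erasing the suffix exposes the attack
erase_and_check_suffix(["hello", "world"], filt)    # False: a benign prompt stays benign
```

The point of the construction is that any adversarial suffix of bounded length is guaranteed to be fully erased by one of the checked subsequences, so the base filter sees the clean harmful prompt.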
A survey of safety and trustworthiness of large language models through the lens of verification and validation
Large language models (LLMs) have sparked a new wave of AI enthusiasm through their ability to
engage end-users in human-level conversations with detailed and articulate answers across …
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
The field of defense strategies against adversarial attacks has grown significantly in recent
years, but progress is hampered as the evaluation of adversarial defenses is often …
Overfitting in adversarially robust deep learning
It is common practice in deep learning to use overparameterized networks and train for as
long as possible; there are numerous studies that show, both theoretically and empirically …
Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification
Bound propagation based incomplete neural network verifiers such as CROWN are very
efficient and can significantly accelerate branch-and-bound (BaB) based complete …
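The snippet above concerns bound-propagation verifiers such as CROWN. A far looser relative, plain interval bound propagation through one linear layer, illustrates the underlying idea of pushing an input box through the network layer by layer (the weight matrix and bounds below are hypothetical toy values):

```python
def linear_interval_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = W x + b.
    Each output's lower bound takes lo[j] where the weight is positive
    and hi[j] where it is negative (and vice versa for the upper bound)."""
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        out_lo.append(bi + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
        out_hi.append(bi + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
    return out_lo, out_hi

# one output neuron y = x0 - x1 over the unit box [0,1] x [0,1]
linear_interval_bounds([[1.0, -1.0]], [0.0], [0.0, 0.0], [1.0, 1.0])  # ([-1.0], [1.0])
```

CROWN-style methods tighten this by propagating linear lower/upper relaxations instead of constant intervals, and Beta-CROWN further encodes per-neuron branch-and-bound split constraints.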
First three years of the international verification of neural networks competition (VNN-COMP)
This paper presents a summary and meta-analysis of the first three iterations of the annual
International Verification of Neural Networks Competition (VNN-COMP), held in 2020, 2021 …
Certified adversarial robustness via randomized smoothing
We show how to turn any classifier that classifies well under Gaussian noise into a new
classifier that is certifiably robust to adversarial perturbations under the L2 norm. While this …
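The construction described above (randomized smoothing) can be sketched in a few lines: classify many Gaussian-noised copies of the input, take the majority vote, and certify an L2 radius from the top-class probability. The base classifier and parameters here are hypothetical toys, and the probability estimate is a crude point estimate rather than the binomial lower bound the paper uses:

```python
import random
from collections import Counter
from statistics import NormalDist

def base_classifier(x):
    # hypothetical base classifier: sign of the first coordinate
    return 1 if x[0] >= 0.0 else 0

def smoothed_predict(f, x, sigma=0.25, n=500, seed=0):
    """Monte-Carlo version of the smoothed classifier g(x) = argmax_c
    P[f(x + N(0, sigma^2 I)) = c], with certified L2 radius
    sigma * Phi^{-1}(p) when the top-class probability p exceeds 1/2."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n):
        counts[f([xi + rng.gauss(0.0, sigma) for xi in x])] += 1
    top_class, top_count = counts.most_common(1)[0]
    p = min(top_count / n, 1.0 - 1.0 / n)  # clamp so Phi^{-1} stays finite
    radius = sigma * NormalDist().inv_cdf(p) if p > 0.5 else 0.0
    return top_class, radius

cls, radius = smoothed_predict(base_classifier, [1.0, 0.0])  # class 1, positive radius
```

A point well inside a class region keeps its label under almost all noise draws, so the certified radius is strictly positive; near the decision boundary the vote splits and the radius shrinks toward zero.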
Uncovering the limits of adversarial training against norm-bounded adversarial examples
Adversarial training and its variants have become de facto standards for learning robust
deep neural networks. In this paper, we explore the landscape around adversarial training in …
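Adversarial training, as referenced above, alternates an inner attack step with an ordinary gradient update on the attacked point. A minimal sketch using a one-step FGSM inner maximization on a logistic model, where the input gradient is available in closed form (model, data, and step sizes are hypothetical, and real implementations typically use multi-step PGD):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """FGSM inner step for a logistic model p = sigmoid(w.x + b):
    move each coordinate by eps in the sign of the loss gradient,
    d loss / d x_i = (p - y) * w_i for the cross-entropy loss."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * math.copysign(1.0, (p - y) * wi) for xi, wi in zip(x, w)]

def adv_train_step(x, y, w, b, eps=0.1, lr=0.5):
    """One adversarial-training update: fit the model to the
    FGSM-perturbed point instead of the clean one."""
    x_adv = fgsm_example(x, y, w, b, eps)
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
    new_w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x_adv)]
    new_b = b - lr * (p - y)
    return new_w, new_b

x_adv = fgsm_example([0.5, -0.5], 1, [1.0, 1.0], 0.0, 0.1)  # each coordinate moved by eps
w2, b2 = adv_train_step([0.5, -0.5], 1, [1.0, 1.0], 0.0)
```

Because the model is trained against its own worst-case (approximate) perturbations, the loss surface around each training point is flattened, which is the robustness/overfitting landscape the paper explores.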