Unveiling code pre-trained models: Investigating syntax and semantics capacities
Code models have made significant advancements in code intelligence by encoding
knowledge about programming languages. While previous studies have explored the …
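A common way such studies measure this knowledge is probing: a lightweight classifier is trained on frozen embeddings, so high probe accuracy suggests the property is encoded in the representation. Below is a minimal sketch under that assumption; the microsoft/codebert-base checkpoint and the toy loop-detection task are illustrative choices, not the paper's setup.

```python
# Minimal probing sketch: train a linear probe on frozen embeddings from a
# pre-trained code model. The probing task (loop vs. no loop) is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

snippets = [
    "for i in range(10): total += i",   # contains a loop -> 1
    "while queue: node = queue.pop()",  # contains a loop -> 1
    "return a + b",                     # no loop -> 0
    "x = max(a, b)",                    # no loop -> 0
]
labels = [1, 1, 0, 0]

with torch.no_grad():
    enc = tokenizer(snippets, padding=True, return_tensors="pt")
    # Use the first-token embedding as a fixed-size code representation.
    feats = model(**enc).last_hidden_state[:, 0, :].numpy()

# If the frozen representation already encodes the property, a simple
# linear probe should separate the classes.
probe = LogisticRegression(max_iter=1000).fit(feats, labels)
print("train accuracy:", probe.score(feats, labels))
```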
One prompt word is enough to boost adversarial robustness for pre-trained vision-language models
Large pre-trained Vision-Language Models (VLMs) like CLIP, despite having
remarkable generalization ability, are highly vulnerable to adversarial examples. This work …
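The title suggests tuning text prompts rather than model weights to gain robustness. A minimal sketch of that idea: a single learnable prompt vector is tuned on PGD-perturbed images while both encoders stay frozen. The tiny linear encoder and random class embeddings are stand-ins for CLIP's towers, and shifting class embeddings by the prompt is a simplification of prepending a context token; none of this reproduces the paper's exact method.

```python
# Sketch of adversarial prompt tuning for a CLIP-like model: one learnable
# prompt vector is tuned on adversarial images; encoders stay frozen.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, C = 32, 5                              # embedding dim, number of classes
image_encoder = torch.nn.Linear(64, D)    # stand-in for CLIP's image tower
class_embeds = torch.randn(C, D)          # stand-in for encoded class names
for p in image_encoder.parameters():
    p.requires_grad_(False)

prompt = torch.zeros(D, requires_grad=True)   # the single learnable "prompt word"
opt = torch.optim.SGD([prompt], lr=0.1)

def logits(images):
    img = F.normalize(image_encoder(images), dim=-1)
    # Text embedding = class embedding shifted by the shared prompt vector;
    # a simplification of prepending a context token (assumption).
    txt = F.normalize(class_embeds + prompt, dim=-1)
    return img @ txt.t() / 0.07               # CLIP-style temperature

def pgd(images, y, eps=0.03, alpha=0.01, steps=5):
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(logits(images + delta), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (images + delta).detach()

for step in range(100):
    x, y = torch.randn(16, 64), torch.randint(0, C, (16,))
    x_adv = pgd(x, y)                          # attack the current prompt
    loss = F.cross_entropy(logits(x_adv), y)   # tune the prompt on adv. images
    opt.zero_grad(); loss.backward(); opt.step()
```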
Improving the accuracy-robustness trade-off of classifiers via adaptive smoothing
While prior research has proposed a plethora of methods that build neural classifiers robust
against adversarial attacks, practitioners are still reluctant to adopt them due to their …
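Adaptive smoothing, as described, combines a classifier with strong clean accuracy and a robust one, with the balance decided per input. A minimal sketch, assuming a convex combination of logits weighted by a small learned mixing network; all modules here are stand-ins, not the paper's architecture.

```python
# Sketch of adaptively mixing an accurate and a robust classifier: an
# input-dependent weight alpha(x) interpolates their outputs, so clean
# inputs lean on the accurate model and attacked inputs on the robust one.
import torch
import torch.nn as nn

class AdaptiveSmoothing(nn.Module):
    def __init__(self, f_accurate, g_robust, feat_dim):
        super().__init__()
        self.f = f_accurate          # strong clean accuracy, frozen
        self.g = g_robust            # adversarially trained, frozen
        # Small network predicting the mixing weight alpha(x) in [0, 1].
        self.mix = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        alpha = self.mix(x)          # shape (batch, 1)
        # Convex combination of the two models' logits.
        return (1 - alpha) * self.f(x) + alpha * self.g(x)

# Toy usage with linear stand-ins for the two base classifiers.
f = nn.Linear(128, 10)
g = nn.Linear(128, 10)
model = AdaptiveSmoothing(f, g, feat_dim=128)
print(model(torch.randn(4, 128)).shape)   # -> torch.Size([4, 10])
```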
Aroid: Improving adversarial robustness through online instance-wise data augmentation
Deep neural networks are vulnerable to adversarial examples. Adversarial training (AT) is
an effective defense against adversarial examples. However, AT is prone to overfitting, which …
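A plausible reading of "online instance-wise data augmentation" is a policy that picks an augmentation per input during adversarial training. The sketch below shows only that loop shape; the candidate operations, the policy network, and its (omitted) training objective are assumptions, not the paper's algorithm.

```python
# Sketch of adversarial training with online, instance-wise augmentation:
# a small policy network scores candidate augmentations per input and one
# is sampled for each instance before the PGD attack.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(64, 10)
policy = torch.nn.Linear(64, 3)           # scores 3 candidate augmentations
opt = torch.optim.SGD(model.parameters(), lr=0.1)

AUGS = [
    lambda x: x,                               # identity
    lambda x: x + 0.1 * torch.randn_like(x),   # additive noise
    lambda x: x * (torch.rand_like(x) > 0.1),  # random masking, cutout-like
]

def pgd(x, y, eps=0.03, alpha=0.01, steps=5):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

for step in range(100):
    x, y = torch.randn(16, 64), torch.randint(0, 10, (16,))
    # Sample one augmentation per instance from the (here: untrained) policy.
    idx = torch.distributions.Categorical(logits=policy(x)).sample()
    x_aug = torch.stack([AUGS[i](xi) for i, xi in zip(idx.tolist(), x)])
    loss = F.cross_entropy(model(pgd(x_aug, y)), y)   # standard AT step
    opt.zero_grad(); loss.backward(); opt.step()
    # An online method would also update the policy toward augmentations
    # that counter AT overfitting; that objective is omitted here.
```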
MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers
Adversarial robustness often comes at the cost of degraded accuracy, impeding the real-life
application of robust classification models. Training-based solutions for better trade-offs are …
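Training-free here means the base models are untouched: only a handful of output-side parameters would be calibrated. A minimal sketch, assuming the robust model's logits pass through a clamp-power-scale nonlinearity before a probability-space mix; the constants and the exact transform are hypothetical placeholders.

```python
# Sketch of training-free nonlinear mixing: the robust model's logits are
# reshaped by a nonlinearity before its probabilities are mixed with the
# accurate model's. Base models are linear stand-ins (assumption).
import torch
import torch.nn.functional as F

f_acc = torch.nn.Linear(128, 10)     # accurate base classifier (frozen)
g_rob = torch.nn.Linear(128, 10)     # robust base classifier (frozen)

def mixed_forward(x, s=2.0, p=3.0, c=0.5, alpha=0.6):
    z = g_rob(x)
    z = z - z.amax(dim=-1, keepdim=True)        # shift so the top logit is 0
    z_t = s * torch.clamp(z + c, min=0) ** p    # nonlinear confidence boost
    # Probability-space convex combination of the two classifiers.
    return (1 - alpha) * F.softmax(f_acc(x), -1) + alpha * F.softmax(z_t, -1)

probs = mixed_forward(torch.randn(4, 128))
print(probs.sum(dim=-1))   # rows sum to 1, as expected for probabilities
```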
Towards Robust Visual Classification through Adversarial Training
L Li - 2024 - kclpure.kcl.ac.uk
Although deep neural networks (DNNs) have demonstrated remarkable capabilities, they
are vulnerable to adversarial examples. Adversarial examples are input data perturbed by …
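As a concrete illustration of the perturbation described above, the FGSM attack moves an input a small, bounded step along the sign of the loss gradient; the linear model below stands in for a trained DNN (assumption).

```python
# Sketch of how an adversarial example is crafted with FGSM: perturb the
# input in the direction of the loss gradient's sign, bounded by eps.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)         # stand-in for a trained classifier
x = torch.rand(1, 784)                   # a "clean" input in [0, 1]
y = torch.tensor([3])                    # its true label

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
loss.backward()

eps = 0.03                               # perturbation budget (L-infinity)
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
print((x_adv - x.detach()).abs().max())  # perturbation stays within eps
```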