Training data influence analysis and estimation: A survey
Z Hammoudeh, D Lowd - Machine Learning, 2024 - Springer
Good models require good training data. For overparameterized deep models, the causal
relationship between training data and model predictions is increasingly opaque and poorly …
Certifiably Robust RAG against Retrieval Corruption
Retrieval-augmented generation (RAG) has been shown vulnerable to retrieval corruption
attacks: an attacker can inject malicious passages into retrieval results to induce inaccurate …
Provable Robustness against a Union of L_0 Adversarial Attacks
Z Hammoudeh, D Lowd - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
Sparse or L0 adversarial attacks arbitrarily perturb an unknown subset of the features. L0
robustness analysis is particularly well-suited for heterogeneous (tabular) data where …
Semi-supervised image manipulation localization with residual enhancement
Images have become a significant medium for information transmission, and image
forensics has accordingly garnered widespread attention from researchers. Due to the scarcity of finely …
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Generalization of machine learning models can be severely compromised by data
poisoning, where adversarial changes are applied to the training data. This vulnerability has …
FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Few-shot classification with foundation models (e.g., CLIP, DINOv2, PaLM-2) enables users
to build an accurate classifier with a few labeled training samples (called support samples) …
Feature Partition Aggregation: A Fast Certified Defense Against a Union of Attacks
Z Hammoudeh, D Lowd - The Second Workshop on New Frontiers …, 2023 - openreview.net
Sparse or $\ell_0$ adversarial attacks arbitrarily perturb an unknown subset of the features.
$\ell_0$ robustness analysis is particularly well-suited for heterogeneous (tabular) data …
Prompting the Unseen: Detecting Hidden Backdoors in Black-Box Models
ZX Huang, JW Chen, ZP Zhang, CM Yu - arXiv preprint arXiv:2411.09540, 2024 - arxiv.org
Visual prompting (VP) is a new technique that adapts frozen models, well-trained on source-domain
tasks, to target-domain tasks. This study examines VP's benefits for black-box model …
domain tasks to target domain tasks. This study examines VP's benefits for black-box model …
Provable Robustness Against a Union of Adversarial Attacks
Z Hammoudeh, D Lowd - arXiv preprint arXiv:2302.11628, 2023 - arxiv.org
Sparse or $\ell_0$ adversarial attacks arbitrarily perturb an unknown subset of the features.
$\ell_0$ robustness analysis is particularly well-suited for heterogeneous (tabular) data …
Scalable Methods for Robust Machine Learning
AJ Levine - 2023 - search.proquest.com
In recent years, machine learning systems have been developed that demonstrate
remarkable performance on many tasks. However, naive metrics of performance, such as …