Nicholas Carlini
Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Towards evaluating the robustness of neural networks
N Carlini, D Wagner
2017 IEEE Symposium on Security and Privacy (SP), 39-57, 2017
Cited by 10619, 2017
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
K Sohn, D Berthelot, CL Li, Z Zhang, N Carlini, ED Cubuk, A Kurakin, ...
arXiv preprint arXiv:2001.07685, 2020
Cited by 4130, 2020
MixMatch: A holistic approach to semi-supervised learning
D Berthelot, N Carlini, I Goodfellow, N Papernot, A Oliver, CA Raffel
Advances in Neural Information Processing Systems, 5050-5060, 2019
Cited by 3904, 2019
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
A Athalye, N Carlini, D Wagner
ICML 2018, 2018
Cited by 3717, 2018
Adversarial examples are not easily detected: Bypassing ten detection methods
N Carlini, D Wagner
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security …, 2017
Cited by 2130, 2017
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 1942, 2021
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
N Carlini, C Liu, J Kos, Ú Erlingsson, D Song
28th USENIX Security Symposium (USENIX Security 19), 2019
Cited by 1504*, 2019
Audio adversarial examples: Targeted attacks on speech-to-text
N Carlini, D Wagner
2018 IEEE Security and Privacy Workshops (SPW), 1-7, 2018
Cited by 1386, 2018
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
D Berthelot, N Carlini, ED Cubuk, A Kurakin, K Sohn, H Zhang, C Raffel
arXiv preprint arXiv:1911.09785, 2019
Cited by 1317, 2019
Universal and transferable adversarial attacks on aligned language models
A Zou, Z Wang, N Carlini, M Nasr, JZ Kolter, M Fredrikson
arXiv preprint arXiv:2307.15043, 2023
Cited by 1070, 2023
On Evaluating Adversarial Robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
Cited by 1058, 2019
On adaptive attacks to adversarial example defenses
F Tramer, N Carlini, W Brendel, A Madry
Advances in Neural Information Processing Systems 33, 1633-1645, 2020
Cited by 951, 2020
Hidden Voice Commands
N Carlini, P Mishra, T Vaidya, Y Zhang, M Sherr, C Shields, D Wagner, ...
USENIX Security Symposium, 513-530, 2016
Cited by 812, 2016
cleverhans v2.0.0: an adversarial machine learning library
N Papernot, N Carlini, I Goodfellow, R Feinman, F Faghri, A Matyasko, ...
arXiv preprint arXiv:1610.00768, 2016
Cited by 769*, 2016
Membership inference attacks from first principles
N Carlini, S Chien, M Nasr, S Song, A Terzis, F Tramer
2022 IEEE Symposium on Security and Privacy (SP), 1897-1914, 2022
Cited by 714, 2022
Quantifying memorization across neural language models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramer, C Zhang
arXiv preprint arXiv:2202.07646, 2022
Cited by 696, 2022
Measuring Robustness to Natural Distribution Shifts in Image Classification
R Taori, A Dave, V Shankar, N Carlini, B Recht, L Schmidt
arXiv preprint arXiv:2007.00644, 2020
Cited by 623, 2020
Extracting Training Data from Diffusion Models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramèr, B Balle, ...
arXiv preprint arXiv:2301.13188, 2023
Cited by 603, 2023
Control-flow bending: On the effectiveness of control-flow integrity
N Carlini, A Barresi, M Payer, D Wagner, TR Gross
24th USENIX Security Symposium (USENIX Security 15), 161-176, 2015
Cited by 600, 2015
Deduplicating Training Data Makes Language Models Better
K Lee, D Ippolito, A Nystrom, C Zhang, D Eck, C Callison-Burch, N Carlini
arXiv preprint arXiv:2107.06499, 2021
Cited by 588, 2021
Articles 1–20