Florian Tramèr
Assistant Professor of Computer Science, ETH Zurich
Verified email at inf.ethz.ch - Homepage
Title
Cited by
Year
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
Foundations and Trends® in Machine Learning 14 (1), 2019
Cited by 7038 · 2019
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 5011 · 2021
Ensemble Adversarial Training: Attacks and Defenses
F Tramèr, A Kurakin, N Papernot, I Goodfellow, D Boneh, P McDaniel
International Conference on Learning Representations (ICLR), 2018
Cited by 3492 · 2018
Stealing Machine Learning Models via Prediction APIs
F Tramèr, F Zhang, A Juels, MK Reiter, T Ristenpart
25th USENIX security symposium (USENIX Security 16), 601-618, 2016
Cited by 2426 · 2016
Extracting Training Data from Large Language Models
N Carlini, F Tramèr, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 2006 · 2021
On evaluating adversarial robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
Cited by 1058 · 2019
On adaptive attacks to adversarial example defenses
F Tramèr, N Carlini, W Brendel, A Madry
Conference on Neural Information Processing Systems (NeurIPS) 33, 2020
Cited by 959 · 2020
Membership Inference Attacks From First Principles
N Carlini, S Chien, M Nasr, S Song, A Terzis, F Tramèr
43rd IEEE Symposium on Security and Privacy (S&P 2022), 2022
Cited by 718 · 2022
Quantifying memorization across neural language models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramèr, C Zhang
International Conference on Learning Representations (ICLR), 2023
Cited by 716 · 2023
The space of transferable adversarial examples
F Tramèr, N Papernot, I Goodfellow, D Boneh, P McDaniel
arXiv preprint arXiv:1704.03453, 2017
Cited by 702 · 2017
Extracting training data from diffusion models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramèr, B Balle, ...
32nd USENIX Security Symposium (USENIX Security 23), 5253-5270, 2023
Cited by 630 · 2023
Physical adversarial examples for object detectors
K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, F Tramèr, A Prakash, ...
12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018
Cited by 616 · 2018
Label-Only Membership Inference Attacks
CAC Choo, F Tramèr, N Carlini, N Papernot
International Conference on Machine Learning (ICML), 1964-1974, 2021
Cited by 553* · 2021
Slalom: Fast, verifiable and private execution of neural networks in trusted hardware
F Tramèr, D Boneh
International Conference on Learning Representations (ICLR), 2019
Cited by 489 · 2019
Adversarial training and robustness for multiple perturbations
F Tramèr, D Boneh
Conference on Neural Information Processing Systems (NeurIPS) 32, 2019
Cited by 449 · 2019
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
arXiv preprint arXiv:1912.04977
Cited by 426*
Sentinet: Detecting localized universal attacks against deep learning systems
E Chou, F Tramèr, G Pellegrino
2020 IEEE Security and Privacy Workshops (SPW), 48-54, 2020
Cited by 400 · 2020
Large language models can be strong differentially private learners
X Li, F Tramèr, P Liang, T Hashimoto
International Conference on Learning Representations (ICLR), 2022
Cited by 366 · 2022
On the opportunities and risks of foundation models. CoRR abs/2108.07258 (2021)
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
Preprint at https://arxiv.org/abs/2108.07258, 2021
Cited by 357* · 2021
Scalable extraction of training data from (production) language models
M Nasr, N Carlini, J Hayase, M Jagielski, AF Cooper, D Ippolito, ...
arXiv preprint arXiv:2311.17035, 2023
Cited by 301 · 2023
Articles 1–20