Eric Wallace
Title · Cited by · Year
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security 2021, 2020 · Cited by 1964
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
T Shin, Y Razeghi, RL Logan IV, E Wallace, S Singh
EMNLP 2020, 2020 · Cited by 1886
Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022 · Cited by 1350
Calibrate Before Use: Improving Few-Shot Performance of Language Models
TZ Zhao*, E Wallace*, S Feng, D Klein, S Singh
ICML 2021, 2021 · Cited by 1303
Universal Adversarial Triggers for Attacking and Analyzing NLP
E Wallace, S Feng, N Kandpal, M Gardner, S Singh
EMNLP 2019, 2019 · Cited by 935
InCoder: A Generative Model for Code Infilling and Synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
ICLR 2023, 2022 · Cited by 637
Extracting Training Data from Diffusion Models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
USENIX Security 2023, 2023 · Cited by 603
Evaluating Models' Local Decision Boundaries via Contrast Sets
M Gardner, Y Artzi, V Basmova, J Berant, B Bogin, S Chen, P Dasigi, ...
EMNLP Findings 2020, 2020 · Cited by 494
Pretrained Transformers Improve Out-of-Distribution Robustness
D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song
ACL 2020, 2020 · Cited by 485
Large Language Models Struggle to Learn Long-Tail Knowledge
N Kandpal, H Deng, A Roberts, E Wallace, C Raffel
ICML 2023, 2022 · Cited by 431
Pathologies of Neural Models Make Interpretations Difficult
S Feng, E Wallace, II Grissom, M Iyyer, P Rodriguez, J Boyd-Graber
EMNLP 2018, 2018 · Cited by 383
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020 · Cited by 333
Do NLP Models Know Numbers? Probing Numeracy in Embeddings
E Wallace*, Y Wang*, S Li, S Singh, M Gardner
EMNLP 2019, 2019 · Cited by 315
Scalable Extraction of Training Data from (Production) Language Models
M Nasr, N Carlini, J Hayase, M Jagielski, AF Cooper, D Ippolito, ...
arXiv preprint arXiv:2311.17035, 2023 · Cited by 281
Deduplicating Training Data Mitigates Privacy Risks in Language Models
N Kandpal, E Wallace, C Raffel
ICML 2022, 2022 · Cited by 251
Koala: A Dialogue Model for Academic Research
X Geng*, A Gudibande*, H Liu*, E Wallace*, P Abbeel, S Levine, D Song
BAIR Blog, 2023 · Cited by 222
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
RL Logan IV, I Balažević, E Wallace, F Petroni, S Singh, S Riedel
ACL Findings 2022, 2021 · Cited by 204
Concealed Data Poisoning Attacks on NLP Models
E Wallace*, TZ Zhao*, S Feng, S Singh
NAACL 2021, 2020 · Cited by 197
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples
E Wallace, P Rodriguez, S Feng, I Yamada, J Boyd-Graber
TACL 2019, 2019 · Cited by 193*
The False Promise of Imitating Proprietary LLMs
A Gudibande*, E Wallace*, C Snell*, X Geng, H Liu, P Abbeel, S Levine, ...
ICLR 2024, 2023 · Cited by 173
Articles 1–20