Weijia Shi
Verified email at uw.edu - Homepage
Title
Cited by
Year
REPLUG: Retrieval-augmented black-box language models
W Shi, S Min, M Yasunaga, M Seo, R James, M Lewis, L Zettlemoyer, ...
NAACL 2024, 2023
508* · 2023
Fine-grained human feedback gives better rewards for language model training
Z Wu, Y Hu, W Shi, N Dziri, A Suhr, P Ammanabrolu, NA Smith, ...
NeurIPS 2023 (Spotlight), 2024
263* · 2024
One embedder, any task: Instruction-finetuned text embeddings
H Su*, W Shi*, J Kasai, Y Wang, Y Hu, M Ostendorf, W Yih, NA Smith, ...
ACL 2023, 2022
248 · 2022
Selective annotation makes language models better few-shot learners
H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin, R Zhang, M Ostendorf, ...
ICLR 2023, 2022
231* · 2022
Detecting pretraining data from large language models
W Shi, A Ajith, M Xia, Y Huang, D Liu, T Blevins, D Chen, L Zettlemoyer
ICLR 2024, 2023
214 · 2023
Embedding uncertain knowledge graphs
X Chen, M Chen, W Shi, Y Sun, C Zaniolo
AAAI 2019, 2019
159 · 2019
Examining gender bias in languages with grammatical gender
P Zhou, W Shi, J Zhao, KH Huang, M Chen, R Cotterell, KW Chang
EMNLP 2019, 2019
144* · 2019
Promptcap: Prompt-guided task-aware image captioning
Y Hu, H Hua, Z Yang, W Shi, NA Smith, J Luo
ICCV 2023, 2022
143* · 2022
Trusting your evidence: Hallucinate less with context-aware decoding
W Shi, X Han, M Lewis, Y Tsvetkov, L Zettlemoyer, SW Yih
NAACL 2024, 2023
137 · 2023
RECOMP: Improving retrieval-augmented LMs with context compression and selective augmentation
F Xu, W Shi, E Choi
ICLR 2024, 2024
135* · 2024
Retrieval-augmented multimodal language modeling
M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ...
ICML 2023, 2022
133 · 2022
Ra-dit: Retrieval-augmented dual instruction tuning
XV Lin, X Chen, M Chen, W Shi, M Lomeli, R James, P Rodriguez, J Kahn, ...
ICLR 2024, 2023
107 · 2023
On tractable representations of binary neural networks
W Shi, A Shih, A Darwiche, A Choi
KR 2020, 2020
100* · 2020
Do membership inference attacks work on large language models?
M Duan, A Suri, N Mireshghallah, S Min, W Shi, L Zettlemoyer, Y Tsvetkov, ...
COLM 2024, 2024
75* · 2024
Retrofitting contextualized word embeddings with paraphrases
W Shi, M Chen, P Zhou, KW Chang
EMNLP 2019, 2019
75* · 2019
kNN-Prompt: Nearest Neighbor Zero-Shot Inference
W Shi, J Michael, S Gururangan, L Zettlemoyer
ACL 2022, 2022
68* · 2022
Nonparametric masked language modeling
S Min, W Shi, M Lewis, X Chen, W Yih, H Hajishirzi, L Zettlemoyer
ACL 2023, 2022
66 · 2022
In-context pretraining: Language modeling beyond document boundaries
W Shi, S Min, M Lomeli, C Zhou, M Li, V Lin, NA Smith, L Zettlemoyer, ...
ICLR 2024 (Spotlight), 2023
65* · 2023
Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models
S Feng, W Shi, Y Bai, V Balachandran, T He, Y Tsvetkov
ICLR 2024 (Oral), 2023
61* · 2023
Scaling expert language models with unsupervised domain discovery
S Gururangan, M Li, M Lewis, W Shi, T Althoff, NA Smith, L Zettlemoyer
arXiv preprint arXiv:2303.14177, 2023
61* · 2023
Articles 1–20