Ruiqi Zhong
Verified email at berkeley.edu - Homepage
Title · Cited by · Year
InCoder: A generative model for code infilling and synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
arXiv preprint arXiv:2204.05999, 2022
641 · 2022
UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models
T Xie, CH Wu, P Shi, R Zhong, T Scholak, M Yasunaga, CS Wu, M Zhong, ...
arXiv preprint arXiv:2201.05966, 2022
320* · 2022
DS-1000: A natural and reliable benchmark for data science code generation
Y Lai, C Li, Y Wang, T Zhang, R Zhong, L Zettlemoyer, W Yih, D Fried, ...
International Conference on Machine Learning, 18319-18345, 2023
235 · 2023
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
R Zhong, K Lee, Z Zhang, D Klein
EMNLP 2021, Findings, 2021
174 · 2021
Meta-learning via language model in-context tuning
Y Chen, R Zhong, S Zha, G Karypis, H He
arXiv preprint arXiv:2110.07814, 2021
140 · 2021
Foundational challenges in assuring alignment and safety of large language models
U Anwar, A Saparov, J Rando, D Paleka, M Turpin, P Hase, ES Lubana, ...
arXiv preprint arXiv:2404.09932, 2024
136 · 2024
Semantic evaluation for text-to-SQL with distilled test suites
R Zhong, T Yu, D Klein
EMNLP 2020, 2020
124 · 2020
Fine-grained sentiment analysis with faithful attention
R Zhong, S Shao, K McKeown
arXiv preprint arXiv:1908.06870, 2019
57 · 2019
Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level
R Zhong, D Ghosh, D Klein, J Steinhardt
ACL 2021, Findings, 2021
48 · 2021
Do models explain themselves? Counterfactual simulatability of natural language explanations
Y Chen, R Zhong, N Ri, C Zhao, H He, J Steinhardt, Z Yu, K McKeown
arXiv preprint arXiv:2307.08678, 2023
46 · 2023
Describing differences between text distributions with natural language
R Zhong, C Snell, D Klein, J Steinhardt
International Conference on Machine Learning, 27099-27116, 2022
42 · 2022
Learning by distilling context
C Snell, D Klein, R Zhong
arXiv preprint arXiv:2209.15189, 2022
41 · 2022
Subspace embedding and linear regression with Orlicz norm
A Andoni, C Lin, Y Sheng, P Zhong, R Zhong
International Conference on Machine Learning, 224-233, 2018
40 · 2018
Goal driven discovery of distributional differences via language descriptions
R Zhong, P Zhang, S Li, J Ahn, D Klein, J Steinhardt
Advances in Neural Information Processing Systems 36, 40204-40237, 2023
39 · 2023
Approximating how single head attention learns
C Snell, R Zhong, D Klein, J Steinhardt
arXiv preprint arXiv:2103.07601, 2021
37 · 2021
Goal-driven explainable clustering via language descriptions
Z Wang, J Shang, R Zhong
arXiv preprint arXiv:2305.13749, 2023
33 · 2023
Describing differences in image sets with natural language
L Dunlap, Y Zhang, X Wang, R Zhong, T Darrell, J Steinhardt, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
28 · 2024
Semantic scaffolds for pseudocode-to-code generation
R Zhong, M Stern, D Klein
ACL 2020, 2020
25 · 2020
Detecting gang-involved escalation on social media using context
S Chang, R Zhong, E Adams, FT Lee, S Varia, D Patton, W Frey, C Kedzie, ...
EMNLP 2018, 2018
20 · 2018
GAIA: A multi-media multi-lingual knowledge extraction and hypothesis generation system
T Zhang, A Subburathinam, G Shi, L Huang, D Lu, X Pan, M Li, B Zhang, ...
TAC, 2018
16 · 2018
Articles 1–20