Chenglei Si
Verified email at stanford.edu - Homepage

Title · Cited by · Year
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
1743 · 2023
Prompting GPT-3 to be reliable
C Si, Z Gan, Z Yang, S Wang, J Wang, J Boyd-Graber, L Wang
arXiv preprint arXiv:2210.09150, 2022
263 · 2022
Between words and characters: A brief history of open-vocabulary modeling and tokenization in NLP
SJ Mielke, Z Alyafeai, E Salesky, C Raffel, M Dey, M Gallé, A Raja, C Si, ...
arXiv preprint arXiv:2112.10508, 2021
194* · 2021
CharBERT: Character-aware pre-trained language model
W Ma, Y Cui, C Si, T Liu, S Wang, G Hu
arXiv preprint arXiv:2011.01513, 2020
120 · 2020
Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning
C Si, Z Zhang, F Qi, Z Liu, Y Wang, Q Liu, M Sun
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
103* · 2021
The Prompt Report: A Systematic Survey of Prompting Techniques
S Schulhoff, M Ilie, N Balepur, K Kahadze, A Liu, C Si, Y Li, A Gupta, ...
arXiv preprint arXiv:2406.06608, 2024
85* · 2024
Best practices and lessons learned on synthetic data for language models
R Liu, J Wei, F Liu, C Si, Y Zhang, J Rao, S Zheng, D Peng, D Yang, ...
arXiv preprint arXiv:2404.07503, 2024
84* · 2024
Ignore this title and HackAPrompt: Exposing systemic vulnerabilities of LLMs through a global prompt hacking competition
S Schulhoff, J Pinto, A Khan, LF Bouchard, C Si, S Anati, V Tagliabue, ...
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
78* · 2023
Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers
C Si, D Yang, T Hashimoto
arXiv preprint arXiv:2409.04109, 2024
55 · 2024
What does BERT learn from multiple-choice reading comprehension datasets?
C Si, S Wang, MY Kan, J Jiang
arXiv preprint arXiv:1910.12391, 2019
51 · 2019
Design2Code: How far are we from automating front-end engineering?
C Si, Y Zhang, Z Yang, R Liu, D Yang
arXiv preprint arXiv:2403.03163, 2024
38* · 2024
Measuring inductive biases of in-context learning with underspecified demonstrations
C Si, D Friedman, N Joshi, S Feng, D Chen, H He
arXiv preprint arXiv:2305.13299, 2023
33 · 2023
Re-examining calibration: The case of question answering
C Si, C Zhao, S Min, J Boyd-Graber
arXiv preprint arXiv:2205.12507, 2022
32* · 2022
Benchmarking robustness of machine reading comprehension models
C Si, Z Yang, Y Cui, W Ma, T Liu, S Wang
arXiv preprint arXiv:2004.14004, 2020
29 · 2020
Sub-character tokenization for Chinese pretrained language models
C Si, Z Zhang, Y Chen, F Qi, X Wang, Z Liu, Y Wang, Q Liu, M Sun
Transactions of the Association for Computational Linguistics 11, 469-487, 2023
25* · 2023
What's in a name? Answer equivalence for open-domain question answering
C Si, C Zhao, J Boyd-Graber
arXiv preprint arXiv:2109.05289, 2021
24 · 2021
Large Language Models Help Humans Verify Truthfulness--Except When They Are Convincingly Wrong
C Si, N Goyal, ST Wu, C Zhao, S Feng, H Daumé III, J Boyd-Graber
arXiv preprint arXiv:2310.12558, 2023
20 · 2023
Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions
H Shen, T Knearem, R Ghosh, K Alkiek, K Krishna, Y Liu, Z Ma, S Petridis, ...
arXiv preprint arXiv:2406.09264, 2024
17 · 2024
Getting MoRE out of mixture of language model reasoning experts
C Si, W Shi, C Zhao, L Zettlemoyer, J Boyd-Graber
arXiv preprint arXiv:2305.14628, 2023
17 · 2023
Sentiment aware neural machine translation
C Si, K Wu, A Aw, MY Kan
Proceedings of the 6th Workshop on Asian Translation, 200-206, 2019
16 · 2019