Yijia Shao
Stanford University
Verified email at pku.edu.cn - Homepage
Title
Cited by
Year
Continual Pre-training of Language Models
Z Ke, Y Shao, H Lin, T Konishi, G Kim, B Liu
The Eleventh International Conference on Learning Representations (ICLR 2023), 2023
120 · 2023
Quiet-STaR: Language models can teach themselves to think before speaking
E Zelikman, G Harik, Y Shao, V Jayasiri, N Haber, ND Goodman
arXiv preprint arXiv:2403.09629, 2024
66 · 2024
Assisting in writing Wikipedia-like articles from scratch with large language models
Y Shao, Y Jiang, TA Kanell, P Xu, O Khattab, MS Lam
arXiv preprint arXiv:2402.14207, 2024
40 · 2024
Continual training of language models for few-shot learning
Z Ke, H Lin, Y Shao, H Xu, L Shu, B Liu
arXiv preprint arXiv:2210.05549, 2022
35 · 2022
Adapting a language model while preserving its general knowledge
Z Ke, Y Shao, H Lin, H Xu, L Shu, B Liu
arXiv preprint arXiv:2301.08986, 2023
20 · 2023
Class-incremental learning based on label generation
Y Shao, Y Guo, D Zhao, B Liu
arXiv preprint arXiv:2306.12619, 2023
13 · 2023
Class incremental learning via likelihood ratio based task prediction
H Lin, Y Shao, W Qian, N Pan, Y Guo, B Liu
arXiv preprint arXiv:2309.15048, 2023
12 · 2023
LUNA: language understanding with number augmentations on transformers via number plugins and pre-training
H Han, J Xu, M Zhou, Y Shao, S Han, D Zhang
arXiv preprint arXiv:2212.02691, 2022
12 · 2022
Show, Don't Tell: Aligning Language Models with Demonstrated Feedback
O Shaikh, M Lam, J Hejna, Y Shao, M Bernstein, D Yang
arXiv preprint arXiv:2406.00888, 2024
10 · 2024
Quiet-STaR: Language models can teach themselves to think before speaking, 2024
E Zelikman, G Harik, Y Shao, V Jayasiri, N Haber, ND Goodman
URL https://arxiv.org/abs/2403.09629, 2024
10 · 2021
CMG: A class-mixed generation approach to out-of-distribution detection
M Wang, Y Shao, H Lin, W Hu, B Liu
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2022
9 · 2022
ACCENT: An automatic event commonsense evaluation metric for open-domain dialogue systems
S Ghazarian, Y Shao, R Han, A Galstyan, N Peng
arXiv preprint arXiv:2305.07797, 2023
5 · 2023
PrivacyLens: Evaluating privacy norm awareness of language models in action
Y Shao, T Li, W Shi, Y Liu, D Yang
arXiv preprint arXiv:2409.00138, 2024
4 · 2024
AnaMeta: A table understanding dataset of field metadata knowledge shared by multi-dimensional data analysis tasks
X He, M Zhou, M Zhou, J Xu, X Lv, T Li, Y Shao, S Han, Z Yuan, D Zhang
arXiv preprint arXiv:2209.00946, 2022
4 · 2022
Personalization of large language models: A survey
Z Zhang, RA Rossi, B Kveton, Y Shao, D Yang, H Zamani, F Dernoncourt, ...
arXiv preprint arXiv:2411.00027, 2024
3 · 2024
Into the unknown unknowns: Engaged human learning through participation in language model agent conversations
Y Jiang, Y Shao, D Ma, SJ Semnani, MS Lam
arXiv preprint arXiv:2408.15232, 2024
3 · 2024
FormLM: Recommending Creation Ideas for Online Forms by Modelling Semantic and Structural Information
Y Shao, M Zhou, Y Zhong, T Wu, H Han, S Han, G Huang, D Zhang
arXiv preprint arXiv:2211.05284, 2022
1 · 2022
Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration
Y Shao, V Samuel, Y Jiang, J Yang, D Yang
arXiv preprint arXiv:2412.15701, 2024
2024
Articles 1–18