A survey of large language models. WX Zhao, K Zhou, J Li, T Tang, X Wang, Y Hou, Y Min, B Zhang, J Zhang, et al. arXiv preprint arXiv:2303.18223, 2023. Cited by 4134*.
Qwen2.5 Technical Report. A Yang, B Yang, B Zhang, B Hui, B Zheng, B Yu, C Li, D Liu, F Huang, et al. arXiv preprint arXiv:2412.15115, 2024. Cited by 872*.
A survey of pretrained language models based text generation. J Li, T Tang, WX Zhao, JY Nie, JR Wen. arXiv preprint arXiv:2201.05273, 2022. Cited by 438*.
Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting. H Huang, T Tang, D Zhang, WX Zhao, T Song, Y Xia, F Wei. arXiv preprint arXiv:2305.07004, 2023. Cited by 97.
A survey on long text modeling with transformers. Z Dong, T Tang, L Li, WX Zhao. arXiv preprint arXiv:2302.14502, 2023. Cited by 57.
Few-shot knowledge graph-to-text generation with pretrained language models. J Li, T Tang, WX Zhao, Z Wei, NJ Yuan, JR Wen. arXiv preprint arXiv:2106.01623, 2021. Cited by 56.
BAMBOO: A comprehensive benchmark for evaluating long text modeling capacities of large language models. Z Dong, T Tang, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2309.13345, 2023. Cited by 47.
TextBox 2.0: A text generation library with pre-trained language models. T Tang, J Li, Z Chen, Y Hu, Z Yu, W Dai, Z Dong, X Cheng, Y Wang, et al. arXiv preprint arXiv:2212.13005, 2022. Cited by 39*.
Learning to transfer prompts for text generation. J Li, T Tang, JY Nie, JR Wen, WX Zhao. arXiv preprint arXiv:2205.01543, 2022. Cited by 39.
Language-specific neurons: The key to multilingual capabilities in large language models. T Tang, W Luo, H Huang, D Zhang, X Wang, X Zhao, F Wei, JR Wen. arXiv preprint arXiv:2402.16438, 2024. Cited by 38.
MVP: Multi-task supervised pre-training for natural language generation. T Tang, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2206.12131, 2022. Cited by 37.
Context-tuning: Learning contextualized prompts for natural language generation. T Tang, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2201.08670, 2022. Cited by 30.
Beyond imitation: Leveraging fine-grained quality signals for alignment. G Guo, R Zhao, T Tang, WX Zhao, JR Wen. arXiv preprint arXiv:2311.04072, 2023. Cited by 20.
Not all metrics are guilty: Improving NLG evaluation by diversifying references. T Tang, H Lu, YE Jiang, H Huang, D Zhang, WX Zhao, T Kocmi, F Wei. arXiv preprint arXiv:2305.15067, 2023. Cited by 16*.
The web can be your oyster for improving large language models. J Li, T Tang, WX Zhao, J Wang, JY Nie, JR Wen. arXiv preprint arXiv:2305.10998, 2023. Cited by 16*.
ELMER: A non-autoregressive pre-trained language model for efficient and effective text generation. J Li, T Tang, WX Zhao, JY Nie, JR Wen. arXiv preprint arXiv:2210.13304, 2022. Cited by 16.
Learning to imagine: Visually-augmented natural language generation. T Tang, Y Chen, Y Du, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2305.16944, 2023. Cited by 13.
Zero-shot visual question answering with language model feedback. Y Du, J Li, T Tang, WX Zhao, JR Wen. arXiv preprint arXiv:2305.17006, 2023. Cited by 9.
Towards effective ancient Chinese translation: Dataset, model, and evaluation. G Guo, J Yang, F Lu, J Qin, T Tang, WX Zhao. CCF International Conference on Natural Language Processing and Chinese …, 2023. Cited by 6.
ElitePLM: An empirical study on general language ability evaluation of pretrained language models. J Li, T Tang, Z Gong, L Yang, Z Yu, Z Chen, J Wang, WX Zhao, JR Wen. arXiv preprint arXiv:2205.01523, 2022. Cited by 5.