GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model. S Tan, WL Tam, Y Wang, W Gong, Y Yang, H Tang, K He, J Liu, J Wang, ... The 61st Annual Meeting of the Association for Computational Linguistics, 2023. Cited by 13.

R-Eval: A Unified Toolkit for Evaluating Domain Knowledge of Retrieval Augmented Large Language Models. S Tu, Y Wang, J Yu, Y Xie, Y Shi, X Wang, J Zhang, L Hou, J Li. 2024. Cited by 7.

From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents. J Yu, Z Zhang, D Zhang-li, S Tu, Z Hao, RM Li, H Li, Y Wang, H Li, L Gong, ... arXiv preprint arXiv:2409.03512, 2024. Cited by 3.

A Solution-based LLM API-using Methodology for Academic Information Seeking. Y Wang, J Yu, Z Yao, J Zhang, Y Xie, S Tu, Y Fu, Y Feng, J Zhang, ... arXiv preprint arXiv:2405.15165, 2024. Cited by 2.

NHGMI: Heterogeneous Graph Multi-view Infomax with Node-wise Contrasting Samples Selection. Q Li, H Ni, Y Wang. Knowledge-Based Systems 289, 111520, 2024. Cited by 2.

Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method. S Tan, WL Tam, Y Wang, W Gong, S Zhao, P Zhang, J Tang. arXiv preprint arXiv:2306.06625, 2023. Cited by 2.

[Industry] GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model. S Tan, WL Tam, Y Wang, W Gong, S Zhao, P Zhang, J Tang. The 61st Annual Meeting of the Association for Computational Linguistics, 2023. Cited by 1.

Authorship Style Transfer with Inverse Transfer Data Augmentation. Z Shao, J Zhang, H Li, X Huang, C Zhou, Y Wang, J Gong, C Li, H Chen. AI Open 5, 94-103, 2024.