Zhengyan Zhang
Verified email at mails.tsinghua.edu.cn - Homepage
Title
Cited by
Year
Graph neural networks: A review of methods and applications
J Zhou, G Cui, S Hu, Z Zhang, C Yang, Z Liu, L Wang, C Li, M Sun
AI Open 1, 57-81, 2020
Cited by 7077, 2020
ERNIE: Enhanced Language Representation with Informative Entities
Z Zhang, X Han, Z Liu, X Jiang, M Sun, Q Liu
ACL 2019, 2019
Cited by 1831, 2019
Pre-trained models: Past, present and future
X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu, Y Yao, A Zhang, ...
AI Open 2, 225-250, 2021
Cited by 953, 2021
KEPLER: A unified model for knowledge embedding and pre-trained language representation
X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li, J Tang
TACL, 2019
Cited by 773, 2019
CPT: Colorful prompt tuning for pre-trained vision-language models
Y Yao, A Zhang, Z Zhang, Z Liu, TS Chua, M Sun
arXiv preprint arXiv:2109.11797, 2021
Cited by 274, 2021
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
F Qi, M Li, Y Chen, Z Zhang, Z Liu, Y Wang, M Sun
arXiv preprint arXiv:2105.12400, 2021
Cited by 233, 2021
DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning
D Guo, D Yang, H Zhang, J Song, R Zhang, R Xu, Q Zhu, S Ma, P Wang, ...
arXiv preprint arXiv:2501.12948, 2025
Cited by 207, 2025
DeepSeek-V3 Technical Report
A Liu, B Feng, B Xue, B Wang, B Wu, C Lu, C Zhao, C Deng, C Zhang, ...
arXiv preprint arXiv:2412.19437, 2024
Cited by 163, 2024
MoEfication: Transformer feed-forward layers are mixtures of experts
Z Zhang, Y Lin, Z Liu, P Li, M Sun, J Zhou
Findings of ACL 2022, 2021
Cited by 138*, 2021
A unified framework for community detection and network representation learning
C Tu, X Zeng, H Wang, Z Zhang, Z Liu, M Sun, B Zhang, L Lin
IEEE Transactions on Knowledge and Data Engineering 31 (6), 1051-1065, 2018
Cited by 130, 2018
CPM: A large-scale generative Chinese pre-trained language model
Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ...
AI Open 2, 93-99, 2021
Cited by 125, 2021
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning
C Si, Z Zhang, F Qi, Z Liu, Y Wang, Q Liu, M Sun
arXiv preprint arXiv:2012.15699, 2020
Cited by 104, 2020
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
Z Zhang, G Xiao, Y Li, T Lv, F Qi, Z Liu, Y Wang, X Jiang, M Sun
arXiv preprint arXiv:2101.06969, 2021
Cited by 99, 2021
CPM-2: Large-scale cost-effective pre-trained language models
Z Zhang, Y Gu, X Han, S Chen, C Xiao, Z Sun, Y Yao, F Qi, J Guan, P Ke, ...
AI Open 2, 216-224, 2021
Cited by 91, 2021
TransNet: Translation-Based Network Representation Learning for Social Relation Extraction.
C Tu, Z Zhang, Z Liu, M Sun
IJCAI, 2864-2870, 2017
Cited by 91, 2017
Finding Skill Neurons in Pre-trained Transformer-based Language Models
X Wang, K Wen, Z Zhang, L Hou, Z Liu, J Li
EMNLP 2022, 2022
Cited by 74, 2022
Train No Evil: Selective Masking for Task-guided Pre-training
Y Gu, Z Zhang, X Wang, Z Liu, M Sun
arXiv preprint arXiv:2004.09733, 2020
Cited by 71, 2020
Open Chinese language pre-trained model zoo
H Zhong, Z Zhang, Z Liu, M Sun
Technical report, 2019
Cited by 67, 2019
CokeBERT: Contextual knowledge selection and embedding towards enhanced pre-trained language models
Y Su, X Han, Z Zhang, Y Lin, P Li, Z Liu, J Zhou, M Sun
AI Open 2, 127-134, 2021
Cited by 66, 2021
Knowledge Inheritance for Pre-trained Language Models
Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, ...
arXiv preprint arXiv:2105.13880, 2021
Cited by 57, 2021
Articles 1–20