Yanbin Zhao
Verified email at baidu.com
Title
Cited by
Year
Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation
Y Sun, S Wang, S Feng, S Ding, C Pang, J Shang, J Liu, X Chen, Y Zhao, ...
arXiv preprint arXiv:2107.02137, 2021
544 · 2021
LGESQL: line graph enhanced text-to-SQL model with mixed local and non-local relations
R Cao, L Chen, Z Chen, Y Zhao, S Zhu, K Yu
arXiv preprint arXiv:2106.01093, 2021
157 · 2021
Ernie 3.0 titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation
S Wang, Y Sun, Y Xiang, Z Wu, S Ding, W Gong, S Feng, J Shang, Y Zhao, ...
arXiv preprint arXiv:2112.12731, 2021
97 · 2021
Neural graph matching networks for Chinese short text matching
L Chen, Y Zhao, B Lyu, L Jin, Z Chen, S Zhu, K Yu
Proceedings of the 58th annual meeting of the Association for Computational …, 2020
62 · 2020
ShadowGNN: Graph projection neural network for text-to-SQL parser
Z Chen, L Chen, Y Zhao, R Cao, Z Xu, S Zhu, K Yu
arXiv preprint arXiv:2104.04689, 2021
53 · 2021
Unsupervised dual paraphrasing for two-stage semantic parsing
R Cao, S Zhu, C Yang, C Liu, R Ma, Y Zhao, L Chen, K Yu
arXiv preprint arXiv:2005.13485, 2020
48 · 2020
Semi-supervised text simplification with back-translation and asymmetric denoising autoencoders
Y Zhao, L Chen, Z Chen, K Yu
Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 9668-9675, 2020
48 · 2020
Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. arXiv 2021
Y Sun, S Wang, S Feng, S Ding, C Pang, J Shang, J Liu, X Chen, Y Zhao, ...
arXiv preprint arXiv:2107.02137, 2021
39 · 2021
Line graph enhanced AMR-to-text generation with mix-order graph attention networks
Y Zhao, L Chen, Z Chen, R Cao, S Zhu, K Yu
Proceedings of the 58th Annual meeting of the association for computational …, 2020
29 · 2020
Credit: Coarse-to-fine sequence generation for dialogue state tracking
Z Chen, L Chen, Z Xu, Y Zhao, S Zhu, K Yu
arXiv preprint arXiv:2009.10435, 2020
11 · 2020
ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation. CoRR abs/2107.02137 (2021)
Y Sun, S Wang, S Feng, S Ding, C Pang, J Shang, J Liu, X Chen, Y Zhao, ...
arXiv preprint arXiv:2107.02137, 2021
6 · 2021
Ernie 3.0: large-scale knowledge enhanced pre-training for language understanding and generation (2021)
Y Sun, S Wang, S Feng, S Ding, C Pang, J Shang, J Liu, X Chen, Y Zhao, ...
arXiv preprint arXiv:2107.02137, 2024
4 · 2024
Method and apparatus of training natural language processing model, and method and apparatus of processing natural language
D Siyu, P Chao, W Shuohuan, Y Zhao, J Shang, Y Sun, F Shikun, H Tian, ...
US Patent 12,131,728, 2024
3 · 2024
Dual learning for dialogue state tracking
Z Chen, L Chen, Y Zhao, S Zhu, K Yu
National Conference on Man-Machine Speech Communication, 293-305, 2022
3 · 2022
Dialogue model training method
Y Zhao, S Ding, S Wang, Y Sun, H Tian, H WU, H Wang
US Patent App. 18/747,641, 2024
2024
Method for pre-training language model
J Shang, W Shuohuan, D Siyu, Y Zhao, P Chao, Y Sun, H Tian, H Wu, ...
US Patent App. 18/179,627, 2023
2023
Model training method, system, device, and medium
W Shuohuan, G Weibao, Z Wu, Y Sun, D Siyu, HAN Yaqian, Y Zhao, ...
US Patent App. 18/118,339, 2023
2023
Method for pre-training model, device, and storage medium
J Shang, W Shuohuan, D Siyu, Y Zhao, P Chao, Y Sun
US Patent App. 17/889,218, 2023
2023
Articles 1–18