Yu-kun Li (李宇琨)
Verified email at baidu.com
Title · Cited by · Year
ERNIE: Enhanced representation through knowledge integration
Y Sun, S Wang, YK Li, S Feng, X Chen, H Zhang, X Tian, D Zhu, H Tian, ...
arXiv preprint arXiv:1904.09223, 2019
Cited by 1223 · 2019
ERNIE 2.0: A continual pre-training framework for language understanding
Y Sun, S Wang, YK Li, S Feng, H Tian, H Wu, H Wang
arXiv preprint arXiv:1907.12412, 2019
Cited by 953 · 2019
DeepSeek-Coder: When the Large Language Model Meets Programming--The Rise of Code Intelligence
D Guo, Q Zhu, D Yang, Z Xie, K Dong, W Zhang, G Chen, X Bi, Y Wu, ...
arXiv preprint arXiv:2401.14196, 2024
Cited by 471 · 2024
Binary relevance for multi-label learning: an overview
ML Zhang, YK Li, XY Liu, X Geng
Frontiers of Computer Science 12, 191-202, 2018
Cited by 468 · 2018
Towards class-imbalance aware multi-label learning
ML Zhang, YK Li, H Yang, XY Liu
IEEE Transactions on Cybernetics 52 (6), 4459-4471, 2020
Cited by 211 · 2020
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
D Xiao, H Zhang, YK Li, Y Sun, H Tian, H Wu, H Wang
arXiv preprint arXiv:2001.11314, 2020
Cited by 153 · 2020
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
D Dai, C Deng, C Zhao, RX Xu, H Gao, D Chen, J Li, W Zeng, X Yu, Y Wu, ...
arXiv preprint arXiv:2401.06066, 2024
Cited by 149 · 2024
DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model
A Liu, B Feng, B Wang, B Wang, B Liu, C Zhao, C Dengr, C Ruan, D Dai, ...
arXiv preprint arXiv:2405.04434, 2024
Cited by 112 · 2024
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
Q Zhu, D Guo, Z Shao, D Yang, P Wang, R Xu, Y Wu, Y Li, H Gao, S Ma, ...
arXiv preprint arXiv:2406.11931, 2024
Cited by 111 · 2024
Leveraging implicit relative labeling-importance information for effective multi-label learning
YK Li, ML Zhang, X Geng
2015 IEEE International Conference on Data Mining, 251-260, 2015
Cited by 104 · 2015
DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding, K Dong, Q Du, ...
arXiv preprint arXiv:2401.02954, 2024
Cited by 76 · 2024
Leveraging implicit relative labeling-importance information for effective multi-label learning
ML Zhang, QW Zhang, JP Fang, YK Li, X Geng
IEEE Transactions on Knowledge and Data Engineering 33 (5), 2057-2070, 2019
Cited by 59 · 2019
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
D Xiao, YK Li, H Zhang, Y Sun, H Tian, H Wu, H Wang
arXiv preprint arXiv:2010.12148, 2020
Cited by 52 · 2020
DeepSeek-V3 technical report
A Liu, B Feng, B Xue, B Wang, B Wu, C Lu, C Zhao, C Deng, C Zhang, ...
arXiv preprint arXiv:2412.19437, 2024
Cited by 47 · 2024
X-MOL: large-scale pre-training for molecular understanding and diverse molecular analysis
D Xue, H Zhang, D Xiao, Y Gong, G Chuai, Y Sun, H Tian, H Wu, YK Li, ...
bioRxiv 2020.12.23.424259, 2021
Cited by 46 · 2021
Artificial intelligence based method and apparatus for generating information
LI Yukun, Y Liu, Y Sun, YU Dianhai
US Patent 10,528,667, 2020
Cited by 22 · 2020
Enhancing binary relevance for multi-label learning with controlled label correlations exploitation
YK Li, ML Zhang
Pacific Rim International Conference on Artificial Intelligence, 91-103, 2014
Cited by 13 · 2014
DeepSeek-VL2: Mixture-of-experts vision-language models for advanced multimodal understanding
Z Wu, X Chen, Z Pan, X Liu, W Liu, D Dai, H Gao, Y Ma, C Wu, B Wang, ...
arXiv preprint arXiv:2412.10302, 2024
Cited by 8 · 2024
Search method and apparatus based on artificial intelligence
LI Yukun, Y Liu, Y Sun, YU Dianhai
US Patent 11,151,177, 2021
Cited by 1 · 2021