Peiyi Wang
Verified email at stu.pku.edu.cn
Title
Cited by
Year
Large Language Models are not Fair Evaluators
P Wang, L Li, L Chen, D Zhu, B Lin, Y Cao, Q Liu, T Liu, Z Sui
ACL 2024, 2023
Cited by 419 · 2023
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
Z Shao, P Wang, Q Zhu, R Xu, J Song, M Zhang, YK Li, Y Wu, D Guo
arXiv preprint arXiv:2402.03300, 2024
Cited by 271* · 2024
DeepSeek LLM: Scaling open-source language models with longtermism
X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding, K Dong, Q Du, ...
arXiv preprint arXiv:2401.02954, 2024
Cited by 217 · 2024
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
P Wang, L Li, Z Shao, RX Xu, D Dai, Y Li, D Chen, Y Wu, Z Sui
ACL 2024, 2023
Cited by 189* · 2023
M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning
L Li, Y Yin, S Li, L Chen, P Wang, S Ren, M Li, Y Yang, J Xu, X Sun, ...
arXiv preprint arXiv:2306.04387, 2023
Cited by 165* · 2023
DeepSeek-Coder-V2: Breaking the barrier of closed-source models in code intelligence
Q Zhu, D Guo, Z Shao, D Yang, P Wang, R Xu, Y Wu, Y Li, H Gao, S Ma, ...
arXiv preprint arXiv:2406.11931, 2024
Cited by 157* · 2024
DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning
D Guo, D Yang, H Zhang, J Song, R Zhang, R Xu, Q Zhu, S Ma, P Wang, ...
arXiv preprint arXiv:2501.12948, 2025
Cited by 129 · 2025
Incorporating Hierarchy into Text Encoder: a Contrastive Learning Approach for Hierarchical Text Classification
Z Wang, P Wang, L Huang, X Sun, H Wang
ACL 2022, 2022
Cited by 127 · 2022
DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model
A Liu, B Feng, B Wang, B Wang, B Liu, C Zhao, C Dengr, C Ruan, D Dai, ...
arXiv preprint arXiv:2405.04434, 2024
Cited by 125 · 2024
DeepSeek-V3 technical report
A Liu, B Feng, B Xue, B Wang, B Wu, C Lu, C Zhao, C Deng, C Zhang, ...
arXiv preprint arXiv:2412.19437, 2024
Cited by 109 · 2024
Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation
H Xia, T Ge, P Wang, SQ Chen, F Wei, Z Sui
arXiv preprint arXiv:2203.16487, 2022
Cited by 73* · 2022
Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding
H Xia, Z Yang, Q Dong, P Wang, Y Li, T Ge, T Liu, W Li, Z Sui
Findings of ACL 2024, 2024
Cited by 69 · 2024
Silkie: Preference distillation for large visual language models
L Li, Z Xie, M Li, S Chen, P Wang, L Chen, Y Yang, B Wang, L Kong
arXiv preprint arXiv:2312.10665, 2023
Cited by 69 · 2023
A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction
R Xu, P Wang, T Liu, S Zeng, B Chang, Z Sui
NAACL 2022, 2022
Cited by 59 · 2022
HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification
Z Wang, P Wang, T Liu, Y Cao, Z Sui, H Wang
EMNLP 2022, 2022
Cited by 59 · 2022
An enhanced span-based decomposition method for few-shot sequence labeling
P Wang, R Xu, T Liu, Q Zhou, Y Cao, B Chang, Z Sui
NAACL 2022, 2021
Cited by 55 · 2021
Making large language models better reasoners with alignment
P Wang, L Li, L Chen, F Song, B Lin, Y Cao, T Liu, Z Sui
arXiv preprint arXiv:2309.02144, 2023
Cited by 52 · 2023
PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain
L Chen, Y Zhang, S Ren, H Zhao, Z Cai, Y Wang, P Wang, X Meng, T Liu, ...
Findings of ACL 2024, 2024
Cited by 47* · 2024
First target and opinion then polarity: Enhancing target-opinion correlation for aspect sentiment triplet extraction
L Huang, P Wang, S Li, T Liu, X Zhang, Z Cheng, D Yin, H Wang
arXiv preprint arXiv:2102.08549, 2021
Cited by 37 · 2021
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
P Wang, Y Song, T Liu, B Lin, Y Cao, S Li, Z Sui
EMNLP 2022, 2022
Cited by 21 · 2022
Articles 1–20