Yijin Liu
WeChat AI, Tencent, China
Verified email at tencent.com
Title
Cited by
Year
GCDT: A Global Context Enhanced Deep Transition Architecture For Sequence Labeling
Y Liu, F Meng, J Zhang, J Xu, Y Chen, J Zhou
Proceedings of ACL 2019, 2019
112 · 2019
CM-Net: A Novel Collaborative Memory Network For Spoken Language Understanding
Y Liu, F Meng, J Zhang, J Zhou, Y Chen, J Xu
Proceedings of EMNLP 2019, 2019
93 · 2019
Prevent the language model from being overconfident in neural machine translation
M Miao, F Meng, Y Liu, XH Zhou, J Zhou
arXiv preprint arXiv:2105.11098, 2021
48 · 2021
Faster Depth-Adaptive Transformers
Y Liu, F Meng, J Zhou, Y Chen, J Xu
Proceedings of AAAI 2021, 2020
45 · 2020
Improving translation faithfulness of large language models via augmenting instructions
Y Chen, Y Liu, F Meng, Y Chen, J Xu, J Zhou
arXiv preprint arXiv:2308.12674, 2023
22 · 2023
WeChat Neural Machine Translation Systems for WMT20
F Meng, J Yan, Y Liu, Y Gao, X Zeng, Q Zeng, P Li, M Chen, J Zhou, S Liu, ...
Fifth Conference on Machine Translation (WMT20), 2020
22 · 2020
Scheduled sampling based on decoding steps for neural machine translation
Y Liu, F Meng, Y Chen, J Xu, J Zhou
arXiv preprint arXiv:2108.12963, 2021
19 · 2021
WeChat neural machine translation systems for WMT21
X Zeng, Y Liu, E Li, Q Ran, F Meng, P Li, J Xu, J Zhou
arXiv preprint arXiv:2108.02401, 2021
18 · 2021
Conditional bilingual mutual information based adaptive training for neural machine translation
S Zhang, Y Liu, F Meng, Y Chen, J Xu, J Liu, J Zhou
arXiv preprint arXiv:2203.02951, 2022
17 · 2022
Confidence-aware scheduled sampling for neural machine translation
Y Liu, F Meng, Y Chen, J Xu, J Zhou
arXiv preprint arXiv:2107.10427, 2021
17 · 2021
Instruction position matters in sequence generation with large language models
Y Liu, X Zeng, F Meng, J Zhou
arXiv preprint arXiv:2308.12097, 2023
15 · 2023
Bilingual mutual information based adaptive training for neural machine translation
Y Xu, Y Liu, F Meng, J Zhang, J Xu, J Zhou
arXiv preprint arXiv:2105.12523, 2021
13 · 2021
Accelerating inference in large language models with a unified layer skipping strategy
Y Liu, F Meng, J Zhou
arXiv preprint arXiv:2404.06954, 2024
7 · 2024
Depth-adaptive graph recurrent network for text classification
Y Liu, F Meng, Y Chen, J Xu, J Zhou
arXiv preprint arXiv:2003.00166, 2020
4 · 2020
LCS: A language converter strategy for zero-shot neural machine translation
Z Sun, Y Liu, F Meng, J Xu, Y Chen, J Zhou
arXiv preprint arXiv:2406.02876, 2024
2 · 2024
Comments as natural logic pivots: Improve code generation via comment perspective
Y Chen, Y Liu, F Meng, Y Chen, J Xu, J Zhou
arXiv preprint arXiv:2404.07549, 2024
2 · 2024
BranchNorm: Robustly scaling extremely deep transformers
Y Liu, X Zeng, F Meng, J Zhou
arXiv preprint arXiv:2305.02790, 2023
2 · 2023
Towards Multiple References Era--Addressing Data Leakage and Limited Reference Diversity in NLG Evaluation
X Zeng, Y Liu, F Meng, J Zhou
arXiv preprint arXiv:2308.03131, 2023
1 · 2023
Towards robust online dialogue response generation
L Cui, F Meng, Y Liu, J Zhou, Y Zhang
arXiv preprint arXiv:2203.03168, 2022
1 · 2022
Articles 1–19