Yongchang Hao
Verified email at ualberta.ca - Homepage
Title
Cited by
Year
Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation
W Wang, W Jiao, Y Hao, X Wang, S Shi, Z Tu, M Lyu
Annual Meeting of the Association for Computational Linguistics (ACL) 1 …, 2022
51 · 2022
Flora: Low-Rank Adapters Are Secretly Gradient Compressors
Y Hao, Y Cao, L Mou
International Conference on Machine Learning (ICML), 2024
49* · 2024
Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation
Y Hao, S He, W Jiao, Z Tu, M Lyu, X Wang
North American Chapter of the Association for Computational Linguistics …, 2021
32 · 2021
Teacher Forcing Recovers Reward Functions for Text Generation
Y Hao, Y Liu, L Mou
Advances in Neural Information Processing Systems (NeurIPS), 2022
15 · 2022
An equal-size hard EM algorithm for diverse dialogue generation
Y Wen, Y Hao, Y Cao, L Mou
International Conference on Learning Representations (ICLR), 2023
9 · 2023
Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models
J Li, Y Hao, H Xu, X Wang, Y Hong
International Conference on Computational Linguistics (COLING), 2025
1 · 2025
NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks
Y Hao, Y Cao, L Mou
ENLSP @ NeurIPS 2024, 2024
1 · 2024
LLMR: Knowledge Distillation with a Large Language Model-Induced Reward
D Li, Y Hao, L Mou
Joint International Conference on Computational Linguistics, Language …, 2024
1 · 2024
ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization
Z Wu, Y Hao, L Mou
arXiv preprint arXiv:2502.04501, 2025
2025
Ginger: An Efficient Curvature Approximation with Linear Complexity for General Neural Networks
Y Hao, Y Cao, L Mou
arXiv preprint arXiv:2402.03295, 2024
2024
Discovering Reward Functions for Language Models
Y Hao
University of Alberta, 2023
2023
Articles 1–11