Jiafei Lyu
PhD in Control Science and Engineering, Tsinghua University
Verified email at mails.tsinghua.edu.cn
Title
Cited by
Year
Mildly conservative Q-learning for offline reinforcement learning
J Lyu, X Ma, X Li, Z Lu
NeurIPS 2022 (Spotlight), 2022
125 · 2022
Nuclear power plants with artificial intelligence in industry 4.0 era: Top-level design and current applications—A systemic review
C Lu, J Lyu, L Zhang, A Gong, Y Fan, J Yan, X Li
IEEE Access 8, 194315-194332, 2020
76 · 2020
Efficient Continuous Control with Double Actors and Regularized Critics
J Lyu, X Ma, J Yan, X Li
In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI-22, Oral), 2021
59 · 2021
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
K Yang, J Tao, J Lyu, C Ge, J Chen, Q Li, W Shen, X Zhu, X Li
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024), 2023
43 · 2023
Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination
J Lyu, X Li, Z Lu
NeurIPS 2022 (Spotlight), 2022
22 · 2022
Uncertainty-driven Trajectory Truncation for Model-based Offline Reinforcement Learning
J Zhang, J Lyu, X Ma, J Yan, J Yang, L Wan, X Li
ECAI 2023 (Oral); ICRA 2023 L-DOD Workshop, 2023
20* · 2023
Bias-reduced multi-step hindsight experience replay for efficient multi-goal reinforcement learning
R Yang, J Lyu, Y Yang, J Yan, F Luo, D Luo, L Li, X Li
arXiv preprint arXiv:2102.12962, 2021
11* · 2021
Exploration and Anti-Exploration with Distributional Random Network Distillation
K Yang, J Tao, J Lyu, X Li
International Conference on Machine Learning (ICML 2024), 2024
10 · 2024
Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse
J Lyu, L Wan, Z Lu, X Li
Information Sciences, 2023
9 · 2023
Value Activation for Bias Alleviation: Generalized-activated Deep Double Deterministic Policy Gradients
J Lyu, Y Yang, J Yan, X Li
Neurocomputing, 2021
9 · 2021
Normalization Enhances Generalization in Visual Reinforcement Learning
L Li, J Lyu, G Ma, Z Wang, Z Yang, X Li, Z Li
AAMAS 2024 (Oral); Generalization in Planning Workshop@NeurIPS 2023, 2023
8 · 2023
PEARL: Zero-shot Cross-task Preference Alignment and Robust Reward Learning for Robotic Manipulation
R Liu, Y Du, F Bai, J Lyu, X Li
International Conference on Machine Learning (ICML 2024), 2024
6* · 2024
SEABO: A Simple Search-Based Method for Offline Imitation Learning
J Lyu, X Ma, L Wan, R Liu, X Li, Z Lu
International Conference on Learning Representations (ICLR 2024), 2024
5 · 2024
Understanding what affects generalization gap in visual reinforcement learning: Theory and empirical evidence
J Lyu, L Wan, X Li, Z Lu
Journal of Artificial Intelligence Research, 2024
5 · 2024
Mind the Model, Not the Agent: The Primacy Bias in Model-Based RL
Z Qiao, J Lyu, X Li
ECAI, 2024
5* · 2024
State Advantage Weighting for Offline RL
J Lyu, A Gong, L Wan, Z Lu, X Li
ICLR 2023 Tiny Paper; 3rd Offline Reinforcement Learning Workshop at NeurIPS 2022, 2022
5 · 2022
A two-stage reinforcement learning-based approach for multi-entity task allocation
A Gong, K Yang, J Lyu, X Li
Engineering Applications of Artificial Intelligence 136, 108906, 2024
4 · 2024
Cross-Domain Policy Adaptation by Capturing Representation Mismatch
J Lyu, C Bai, J Yang, Z Lu, X Li
International Conference on Machine Learning (ICML 2024), 2024
4 · 2024
Prag: Periodic regularized action gradient for efficient continuous control
X Li, Z Qiao, A Gong, J Lyu, C Yu, J Yan, X Li
Pacific Rim International Conference on Artificial Intelligence, 106-119, 2022
3 · 2022
Enhancing visual reinforcement learning with State–Action Representation
M Yan, J Lyu, X Li
Knowledge-Based Systems 304, 112487, 2024
2 · 2024