Tengyang Xie
Assistant Professor of Computer Science, University of Wisconsin-Madison
Verified email at cs.wisc.edu - Homepage
Bellman-consistent pessimism for offline reinforcement learning
T Xie, CA Cheng, N Jiang, P Mineiro, A Agarwal
Advances in neural information processing systems 34, 6683-6694, 2021
Cited by 303
Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling
T Xie, Y Ma, YX Wang
Advances in Neural Information Processing Systems, 9665-9675, 2019
Cited by 194
Policy finetuning: Bridging sample-efficient offline and online reinforcement learning
T Xie, N Jiang, H Wang, C Xiong, Y Bai
Advances in neural information processing systems 34, 27395-27407, 2021
Cited by 184
Adversarially trained actor critic for offline reinforcement learning
CA Cheng, T Xie, N Jiang, A Agarwal
International Conference on Machine Learning, 3852-3878, 2022
Cited by 150
Batch value-function approximation with only realizability
T Xie, N Jiang
International Conference on Machine Learning, 11404-11413, 2021
Cited by 128
Provably efficient Q-learning with low switching cost
Y Bai, T Xie, N Jiang, YX Wang
Advances in Neural Information Processing Systems, 8004-8013, 2019
Cited by 119
Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison
T Xie, N Jiang
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence …, 2020
Cited by 109
The role of coverage in online reinforcement learning
T Xie, DJ Foster, Y Bai, N Jiang, SM Kakade
arXiv preprint arXiv:2210.04157, 2022
Cited by 76
Direct Nash optimization: Teaching language models to self-improve with general preferences
C Rosset, CA Cheng, A Mitra, M Santacroce, A Awadallah, T Xie
arXiv preprint arXiv:2404.03715, 2024
Cited by 75
Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts
H Wang, W Xiong, T Xie, H Zhao, T Zhang
arXiv preprint arXiv:2406.12845, 2024
Cited by 68
Finite sample analysis of minimax offline reinforcement learning: Completeness, fast rates and first-order efficiency
M Uehara, M Imaizumi, N Jiang, N Kallus, W Sun, T Xie
arXiv preprint arXiv:2102.02981, 2021
Cited by 68
Preference fine-tuning of LLMs should leverage suboptimal, on-policy data
F Tajwar, A Singh, A Sharma, R Rafailov, J Schneider, T Xie, S Ermon, ...
arXiv preprint arXiv:2404.14367, 2024
Cited by 58
A Block Coordinate Ascent Algorithm for Mean-Variance Optimization
T Xie, B Liu, Y Xu, M Ghavamzadeh, Y Chow, D Lyu, D Yoon
Advances in Neural Information Processing Systems, 1073-1083, 2018
Cited by 40
Adversarial model for offline reinforcement learning
M Bhardwaj, T Xie, B Boots, N Jiang, CA Cheng
Advances in Neural Information Processing Systems 36, 2024
Cited by 31
A variant of the Wang-Foster-Kakade lower bound for the discounted setting
P Amortila, N Jiang, T Xie
arXiv preprint arXiv:2011.01075, 2020
Cited by 25
Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
T Xie, DJ Foster, A Krishnamurthy, C Rosset, A Awadallah, A Rakhlin
arXiv preprint arXiv:2405.21046, 2024
Cited by 21
Interaction-Grounded Learning
T Xie, J Langford, P Mineiro, I Momennejad
International Conference on Machine Learning, 11414-11423, 2021
Cited by 11
Correcting the Mythos of KL-Regularization: Direct Alignment without Overoptimization via Chi-Squared Preference Optimization
A Huang, W Zhan, T Xie, JD Lee, W Sun, A Krishnamurthy, DJ Foster
arXiv preprint arXiv:2407.13399, 2024
Cited by 8*
Harnessing density ratios for online reinforcement learning
P Amortila, DJ Foster, N Jiang, A Sekhari, T Xie
arXiv preprint arXiv:2401.09681, 2024
Cited by 8
ARMOR: A model-based framework for improving arbitrary baseline policies with offline data
T Xie, M Bhardwaj, N Jiang, CA Cheng
arXiv preprint arXiv:2211.04538, 2022
Cited by 8
Articles 1–20