Fast global convergence of natural policy gradient methods with entropy regularization
S Cen, C Cheng, Y Chen, Y Wei, Y Chi
Operations Research 70 (4), 2563-2578, 2022
Cited by 236

Breaking the sample size barrier in model-based reinforcement learning with a generative model
G Li, Y Wei, Y Chi, Y Chen
Operations Research 72 (1), 203-221, 2024
Cited by 152

Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction
G Li, Y Wei, Y Chi, Y Gu, Y Chen
IEEE Transactions on Information Theory 68 (1), 448-473, 2021
Cited by 139

The Lasso with general Gaussian designs with applications to hypothesis testing
M Celentano, A Montanari, Y Wei
The Annals of Statistics 51 (5), 2194-2220, 2023
Cited by 117

Pessimistic Q-learning for offline reinforcement learning: Towards optimal sample complexity
L Shi, G Li, Y Wei, Y Chen, Y Chi
International Conference on Machine Learning, 19967-20025, 2022
Cited by 112

Is Q-learning minimax optimal? A tight sample complexity analysis
G Li, C Cai, Y Chen, Y Wei, Y Chi
Operations Research 72 (1), 222-236, 2024
Cited by 102

Settling the sample complexity of model-based offline reinforcement learning
G Li, L Shi, Y Chen, Y Chi, Y Wei
The Annals of Statistics 52 (1), 233-260, 2024
Cited by 101

Early stopping for kernel boosting algorithms: A general analysis with localized complexities
Y Wei, F Yang, MJ Wainwright
IEEE Transactions on Information Theory 65 (10), 6685-6703, 2019
Cited by 96

Fast policy extragradient methods for competitive games with entropy regularization
S Cen, Y Wei, Y Chi
Advances in Neural Information Processing Systems 34, 27952-27964, 2021
Cited by 89

Towards faster non-asymptotic convergence for diffusion-based generative models
G Li, Y Wei, Y Chen, Y Chi
arXiv preprint arXiv:2306.09251, 2023
Cited by 85

Softmax policy gradient methods can take exponential time to converge
G Li, Y Wei, Y Chi, Y Chen
Mathematical Programming, 2021
Cited by 68

Sharp statistical guarantees for adversarially robust Gaussian classification
C Dan, Y Wei, P Ravikumar
International Conference on Machine Learning, 2345-2355, 2020
Cited by 66

Uniform consistency of cross-validation estimators for high-dimensional ridge regression
P Patil, Y Wei, A Rinaldo, R Tibshirani
International Conference on Artificial Intelligence and Statistics, 3178-3186, 2021
Cited by 56

Derandomizing knockoffs
Z Ren, Y Wei, E Candès
Journal of the American Statistical Association 118 (542), 948-958, 2023
Cited by 50

The curious price of distributional robustness in reinforcement learning with a generative model
L Shi, G Li, Y Wei, Y Chen, M Geist, Y Chi
Advances in Neural Information Processing Systems 36, 79903-79917, 2023
Cited by 40
Minimum ℓ1-norm interpolators: Precise asymptotics and multiple descent
Y Li, Y Wei
arXiv preprint arXiv:2110.09502, 2021
Cited by 39

Tackling small eigen-gaps: Fine-grained eigenvector estimation and inference under heteroscedastic noise
C Cheng, Y Wei, Y Chen
IEEE Transactions on Information Theory 67 (11), 7380-7419, 2021
Cited by 38

Integration and transfer learning of single-cell transcriptomes via cFIT
M Peng, Y Li, B Wamsley, Y Wei, K Roeder
Proceedings of the National Academy of Sciences 118 (10), 2021
Cited by 37

Accelerating convergence of score-based diffusion models, provably
G Li, Y Huang, T Efimov, Y Wei, Y Chi, Y Chen
arXiv preprint arXiv:2403.03852, 2024
Cited by 35

Sample-efficient reinforcement learning is feasible for linearly realizable MDPs with limited revisiting
G Li, Y Chen, Y Chi, Y Gu, Y Wei
Advances in Neural Information Processing Systems 34, 16671-16685, 2021
Cited by 35