Song Mei
Assistant Professor at UC Berkeley
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
A mean field view of the landscape of two-layers neural networks
S Mei, A Montanari, P Nguyen
Proceedings of the National Academy of Sciences 115, E7665-E7671, 2018
1063 · 2018
The generalization error of random features regression: Precise asymptotics and the double descent curve
S Mei, A Montanari
Communications on Pure and Applied Mathematics 75 (4), 667-766, 2022
747 · 2022
The landscape of empirical risk for non-convex losses
S Mei, Y Bai, A Montanari
The Annals of Statistics 46 (6A), 2747-2774, 2018
387 · 2018
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
S Mei, T Misiakiewicz, A Montanari
Conference on Learning Theory (COLT) 2019, 2019
339 · 2019
Linearized two-layers neural networks in high dimension
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
The Annals of Statistics 49 (2), 1029-1054, 2021
289 · 2021
When do neural networks outperform kernel methods?
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Advances in Neural Information Processing Systems 33, 14820-14830, 2020
227 · 2020
Transformers as statisticians: Provable in-context learning with in-context algorithm selection
Y Bai, F Chen, H Wang, C Xiong, S Mei
Advances in neural information processing systems 36, 57125-57211, 2023
208 · 2023
Limitations of Lazy Training of Two-layers Neural Network
B Ghorbani, S Mei, T Misiakiewicz, A Montanari
Advances in Neural Information Processing Systems, 9108-9118, 2019
168 · 2019
Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
S Mei, T Misiakiewicz, A Montanari
Applied and Computational Harmonic Analysis 59, 3-84, 2022
158 · 2022
The landscape of the spiked tensor model
GB Arous, S Mei, A Montanari, M Nica
Communications on Pure and Applied Mathematics 72 (11), 2282-2330, 2019
139 · 2019
When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently?
Z Song, S Mei, Y Bai
International Conference on Learning Representations (ICLR) 2022, 2021
119 · 2021
Learning with invariances in random features and kernel models
S Mei, T Misiakiewicz, A Montanari
Conference on Learning Theory, 3351-3418, 2021
95 · 2021
Negative preference optimization: From catastrophic collapse to effective unlearning
R Zhang, L Lin, Y Bai, S Mei
The First Conference on Language Modeling (COLM) 2024, 2024
82 · 2024
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality
S Mei, T Misiakiewicz, A Montanari, RI Oliveira
Conference on Learning Theory (COLT) 2017, 2017
81 · 2017
Opportunities and challenges of diffusion models for generative AI
M Chen, S Mei, J Fan, M Wang
National Science Review 11 (12), nwae348, 2024
63* · 2024
How do transformers learn in-context beyond simple functions? a case study on learning with representations
T Guo, W Hu, S Mei, H Wang, C Xiong, S Savarese, Y Bai
International Conference on Learning Representations (ICLR) 2024, 2023
59 · 2023
Transformers as decision makers: Provable in-context reinforcement learning via supervised pretraining
L Lin, Y Bai, S Mei
International Conference on Learning Representations (ICLR) 2024, 2023
59 · 2023
Don’t just blame over-parametrization for over-confidence: Theoretical analysis of calibration in binary classification
Y Bai, S Mei, H Wang, C Xiong
International conference on machine learning, 566-576, 2021
55 · 2021
TAP free energy, spin glasses and variational inference
Z Fan, S Mei, A Montanari
The Annals of Probability 49 (1), 1-45, 2021
45 · 2021
Unified algorithms for RL with Decision-Estimation Coefficients: PAC, reward-free, preference-based learning and beyond
F Chen, S Mei, Y Bai
The Annals of Statistics 53 (1), 426-456, 2025
44* · 2025
Articles 1–20