Mansheej Paul
Research Scientist, Databricks
Verified email at databricks.com
Title
Cited by
Year
Deep learning on a data diet: Finding important examples early in training
M Paul, S Ganguli, GK Dziugaite
Advances in neural information processing systems 34, 20596-20607, 2021
439 · 2021
Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel
S Fort, GK Dziugaite, M Paul, S Kharaghani, DM Roy, S Ganguli
Advances in Neural Information Processing Systems 33, 5850-5861, 2020
199 · 2020
LoRA learns less and forgets less
D Biderman, J Portes, JJG Ortiz, M Paul, P Greengard, C Jennings, ...
arXiv preprint arXiv:2405.09673, 2024
90 · 2024
Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression
A Raventós, M Paul, F Chen, S Ganguli
Advances in neural information processing systems 36, 14228-14246, 2023
70 · 2023
Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?
M Paul, F Chen, BW Larsen, J Frankle, S Ganguli, GK Dziugaite
arXiv preprint arXiv:2210.03044, 2022
44 · 2022
Lottery tickets on a data diet: Finding initializations with sparse trainable networks
M Paul, B Larsen, S Ganguli, J Frankle, GK Dziugaite
Advances in Neural Information Processing Systems 35, 18916-18928, 2022
19 · 2022
Critique-out-loud reward models
Z Ankner, M Paul, B Cui, JD Chang, P Ammanabrolu
arXiv preprint arXiv:2408.11791, 2024
13 · 2024
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
Z Ankner, C Blakeney, K Sreenivasan, M Marion, ML Leavitt, M Paul
arXiv preprint arXiv:2405.20541, 2024
13 · 2024
Scaling laws for precision
T Kumar, Z Ankner, BF Spector, B Bordelon, N Muennighoff, M Paul, ...
arXiv preprint arXiv:2411.04330, 2024
12 · 2024
Does your data spark joy? Performance gains from domain upsampling at the end of training
C Blakeney, M Paul, BW Larsen, S Owen, J Frankle
arXiv preprint arXiv:2406.03476, 2024
7 · 2024
The effects of pretraining task diversity on in-context learning of ridge regression
A Raventos, M Paul, F Chen, S Ganguli
ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation …, 2023
5 · 2023
Predicting Task Forgetting in Large Language Models
A Kleiman, J Frankle, SM Kakade, M Paul
2 · 2023
Unmasking the Lottery Ticket Hypothesis: Efficient Adaptive Pruning for Finding Winning Tickets
M Paul, F Chen, BW Larsen, J Frankle, S Ganguli, GK Dziugaite
Has it Trained Yet? NeurIPS 2022 Workshop
2
Perplexed by Perplexity: Perplexity-Based Pruning with Small Reference Models
Z Ankner, C Blakeney, K Sreenivasan, M Marion, ML Leavitt, M Paul
ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation …
1
Pre-Training on a Data Diet: Identifying Sufficient Examples for Early Training
M Paul, BW Larsen, S Ganguli, J Frankle, GK Dziugaite
First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward at …
1
µnit Scaling: Simple and Scalable FP8 LLM Training
S Narayan, A Gupta, M Paul, D Blalock
arXiv preprint arXiv:2502.05967, 2025
2025
Soup to go: mitigating forgetting during continual learning with model averaging
A Kleiman, GK Dziugaite, J Frankle, S Kakade, M Paul
arXiv preprint arXiv:2501.05559, 2025
2025
Deep Learning on a Diet: An Error Landscape Perspective on Parameter and Data Efficiency in Deep Learning
M Paul
Stanford University, 2023
2023
Articles 1–18