Yatin Dandi
Verified email at iitk.ac.in

Title · Cited by · Year
How two-layer neural networks learn, one (giant) step at a time
Y Dandi, F Krzakala, B Loureiro, L Pesce, L Stephan
arXiv preprint arXiv:2305.18270, 2023
Cited by 47 · 2023
Implicit gradient alignment in distributed and federated learning
Y Dandi, L Barba, M Jaggi
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6454-6462, 2022
Cited by 28 · 2022
Universality laws for Gaussian mixtures in generalized linear models
Y Dandi, L Stephan, F Krzakala, B Loureiro, L Zdeborová
Advances in Neural Information Processing Systems 36, 2024
Cited by 26 · 2024
The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents
Y Dandi, E Troiani, L Arnaboldi, L Pesce, L Zdeborová, F Krzakala
arXiv preprint arXiv:2402.03220, 2024
Cited by 26 · 2024
Sampling with flows, diffusion, and autoregressive neural networks from a spin-glass perspective
D Ghio, Y Dandi, F Krzakala, L Zdeborová
Proceedings of the National Academy of Sciences 121 (27), e2311810121, 2024
Cited by 24 · 2024
Asymptotics of feature learning in two-layer networks after one gradient-step
H Cui, L Pesce, Y Dandi, F Krzakala, YM Lu, L Zdeborová, B Loureiro
arXiv preprint arXiv:2402.04980, 2024
Cited by 21 · 2024
Data-heterogeneity-aware mixing for decentralized learning
Y Dandi, A Koloskova, M Jaggi, SU Stich
arXiv preprint arXiv:2204.06477, 2022
Cited by 21 · 2022
Repetita iuvant: Data repetition allows SGD to learn high-dimensional multi-index functions
L Arnaboldi, Y Dandi, F Krzakala, L Pesce, L Stephan
arXiv preprint arXiv:2405.15459, 2024
Cited by 12 · 2024
Jointly trained image and video generation using residual vectors
Y Dandi, A Das, S Singhal, V Namboodiri, P Rai
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2020
Cited by 9 · 2020
Fundamental limits of weak learnability in high-dimensional multi-index models
E Troiani, Y Dandi, L Defilippis, L Zdeborová, B Loureiro, F Krzakala
arXiv preprint arXiv:2405.15480, 2024
Cited by 7 · 2024
Maximally-stable local optima in random graphs and spin glasses: Phase transitions and universality
Y Dandi, D Gamarnik, L Zdeborová
arXiv preprint arXiv:2305.03591, 2023
Cited by 6 · 2023
Generalized Adversarially Learned Inference
Y Dandi, H Bharadhwaj, A Kumar, P Rai
AAAI, 2021
Cited by 6 · 2020
Implicit gradient alignment in distributed and federated learning
L Barba, M Jaggi, Y Dandi
AAAI Conference on Artificial Intelligence (AAAI-22), 2021
Cited by 5 · 2021
Online Learning and Information Exponents: On The Importance of Batch size, and Time/Complexity Tradeoffs
L Arnaboldi, Y Dandi, F Krzakala, B Loureiro, L Pesce, L Stephan
arXiv preprint arXiv:2406.02157, 2024
Cited by 4 · 2024
Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Y Dandi, A Jacot
arXiv preprint arXiv:2111.03972, 2021
Cited by 3 · 2021
A random matrix theory perspective on the spectrum of learned features and asymptotic generalization capabilities
Y Dandi, L Pesce, H Cui, F Krzakala, YM Lu, B Loureiro
arXiv preprint arXiv:2410.18938, 2024
Cited by 2 · 2024
Model-Agnostic Learning to Meta-Learn
A Devos, Y Dandi
NeurIPS pre-registration workshop, 2020
Cited by 2 · 2020
Learning from setbacks: the impact of adversarial initialization on generalization performance
K Ravichandran, Y Dandi, S Karp, F Mignacco
NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning, 2023
Cited by 1 · 2023
Optimal Spectral Transitions in High-Dimensional Multi-Index Models
L Defilippis, Y Dandi, P Mergny, F Krzakala, B Loureiro
arXiv preprint arXiv:2502.02545, 2025
2025
Fundamental limits of learning in sequence multi-index models and deep attention networks: High-dimensional asymptotics and sharp thresholds
E Troiani, H Cui, Y Dandi, F Krzakala, L Zdeborová
arXiv preprint arXiv:2502.00901, 2025
2025
Articles 1–20