Arthur Jacot
Assistant Professor, Courant Institute of Mathematical Sciences, NYU
Verified email at nyu.edu - Homepage
Title · Cited by · Year
Neural tangent kernel: Convergence and generalization in neural networks
A Jacot, F Gabriel, C Hongler
Advances in neural information processing systems 31, 2018
Cited by 3821 · 2018
Scaling description of generalization with number of parameters in deep learning
M Geiger, A Jacot, S Spigler, F Gabriel, L Sagun, S d’Ascoli, G Biroli, ...
Journal of Statistical Mechanics: Theory and Experiment 2020 (2), 023401, 2020
Cited by 237 · 2020
Disentangling feature and lazy training in deep neural networks
M Geiger, S Spigler, A Jacot, M Wyart
Journal of Statistical Mechanics: Theory and Experiment 2020 (11), 113301, 2020
Cited by 169* · 2020
Implicit regularization of random feature models
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
International Conference on Machine Learning, 4631-4640, 2020
Cited by 106 · 2020
Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances
B Simsek, F Ged, A Jacot, F Spadaro, C Hongler, W Gerstner, J Brea
International Conference on Machine Learning, 9722-9732, 2021
Cited by 97 · 2021
Kernel alignment risk estimator: Risk prediction from training data
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
Advances in neural information processing systems 33, 15568-15578, 2020
Cited by 68 · 2020
Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity
A Jacot, F Ged, B Şimşek, C Hongler, F Gabriel
arXiv preprint arXiv:2106.15933, 2021
Cited by 67* · 2021
Implicit bias of large depth networks: a notion of rank for nonlinear functions
A Jacot
arXiv preprint arXiv:2209.15055, 2022
Cited by 33 · 2022
The asymptotic spectrum of the Hessian of DNN throughout training
A Jacot, F Gabriel, C Hongler
arXiv preprint arXiv:1910.02875, 2019
Cited by 32 · 2019
Freeze and chaos: NTK views on DNN normalization, checkerboard and boundary artifacts
A Jacot, F Gabriel, F Ged, C Hongler
Mathematical and Scientific Machine Learning, 257-270, 2022
Cited by 26* · 2022
Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity
A Jacot, E Golikov, C Hongler, F Gabriel
Advances in Neural Information Processing Systems 35, 6763-6774, 2022
Cited by 17 · 2022
Bottleneck structure in learned features: Low-dimension vs regularity tradeoff
A Jacot
Advances in Neural Information Processing Systems 36, 23607-23629, 2023
Cited by 14 · 2023
Implicit bias of SGD in ℓ2-regularized linear DNNs: One-way jumps from high to low rank
Z Wang, A Jacot
arXiv preprint arXiv:2305.16038, 2023
Cited by 13 · 2023
DNN-based topology optimisation: Spatial invariance and neural tangent kernel
B Dupuis, A Jacot
Advances in Neural Information Processing Systems 34, 27659-27669, 2021
Cited by 7 · 2021
Order and chaos: NTK views on DNN normalization, checkerboard and boundary artifacts
A Jacot, F Gabriel, F Ged, C Hongler
arXiv preprint arXiv:1907.05715, 2019
Cited by 7 · 2019
Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning
Y Wen, A Jacot
arXiv preprint arXiv:2402.08010, 2024
Cited by 5 · 2024
Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes
Z Tu, S Aranguri, A Jacot
arXiv preprint arXiv:2405.17580, 2024
Cited by 4 · 2024
Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Y Dandi, A Jacot
arXiv preprint arXiv:2111.03972, 2021
Cited by 3 · 2021
Wide neural networks trained with weight decay provably exhibit neural collapse
A Jacot, P Súkeník, Z Wang, M Mondelli
arXiv preprint arXiv:2410.04887, 2024
Cited by 2 · 2024
Shallow diffusion networks provably learn hidden low-dimensional structure
NM Boffi, A Jacot, S Tu, I Ziemann
arXiv preprint arXiv:2410.11275, 2024
Cited by 1 · 2024
Articles 1–20