Arthur Jacot
Assistant Professor, Courant Institute of Mathematical Sciences, NYU
Verified email at nyu.edu - Homepage
Title | Cited by | Year
Neural tangent kernel: Convergence and generalization in neural networks
A Jacot, F Gabriel, C Hongler
Advances in neural information processing systems 31, 2018
3895 | 2018
Scaling description of generalization with number of parameters in deep learning
M Geiger, A Jacot, S Spigler, F Gabriel, L Sagun, S d’Ascoli, G Biroli, ...
Journal of Statistical Mechanics: Theory and Experiment 2020 (2), 023401, 2020
236 | 2020
Disentangling feature and lazy training in deep neural networks
M Geiger, S Spigler, A Jacot, M Wyart
Journal of Statistical Mechanics: Theory and Experiment 2020 (11), 113301, 2020
182 | 2020
Implicit regularization of random feature models
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
International Conference on Machine Learning, 4631-4640, 2020
107 | 2020
Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances
B Simsek, F Ged, A Jacot, F Spadaro, C Hongler, W Gerstner, J Brea
International Conference on Machine Learning, 9722-9732, 2021
103 | 2021
Kernel alignment risk estimator: Risk prediction from training data
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
Advances in neural information processing systems 33, 15568-15578, 2020
71 | 2020
Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity
A Jacot, F Ged, B Şimşek, C Hongler, F Gabriel
arXiv preprint arXiv:2106.15933, 2021
69* | 2021
Implicit bias of large depth networks: a notion of rank for nonlinear functions
A Jacot
arXiv preprint arXiv:2209.15055, 2022
37 | 2022
The asymptotic spectrum of the Hessian of DNN throughout training
A Jacot, F Gabriel, C Hongler
arXiv preprint arXiv:1910.02875, 2019
32 | 2019
Freeze and Chaos: NTK views on DNN normalization, checkerboard and boundary artifacts
A Jacot, F Gabriel, F Ged, C Hongler
Mathematical and Scientific Machine Learning, 257-270, 2022
26* | 2022
Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity
A Jacot, E Golikov, C Hongler, F Gabriel
Advances in Neural Information Processing Systems 35, 6763-6774, 2022
20 | 2022
Implicit bias of SGD in L2-regularized linear DNNs: One-way jumps from high to low rank
Z Wang, A Jacot
arXiv preprint arXiv:2305.16038, 2023
17 | 2023
Bottleneck structure in learned features: Low-dimension vs regularity tradeoff
A Jacot
Advances in Neural Information Processing Systems 36, 23607-23629, 2023
15 | 2023
Which frequencies do CNNs need? Emergent bottleneck structure in feature learning
Y Wen, A Jacot
arXiv preprint arXiv:2402.08010, 2024
7 | 2024
DNN-based topology optimisation: Spatial invariance and neural tangent kernel
B Dupuis, A Jacot
Advances in Neural Information Processing Systems 34, 27659-27669, 2021
7 | 2021
Order and chaos: NTK views on DNN normalization, checkerboard and boundary artifacts
A Jacot, F Gabriel, F Ged, C Hongler
arXiv preprint arXiv:1907.05715, 2019
7 | 2019
Mixed dynamics in linear networks: Unifying the lazy and active regimes
Z Tu, S Aranguri, A Jacot
arXiv preprint arXiv:2405.17580, 2024
6 | 2024
Shallow diffusion networks provably learn hidden low-dimensional structure
NM Boffi, A Jacot, S Tu, I Ziemann
arXiv preprint arXiv:2410.11275, 2024
3 | 2024
Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Y Dandi, A Jacot
arXiv preprint arXiv:2111.03972, 2021
3 | 2021
Wide neural networks trained with weight decay provably exhibit neural collapse
A Jacot, P Súkeník, Z Wang, M Mondelli
arXiv preprint arXiv:2410.04887, 2024
2 | 2024
Articles 1–20