Learning curves of generic features maps for realistic datasets with a teacher-student model
Teacher-student models provide a framework in which the typical-case performance of high-
dimensional supervised learning can be described in closed form. The assumptions of …
Random features for kernel approximation: A survey on algorithms, theory, and beyond
The class of random features is one of the most popular techniques to speed up kernel
methods in large-scale problems. Related works have been recognized by the NeurIPS Test …
Machine learning and the implementable efficient frontier
We propose that investment strategies should be evaluated based on their net-of-trading-cost
return for each level of risk, which we term the “implementable efficient frontier.” While …
AutoML-GWL: Automated machine learning model for the prediction of groundwater level
Predicting groundwater levels is pivotal in curbing overexploitation and ensuring effective
water resource governance. However, groundwater level prediction is intricate, driven by …
Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime
In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design.
Exponents for the decay of the excess generalization error of KRR have been reported in …
Multiple descent: Design your own generalization curve
This paper explores the generalization loss of linear regression in variably parameterized
families of models, both under-parameterized and over-parameterized. We show that the …
Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression
T Misiakiewicz - arXiv preprint arXiv:2204.10425, 2022 - arxiv.org
We study the spectrum of inner-product kernel matrices, i.e., $n \times n$ matrices with
entries $h(\langle\textbf{x}_i, \textbf{x}_j\rangle/d)$ where the $(\textbf{x}_i)_{i\leq n}$ …
A theoretical analysis of the test error of finite-rank kernel ridge regression
Existing statistical learning guarantees for general kernel regressors often yield loose
bounds when used with finite-rank kernels. Yet, finite-rank kernels naturally appear in a …
Benign overfitting in deep neural networks under lazy training
This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU
activation functions and proves that when the data distribution is well-separated, DNNs can …
Six lectures on linearized neural networks
In these six lectures, we examine what can be learnt about the behavior of multi-layer neural
networks from the analysis of linear models. We first recall the correspondence between …