Sketching data sets for large-scale learning: Keeping only what you need

R Gribonval, A Chatalic, N Keriven… - IEEE Signal …, 2021 - ieeexplore.ieee.org
Big data can be a blessing: with very large training data sets it becomes possible to perform
complex learning tasks with unprecedented accuracy. Yet, this improved performance …

Compressive statistical learning with random feature moments

R Gribonval, G Blanchard, N Keriven… - … Statistics and Learning, 2021 - ems.press
We describe a general framework—compressive statistical learning—for resource-efficient large-scale learning: the training collection is compressed in one pass into a …

Estimation of off-the-grid sparse spikes with over-parametrized projected gradient descent: theory and application

PJ Bénard, Y Traonmilin, JF Aujol, E Soubies - Inverse Problems, 2024 - iopscience.iop.org
In this article, we study the problem of recovering sparse spikes with over-parametrized
projected descent. We first provide a theoretical study of approximate recovery with our …

A sketching framework for reduced data transfer in photon counting lidar

MP Sheehan, J Tachella… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Single-photon lidar has become a prominent tool for depth imaging in recent years. At the
core of the technique, the depth of a target is measured by constructing a histogram of time …

Compressive learning for patch-based image denoising

H Shi, Y Traonmilin, JF Aujol - SIAM Journal on Imaging Sciences, 2022 - SIAM
The expected patch log-likelihood algorithm (EPLL) and its extensions have shown good performance for image denoising. The prior model used by EPLL is usually a Gaussian …

Controlling Wasserstein distances by Kernel norms with application to Compressive Statistical Learning

T Vayer, R Gribonval - Journal of Machine Learning Research, 2023 - jmlr.org
Comparing probability distributions is at the crux of many machine learning algorithms.
Maximum Mean Discrepancies (MMD) and Wasserstein distances are two classes of …

Mean Nyström embeddings for adaptive compressive learning

A Chatalic, L Carratino, E De Vito… - International …, 2022 - proceedings.mlr.press
Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments …

Blind inverse problems with isolated spikes

V Debarnot, P Weiss - Information and Inference: A Journal of …, 2023 - academic.oup.com
Assume that an unknown integral operator living in some known subspace is observed
indirectly, by evaluating its action on a discrete measure containing a few isolated Dirac …

Approximation speed of quantized versus unquantized ReLU neural networks and beyond

A Gonon, N Brisebarre, R Gribonval… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
We deal with two complementary questions about approximation properties of ReLU
networks. First, we study how the uniform quantization of ReLU networks with real-valued …

Compressive learning of deep regularization for denoising

H Shi, Y Traonmilin, JF Aujol - … Conference on Scale Space and Variational …, 2023 - Springer
Solving ill-posed inverse problems can be done accurately if a regularizer well adapted to the nature of the data is available. Such a regularizer can be systematically linked with the …