Sketching data sets for large-scale learning: Keeping only what you need
Big data can be a blessing: with very large training data sets it becomes possible to perform
complex learning tasks with unprecedented accuracy. Yet, this improved performance …
Compressive statistical learning with random feature moments
We describe a general framework—compressive statistical learning—for resource-efficient
large-scale learning: the training collection is compressed in one pass into a …
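The one-pass compression this abstract mentions can be illustrated with a minimal sketch based on random Fourier feature moments. All names, sizes, and the frequency scale below are toy assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: n points in d dimensions.
n, d, m = 10_000, 2, 64
X = rng.normal(size=(n, d))

# m random frequency vectors (the scale here is an arbitrary assumption).
W = rng.normal(scale=1.0, size=(m, d))

# One-pass sketch: the empirical mean of the random features
# phi(x) = exp(i <w_j, x>), i.e. a vector of generalized moments.
sketch = np.exp(1j * X @ W.T).mean(axis=0)   # shape (m,)
```

Learning then proceeds from `sketch` alone, whose size is independent of the number of training points.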
Estimation of off-the-grid sparse spikes with over-parametrized projected gradient descent: theory and application
In this article, we study the problem of recovering sparse spikes with over-parametrized
projected descent. We first provide a theoretical study of approximate recovery with our …
A sketching framework for reduced data transfer in photon counting lidar
Single-photon lidar has become a prominent tool for depth imaging in recent years. At the
core of the technique, the depth of a target is measured by constructing a histogram of time …
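The histogram-of-arrival-times step this abstract refers to can be sketched as follows. Every number here (depth, jitter, bin count, photon counts) is a made-up toy value, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

c = 3e8                        # speed of light, m/s
true_depth = 5.0               # hypothetical target depth, metres
tof = 2 * true_depth / c       # round-trip time of flight, seconds

# Simulated photon timestamps: signal returns with timing jitter,
# plus uniformly distributed background counts.
arrivals = tof + rng.normal(scale=1e-10, size=5000)
arrivals = np.concatenate([arrivals, rng.uniform(0, 1e-7, size=500)])

# Classical pipeline: histogram the timestamps, take the peak bin as depth.
counts, edges = np.histogram(arrivals, bins=1024, range=(0, 1e-7))
k = np.argmax(counts)
peak_time = 0.5 * (edges[k] + edges[k + 1])
depth_est = peak_time * c / 2
```

Sketching approaches replace the full `counts` histogram with a few summary statistics, which is what reduces the data transfer.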
Compressive learning for patch-based image denoising
The expected patch log-likelihood algorithm (EPLL) and its extensions have shown good
performance for image denoising. The prior model used by EPLL is usually a Gaussian …
Controlling Wasserstein distances by Kernel norms with application to Compressive Statistical Learning
Comparing probability distributions is at the crux of many machine learning algorithms.
Maximum Mean Discrepancies (MMD) and Wasserstein distances are two classes of …
Mean Nyström embeddings for adaptive compressive learning
Compressive learning is an approach to efficient large-scale learning based on sketching an
entire dataset to a single mean embedding (the sketch), i.e. a vector of generalized moments …
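A Nyström-style mean embedding, as opposed to the random-feature sketches above, builds the features from kernel evaluations against a few landmark points. The sketch below is a minimal illustration under assumed toy sizes and a unit-bandwidth Gaussian kernel; none of these choices come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dataset and a small set of landmarks drawn from it.
X = rng.normal(size=(5000, 3))
landmarks = X[rng.choice(len(X), size=16, replace=False)]

# Gaussian kernel evaluations against the landmarks act as the feature map.
sq_dists = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
features = np.exp(-0.5 * sq_dists)          # shape (n, 16)

# The sketch is the dataset's mean feature vector (its mean embedding).
mean_embedding = features.mean(axis=0)      # shape (16,)
```

Making the embedding adaptive amounts to choosing the landmarks in a data-dependent way rather than uniformly at random.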
Blind inverse problems with isolated spikes
V Debarnot, P Weiss - Information and Inference: A Journal of …, 2023 - academic.oup.com
Assume that an unknown integral operator living in some known subspace is observed
indirectly, by evaluating its action on a discrete measure containing a few isolated Dirac …
Approximation speed of quantized versus unquantized ReLU neural networks and beyond
We deal with two complementary questions about approximation properties of ReLU
networks. First, we study how the uniform quantization of ReLU networks with real-valued …
Compressive learning of deep regularization for denoising
Solving ill-posed inverse problems can be done accurately if a regularizer well adapted to
the nature of the data is available. Such a regularizer can be systematically linked with the …