TRAK: Attributing model behavior at scale

SM Park, K Georgiev, A Ilyas, G Leclerc… - arXiv preprint arXiv …, 2023 - arxiv.org
The goal of data attribution is to trace model predictions back to training data. Despite a long
line of work towards this goal, existing approaches to data attribution tend to force users to …

Sharp analysis of low-rank kernel matrix approximations

F Bach - Conference on learning theory, 2013 - proceedings.mlr.press
We consider supervised learning problems within the positive-definite kernel framework,
such as kernel ridge regression, kernel logistic regression or the support vector machine …
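Bach's analysis concerns column-sampling (Nyström-type) low-rank approximations of the kernel matrix. A minimal sketch of the basic construction, assuming uniform column sampling and an RBF kernel; the function name and sampling scheme here are illustrative, not the paper's sharper leverage-based variant:

```python
import numpy as np

def nystrom(K, m, seed=0):
    """Rank-m Nystrom approximation of a PSD kernel matrix K,
    built from m uniformly sampled columns: K ~ C W^+ C^T."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = K[:, idx]                   # n x m: sampled columns of K
    W = K[np.ix_(idx, idx)]         # m x m: intersection block
    return C @ np.linalg.pinv(W) @ C.T

# Toy check on a small RBF kernel matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
K_hat = nystrom(K, m=15)
```

Replacing the full n x n kernel with this rank-m surrogate reduces the cost of kernel ridge regression from O(n^3) to roughly O(n m^2).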

Tensor-reduced atomic density representations

JP Darby, DP Kovács, I Batatia, MA Caro, GLW Hart… - Physical Review Letters, 2023 - APS
Density-based representations of atomic environments that are invariant under Euclidean
symmetries have become a widely used tool in the machine learning of interatomic …

SoK: A review of differentially private linear models for high-dimensional data

A Khanna, E Raff, N Inkawhich - 2024 IEEE Conference on …, 2024 - ieeexplore.ieee.org
Linear models are ubiquitous in data science, but are particularly prone to overfitting and
data memorization in high dimensions. To guarantee the privacy of training data, differential …

On the generalization of representations in reinforcement learning

CL Lan, S Tu, A Oberman, R Agarwal… - arXiv preprint arXiv …, 2022 - arxiv.org
In reinforcement learning, state representations are used to tractably deal with large problem
spaces. State representations serve both to approximate the value function with few …

Task-aware compressed sensing with generative adversarial networks

M Kabkab, P Samangouei, R Chellappa - Proceedings of the AAAI …, 2018 - ojs.aaai.org
In recent years, neural network approaches have been widely adopted for machine learning
tasks, with applications in computer vision. More recently, unsupervised generative models …

Compressed learning: A deep neural network approach

A Adler, M Elad, M Zibulevsky - arXiv preprint arXiv:1610.09615, 2016 - arxiv.org
Compressed Learning (CL) is a joint signal processing and machine learning framework for
inference from a signal, using a small number of measurements obtained by linear …

Efficient kernel clustering using random Fourier features

R Chitta, R Jin, AK Jain - 2012 IEEE 12th international …, 2012 - ieeexplore.ieee.org
Kernel clustering algorithms have the ability to capture the non-linear structure inherent in
many real-world data sets and thereby achieve better clustering performance than …
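The random Fourier features underlying this approach map inputs to an explicit low-dimensional feature space whose inner products approximate an RBF kernel (Rahimi and Recht), so ordinary k-means on the features approximates kernel clustering. A minimal sketch, assuming an RBF kernel; the function name and parameterization are illustrative:

```python
import numpy as np

def rff_features(X, n_features=100, gamma=1.0, seed=0):
    """Random Fourier features z(x) such that z(x) . z(y) approximates
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's Fourier transform (Gaussian).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Inner products of the features approximate the exact kernel matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Z = rff_features(X, n_features=2000, gamma=0.5)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
```

Clustering the rows of `Z` with plain k-means then costs linear rather than quadratic time in the number of points.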

Sketching for large-scale learning of mixture models

N Keriven, A Bourrier, R Gribonval… - … and Inference: A …, 2018 - academic.oup.com
Learning parameters from voluminous data can be prohibitive in terms of memory and
computational requirements. We propose a 'compressive learning' framework, where we …

Efficient private empirical risk minimization for high-dimensional learning

SP Kasiviswanathan, H Jin - International Conference on …, 2016 - proceedings.mlr.press
Dimensionality reduction is a popular approach for dealing with high dimensional data that
leads to substantial computational savings. Random projections are a simple and effective …
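The random projections the snippet refers to reduce dimensionality while approximately preserving pairwise distances (the Johnson-Lindenstrauss property). A minimal sketch of a Gaussian random projection, with an illustrative function name not taken from the paper:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Project rows of X from d to k dimensions with a Gaussian map
    scaled by 1/sqrt(k); pairwise distances are preserved up to a
    (1 +/- eps) factor with high probability (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ P

# Distances in the projected space track distances in the original space.
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 1000))
Y = random_project(X, k=500)
```

In the private ERM setting, learning in the projected space also shrinks the dimension dependence of the noise that differential privacy requires.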