A survey on deep matrix factorizations

P De Handschutter, N Gillis, X Siebert - Computer Science Review, 2021 - Elsevier
Constrained low-rank matrix approximations have been known for decades as powerful
linear dimensionality reduction techniques able to extract the information contained in large …
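
As a toy illustration of the kind of model the survey covers, the sketch below builds a two-level factorization X ≈ W1 W2 H2 by factoring the data and then refactoring the first-level coefficients. This is only one simple multilayer scheme; the ranks, sizes, and use of scikit-learn's NMF are illustrative assumptions, not the survey's own algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy nonnegative data: 200 samples, 30 features.
rng = np.random.default_rng(0)
X = rng.random((200, 30))

# Level 1: X ~ W1 @ H1 with inner rank 10.
nmf1 = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W1 = nmf1.fit_transform(X)
H1 = nmf1.components_

# Level 2: refactor the coefficients, H1 ~ W2 @ H2 with rank 4,
# which yields the deep model X ~ W1 @ W2 @ H2.
nmf2 = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W2 = nmf2.fit_transform(H1)
H2 = nmf2.components_

print("relative error:", np.linalg.norm(X - W1 @ W2 @ H2) / np.linalg.norm(X))
```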

Data-driven cardiovascular flow modelling: examples and opportunities

A Arzani, STM Dawson - Journal of the Royal Society …, 2021 - royalsocietypublishing.org
High-fidelity blood flow modelling is crucial for enhancing our understanding of
cardiovascular disease. Despite significant advances in computational and experimental …

Scatterbrain: Unifying sparse and low-rank attention

B Chen, T Dao, E Winsor, Z Song… - Advances in Neural …, 2021 - proceedings.neurips.cc
Recent advances in efficient Transformers have exploited either the sparsity or low-rank
properties of attention matrices to reduce the computational and memory bottlenecks of …
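
The snippet below is a toy numpy sketch of the sparse-plus-low-rank idea: approximate a softmax attention matrix with Performer-style positive random features (the low-rank part) and patch the k largest entries per row (the sparse part). It forms the exact attention matrix only to pick the sparse support, which Scatterbrain itself avoids, so treat this purely as an illustration of the decomposition, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r, k = 64, 16, 32, 8          # seq length, head dim, feature rank, sparse/row

Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
S = Q @ K.T / np.sqrt(d)
A = np.exp(S - S.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)   # exact softmax attention matrix (reference)

# Low-rank part: positive random features approximating exp(q.k / sqrt(d)).
W = rng.normal(size=(d, r))
def phi(X):
    return np.exp(X @ W / d**0.25
                  - (X**2).sum(1, keepdims=True) / (2 * np.sqrt(d))) / np.sqrt(r)
A_lr = phi(Q) @ phi(K).T
A_lr /= A_lr.sum(axis=1, keepdims=True)

# Sparse part: correct the k largest entries per row (found here by cheating
# and inspecting A; the real method locates them cheaply without forming A).
rows = np.arange(n)[:, None]
idx = np.argsort(-A, axis=1)[:, :k]
A_sp = np.zeros_like(A)
A_sp[rows, idx] = A[rows, idx] - A_lr[rows, idx]

print("low-rank only  :", np.linalg.norm(A - A_lr))
print("sparse+low-rank:", np.linalg.norm(A - (A_lr + A_sp)))
```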

[BOOK][B] Numerical linear algebra

LN Trefethen, D Bau - 2022 - SIAM
Since the early 1980s, the first author has taught a graduate course in numerical linear
algebra at MIT and Cornell. The alumni of this course, now numbering in the hundreds, have …

[BOOK][B] Numerical methods for least squares problems

Å Björck - 2024 - SIAM
More than 25 years have passed since the first edition of this book was published in
1996. Least squares and least-norm problems have become more significant with every …
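
A minimal numpy illustration of the book's core problem: solving an overdetermined system in the least-squares sense via QR, which is generally preferred over the normal equations for conditioning. The data here are made up.

```python
import numpy as np

# Overdetermined system: minimize ||A x - b||_2 for A with more rows than columns.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=100)

Q, R = np.linalg.qr(A)              # thin QR factorization
x = np.linalg.solve(R, Q.T @ b)     # solve the small 3x3 triangular system
print(x)                            # close to [1, -2, 0.5]
```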

Pixelated butterfly: Simple and efficient sparse training for neural network models

T Dao, B Chen, K Liang, J Yang, Z Song… - arXiv preprint arXiv …, 2021 - arxiv.org
Overparameterized neural networks generalize well but are expensive to train. Ideally, one
would like to reduce their computational cost while retaining their generalization benefits …
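
The butterfly sparsity pattern this line of work builds on can be sketched in a few lines of numpy: log2(n) factors, each with only 2n nonzeros, whose product is a dense n×n matrix. The parameterization below is a generic illustration of butterfly structure, not the paper's exact construction.

```python
import numpy as np

def butterfly_factor(n, stride, rng):
    # One factor: 2x2 mixing of positions j and j+stride inside each block
    # of size 2*stride, so the factor carries only 2*n nonzeros.
    B = np.zeros((n, n))
    for start in range(0, n, 2 * stride):
        for j in range(start, start + stride):
            B[j, j], B[j, j + stride] = rng.normal(), rng.normal()
            B[j + stride, j], B[j + stride, j + stride] = rng.normal(), rng.normal()
    return B

n = 8
rng = np.random.default_rng(0)
factors = [butterfly_factor(n, 2**i, rng) for i in range(3)]  # log2(8) = 3 factors
M = np.linalg.multi_dot(factors)   # dense 8x8 matrix from O(n log n) parameters
print(sum(int((F != 0).sum()) for F in factors), "nonzeros vs", n * n, "dense entries")
```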

The why and how of nonnegative matrix factorization

N Gillis - … , optimization, kernels, and support vector machines, 2014 - books.google.com
Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of
high-dimensional data as it automatically extracts sparse and meaningful features from a set …
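
A minimal numpy sketch of the classic Lee–Seung multiplicative updates for Frobenius-norm NMF, one standard way the "how" of NMF is realized in practice; the rank and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 40))           # nonnegative data matrix
r, eps = 5, 1e-9                    # inner rank; eps guards against division by zero
W = rng.random((100, r))
H = rng.random((r, 40))

# Lee-Seung multiplicative updates for min ||V - W H||_F^2 s.t. W, H >= 0:
# each update keeps the factors nonnegative and never increases the objective.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```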

JFB: Jacobian-free backpropagation for implicit networks

SW Fung, H Heaton, Q Li, D McKenzie… - Proceedings of the …, 2022 - ojs.aaai.org
A promising trend in deep learning replaces traditional feedforward networks with implicit
networks. Unlike traditional networks, implicit networks solve a fixed point equation to …
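
A minimal PyTorch sketch of the Jacobian-free idea the abstract describes: solve the fixed-point equation without tracking gradients, then apply the layer once more with autograd on, so backpropagation skips the implicit Jacobian solve. The toy map is assumed to be a contraction and the architecture is invented for illustration.

```python
import torch

class ImplicitLayer(torch.nn.Module):
    """Toy implicit layer: output is the fixed point z* = f(z*, x)."""

    def __init__(self, dim):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim)

    def f(self, z, x):
        # Assumed to be a contraction in z (holds if the weights stay small).
        return torch.tanh(self.lin(z) + x)

    def forward(self, x, iters=50):
        z = torch.zeros_like(x)
        with torch.no_grad():           # fixed-point solve, no graph is built
            for _ in range(iters):
                z = self.f(z, x)
        return self.f(z, x)             # ONE differentiable step: the JFB trick

layer = ImplicitLayer(8)
x = torch.randn(4, 8)
loss = layer(x).pow(2).mean()
loss.backward()                         # grads bypass the implicit Jacobian solve
print(layer.lin.weight.grad.norm())
```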

Missing data imputation using optimal transport

B Muzellec, J Josse, C Boyer… - … Conference on Machine …, 2020 - proceedings.mlr.press
Missing data is a crucial issue when applying machine learning algorithms to real-world
datasets. Starting from the simple assumption that two batches extracted randomly from the …
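
The sketch below illustrates that batch-matching idea in PyTorch: treat the missing entries as learnable parameters and push random batches of the completed data toward each other under an entropic OT cost. It uses plain unrolled Sinkhorn rather than the debiased Sinkhorn divergence of the paper, and the batch sizes, ε, and toy data are arbitrary.

```python
import torch

def sinkhorn_cost(x, y, eps=0.1, iters=50):
    # Entropic OT cost between two batches with uniform weights; the
    # unrolled Sinkhorn iterations remain differentiable for autograd.
    C = torch.cdist(x, y) ** 2
    C = C / C.mean().detach()                     # rescale for numerical stability
    K = torch.exp(-C / eps)
    a = torch.full((x.shape[0],), 1.0 / x.shape[0])
    b = torch.full((y.shape[0],), 1.0 / y.shape[0])
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]               # transport plan
    return (P * C).sum()

torch.manual_seed(0)
X = torch.randn(200, 4)
mask = torch.rand_like(X) < 0.2                   # 20% of entries go missing
X0 = torch.where(mask, torch.zeros_like(X), X)    # observed values, 0 at holes

fill = torch.zeros_like(X0, requires_grad=True)   # learnable imputations
opt = torch.optim.Adam([fill], lr=0.05)
for step in range(200):
    Xhat = torch.where(mask, fill, X0)
    i, j = torch.randperm(200)[:64], torch.randperm(200)[:64]
    loss = sinkhorn_cost(Xhat[i], Xhat[j])        # batches should match in law
    opt.zero_grad(); loss.backward(); opt.step()
```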

Desiderata for representation learning: A causal perspective

Y Wang, MI Jordan - arXiv preprint arXiv:2109.03795, 2021 - arxiv.org
Representation learning constructs low-dimensional representations to summarize essential
features of high-dimensional data. This learning problem is often approached by describing …