Manifold learning: What, how, and why
Manifold learning (ML), also known as nonlinear dimension reduction, is a set of methods to find the low-dimensional structure of data. Dimension reduction for large, high-dimensional …
Understanding how dimension reduction tools work: an empirical approach to deciphering t-SNE, UMAP, TriMAP, and PaCMAP for data visualization
Dimension reduction (DR) techniques such as t-SNE, UMAP, and TriMap have demonstrated impressive visualization performance on many real-world datasets. One …
The art of using t-SNE for single-cell transcriptomics
Single-cell transcriptomics yields ever growing data sets containing RNA expression levels for thousands of genes from up to millions of cells. Common data analysis pipelines include …
openTSNE: a modular Python library for t-SNE dimensionality reduction and embedding
One of the most popular techniques for visualizing large, high-dimensional data sets is t-distributed stochastic neighbor embedding (t-SNE). Recently, several extensions have been …
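A minimal usage sketch of the library (based on openTSNE's documented fit/transform interface; the toy data and parameter choices below are illustrative assumptions, not taken from the paper):

    import numpy as np
    from openTSNE import TSNE

    # Toy stand-in for a real high-dimensional dataset (illustrative only).
    rng = np.random.default_rng(0)
    x_train = rng.normal(size=(1000, 50))
    x_new = rng.normal(size=(100, 50))

    # fit() returns a TSNEEmbedding, which behaves like a NumPy array.
    embedding = TSNE(perplexity=30, metric="euclidean", random_state=42).fit(x_train)

    # One of the extensions the library exposes: projecting previously unseen
    # points into an existing t-SNE map.
    new_points = embedding.transform(x_new)
    print(embedding.shape, new_points.shape)  # (1000, 2) (100, 2)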
Assessing single-cell transcriptomic variability through density-preserving data visualization
Nonlinear data visualization methods, such as t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP), summarize the …
[BOOK][B] Elements of dimensionality reduction and manifold learning
Dimensionality reduction, also known as manifold learning, is an area of machine learning used for extracting informative features from data for better representation of data or …
Attraction-repulsion spectrum in neighbor embeddings
Neighbor embeddings are a family of methods for visualizing complex high-dimensional data sets using kNN graphs. To find the low-dimensional embedding, these algorithms …
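The shared starting point of these algorithms is the kNN graph itself. A minimal sketch of building one (the use of scikit-learn's kneighbors_graph and the parameter values are my assumptions, not the paper's code):

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    # Toy stand-in for a real high-dimensional dataset (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 30))

    # Sparse kNN graph: neighbor-embedding methods place attractive forces on
    # these edges and repulsive forces between non-neighbors.
    knn = kneighbors_graph(X, n_neighbors=15, mode="connectivity", include_self=False)
    knn = knn.maximum(knn.T)  # symmetrize: keep an edge if either endpoint lists the other

    print(knn.shape, knn.nnz)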
SNEkhorn: Dimension reduction with symmetric entropic affinities
Many approaches in machine learning rely on a weighted graph to encode the similarities between samples in a dataset. Entropic affinities (EAs), which are notably used in the …
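For background, the standard entropic-affinity construction from the t-SNE line of work (recalled here as context, not quoted from this abstract): each point $x_i$ is given a Gaussian kernel whose bandwidth $\sigma_i$ is tuned so that the conditional neighbor distribution has a prescribed perplexity $\xi$,

    P_{j|i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)}, \qquad -\sum_{j \neq i} P_{j|i} \log P_{j|i} = \log \xi .

The resulting matrix is asymmetric; finding a symmetric counterpart that keeps the per-point entropy constraints is, per the title, what SNEkhorn targets.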
A probabilistic graph coupling view of dimension reduction
Most popular dimension reduction (DR) methods like t-SNE and UMAP are based on minimizing a cost between input and latent pairwise similarities. Though widely used, these …
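As a concrete instance of such a cost (standard t-SNE, recalled for reference rather than taken from this abstract): with input similarities $p_{ij}$ and latent similarities $q_{ij} \propto (1 + \lVert y_i - y_j \rVert^2)^{-1}$, the embedding $Y = (y_1, \dots, y_n)$ is found by minimizing the Kullback-Leibler divergence

    \mathcal{L}(Y) = \mathrm{KL}(P \,\Vert\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}} .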
From $t$-SNE to UMAP with contrastive learning
Neighbor embedding methods $t$-SNE and UMAP are the de facto standard for visualizing high-dimensional datasets. Motivated from entirely different viewpoints, their loss functions …
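In their standard forms (recalled as background, up to constants and normalization conventions), the two losses being compared are the Kullback-Leibler divergence shown above for $t$-SNE and, for UMAP, an edge-wise binary cross-entropy over kNN-graph weights $w_{ij}$,

    \mathcal{L}_{\mathrm{UMAP}} = -\sum_{i \neq j} \bigl[ w_{ij} \log q_{ij} + (1 - w_{ij}) \log(1 - q_{ij}) \bigr], \qquad q_{ij} = \bigl(1 + a \lVert y_i - y_j \rVert^{2b}\bigr)^{-1} .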