Handling incomplete heterogeneous data using VAEs
Variational autoencoders (VAEs), as well as other generative models, have been shown to
be efficient and accurate for capturing the latent structure of vast amounts of complex high …
Modeling statistical dependencies in multi-region spike train data
Neural computations underlying cognition and behavior rely on the coordination of neural
activity across multiple brain areas. Understanding how brain areas interact to process …
Biologically informed deep learning to query gene programs in single-cell atlases
The increasing availability of large-scale single-cell atlases has enabled the detailed
description of cell states. In parallel, advances in deep learning allow rapid analysis of newly …
Interpretable factor models of single-cell RNA-seq via variational autoencoders
Motivation: Single-cell RNA-seq makes possible the investigation of variability in gene
expression among cells, and dependence of variation on cell type. Statistical inference …
Identifiable deep generative models via sparse decoding
We develop the sparse VAE for unsupervised representation learning on high-dimensional
data. The sparse VAE learns a set of latent factors (representations) which summarize the …
InfoGAN-CR and ModelCentrality: Self-supervised model training and selection for disentangling GANs
Disentangled generative models map a latent code vector to a target space, while enforcing
that a subset of the learned latent codes are interpretable and associated with distinct …
Disentangled representation learning for cross-modal biometric matching
Cross-modal biometric matching (CMBM) aims to determine the corresponding voice from a
face, or identify the corresponding face from a voice. Recently, many CMBM methods have …
Local Disentanglement in Variational Auto-Encoders Using Jacobian Regularization
T Rhodes, D Lee - Advances in Neural Information …, 2021 - proceedings.neurips.cc
There have been many recent advances in representation learning; however, unsupervised
representation learning can still struggle with model identification issues related to rotations …
SepVAE: a contrastive VAE to separate pathological patterns from healthy ones
Contrastive Analysis VAEs (CA-VAEs) are a family of variational autoencoders (VAEs) that
aim at separating the common factors of variation between a background dataset (BG) (i.e. …
End-to-end training of deep probabilistic CCA on paired biomedical observations
Medical pathology images are visually evaluated by experts for disease diagnosis, but the
connection between image features and the state of the cells in an image is typically …