CR-VAE: Contrastive Regularization on Variational Autoencoders for Preventing Posterior Collapse

F Lygerakis, E Rueckert - arXiv preprint arXiv:2309.02968, 2023 - arxiv.org
The Variational Autoencoder (VAE) is known to suffer from the phenomenon of posterior
collapse, where the latent representations generated by the model become …

Interpretable Sentence Representation with Variational Autoencoders and Attention

G Felhi - arXiv preprint arXiv:2305.02810, 2023 - arxiv.org
In this thesis, we develop methods to enhance the interpretability of recent representation
learning techniques in natural language processing (NLP) while accounting for the …

Preventing Model Collapse in Gaussian Process Latent Variable Models

Y Li, Z Lin, F Yin, MM Zhang - arXiv preprint arXiv:2404.01697, 2024 - arxiv.org
Gaussian process latent variable models (GPLVMs) are a versatile family of unsupervised
learning models, commonly used for dimensionality reduction. However, common …

ED-VAE: Entropy Decomposition of ELBO in Variational Autoencoders

F Lygerakis, E Rueckert - arXiv preprint arXiv:2407.06797, 2024 - arxiv.org
Traditional Variational Autoencoders (VAEs) are constrained by the limitations of the
Evidence Lower Bound (ELBO) formulation, particularly when utilizing simplistic, non-…

Disentangling Cobionts and Contamination in Long-Read Genomic Data using Sequence Composition

CC Weber - bioRxiv, 2024 - biorxiv.org
The recent acceleration in genome sequencing targeting previously unexplored parts of the
tree of life presents computational challenges. Samples collected from the wild often contain …

CR-VAE: Contrastive Regularization on Variational Autoencoders for Preventing Posterior Collapse

F Lygerakis, E Rueckert - 2023 7th Asian Conference on …, 2023 - ieeexplore.ieee.org
The Variational Autoencoder (VAE) is known to suffer from the phenomenon of posterior
collapse, where the latent representations generated by the model become independent of …