Synthetic data in human analysis: A survey

I Joshi, M Grimmer, C Rathgeb, C Busch… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Deep neural networks have become prevalent in human analysis, boosting the performance
of applications, such as biometric recognition, action recognition, as well as person re …

xViTCOS: explainable vision transformer based COVID-19 screening using radiography

AK Mondal, A Bhattacharjee, P Singla… - IEEE Journal of …, 2021 - ieeexplore.ieee.org
Objective: Since its outbreak, the rapid spread of COrona VIrus Disease 2019 (COVID-19)
across the globe has pushed the health care system in many countries to the verge of …

ARD-VAE: A statistical formulation to find the relevant latent dimensions of variational autoencoders

S Saha, S Joshi, R Whitaker - arXiv preprint arXiv:2501.10901, 2025 - arxiv.org
The variational autoencoder (VAE) is a popular, deep, latent-variable model (DLVM) due to
its simple yet effective formulation for modeling the data distribution. Moreover, optimizing …
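
For readers unfamiliar with the formulation this abstract refers to, below is a minimal sketch of a standard Gaussian VAE and its negative-ELBO objective in PyTorch; the layer sizes and the stand-in data batch are illustrative assumptions, not taken from the ARD-VAE paper.

```python
# Minimal Gaussian VAE sketch (illustrative sizes; not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(x_hat, x, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = VAE()
x = torch.rand(32, 784)                  # stand-in batch of flattened images
x_hat, mu, logvar = model(x)
loss = neg_elbo(x_hat, x, mu, logvar)
loss.backward()
```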

FlexAE: Flexibly learning latent priors for Wasserstein auto-encoders

AK Mondal, H Asnani, P Singla… - Uncertainty in Artificial …, 2021 - proceedings.mlr.press
Auto-Encoder (AE) based neural generative frameworks model the joint-distribution
between the data and the latent space using an Encoder-Decoder pair, with regularization …
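
The encoder-decoder pair with latent regularization described here can be sketched as follows. This illustration uses a fixed Gaussian prior with a WAE-MMD style penalty; FlexAE's contribution is to learn the prior, which is not shown, and all sizes and the regularization weight are assumptions.

```python
# Sketch of an AE-based generative framework: encoder-decoder pair plus a
# regularizer pulling the encoded codes toward a (here fixed) Gaussian prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, lam = 784, 8, 10.0           # illustrative sizes and penalty weight
enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

def rbf_mmd(a, b, sigma=1.0):
    # Kernel two-sample (MMD) estimate of the mismatch between codes and prior samples.
    k = lambda u, v: torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

x = torch.rand(64, x_dim)                  # stand-in data batch
z = enc(x)
loss = F.mse_loss(dec(z), x) + lam * rbf_mmd(z, torch.randn_like(z))
loss.backward()
```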

Adaptive compression of the latent space in variational autoencoders

G Sejnova, M Vavrecka, K Stepanova - International Conference on …, 2024 - Springer
Abstract Variational Autoencoders (VAEs) are powerful generative models that have been
widely used in various fields, including image and text generation. However, one of the …

DisFormer: Disentangled Object Representations for Learning Visual Dynamics Via Transformers

SS Gandhi, V Sharma, R Gupta, AK Mondal… - 2023 - openreview.net
We focus on the task of visual dynamics prediction. Recent work has shown that object-
centric representations can greatly help improve the accuracy of learning such dynamics in …

Sparsity driven latent space sampling for generative prior based compressive sensing

V Killedar, PK Pokala… - ICASSP 2021-2021 IEEE …, 2021 - ieeexplore.ieee.org
We address the problem of recovering signals from compressed measurements based on
generative priors. Recently, generative-model based compressive sensing (GMCS) methods …
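
As background for the snippet, a hedged sketch of the generic GMCS recovery step: given measurements y = A x and a pretrained generator G, search the latent space for a code z whose decoded output is consistent with y. The stand-in generator, measurement matrix, and optimizer settings below are assumptions; the paper's sparsity-driven latent sampling is not reproduced.

```python
# Generic GMCS recovery loop: optimize the latent code z so that A @ G(z) matches y.
import torch
import torch.nn as nn

n, m, z_dim = 784, 100, 20                            # signal dim, measurements, latent dim (assumed)
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                  nn.Linear(256, n))                  # stand-in (untrained) generator
for p in G.parameters():
    p.requires_grad_(False)                           # generator stays fixed during recovery
A = torch.randn(m, n) / m ** 0.5                      # Gaussian measurement matrix
x_true = G(torch.randn(1, z_dim))                     # signal assumed to lie in the generator's range
y = x_true @ A.T                                      # compressed measurements y = A x

z = torch.randn(1, z_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((G(z) @ A.T - y) ** 2).mean()             # measurement-consistency loss
    loss.backward()
    opt.step()

x_rec = G(z).detach()                                 # recovered signal estimate
```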

RENs: Relevance Encoding Networks

K Iyer, R Bhalodia, S Elhabian - arXiv preprint arXiv:2205.13061, 2022 - arxiv.org
The manifold assumption for high-dimensional data assumes that the data is generated by
varying a set of parameters obtained from a low-dimensional latent space. Deep generative …
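
A toy illustration of the manifold assumption stated in this abstract: observations in a higher-dimensional ambient space generated by varying a small set of latent parameters. The Swiss-roll mapping below is purely illustrative and not taken from the paper.

```python
# Data in a 3-D ambient space generated from two latent parameters (intrinsic dim 2).
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, size=2000)    # latent parameter 1: position along the roll
h = rng.uniform(0, 1, size=2000)            # latent parameter 2: height
data = np.stack([t * np.cos(t), h, t * np.sin(t)], axis=1)
print(data.shape)                           # (2000, 3): ambient dim 3, intrinsic dim 2
```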

scRAE: deterministic regularized autoencoders with flexible priors for clustering single-cell gene expression data

AK Mondal, H Asnani, P Singla… - IEEE/ACM Transactions …, 2021 - ieeexplore.ieee.org
Clustering single-cell RNA sequence (scRNA-seq) data poses statistical and computational
challenges due to their high-dimensionality and data-sparsity, also known as …

A machine learning approach for fighting the curse of dimensionality in global optimization

JF Schumann, AM Aragón - arXiv preprint arXiv:2110.14985, 2021 - arxiv.org
Finding global optima in high-dimensional optimization problems is extremely challenging
since the number of function evaluations required to sufficiently explore the search space …