An overview of variational autoencoders for source separation, finance, and bio-signal applications

A Singh, T Ogunfunmi - Entropy, 2021 - mdpi.com
Autoencoders are self-supervised learning systems in which, during training, the output is an
approximation of the input. Typically, autoencoders have three parts: an encoder (which …

Neural decoding for intracortical brain–computer interfaces

Y Dong, S Wang, Q Huang, RW Berg… - Cyborg and Bionic …, 2023 - spj.science.org
Brain–computer interfaces have revolutionized the field of neuroscience by providing a
solution for paralyzed patients to control external devices and improve the quality of daily …

A high-performance speech neuroprosthesis

FR Willett, EM Kunz, C Fan, DT Avansino, GH Wilson… - Nature, 2023 - nature.com
Speech brain–computer interfaces (BCIs) have the potential to restore rapid communication
to people with paralysis by decoding neural activity evoked by attempted speech into text, or …

Long-term stability of cortical population dynamics underlying consistent behavior

JA Gallego, MG Perich, RH Chowdhury, SA Solla… - Nature …, 2020 - nature.com
Animals readily execute learned behaviors in a consistent manner over long periods of time,
yet no equally stable neural correlate has been demonstrated. How does the cortex …

Understanding self-training for gradual domain adaptation

A Kumar, T Ma, P Liang - International Conference on …, 2020 - proceedings.mlr.press
Abstract Machine learning systems must adapt to data distributions that evolve over time, in
applications ranging from sensor networks and self-driving car perception modules to brain …

A unified, scalable framework for neural population decoding

M Azabou, V Arora, V Ganesh, X Mao… - Advances in …, 2023 - proceedings.neurips.cc
Our ability to use deep learning approaches to decipher neural activity would likely benefit
from greater scale, in terms of both the model size and the datasets. However, the …

Machine learning for neural decoding

JI Glaser, AS Benjamin, RH Chowdhury, MG Perich… - eNeuro, 2020 - eneuro.org
Despite rapid advances in machine learning tools, the majority of neural decoding
approaches still use traditional methods. Modern machine learning tools, which are versatile …

Adversarial self-training improves robustness and generalization for gradual domain adaptation

L Shi, W Liu - Advances in Neural Information Processing …, 2023 - proceedings.neurips.cc
Abstract Gradual Domain Adaptation (GDA), in which the learner is provided with additional
intermediate domains, has been theoretically and empirically studied in many contexts …

Neural Latents Benchmark '21: Evaluating latent variable models of neural population activity

F Pei, J Ye, D Zoltowski, A Wu, RH Chowdhury… - arXiv preprint arXiv …, 2021 - arxiv.org
Advances in neural recording present increasing opportunities to study neural activity in
unprecedented detail. Latent variable models (LVMs) are promising tools for analyzing this …

Subject-aware contrastive learning for biosignals

JY Cheng, H Goh, K Dogrusoz, O Tuzel… - arXiv preprint arXiv …, 2020 - arxiv.org
Datasets for biosignals, such as electroencephalogram (EEG) and electrocardiogram (ECG),
often have noisy labels and a limited number of subjects (<100). To handle these …