UNSSOR: Unsupervised neural speech separation by leveraging over-determined training mixtures

ZQ Wang, S Watanabe - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In reverberant conditions with multiple concurrent speakers, each microphone acquires a
mixture signal of multiple speakers at a different location. In over-determined conditions …

Neural full-rank spatial covariance analysis for blind source separation

Y Bando, K Sekiguchi, Y Masuyama… - IEEE Signal …, 2021 - ieeexplore.ieee.org
This paper describes a neural blind source separation (BSS) method based on amortized
variational inference (AVI) of a non-linear generative model of mixture signals. A classical …
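The classical model this method builds on is full-rank spatial covariance analysis, in which each time–frequency observation is modeled as zero-mean complex Gaussian with a covariance that sums, over sources, a power spectral density times a full-rank spatial covariance matrix. A minimal NumPy sketch of that generative model's log-likelihood (an illustration of the classical formulation, not the paper's code; the function name and array layout are assumptions):

```python
import numpy as np

def fca_loglik(x, lam, H):
    """Log-likelihood of one multichannel STFT frame under the classical
    full-rank spatial covariance model (the generative model that the
    neural/AVI method amortizes inference for).

    x:   (F, M) complex observation, one frame of an M-microphone STFT.
    lam: (N, F) nonnegative source power spectral densities.
    H:   (N, F, M, M) full-rank spatial covariance matrix per source/freq.
    """
    F, M = x.shape
    ll = 0.0
    for f in range(F):
        # Mixture covariance: sum over sources of PSD * spatial covariance.
        R = np.einsum('n,nij->ij', lam[:, f], H[:, f])
        _, logdet = np.linalg.slogdet(R)
        # Quadratic form x^H R^{-1} x via a linear solve (no explicit inverse).
        quad = np.real(x[f].conj() @ np.linalg.solve(R, x[f]))
        # Circularly-symmetric complex Gaussian log-density.
        ll += -M * np.log(np.pi) - logdet - quad
    return ll
```

Inference then estimates `lam` and `H` (classically via EM; here amortized by a neural network) to maximize this likelihood over all frames.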

USDnet: Unsupervised Speech Dereverberation via Neural Forward Filtering

ZQ Wang - IEEE/ACM Transactions on Audio, Speech, and …, 2024 - ieeexplore.ieee.org
In reverberant conditions with a single speaker, each far-field microphone records a
reverberant version of the same speaker signal at a different location. In over-determined …

Spatial loss for unsupervised multi-channel source separation

K Saijo, R Scheibler - arXiv preprint arXiv:2204.00210, 2022 - arxiv.org
We propose a spatial loss for unsupervised multi-channel source separation. The proposed
loss exploits the duality of direction of arrival (DOA) and beamforming: the steering and …

Surrogate source model learning for determined source separation

R Scheibler, M Togami - ICASSP 2021-2021 IEEE International …, 2021 - ieeexplore.ieee.org
We propose to learn surrogate functions of universal speech priors for determined blind
speech separation. Deep speech priors are highly desirable due to their superior modelling …

Joint separation and localization of moving sound sources based on neural full-rank spatial covariance analysis

H Munakata, Y Bando, R Takeda… - IEEE Signal …, 2023 - ieeexplore.ieee.org
This paper presents an unsupervised multichannel method that can separate moving sound
sources based on an amortized variational inference (AVI) of joint separation and …

Self-remixing: Unsupervised speech separation via separation and remixing

K Saijo, T Ogawa - ICASSP 2023-2023 IEEE International …, 2023 - ieeexplore.ieee.org
We present Self-Remixing, a novel self-supervised speech separation method, which refines
a pre-trained separation model in an unsupervised manner. Self-Remixing consists of a …

Dynamic fine‐tuning layer selection using Kullback–Leibler divergence

RN Wanjiku, L Nderu, M Kimwele - Engineering Reports, 2023 - Wiley Online Library
The selection of layers in the transfer learning fine‐tuning process ensures a pre‐trained
model's accuracy and adaptation in a new target domain. However, the selection process is …

Unsupervised multi-channel separation and adaptation

C Han, K Wilson, S Wisdom… - ICASSP 2024-2024 …, 2024 - ieeexplore.ieee.org
A key challenge in machine learning is to generalize from training data to an application
domain of interest. This work extends the recently-proposed mixture invariant training (MixIT) …
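Mixture invariant training (MixIT), which this work extends, trains a separator on the sum of two reference mixtures: each estimated source is assigned to exactly one reference mixture, and the loss is minimized over all binary assignments. A minimal NumPy sketch of the MixIT objective (an illustrative sketch with MSE in place of the SNR-based loss typically used; names are assumptions):

```python
import itertools
import numpy as np

def mixit_loss(est_sources, mix1, mix2):
    """Mixture invariant training (MixIT) loss, minimal sketch.

    est_sources: (M, T) sources estimated from the mixture of mixtures
                 mix1 + mix2.
    Each estimated source is assigned to exactly one of the two reference
    mixtures; the loss is the minimum total MSE over all 2^M assignments.
    """
    M = est_sources.shape[0]
    best = np.inf
    for assign in itertools.product([0, 1], repeat=M):
        a = np.array(assign)
        # Remix the estimated sources according to this assignment.
        remix1 = est_sources[a == 0].sum(axis=0)
        remix2 = est_sources[a == 1].sum(axis=0)
        err = np.mean((remix1 - mix1) ** 2) + np.mean((remix2 - mix2) ** 2)
        best = min(best, err)
    return best
```

Because only mixtures (not isolated sources) are needed as references, the objective supports unsupervised training on real recordings from the target domain.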

Weakly-Supervised Neural Full-Rank Spatial Covariance Analysis for a Front-End System of Distant Speech Recognition

Y Bando, T Aizawa, K Itoyama, K Nakadai - Interspeech, 2022 - isca-archive.org
This paper presents a weakly-supervised multichannel neural speech separation method for
distant speech recognition (DSR) of real conversational speech mixtures. A blind source …