


Link to original content: https://doi.org/10.21437/Interspeech.2019-1289
ISCA Archive Interspeech 2019

Multichannel Loss Function for Supervised Speech Source Separation by Mask-Based Beamforming

Yoshiki Masuyama, Masahito Togami, Tatsuya Komatsu

In this paper, we propose two mask-based beamforming methods using a deep neural network (DNN) trained with multichannel loss functions. Beamforming techniques using time-frequency (TF) masks estimated by a DNN have been applied to many tasks, where the TF masks are used to estimate spatial covariance matrices. To train a DNN for mask-based beamforming, loss functions designed for monaural speech enhancement/separation have conventionally been employed. Although such a training criterion is simple, it does not directly correspond to the performance of the resulting mask-based beamformer. To overcome this problem, we use multichannel loss functions that evaluate the estimated spatial covariance matrices based on the multichannel Itakura–Saito divergence. DNNs trained with the multichannel loss functions can be used to construct several types of beamformers. Experimental results confirmed their effectiveness and robustness to microphone configurations.
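The two ingredients named in the abstract — mask-weighted spatial covariance matrix (SCM) estimation and the multichannel Itakura–Saito divergence — can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (STFT of shape frequency × time × microphones, a real-valued TF mask in [0, 1]), not the authors' implementation:

```python
import numpy as np

def masked_scm(stft, mask):
    """Mask-weighted spatial covariance matrix per frequency bin.

    stft: complex array of shape (F, T, M) -- multichannel STFT
    mask: real array of shape (F, T) -- TF mask in [0, 1]
    returns: complex array of shape (F, M, M), Hermitian per bin
    """
    # Weighted outer products x x^H, summed over time.
    num = np.einsum('ft,ftm,ftn->fmn', mask, stft, stft.conj())
    den = mask.sum(axis=1)[:, None, None] + 1e-10  # avoid division by zero
    return num / den

def multichannel_is_divergence(A, B):
    """Multichannel Itakura-Saito divergence between M x M PSD matrices:
    D(A, B) = tr(B^{-1} A) - log det(B^{-1} A) - M, which is zero iff A == B.
    """
    M = A.shape[-1]
    BinvA = np.linalg.solve(B, A)
    return np.real(np.trace(BinvA) - np.log(np.linalg.det(BinvA)) - M)
```

A DNN-estimated mask would replace `mask` here; the divergence then scores the estimated SCMs against oracle ones as a training loss, which ties the criterion to the beamformer's inputs rather than to monaural mask quality.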