SPMamba: State-space model is all you need in speech separation


Abstract

In speech separation, both CNN- and Transformer-based models have demonstrated robust separation capabilities, garnering significant attention within the research community. However, CNN-based methods have limited modelling capability for long-sequence audio, leading to suboptimal separation performance. Conversely, Transformer-based methods are limited in practical applications due to their high computational complexity. Notably, within computer vision, Mamba-based methods have been celebrated for their formidable performance and reduced computational requirements. In this paper, we propose a network architecture for speech separation using a state-space model, namely SPMamba. We adopt the TF-GridNet model as the foundational framework and substitute its Transformer component with a bidirectional Mamba module, aiming to capture a broader range of contextual information. Our experimental results reveal the important role that Mamba-based models can play in separation performance. SPMamba demonstrates superior performance, with a significant advantage over existing separation models, on a dataset built on LibriSpeech. Notably, SPMamba achieves a substantial improvement in separation quality, with a 2.42 dB gain in SI-SNRi compared to TF-GridNet. The source code for SPMamba is publicly accessible at https://github.com/JusperLee/SPMamba.

Index Terms: Speech separation, state-space model, Mamba, computational efficiency

1 Introduction↩︎

Speech separation is pivotal in enhancing the intelligibility and quality of audio in environments with multiple speakers, thereby facilitating clearer communication and better audio analysis. In recent years, deep learning models, particularly those based on Convolutional Neural Networks (CNNs) [1], Recurrent Neural Networks (RNNs) [2]–[4], and Transformer architectures [5]–[7], have significantly advanced the state of the art in various auditory tasks, including speech separation.

However, despite their successes, both CNN-based and Transformer-based models encounter fundamental challenges in the speech separation domain. CNN-based models [1], [8]–[10], for instance, are limited by their local receptive fields, which restrict their ability to capture the full context of audio signals, thus affecting their separation capabilities. On the other hand, while Transformer-based models [5], [6], [11] excel in modelling long-range dependencies, their self-attention mechanisms suffer from quadratic complexity with respect to the sequence length, rendering them computationally expensive for real-time applications.

Recent developments in State Space Models (SSMs) [12], [13] have shown promise in addressing these limitations by establishing long-range dependencies with linear computational complexity, making them particularly suitable for tasks requiring efficient processing of long sequences. Leveraging the foundational principles of classical SSM research, modern SSMs, exemplified by Mamba, have demonstrated their efficacy across various domains, including natural language processing [14], [15] and vision tasks [16]–[18]. In the speech separation field, the potential for SSMs to revolutionize the design of efficient and effective models remains largely untapped [19].

Leveraging the transformative potential of SSMs in capturing long-range dependencies with linear computational complexity, we introduce an innovative architecture for speech separation, SPMamba. This architecture integrates the essence of SSMs into the realm of audio processing, specifically targeting the challenges of speech separation. SPMamba is built upon the robust framework of TF-GridNet [11], which is renowned for effectively handling the temporal and frequency dimensions of audio signals. By replacing the Transformer components of TF-GridNet with bidirectional Mamba modules, SPMamba is designed to significantly enhance the model’s ability to comprehend and process the vast contextual landscape of audio sequences. This substitution not only addresses the limitations of CNN-based models in dealing with long-sequence audio but also mitigates the computational inefficiencies inherent in RNN-based approaches.

Our comprehensive experiments, conducted on a dataset with noise and reverberation, underscore the remarkable efficacy of SPMamba in the field of speech separation. The results unequivocally demonstrate a marked superiority of SPMamba over conventional separation models, highlighting a significant leap in performance metrics. Specifically, SPMamba achieves an impressive 2.42 dB improvement in SI-SNRi, compared with TF-GridNet. This enhancement is not merely a quantitative victory but a testament to the qualitative leap in separation quality afforded by the integration of SSMs.

The main contribution of this paper is the pioneering exploration of SSMs within the speech separation domain through the introduction of SPMamba. The superior performance of SPMamba, as evidenced by our rigorous experimental validation, sets a new benchmark in the field, offering a compelling alternative to existing models. Beyond the immediate improvements in separation quality and computational efficiency, this work opens new avenues for future research and development of SSM-based audio processing models.

Figure 1: An overview of the proposed SPMamba model. SPMamba uses TF-GridNet as the base model and replaces the BLSTM with BMamba. The time-frequency attention module also comprises a multi-head attention module and convolutional layers.

2 Background: Mamba↩︎

In the realm of speech separation tasks, the challenge lies in disentangling mixed audio signals into their constituent sources. This is particularly relevant in scenarios where multiple speakers are present, and the objective is to isolate the speech of each individual speaker from a single mixed input signal \(\mathbf{x}\in \mathbb{R}^{1 \times T}\). Previous methods [1], [5], [6], [8]–[11] have leveraged CNNs, RNNs, and Transformers to tackle this problem, each offering distinct advantages and drawbacks in terms of computational efficiency and the ability to capture temporal dependencies.

The Mamba method introduces a novel approach by employing a selective SSM (S6) that combines the strengths of both CNNs and RNNs, while also addressing their limitations through a selection mechanism that incorporates input-dependent dynamics. This technique enables the model to selectively focus on or ignore parts of the input sequence, a capability that is crucial for effectively separating overlapping speech signals.

Structured SSMs, as described in the Mamba method, operate by mapping each channel of an input \(x\) to an output \(y\) through a higher-dimensional latent state \(h\), as illustrated in the following equations: \[\begin{align} & h_k = \hat{A}h_{k-1} + \hat{B}x_k, \\ & y_k = \hat{C}h_k + \hat{D}x_k, \end{align}\] where \(\hat{A}\) and \(\hat{B}\) are the discretized state and input matrices, and \(\hat{C}\), \(\hat{D}\) map the latent state and the input to the output. The discretization process transforms the continuous parameters \((\Delta, A, B)\) into their discrete counterparts \((\hat{A}, \hat{B})\), enabling the model to operate on discrete-time audio signals.
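To make the discretized recurrence concrete, the following minimal sketch (in PyTorch) runs the scan for a single channel with a diagonal \(A\) and a simplified zero-order-hold discretization; the function and variable names are ours for illustration and do not correspond to the Mamba or SPMamba implementations.

```python
import torch

def ssm_scan(x, A, B, C, D, delta):
    """Minimal discretized SSM scan for a single channel (illustrative shapes).

    x:     (L,)  input sequence for one channel
    A:     (N,)  diagonal continuous state matrix
    B, C:  (N,)  input / output projections
    D:     ()    skip connection
    delta: ()    step size
    """
    # Discretization: A_hat = exp(delta * A); B_hat is approximated by delta * B.
    A_hat = torch.exp(delta * A)
    B_hat = delta * B

    h = torch.zeros_like(A)                   # latent state h_0 = 0
    ys = []
    for k in range(x.shape[0]):
        h = A_hat * h + B_hat * x[k]          # h_k = A_hat h_{k-1} + B_hat x_k
        ys.append((C * h).sum() + D * x[k])   # y_k = C_hat h_k + D_hat x_k
    return torch.stack(ys)
```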

Specifically, the Mamba architecture combines elements of the H3 architecture and MLP blocks into a single, homogeneously stacked block. It expands the model dimension by a controllable factor, concentrating most parameters in the linear projections, with relatively few parameters in the inner SSM, and employs the SiLU/Swish activation function. It is designed with standard normalization and residual connections, utilizing an optional LayerNorm layer for enhanced performance. The Mamba structure aims to match the modelling capability of Transformer blocks while retaining a streamlined, parameter-efficient design.

In addition, one key innovation of Mamba is its hardware-aware algorithm that efficiently computes these selective SSMs on modern GPU architectures. By exploiting the memory hierarchy of GPUs, the method ensures that the expanded states are materialized in more efficient levels of the GPU memory hierarchy, such as SRAM, rather than the slower GPU HBM (High Bandwidth Memory). This approach significantly reduces the computational overhead associated with the large effective state size \((DN \times B \times L)\), where \(D\) is the number of channels, \(N\) is the state dimension, \(B\) is the batch size, and \(L\) is the sequence length.

The selection mechanism is realized by making several parameters (\(\Delta, B, C\)) functions of the input, thereby introducing time-varying dynamics into the model. This allows the Mamba method to dynamically adjust its focus on specific parts of the input sequence based on the content, a feature that is particularly beneficial for speech separation tasks where changes occur in different segments of the audio signal.
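As an illustration of this selection mechanism, the sketch below shows how \(\Delta\), \(B\), and \(C\) could be produced as functions of the input through linear projections; the module, method, and dimension names are hypothetical and only intended to mirror the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveParams(nn.Module):
    """Illustrative projections that make (delta, B, C) input-dependent,
    following the selection mechanism described above (names are ours)."""

    def __init__(self, d_model: int, d_state: int):
        super().__init__()
        self.to_delta = nn.Linear(d_model, d_model)  # per-channel, per-step size
        self.to_B = nn.Linear(d_model, d_state)      # input matrix, per time step
        self.to_C = nn.Linear(d_model, d_state)      # output matrix, per time step

    def forward(self, x):                        # x: (batch, length, d_model)
        delta = F.softplus(self.to_delta(x))     # positive step sizes, (B, L, D)
        B = self.to_B(x)                         # (B, L, N)
        C = self.to_C(x)                         # (B, L, N)
        return delta, B, C
```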

3 Method↩︎

3.1 Overall Pipeline↩︎

In order to design an efficient speech separation model, we introduce Mamba into the TF-GridNet network structure [11]; we call the resulting model SPMamba, as shown in Fig. 1. In Section 3.2, we first introduce the bidirectional Mamba layer, the core contribution of this paper; in Section 3.3, we present the details of the SPMamba structure; and in Section 3.4, we describe the loss function we use.

3.2 BMamba↩︎

Figure 2: An overview of the BMamba layer. BMamba processes both forward and backward audio sequences.

While the S6 model exhibits unique features, its causal processing of the input data limits it to capturing only historical information. This property makes S6 suitable for causal tasks such as causal speech separation. However, in this paper, our interest is mainly focused on non-causal speech separation. To overcome this limitation, an intuitive solution is to mimic the processing of a BLSTM by scanning speech frames along both the forward and backward directions, thus enabling the model to combine both past and future context. Fig. 2 shows the detailed structure of the BMamba layer.

Specifically, we process the input audio feature \(\mathbf{E}_t\) in both the forward and backward directions, where \(t\) denotes the index within the overall structure shown in Fig. 1. For each direction, we first apply a linear projection of \(\mathbf{E}_t\) onto \(\hat{B}\), \(\hat{C}\), and \(\Delta\), and then compute the corresponding scan through the SSM. The forward output \(\mathbf{E}^f_t\) and backward output \(\mathbf{E}^b_t\) are then gated and concatenated to obtain the output sequence \(\mathbf{E}^o_t\).
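A minimal sketch of this bidirectional wrapper is shown below. It assumes an arbitrary unidirectional Mamba/S6 block that maps a (batch, length, channels) tensor to a same-length tensor (for example, the `Mamba` module from the `mamba_ssm` package exposes such an interface); the per-direction gating is left to the underlying block, so this is a sketch rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BMamba(nn.Module):
    """Bidirectional Mamba sketch: run a forward and a backward unidirectional
    SSM block over the sequence and concatenate their outputs."""

    def __init__(self, fwd_block: nn.Module, bwd_block: nn.Module):
        super().__init__()
        self.fwd = fwd_block      # maps (batch, length, d_in) -> (batch, length, d_hidden)
        self.bwd = bwd_block      # same interface, applied to the time-reversed sequence

    def forward(self, e):         # e: (batch, length, d_in)
        e_f = self.fwd(e)                                              # forward scan
        e_b = torch.flip(self.bwd(torch.flip(e, dims=[1])), dims=[1])  # backward scan, re-aligned
        return torch.cat([e_f, e_b], dim=-1)                           # both directions -> 2 * d_hidden
```

Any unidirectional SSM block with this interface can be plugged in; in SPMamba the two inner blocks are Mamba components.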

3.3 SPMamba↩︎

The SPMamba model adopts TF-GridNet as its backbone network, which achieved state-of-the-art speech separation performance in previous studies. To further improve the efficiency of the model, we replace the BLSTM networks with bidirectional Mamba networks. Next, we elaborate on the structural design and implementation details of the SPMamba model.

Fig. 1 illustrates the overall architecture of SPMamba. Consistent with TF-GridNet, it consists of three main components: 1) a time-domain module for learning the feature relationships between different frames; 2) a frequency-domain module for modelling the relationships between different sub-bands; and 3) a time-frequency attention module for capturing long-range global information.

Time-Domain Feature Module. In this module, we treat the input tensor as a series of independent frequency sequences and employ a BMamba layer to capture the complex relationships within each frame. First, we unfold the input tensor using a kernel of size \(K\) and stride \(S\) to enhance the local spectral context, applying zero padding to the frequency dimension to ensure dimensional consistency. Next, we apply layer normalization along the channel dimension of the unfolded tensor, followed by a BMamba layer with \(H\) hidden units in each direction to model the intra-frame frequency information. To recover the original dimensions, we use a 1D deconvolution layer with kernel size \(K\), stride \(S\), \(2H\) input channels, and \(C\) output channels to process the hidden embeddings of the BMamba layer. Finally, we remove the zero padding and add the input tensor to the output tensor via a residual connection to facilitate gradient flow. By leveraging the BMamba layer, this module captures the intricate relationships within each frame and exploits the rich spectral information present in the input tensor, yielding enhanced feature representations for subsequent processing.
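The following sketch traces the steps just described (unfold, normalization, BMamba, deconvolution, residual) for an input tensor of shape (batch, C, T, F); the padding formula, default kernel settings, and expected BMamba interface are illustrative assumptions rather than the released SPMamba code.

```python
import torch
import torch.nn as nn

class TimeDomainFeatureModule(nn.Module):
    """Sketch of the intra-frame path described above (shapes, padding, and the
    BMamba interface are illustrative; not the released SPMamba code)."""

    def __init__(self, c: int, h: int, bmamba: nn.Module, k: int = 4, s: int = 1):
        super().__init__()
        self.k, self.s = k, s
        self.norm = nn.LayerNorm(c * k)          # layer norm over the unfolded channel dim
        self.bmamba = bmamba                     # expected to map (N, F', C*K) -> (N, F', 2H)
        self.deconv = nn.ConvTranspose1d(2 * h, c, kernel_size=k, stride=s)

    def forward(self, x):                        # x: (batch, C, T, F), assumes F >= K
        b, c, t, f = x.shape
        pad = (self.s - (f - self.k) % self.s) % self.s      # make (F + pad - K) divisible by S
        x_pad = nn.functional.pad(x, (0, pad))               # zero-pad the frequency dimension
        u = x_pad.unfold(3, self.k, self.s)                  # (B, C, T, F', K): local spectral context
        u = u.permute(0, 2, 3, 1, 4).reshape(b * t, -1, c * self.k)   # one sequence per frame
        e = self.bmamba(self.norm(u))                        # intra-frame modelling, (B*T, F', 2H)
        y = self.deconv(e.transpose(1, 2))[..., :f]          # back to (B*T, C, F), padding removed
        y = y.reshape(b, t, c, f).permute(0, 2, 1, 3)        # restore (B, C, T, F)
        return y + x                                         # residual connection
```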

Frequency-Domain Feature Module. In this module, the procedure closely resembles that of the time-domain (intra-frame) feature module. The key distinction lies in the interpretation of the input tensor, which is treated as \(F\) independent sequences, each of length \(T\). Within this module, a BMamba layer is employed to capture and model the temporal information present within each sub-band.

Time-Frequency Attention Module. This module leverages frame-level embeddings derived from the time-frequency representations within each frame of the output tensor generated by the frequency-domain feature module. It employs whole-sequence self-attention on these frame embeddings to capture long-range global information. Like TF-GridNet, the concatenated attention outputs undergo further processing to obtain the output tensor, which is fed into the next block.
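A hedged sketch of this whole-sequence attention step is given below, using a standard multi-head attention layer over per-frame embeddings; the embedding size, projections, and residual placement are illustrative and do not reproduce TF-GridNet's exact attention design.

```python
import torch
import torch.nn as nn

class TFAttention(nn.Module):
    """Sketch of whole-sequence self-attention over frame-level embeddings
    (dimensions and projections are illustrative, not the exact TF-GridNet design)."""

    def __init__(self, c: int, f: int, e: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(c * f, e)        # frame embedding from all T-F units of a frame
        self.attn = nn.MultiheadAttention(e, n_heads, batch_first=True)
        self.unembed = nn.Linear(e, c * f)      # project back to the T-F representation

    def forward(self, x):                       # x: (batch, C, T, F)
        b, c, t, f = x.shape
        frames = x.permute(0, 2, 1, 3).reshape(b, t, c * f)   # one embedding input per frame
        q = self.embed(frames)
        ctx, _ = self.attn(q, q, q)                           # self-attention across all frames
        y = self.unembed(ctx).reshape(b, t, c, f).permute(0, 2, 1, 3)
        return y + x                                          # residual back to the block output
```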

3.4 Loss function↩︎

For the loss function, we employ Permutation Invariant Training (PIT) [20], [21] to compute the Signal-to-Noise Ratio (SNR) loss [22]. The SNR is defined as: \[\text{SNR}(\mathbf{s}, \hat{\mathbf{s}}) = 10 \log_{10} \frac{\|\mathbf{s}\|^2}{\|\mathbf{s} - \hat{\mathbf{s}}\|^2},\] where \(\mathbf{s}\) represents the target signal and \(\hat{\mathbf{s}}\) represents the estimated signal. During training, we minimize the negative SNR under the speaker permutation selected by PIT.
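A brute-force PIT-SNR objective for a small number of speakers can be sketched as follows; the function names are ours, and the epsilon is added only for numerical stability.

```python
import itertools
import torch

def snr(est, ref, eps=1e-8):
    """SNR in dB per utterance; est and ref have shape (batch, samples)."""
    noise = ref - est
    return 10 * torch.log10((ref.pow(2).sum(-1) + eps) / (noise.pow(2).sum(-1) + eps))

def pit_snr_loss(est, ref):
    """Permutation-invariant negative SNR; est and ref have shape (batch, num_spk, samples)."""
    num_spk = ref.shape[1]
    losses = []
    for perm in itertools.permutations(range(num_spk)):
        # mean SNR over speakers for this speaker-to-estimate assignment
        losses.append(torch.stack([snr(est[:, p], ref[:, s])
                                   for s, p in enumerate(perm)]).mean(0))
    losses = torch.stack(losses, dim=0)       # (num_perms, batch)
    best, _ = losses.max(dim=0)               # best permutation per utterance
    return -best.mean()                       # minimize the negative SNR
```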

4 Experiment configurations↩︎

Table 1: Quantitative comparison of SPMamba with other existing models on the test set. The number of parameters and the computational cost were calculated using ptflops [23]. The MACs were calculated by processing one 1-second audio clip with a 16 kHz sample rate on a GPU.
| Model | SDR (dB) | SDRi (dB) | SI-SNR (dB) | SI-SNRi (dB) | Params (M) | MACs (G/s) |
|---|---|---|---|---|---|---|
| Conv-TasNet [1] | 7.58 | 7.69 | 6.71 | 6.89 | 5.62 | 10.23 |
| DualPathRNN [4] | 5.76 | 5.87 | 4.88 | 5.06 | 2.72 | 85.32 |
| SudoRM-RF [8] | 7.59 | 7.70 | 6.66 | 6.84 | 2.72 | 4.60 |
| A-FRCNN [10] | 9.53 | 9.64 | 8.58 | 8.76 | 6.13 | 81.20 |
| TDANet [9] | 9.93 | 10.14 | 8.95 | 9.21 | 2.33 | 9.13 |
| BSRNN [24] | 12.64 | 12.75 | 12.04 | 12.23 | 25.97 | 98.69 |
| TF-GridNet [11] | 13.59 | 13.70 | 12.62 | 12.81 | 14.43 | 445.56 |
| SPMamba (Ours) | 16.01 | 16.14 | 15.20 | 15.33 | 6.14 | 78.69 |

4.1 Datasets↩︎

We constructed a multi-speaker speech separation dataset with reverberation and noise. For the speaker audio component of the dataset, we selected the publicly available LibriSpeech dataset [25], specifically the Librispeech-360 subset containing approximately 360 hours of English speech. For the noise component, we used the noise provided by the WHAM! dataset [26] and the sound effects from the DnR dataset [27]. For background music, we selected the music portion of the cleaned DnR dataset [27].

To create realistic synthetic mixtures, we focused on three main aspects: class overlap between different audio sources in the mixture, relative levels, and multi-channel spatialization. To ensure that the mixtures include multiple complete speech segments and a sufficient number of onsets and offsets between different classes, we set each mixture’s length to 60 seconds with a sample rate of 16 kHz. We do not allow intra-class overlap, meaning two segments from the same speaker will not overlap, but foreground and background sound effects can overlap. We adjust relative levels using Loudness Units relative to Full Scale (LUFS) [28], with music at \(-24\), speech at \(-17\), and sound effects at \(-21\). We uniformly sample an average LUFS value for each class in each mixture from a range of \(\pm 2.0\) around the respective target LUFS. For multi-channel spatialization, we use a simulator [29] based on acoustic path tracing to render spatial reverberation, achieving an effect close to real-world scenarios. Finally, we constructed a 57-hour training set, an 8-hour validation set, and a 3-hour test set to evaluate the performance of different models.
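As an illustration of the loudness adjustment step, the sketch below scales a source to a per-mixture target sampled within \(\pm 2.0\) LUFS of its class target; the use of the pyloudnorm package is our assumption for illustration, as the paper does not name a specific tool.

```python
import numpy as np
import pyloudnorm as pyln  # assumption: BS.1770 loudness metering via pyloudnorm

# Class targets in LUFS, as described above; per-mixture targets are jittered by +-2.0 LUFS.
TARGET_LUFS = {"speech": -17.0, "music": -24.0, "sfx": -21.0}

def set_loudness(audio: np.ndarray, sr: int, cls: str, rng: np.random.Generator) -> np.ndarray:
    """Scale `audio` (shape: samples,) so its integrated loudness matches the sampled target."""
    target = TARGET_LUFS[cls] + rng.uniform(-2.0, 2.0)     # jittered per-mixture target
    meter = pyln.Meter(sr)                                 # ITU-R BS.1770 loudness meter
    current = meter.integrated_loudness(audio)
    return pyln.normalize.loudness(audio, current, target)
```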

4.2 Model configurations↩︎

For the short-time Fourier transform (STFT), we employ a 512-point Hann window with a hop size of 128 points. We apply a 512-point Fourier transform to extract a 129-dimensional complex spectrum for each frame. We use \(B = 6\) blocks, and \(E\) is set to 4. Each BMamba layer is composed of two Mamba components, with a hidden layer dimension of 128. We utilize RMSNorm to normalize the output of the Mamba components.

4.3 Training and evaluation↩︎

During training, we randomly select 4-second-long mixed audio segments. We employ the Adam optimizer [30] with an initial learning rate of 0.001, and we halve the learning rate if the validation loss does not improve within 10 epochs. The maximum value for gradient clipping is set to 5. Training stops when no improved validation model is found for 20 consecutive epochs. For evaluation, we use SI-SNRi [31] and SDRi [22] as metrics and report the number of parameters and computational complexity of the different models.
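The optimization setup can be sketched as follows; the training-loop skeleton, the default epoch budget, and the use of norm-based gradient clipping with a ReduceLROnPlateau schedule are illustrative assumptions consistent with the description above (`pit_snr_loss` refers to the earlier sketch in Sec. 3.4).

```python
import torch

def train(model, train_loader, validate, max_epochs=500):
    """Sketch of the optimization setup; `validate` is assumed to return the validation loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Halve the learning rate when the validation loss stops improving for 10 epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=10)

    for _ in range(max_epochs):
        for mix, refs in train_loader:                 # randomly cropped 4 s segments
            loss = pit_snr_loss(model(mix), refs)      # PIT-SNR loss (see Sec. 3.4 sketch)
            optimizer.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # clip at 5
            optimizer.step()
        scheduler.step(validate(model))                # drive the LR schedule / early stopping
```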

5 Results↩︎

We compare our proposed method, SPMamba, with several state-of-the-art speech separation models, including Conv-TasNet [1], DualPathRNN [4], SudoRM-RF [8], A-FRCNN [10], TDANet [9], BSRNN [24], and TF-GridNet [11]. Conv-TasNet is a classic time-domain audio separation network. DualPathRNN combines two RNNs with different temporal resolutions to model long-term dependencies. SudoRM-RF is a lightweight time-domain model composed of multiple UNets. A-FRCNN is an asynchronous fully recurrent convolutional neural network. TDANet is a time-domain audio separation network with an encoder-decoder architecture and is the best-performing lightweight separation network. BSRNN achieves SOTA music separation performance by constructing the model with a frequency band-splitting approach. TF-GridNet is a time-frequency domain model that achieves SOTA speech separation performance by alternately modelling along the frequency and time dimensions. These models represent various architectures and methods in the field of speech separation.

The experimental results in Table 1 demonstrate that our proposed method, SPMamba, outperforms all other compared models in terms of the SDR(i) and SI-SNR(i) metrics. SPMamba achieves an SDR of 16.01 dB and an SI-SNR of 15.20 dB, surpassing the baseline model, TF-GridNet, by significant margins of 2.42 dB and 2.58 dB, respectively. It is worth noting that SPMamba achieves these state-of-the-art results with only 6.14 million parameters and a computational cost of 78.69 G/s, considerably lower than TF-GridNet (14.43 million parameters and 445.56 G/s). This highlights the efficiency and effectiveness of our proposed architecture in tackling the speech separation task.

6 Conclusion↩︎

In this paper, we introduce SPMamba, a novel speech separation architecture that leverages the power of State Space Models (SSMs) to address the limitations of existing CNN-based and Transformer-based methods. By incorporating a bidirectional Mamba module into the TF-GridNet framework, SPMamba captures a wider range of contextual information while maintaining computational efficiency. Our experimental results demonstrate the superior performance of SPMamba, with a substantial improvement of 2.42 dB in SI-SNRi compared to the baseline TF-GridNet model. Moreover, SPMamba achieves this state-of-the-art performance with significantly fewer parameters and lower computational complexity, highlighting its efficiency and effectiveness in speech separation tasks.

References↩︎

[1]
Y. Luo and N. Mesgarani, “Conv-tasnet: Surpassing ideal time–frequency magnitude masking for speech separation,” IEEE/ACM transactions on audio, speech, and language processing, vol. 27, no. 8, pp. 1256–1266, 2019.
[2]
K. Li, X. Hu, and Y. Luo, “On the use of deep mask estimation module for neural source separation systems,” in Proc. Interspeech 2022, 2022, pp. 5328–5332.
[3]
K. Li and Y. Luo, “On the design and training strategies for rnn-based online neural speech separation systems,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
[4]
Y. Luo, Z. Chen, and T. Yoshioka, “Dual-path rnn: efficient long sequence modeling for time-domain single-channel speech separation,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 46–50.
[5]
J. Chen, Q. Mao, and D. Liu, “Dual-path transformer network: Direct context-aware modeling for end-to-end monaural speech separation,” arXiv preprint arXiv:2007.13975, 2020.
[6]
C. Subakan, M. Ravanelli, S. Cornell, M. Bronzi, and J. Zhong, “Attention is all you need in speech separation,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 21–25.
[7]
L. Yang, W. Liu, and W. Wang, “Tfpsnet: Time-frequency domain path scanning network for speech separation,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 6842–6846.
[8]
E. Tzinis, Z. Wang, and P. Smaragdis, “Sudo rm-rf: Efficient networks for universal audio source separation,” in 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020, pp. 1–6.
[9]
K. Li, R. Yang, and X. Hu, “An efficient encoder-decoder architecture with top-down attention for speech separation,” in The Eleventh International Conference on Learning Representations, 2022.
[10]
X. Hu, K. Li, W. Zhang, Y. Luo, J.-M. Lemercier, and T. Gerkmann, “Speech separation using an asynchronous fully recurrent convolutional neural network,” Advances in Neural Information Processing Systems, vol. 34, pp. 22 509–22 522, 2021.
[11]
Z.-Q. Wang, S. Cornell, S. Choi, Y. Lee, B.-Y. Kim, and S. Watanabe, “Tf-gridnet: Making time-frequency domain models great again for monaural speaker separation,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
[12]
A. Gu, K. Goel, and C. Re, “Efficiently modeling long sequences with structured state spaces,” in International Conference on Learning Representations, 2021.
[13]
A. Gu, I. Johnson, K. Goel, K. Saab, T. Dao, A. Rudra, and C. Ré, “Combining recurrent, convolutional, and continuous-time models with linear state space layers,” in Advances in neural information processing systems, 2021, pp. 572–585.
[14]
M. Pióro, K. Ciebiera, K. Król, J. Ludziejewski, and S. Jaszczur, “Moe-mamba: Efficient selective state space models with mixture of experts,” arXiv preprint arXiv:2401.04081, 2024.
[15]
Z. Yang, A. Mitra, S. Kwon, and H. Yu, “Clinicalmamba: A generative clinical language model on longitudinal clinical notes,” arXiv preprint arXiv:2403.05795, 2024.
[16]
A. Gu and T. Dao, “Mamba: Linear-time sequence modeling with selective state spaces,” arXiv preprint arXiv:2312.00752, 2023.
[17]
Y. Liu, Y. Tian, Y. Zhao, H. Yu, L. Xie, Y. Wang, Q. Ye, and Y. Liu, “Vmamba: Visual state space model,” arXiv preprint arXiv:2401.10166, 2024.
[18]
J. Liu, H. Yang, H.-Y. Zhou, Y. Xi, L. Yu, Y. Yu, Y. Liang, G. Shi, S. Zhang, H. Zheng et al., “Swin-umamba: Mamba-based unet with imagenet-based pretraining,” arXiv preprint arXiv:2402.03302, 2024.
[19]
C. Chen, C.-H. H. Yang, K. Li, Y. Hu, P.-J. Ku, and E. S. Chng, “A neural state-space model approach to efficient speech separation,” arXiv preprint arXiv:2305.16932, 2023.
[20]
D. Yu, M. Kolbæk, Z.-H. Tan, and J. Jensen, “Permutation invariant training of deep models for speaker-independent multi-talker speech separation,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 241–245.
[21]
M. Kolbæk, D. Yu, Z.-H. Tan, and J. Jensen, “Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 10, pp. 1901–1913, 2017.
[22]
E. Vincent, R. Gribonval, and C. Févotte, “Performance measurement in blind audio source separation,” IEEE transactions on audio, speech, and language processing, vol. 14, no. 4, pp. 1462–1469, 2006.
[23]
V. Sovrasov. (2023) ptflops: a flops counting tool for neural networks in pytorch framework. [Online]. Available: https://github.com/sovrasov/flops-counter.pytorch.
[24]
Y. Luo and J. Yu, “Music source separation with band-split rnn,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023.
[25]
V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: An ASR corpus based on public domain audio books,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5206–5210.
[26]
G. Wichern, J. Antognini, M. Flynn, L. R. Zhu, E. McQuinn, D. Crow, E. Manilow, and J. L. Roux, “Wham!: Extending speech separation to noisy environments,” arXiv preprint arXiv:1907.01160, 2019.
[27]
D. Petermann, G. Wichern, Z.-Q. Wang, and J. Le Roux, “The cocktail fork problem: Three-stem audio separation for real-world soundtracks,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 526–530.
[28]
E. Grimm, R. Van Everdingen, and M. Schöpping, “Toward a recommendation for a european standard of peak and lkfs loudness levels,” SMPTE motion imaging journal, vol. 119, no. 3, pp. 28–34, 2010.
[29]
C. Chen, C. Schissler, S. Garg, P. Kobernik, A. Clegg, P. Calamia, D. Batra, P. Robinson, and K. Grauman, “Soundspaces 2.0: A simulation platform for visual-acoustic learning,” Advances in Neural Information Processing Systems, vol. 35, pp. 8896–8911, 2022.
[30]
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[31]
J. Le Roux, S. Wisdom, H. Erdogan, and J. R. Hershey, “SDR – half-baked or well done?” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 626–630.