Exploring Long-Range Context Features for Speaker Verification

12/14/2021
by   Zhuo Li, et al.

Capturing long-range dependencies and modeling long temporal contexts has been shown to benefit speaker verification. In this paper, we propose combining a Hierarchical-Split block (HS-block) and a Depthwise Separable Self-Attention (DSSA) module to capture richer multi-range context speaker features from a local and a global perspective, respectively. Specifically, the HS-block splits the feature maps and filters into several groups and stacks them within one block, which enlarges the receptive fields (RFs) locally. The DSSA module improves the multi-head self-attention mechanism with a depthwise-separable strategy and an explicit sparse attention strategy, modeling pairwise relations globally and capturing effective long-range dependencies in each channel. Experiments are conducted on the VoxCeleb and SITW datasets. Our best system achieves 1.27% EER on SITW by applying the combination of the HS-block and the DSSA module.
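The two ideas can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: the HS-block follows the Res2Net-style hierarchy the abstract describes (each split receives the previous split's output, so later splits see a larger local receptive field), with a fixed averaging filter standing in for learned convolutions; the DSSA sketch applies self-attention independently per channel (the depthwise-separable part) and keeps only the top-k scores per query (the explicit sparse attention part). Function names, `num_splits`, and `k` are illustrative assumptions.

```python
import numpy as np

def hs_block(x, num_splits=4, kernel=3):
    """Sketch of a Hierarchical-Split block on a (channels, time) map.
    Splits channels into groups; each group after the first is added to
    the previous group's output before filtering, so its effective
    receptive field along time grows group by group."""
    groups = np.array_split(x, num_splits, axis=0)
    smooth = np.ones(kernel) / kernel            # stand-in for a learned 1-D filter
    outs = [groups[0]]                           # first split passes through
    for g in groups[1:]:
        inp = g + outs[-1][: g.shape[0]]         # hierarchical connection
        out = np.stack([np.convolve(row, smooth, mode="same") for row in inp])
        outs.append(out)
    return np.concatenate(outs, axis=0)          # same shape as the input

def dssa(x, k=5):
    """Sketch of depthwise-separable sparse self-attention: each channel's
    frames attend only among themselves, and each query keeps just its
    top-k attention scores (others are masked out before the softmax)."""
    c, t = x.shape
    out = np.empty_like(x)
    rows = np.arange(t)[:, None]
    for i in range(c):
        v = x[i]
        s = np.outer(v, v)                       # toy per-channel scores (q = k = v)
        topk = np.argsort(s, axis=1)[:, -min(k, t):]
        m = np.full_like(s, -np.inf)
        m[rows, topk] = s[rows, topk]            # explicit sparsity: keep top-k
        a = np.exp(m - m.max(axis=1, keepdims=True))
        a /= a.sum(axis=1, keepdims=True)
        out[i] = a @ v                           # per-channel attention output
    return out

feat = np.random.randn(8, 20)                    # (channels, time) toy feature map
local = hs_block(feat)                           # locally enlarged RFs
glob = dssa(local)                               # global per-channel dependencies
```

Both operations preserve the feature-map shape, so in a real network they would be stacked inside residual blocks; here they only serve to make the split-and-stack and per-channel sparse-attention mechanics concrete.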
