Academy & Industry Research Collaboration Center (AIRCC)

Volume 11, Number 14, September 2021

End-to-End Chinese Dialect Discrimination with Self-Attention

  Authors

Yangjie Dan, Fan Xu, Mingwen Wang, Jiangxi Normal University, China

  Abstract

Dialect discrimination has important practical significance for the protection and inheritance of dialects. Traditional dialect discrimination methods focus on low-level acoustic features and ignore the meaning of the pronunciation itself, resulting in low performance. This paper systematically explores the effectiveness of pronunciation features of dialect speech, composed of phoneme sequence information, for dialect discrimination, and designs an end-to-end dialect discrimination model based on the multi-head self-attention mechanism. Specifically, we first adopt a residual convolutional neural network and the multi-head self-attention mechanism to extract the phoneme sequence features unique to different dialects, which form the novel phonetic features. Then, we perform dialect discrimination on the extracted phonetic features using a self-attention mechanism and a bi-directional long short-term memory (BiLSTM) network. Experimental results on the large-scale benchmark 10-way Chinese dialect corpus released by IFLYTEK show that our model outperforms the state-of-the-art alternatives by a large margin.
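
The sketch below illustrates one plausible reading of the pipeline described in the abstract: a residual CNN plus multi-head self-attention encoder that produces phoneme-level features (supervised with a CTC loss, per the keywords), followed by a self-attention and BiLSTM classifier over the 10 dialect classes. All layer sizes, the number of phoneme labels, and the module names are hypothetical illustrations, not the authors' published configuration.

# Minimal PyTorch sketch of the abstract's two-stage architecture.
# All hyper-parameters (feat_dim=80, model_dim=256, num_phonemes=100) are assumed.
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """1-D convolution block with a residual connection over acoustic frames."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm1d(dim),
            nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, dim, time)
        return x + self.conv(x)

class DialectDiscriminator(nn.Module):
    """Phoneme-feature encoder (residual CNN + multi-head self-attention)
    followed by a self-attention + BiLSTM dialect classifier."""
    def __init__(self, feat_dim=80, model_dim=256, num_phonemes=100, num_dialects=10):
        super().__init__()
        self.proj = nn.Conv1d(feat_dim, model_dim, kernel_size=1)
        self.res_cnn = nn.Sequential(*[ResidualConvBlock(model_dim) for _ in range(2)])
        self.encoder_attn = nn.MultiheadAttention(model_dim, num_heads=4, batch_first=True)
        self.ctc_head = nn.Linear(model_dim, num_phonemes + 1)   # +1 for the CTC blank
        self.cls_attn = nn.MultiheadAttention(model_dim, num_heads=4, batch_first=True)
        self.bilstm = nn.LSTM(model_dim, model_dim // 2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(model_dim, num_dialects)

    def forward(self, feats):                      # feats: (batch, time, feat_dim)
        x = self.proj(feats.transpose(1, 2))       # (batch, model_dim, time)
        x = self.res_cnn(x).transpose(1, 2)        # (batch, time, model_dim)
        x, _ = self.encoder_attn(x, x, x)          # phoneme-level sequence features
        ctc_logits = self.ctc_head(x)              # would be trained with nn.CTCLoss
        h, _ = self.cls_attn(x, x, x)              # self-attention over phonetic features
        h, _ = self.bilstm(h)                      # BiLSTM output: (batch, time, model_dim)
        dialect_logits = self.classifier(h.mean(dim=1))  # mean-pool over time, 10-way output
        return ctc_logits, dialect_logits

In this reading, the CTC head ties the encoder to phoneme sequences so that the downstream classifier sees pronunciation-level rather than purely acoustic features; how the two losses are combined is not specified in the abstract.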

  Keywords

Dialect discrimination, Multi-head attention mechanism, Phonetic sequence, Connectionist temporal classification.