Multilingual Speech to Text using Deep Learning based on MFCC Features


P Deepak Reddy, Chirag Rudresh and Adithya A S, PES University, India


This paper addresses the problem of multilingual speech recognition. Current speech recognition and translation methods achieve very low accuracy on sentences that mix two or more languages. The paper proposes a novel approach to this problem and highlights some of the drawbacks of existing recognition and translation methods.

The proposed approach recognises audio queries that contain a mixture of words from two languages, Kannada and English. The novelty of the approach is the use of a next Word Prediction model in combination with a Deep Learning speech recognition model to accurately recognise the input audio query and convert it to text. A second method proposed for multilingual speech recognition and translation uses cosine similarity between the audio features of words for fast and accurate recognition. The dataset used to train and test the models was generated manually by the authors, as no pre-existing audio and text dataset contained sentences mixing Kannada and English.
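The paper does not spell out the cosine-similarity matcher or how the Word Prediction model is combined with the acoustic scores, so the following is only a minimal sketch of the idea, assuming each word has already been reduced to a fixed-length MFCC feature vector; the function names, the reference dictionary layout, and the weighting parameter alpha are all illustrative assumptions, not the authors' implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_vec, reference_vecs):
    """Return the vocabulary word whose (assumed MFCC) feature vector
    is most similar to the query word's vector."""
    return max(reference_vecs,
               key=lambda word: cosine_similarity(query_vec, reference_vecs[word]))

def rescore(acoustic_scores, next_word_probs, alpha=0.7):
    """Illustrative combination of acoustic similarity with a next-word
    probability from a language model; alpha is a hypothetical weight."""
    return {w: alpha * acoustic_scores[w] + (1 - alpha) * next_word_probs.get(w, 0.0)
            for w in acoustic_scores}
```

For example, with a toy two-word vocabulary `{"namaste": [1.0, 0.1], "hello": [0.1, 1.0]}`, a query vector close to the first entry would be matched to "namaste", and `rescore` could then promote a candidate that the Word Prediction model considers more likely to follow the previous word.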

The DL speech recognition model combined with the Word Prediction model achieves 71% accuracy on the in-house multilingual dataset, outperforming existing translation and recognition solutions on the same test set.

Multilingual recognition and translation is an important problem to tackle because people often speak in a mixture of languages. Solving it lowers the barrier of language and communication, helping people connect better and more comfortably with each other.


Keywords: Natural Language Processing, Deep Learning, Multilingual Speech Recognition, Machine Learning, Speech to Text.