Volume 10, Number 5

Analyzing Architectures for Neural Machine Translation using Low Computational Resources

  Authors

Aditya Mandke, Onkar Litake, and Dipali Kadam, SCTR’s Pune Institute of Computer Technology, India

  Abstract

With recent developments in the field of Natural Language Processing, there has been a rise in the use of different architectures for Neural Machine Translation. Transformer architectures are used to achieve state-of-the-art accuracy, but they are very computationally expensive to train. Not everyone can afford setups with high-end GPUs and other such resources. We train our models on low computational resources and investigate the results. As expected, transformers outperformed the other architectures, but there were some surprising results: transformers with more encoder and decoder layers took more time to train yet achieved lower BLEU scores. The LSTM model performed well in our experiments and took comparatively less time to train than the transformers, making it suitable for use in situations with time constraints.
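
For readers unfamiliar with the evaluation metric referred to above, the following minimal Python sketch shows how a corpus-level BLEU score can be computed with the sacrebleu library. The sentences are hypothetical placeholders, not outputs of the models described in this paper.

    import sacrebleu

    # Hypothetical model outputs and reference translations; the paper's
    # actual test data and model predictions are not reproduced here.
    hypotheses = [
        "the cat sat on the mat",
        "he read the book yesterday",
    ]
    references = [
        "the cat is sitting on the mat",
        "he read the book yesterday",
    ]

    # Corpus-level BLEU; sacrebleu expects a list of reference streams,
    # hence the extra list around `references`.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(f"BLEU = {bleu.score:.2f}")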

  Keywords

Machine Translation, Indic Languages, Natural Language Processing.