Volume 13, Number 3
Investigating Multi-Feature Selection and Ensembling for Audio Classification
Authors
Muhammad Turab1, Teerath Kumar2,3, Malika Bendechache2,3,4, and Takfarinas Saber4,5
1Mehran University of Engineering and Technology, Pakistan
2ADAPT – Science Foundation Ireland Research Centre, Ireland
3Dublin City University, Ireland
4Lero – the Irish Software Research Centre, Ireland
5National University of Ireland, Ireland
Abstract
Deep Learning (DL) algorithms have shown impressive performance in diverse domains. Among them, audio has attracted many researchers over the last couple of decades due to its interesting patterns, particularly in the classification of audio data. For better audio classification performance, feature selection and combination play a key role, as they have the potential to make or break the performance of any DL model. To investigate this role, we conduct an extensive evaluation of the performance of several cutting-edge DL models (i.e., Convolutional Neural Network, EfficientNet, MobileNet, Support Vector Machine, and Multi-Layer Perceptron) with various state-of-the-art audio features (i.e., Mel Spectrogram, Mel Frequency Cepstral Coefficients, and Zero Crossing Rate), used either independently or in combination (i.e., through ensembling), on three different datasets (i.e., the Free Spoken Digits Dataset, the Audio Urdu Digits Dataset, and the Audio Gujarati Digits Dataset). Overall, our results suggest that the best feature selection depends on both the dataset and the model. However, feature combinations should be restricted to the features that already achieve good performance when used individually (i.e., mostly Mel Spectrogram and Mel Frequency Cepstral Coefficients). Such feature combination/ensembling enabled us to outperform the previous state-of-the-art results irrespective of our choice of DL model.
Keywords
Audio Classification, Audio Features, Deep Learning, Ensembling, Feature Selection.
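The three audio features named in the abstract are standard in audio processing. As an illustration only (the paper's own extraction pipeline is not described here), the following minimal Python sketch shows how these features are commonly computed with the librosa library; the file name is hypothetical.

import librosa
import numpy as np

# Load a short spoken-digit clip (hypothetical path); sr=None keeps the native sample rate.
y, sr = librosa.load("spoken_digit.wav", sr=None)

# Mel Spectrogram: a time-frequency representation on the perceptual mel scale,
# typically converted to decibels before being fed to a CNN-style model.
mel = librosa.feature.melspectrogram(y=y, sr=sr)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Mel Frequency Cepstral Coefficients: a compact summary of the spectral envelope.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Zero Crossing Rate: the per-frame rate at which the waveform changes sign.
zcr = librosa.feature.zero_crossing_rate(y)

print(mel_db.shape, mfcc.shape, zcr.shape)  # each is (n_features, n_frames)

Each feature yields a matrix over time frames, which can be fed to a model individually or stacked/ensembled, as investigated in the paper.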