Spectrogram-based voiceprint recognition using a deep neural network

This paper presents a speaker identification algorithm that uses a deep neural network (DNN) as the classifier to learn the features of voiceprints represented by spectrograms. The collected speech signals are pre-emphasized, windowed, and divided into frames; the magnitude of each frame's frequency spectrum is then computed to build the spectrogram. The local binary pattern (LBP) operator is applied to extract the texture features embedded in the spectrograms.
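The front end described above can be sketched in plain numpy. The frame length, hop size, pre-emphasis coefficient, and the basic 8-neighbour LBP variant below are illustrative assumptions, not parameters reported in the paper:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128, alpha=0.97):
    """Pre-emphasize, window, frame, and take per-frame magnitude spectra."""
    # Pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    window = np.hamming(frame_len)
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude of the one-sided spectrum; shape (freq_bins, n_frames)
    return np.abs(np.fft.rfft(frames, axis=1)).T

def lbp_histogram(image, bins=256):
    """Basic 8-neighbour LBP over a 2-D array, as a normalized histogram."""
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # Clockwise neighbour offsets; each contributes one bit of the LBP code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : img.shape[0] - 1 + dy,
                       1 + dx : img.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Treating the spectrogram as a grayscale image and pooling LBP codes into a histogram yields a fixed-length texture vector regardless of utterance length, which is what makes it suitable as classifier input.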

These texture features, represented as LBP vectors, are fed to a DNN with four hidden layers to learn the speech features. During learning, feature extraction and reconstruction procedures are repeated in each hidden layer. Through these extraction and reconstruction procedures, the DNN condenses the speech features of each individual into a recognition figure, from which the recognition results are obtained. The numerical experiments indicate that our approach achieves an acceptable recognition rate with high accuracy.
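A four-hidden-layer classifier over such LBP vectors can be sketched as a plain feed-forward network in numpy. The layer sizes, ReLU activation, and softmax cross-entropy training below are illustrative assumptions; the paper's layer-wise extraction and reconstruction (autoencoder-style pretraining) is not reproduced in this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class MLP:
    """Feed-forward classifier with four hidden layers (sizes are illustrative)."""
    def __init__(self, n_in, n_classes, hidden=(128, 64, 32, 16)):
        sizes = [n_in, *hidden, n_classes]
        # He initialization for the ReLU layers
        self.W = [rng.normal(0.0, np.sqrt(2.0 / a), (a, b))
                  for a, b in zip(sizes, sizes[1:])]
        self.b = [np.zeros(b) for b in sizes[1:]]

    def forward(self, X):
        acts = [X]
        for W, b in zip(self.W[:-1], self.b[:-1]):
            acts.append(relu(acts[-1] @ W + b))
        acts.append(softmax(acts[-1] @ self.W[-1] + self.b[-1]))
        return acts  # activations per layer; acts[-1] are class probabilities

    def train_step(self, X, y_onehot, lr=0.05):
        acts = self.forward(X)
        # Combined softmax + cross-entropy gradient at the output
        delta = (acts[-1] - y_onehot) / len(X)
        for i in reversed(range(len(self.W))):
            gW = acts[i].T @ delta
            gb = delta.sum(axis=0)
            if i > 0:
                # Backpropagate through the pre-update weights and ReLU mask
                delta = (delta @ self.W[i].T) * (acts[i] > 0)
            self.W[i] -= lr * gW
            self.b[i] -= lr * gb
```

As a usage sketch, training on synthetic 256-bin "LBP histograms" drawn from a few per-speaker clusters and taking `argmax` over the softmax output mirrors the identification step: the predicted class index is the recognized speaker.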