
Tone Recognition of Mandarin Speech Using BP Neural Network

In this article, the authors tackle tone recognition for Mandarin speech by building on wav2vec 2.0, a framework for self-supervised learning of speech representations. Because wav2vec 2.0 is pretrained on large amounts of unlabelled audio, the tone recognizer can benefit from data that carries no tone labels at all. The model draws on two main feature streams: wav2vec 2.0 embeddings (W2E) and low-level features (LLFs).

Wav2vec 2.0 Embeddings (W2E)

W2E denotes the frame-level embeddings produced by a pretrained wav2vec 2.0 model. Rather than relying on hand-crafted representations such as Mel-frequency cepstral coefficients (MFCCs) or log-spectral features, wav2vec 2.0 processes the raw waveform with a convolutional encoder followed by a transformer context network, learning its representation directly from unlabelled audio. These embeddings are computed from the same audio segment as the LLFs and, according to the authors, are more robust to noise and offer better time-frequency localization than traditional MFCCs.
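To make this concrete, here is a minimal sketch of extracting W2E-style embeddings with the widely used Hugging Face implementation of wav2vec 2.0. The checkpoint name and the input file are illustrative assumptions, not details from the paper.

```python
# Sketch: frame-level wav2vec 2.0 embeddings (W2E) via Hugging Face.
import torch
import torchaudio
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

model_name = "facebook/wav2vec2-base"  # assumed checkpoint, not the paper's
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)
model.eval()

waveform, sr = torchaudio.load("syllable.wav")  # hypothetical input clip
if sr != 16000:  # wav2vec 2.0 expects 16 kHz audio
    waveform = torchaudio.functional.resample(waveform, sr, 16000)

inputs = extractor(waveform.squeeze().numpy(), sampling_rate=16000,
                   return_tensors="pt")
with torch.no_grad():
    # last_hidden_state: (batch, frames, 768) contextual embeddings
    w2e = model(inputs.input_values).last_hidden_state
print(w2e.shape)
```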

LLFs (Low-Level Features)

LLFs are a set of features that capture low-level properties of the speech signal, such as pitch, energy, and spectral characteristics. They are computed over the same audio segment as W2E but are more sensitive to the low-frequency components of the signal. The authors feed LLFs into the model alongside W2E to improve recognition performance, especially in noisy environments.
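As an illustration, the sketch below computes the kinds of LLFs described above (a pitch track, short-time energy, and a spectral descriptor) with the librosa library. The exact feature set, frame parameters, and file name are assumptions made for the example, not the authors' recipe.

```python
# Sketch: pitch, energy, and spectral-centroid LLFs with librosa.
import librosa
import numpy as np

y, sr = librosa.load("syllable.wav", sr=16000)  # hypothetical clip

# Fundamental frequency (pitch) track via probabilistic YIN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# Short-time energy (RMS) and a simple spectral descriptor
rms = librosa.feature.rms(y=y)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

# Stack into a frame-level LLF matrix (frame counts trimmed to match)
n = min(len(f0), rms.shape[1], centroid.shape[1])
llf = np.vstack([np.nan_to_num(f0[:n]), rms[0, :n], centroid[0, :n]]).T
print(llf.shape)  # (frames, 3)
```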

Experiments

The authors conduct several ablation experiments to evaluate the contribution of W2E and LLFs to tone recognition. They compare a baseline model that uses LLFs alone against a model that combines LLFs with W2E. The combination significantly improves performance, raising the F1 score from 84.81% for the baseline to 86.90%.
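The fusion idea can be sketched as simple feature concatenation. The toy example below uses random stand-in features and a generic scikit-learn classifier purely to show the shape of the comparison; the paper's actual model, data, and scores are not reproduced here.

```python
# Sketch: "LLF only" vs. "LLF + W2E" fusion, with stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
llf = rng.normal(size=(n, 32))    # stand-in utterance-level LLF vectors
w2e = rng.normal(size=(n, 768))   # stand-in pooled wav2vec 2.0 embeddings
y = rng.integers(0, 4, size=n)    # four Mandarin tones (toy labels)

for name, X in [("LLF only", llf), ("LLF + W2E", np.hstack([llf, w2e]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(name, f1_score(yte, clf.predict(Xte), average="macro"))
```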

Attention Mechanism

The authors also add an attention mechanism so the model can focus on the most informative parts of the input signal. With attention, the F1 score improves further to 88.45%.
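A common way to realize such a mechanism is attention pooling over frame-level features. The PyTorch sketch below is one plausible form, with layer sizes chosen for illustration rather than taken from the paper.

```python
# Sketch: additive attention pooling over frame-level features.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) frame-level features (e.g. W2E + LLFs)
        weights = torch.softmax(self.score(x), dim=1)  # (batch, frames, 1)
        return (weights * x).sum(dim=1)                # (batch, dim)

pool = AttentionPool(dim=800)       # e.g. 768-d W2E plus 32-d LLFs
frames = torch.randn(8, 120, 800)   # toy batch of utterances
utterance_vec = pool(frames)        # one attention-weighted vector per clip
print(utterance_vec.shape)          # torch.Size([8, 800])
```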

Conclusion

In summary, the paper shows that self-supervised wav2vec 2.0 embeddings are a powerful input for Mandarin tone recognition. Combining W2E with LLFs improves accuracy in both quiet and noisy environments, and the attention mechanism adds a further gain by helping the model focus on the informative parts of the signal. Overall, the approach is a meaningful step for tone recognition and could benefit speech systems across a wide range of applications.