arxiv:2109.11680

Simple and Effective Zero-shot Cross-lingual Phoneme Recognition

Published on Sep 23, 2021
Abstract

Recent progress in self-training, self-supervised pretraining, and unsupervised learning has enabled well-performing speech recognition systems without any labeled data. However, in many cases labeled data is available for related languages, and these methods do not utilize it. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work that introduced task-specific architectures and used only part of a monolingually pretrained model.
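The phoneme-mapping step described above can be sketched as a nearest-neighbor lookup in articulatory feature space. The feature inventory and binary feature values below are illustrative assumptions, not the paper's actual tables, and the function names are hypothetical:

```python
# Sketch of the core idea: map each target-language phoneme to the closest
# training-language phoneme using articulatory feature vectors.
# The feature set (voiced, nasal, plosive, fricative) and values are toy examples.

from typing import Dict, Tuple

# Toy articulatory feature vectors for phonemes seen during training.
TRAIN_PHONEMES: Dict[str, Tuple[int, ...]] = {
    "p": (0, 0, 1, 0),
    "b": (1, 0, 1, 0),
    "m": (1, 1, 0, 0),
    "s": (0, 0, 0, 1),
}

def hamming(a: Tuple[int, ...], b: Tuple[int, ...]) -> int:
    """Number of articulatory features on which two phonemes differ."""
    return sum(x != y for x, y in zip(a, b))

def map_phoneme(target_features: Tuple[int, ...]) -> str:
    """Return the training phoneme whose feature vector is closest to the target's."""
    return min(TRAIN_PHONEMES, key=lambda p: hamming(TRAIN_PHONEMES[p], target_features))

# An unseen voiced fricative (feature vector 1,0,0,1, e.g. /z/) maps to the
# nearest seen phoneme, the voiceless fricative /s/.
print(map_phoneme((1, 0, 0, 1)))  # → s
```

At inference time, predictions over the training-language phoneme inventory can then be relabeled through this mapping to produce transcriptions in the unseen target language's inventory.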

