Zero-shot Cross-Lingual Transfer for Synthetic Data Generation in Grammatical Error Detection
Abstract
Grammatical Error Detection (GED) methods rely heavily on human-annotated error corpora. However, these annotations are unavailable in many low-resource languages. In this paper, we investigate GED in this context. Leveraging the zero-shot cross-lingual transfer capabilities of multilingual pre-trained language models, we train a model using data from a diverse set of languages to generate synthetic errors in other languages. These synthetic error corpora are then used to train a GED model. Specifically, we propose a two-stage fine-tuning pipeline where the GED model is first fine-tuned on multilingual synthetic data from target languages, followed by fine-tuning on human-annotated GED corpora from source languages. This approach outperforms current state-of-the-art annotation-free GED methods. We also analyse the errors produced by our method and other strong baselines, finding that our approach produces errors that are more diverse and more similar to human errors.
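The abstract describes a two-stage fine-tuning pipeline for token-level GED. Below is a minimal, illustrative sketch of how such a pipeline could be set up with a multilingual encoder; the choice of xlm-roberta-base, the JSON file names, and the binary correct/error label scheme are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: two-stage fine-tuning of a multilingual encoder
# for token-level GED. Model choice, file names, and hyperparameters are
# hypothetical placeholders, not the paper's exact setup.
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

MODEL_NAME = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tokenize_and_align(batch):
    # Align word-level GED labels (0 = correct, 1 = error) with subword tokens;
    # special tokens and continuation subwords get -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, word_labels in enumerate(batch["labels"]):
        prev_wid, labels = None, []
        for wid in enc.word_ids(batch_index=i):
            if wid is None or wid == prev_wid:
                labels.append(-100)
            else:
                labels.append(word_labels[wid])
            prev_wid = wid
        all_labels.append(labels)
    enc["labels"] = all_labels
    return enc

def fine_tune(data_path, output_dir):
    # Hypothetical JSON lines files with "tokens" and "labels" columns.
    ds = load_dataset("json", data_files=data_path)["train"]
    ds = ds.map(tokenize_and_align, batched=True, remove_columns=ds.column_names)
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3,
                             per_device_train_batch_size=16, report_to="none")
    trainer = Trainer(model=model, args=args, train_dataset=ds,
                      data_collator=DataCollatorForTokenClassification(tokenizer))
    trainer.train()
    trainer.save_model(output_dir)

# Stage 1: synthetic GED data generated in the target languages.
fine_tune("synthetic_target_langs.json", "stage1")
# Stage 2: continue from the same model on human-annotated source-language data.
fine_tune("human_annotated_source_langs.json", "stage2")
```

Because both calls reuse the same in-memory model, the second stage continues training from the stage-1 weights, mirroring the sequential fine-tuning the abstract describes.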
Community
Hi @GaeLop, congrats on your work!
Are you planning to release any artifacts (models, datasets) on the hub?
If yes, here's a guide: https://huggingface.co/docs/hub/models-uploading.
Here's a guide for uploading datasets: https://huggingface.co/docs/datasets/loading
Let me know if you need any assistance.
Cheers,
Niels
open-source @ HF
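As a companion to the guides linked above, here is a minimal sketch of pushing a fine-tuned GED model and a synthetic dataset to the Hub with the standard `push_to_hub` APIs; the local checkpoint directory and repository IDs are hypothetical placeholders.

```python
# Hedged example: upload artifacts to the Hugging Face Hub.
# Requires prior authentication (e.g. `huggingface-cli login`).
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hypothetical local checkpoint directory from the fine-tuning sketch above.
model = AutoModelForTokenClassification.from_pretrained("stage2")
tokenizer = AutoTokenizer.from_pretrained("stage2")
model.push_to_hub("your-username/multilingual-ged")
tokenizer.push_to_hub("your-username/multilingual-ged")

# Hypothetical synthetic error corpus, uploaded as a dataset repository.
dataset = load_dataset("json", data_files="synthetic_target_langs.json")
dataset.push_to_hub("your-username/synthetic-ged-errors")
```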
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Self-Translate-Train: A Simple but Strong Baseline for Cross-lingual Transfer of Large Language Models (2024)
- Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages (2024)
- Improving Grammatical Error Correction via Contextual Data Augmentation (2024)
- Organic Data-Driven Approach for Turkish Grammatical Error Correction and LLMs (2024)
- UniBridge: A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend