---
configs:
  - config_name: tokenized_data
    data_files: corpus_tokenized.csv
  - config_name: sources
    data_files: sources.csv
  - config_name: sentences_only
    data_files: corpus.csv
  - config_name: main_data
    data_files: corpus_level.csv
    default: true
license: cc-by-sa-4.0
license_name: license
license_link: LICENSE
size_categories:
  - 10M<n<100M
pretty_name: WJTSentDiL
---

## Summary

This dataset (WJTSentDiL, a corpus of Wikipedia, JpWaC, and Tatoeba Sentences with Difficulty Level) contains Japanese sentences collected from various online sources and processed to make them better suited as example sentences for L2 Japanese (Japanese as a second language) learners.

## Data Fields

In the main_data configuration:

- sentence (str): a Japanese sentence.
- level (str): the sentence's JLPT level, assigned by a text classifier (keep in mind that the classifier has significant limitations).
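The two-column layout of corpus_level.csv described above can be read with the standard csv module alone. This is a minimal sketch: the sample rows below are invented for illustration and are not taken from the corpus.

```python
import csv
import io

# Stand-in for open("corpus_level.csv", encoding="utf-8");
# the header names follow the Data Fields section.
sample = io.StringIO(
    "sentence,level\n"
    "これは例文です。,N5\n"
    "彼は昨日学校に行きました。,N4\n"
)

rows = list(csv.DictReader(sample))

# Group sentences by their assigned JLPT level.
by_level = {}
for row in rows:
    by_level.setdefault(row["level"], []).append(row["sentence"])

print(by_level["N5"])  # ['これは例文です。']
```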

In the tokenized_data configuration:

- sentence (str): a Japanese sentence.
- sentence_tokenized (list): list of the sentence's tokens as tokenized by ja_ginza.
- sentence_tokenized_lemma (list): same as above, but all tokens are lemmatized.

In the sentences_only configuration:

- sentence (str): a Japanese sentence.

In the sources configuration (sources.csv):

- source (str): the name of a source; sentences whose indices fall between this row's index and the next row's index were drawn from that source.
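The range-based lookup that sources.csv encodes can be sketched with a binary search over the boundary indices. The (index, source) pairs below are hypothetical, purely for illustration.

```python
import bisect

# Each pair is (index of the first sentence from this source, source name),
# mirroring the row layout of sources.csv. Values here are made up.
boundaries = [(0, "Wikipedia"), (950, "JpWaC"), (990, "Tatoeba")]

starts = [start for start, _ in boundaries]
names = [name for _, name in boundaries]

def source_of(sentence_index: int) -> str:
    """Return the source of the sentence at the given corpus index."""
    # bisect_right finds the last boundary whose start <= sentence_index.
    return names[bisect.bisect_right(starts, sentence_index) - 1]

print(source_of(10))   # Wikipedia
print(source_of(975))  # JpWaC
```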

## Sources

The sentences are drawn from Japanese Wikipedia, JpWaC, and Tatoeba.

## Dataset construction

Processing filters:

- Entries containing multiple sentences have been split into individual sentences
- Duplicates have been removed
- No more than 20% of a sentence's characters may be punctuation
- No more than 20% may be numbers

Additional filters applied only to Wikipedia sentences:

- No Latin, Cyrillic (Russian), or Arabic characters
- The sentence must end in punctuation, and its last word must be an adjective, verb, or auxiliary
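A rough sketch of the character-based filters above, using only the standard library. The 20% thresholds come from this README, but the exact punctuation set and counting method used to build the corpus are not specified, so treat this as an approximation; the final Wikipedia check (last word is an adjective, verb, or auxiliary) needs a POS tagger such as ja_ginza and is omitted here.

```python
import unicodedata

def punct_ratio(sentence: str) -> float:
    # Count characters in any Unicode punctuation category (P*).
    punct = sum(1 for ch in sentence if unicodedata.category(ch).startswith("P"))
    return punct / max(len(sentence), 1)

def digit_ratio(sentence: str) -> float:
    # str.isdigit() also covers full-width digits like １２３.
    digits = sum(1 for ch in sentence if ch.isdigit())
    return digits / max(len(sentence), 1)

def has_foreign_script(sentence: str) -> bool:
    # Wikipedia-only filter: reject Latin, Cyrillic, or Arabic letters.
    for ch in sentence:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if name.startswith(("LATIN", "CYRILLIC", "ARABIC")):
                return True
    return False

def keep(sentence: str, from_wikipedia: bool = False) -> bool:
    if punct_ratio(sentence) > 0.2 or digit_ratio(sentence) > 0.2:
        return False
    if from_wikipedia and has_foreign_script(sentence):
        return False
    return True

print(keep("これは例文です。"))          # True
print(keep("Hello ですね。", True))      # False: contains Latin letters
```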

## Some Stats

- 97% of sentences are from Japanese Wikipedia
- The average sentence length is 26 tokens
- The average kanji ratio is 37%
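The kanji-ratio statistic quoted above can be computed as the fraction of a sentence's characters that are CJK Unified Ideographs (U+4E00–U+9FFF). Whether the original computation also counts extension blocks or excludes whitespace is not stated, so this is an assumption.

```python
def kanji_ratio(sentence: str) -> float:
    """Fraction of characters in the CJK Unified Ideographs block."""
    kanji = sum(1 for ch in sentence if "\u4e00" <= ch <= "\u9fff")
    return kanji / max(len(sentence), 1)

print(kanji_ratio("日本語の文"))  # 4 kanji out of 5 characters -> 0.8
```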

## Licenses

The same licenses as the original works apply.

## Cite

@inproceedings{benedetti-etal-2024-automatically,
    title = "Automatically Suggesting Diverse Example Sentences for {L}2 {J}apanese Learners Using Pre-Trained Language Models",
    author = "Benedetti, Enrico  and
      Aizawa, Akiko  and
      Boudin, Florian",
    editor = "Fu, Xiyan  and
      Fleisig, Eve",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-srw.11",
    pages = "114--131",
    abstract = "Providing example sentences that are diverse and aligned with learners{'} proficiency levels is essential for fostering effective language acquisition. This study examines the use of Pre-trained Language Models (PLMs) to produce example sentences targeting L2 Japanese learners. We utilize PLMs in two ways: as quality scoring components in a retrieval system that draws from a newly curated corpus of Japanese sentences, and as direct sentence generators using zero-shot learning. We evaluate the quality of sentences by considering multiple aspects such as difficulty, diversity, and naturalness, with a panel of raters consisting of learners of Japanese, native speakers {--} and GPT-4. Our findings suggest that there is inherent disagreement among participants on the ratings of sentence qualities, except for difficulty. Despite that, the retrieval approach was preferred by all evaluators, especially for beginner and advanced target proficiency, while the generative approaches received lower scores on average. Even so, our experiments highlight the potential for using PLMs to enhance the adaptability of sentence suggestion systems and therefore improve the language learning journey.",
}