---
configs:
  - config_name: main_data
    data_files: corpus_level.csv
    default: true
  - config_name: tokenized_data
    data_files: corpus_tokenized.csv
  - config_name: sources
    data_files: sources.csv
  - config_name: sentences_only
    data_files: corpus.csv
license: cc-by-sa-4.0
license_name: license
license_link: LICENSE
size_categories:
  - 10M<n<100M
---

## Summary

This dataset contains Japanese sentences collected from various online sources and processed to make them better suited as example sentences for L2 (second-language) Japanese learners.
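
The dataset can be loaded per configuration with the `datasets` library. A minimal sketch (the repository id `bennexx/WJTSentDiL` is an assumption based on this repo's path):

```python
from datasets import load_dataset

# Default configuration: one sentence per row plus its estimated JLPT level.
# The repo id "bennexx/WJTSentDiL" is assumed from this repository's path.
ds = load_dataset("bennexx/WJTSentDiL", "main_data")

# Other configurations: "tokenized_data", "sources", "sentences_only".
print(ds["train"][0])
```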

## Data Fields

In the `main_data` configuration:

- `sentence` (str): a Japanese sentence.
- `level` (str): the JLPT level of the sentence, assigned by a text classifier (keep in mind that the model has many limitations).

In the `tokenized_data` configuration:

- `sentence` (str): a Japanese sentence.
- `sentence_tokenized` (list): the sentence's tokens as tokenized by `ja_ginza`.
- `sentence_tokenized_lemma` (list): same as above, but with every token lemmatized.
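
A minimal sketch of how the two token fields can be reproduced with spaCy and GiNZA (an illustration assuming the stock `ja_ginza` pipeline, not necessarily the exact setup used to build the dataset):

```python
import spacy

# Requires: pip install ginza ja_ginza
nlp = spacy.load("ja_ginza")

doc = nlp("猫が座っていた。")
sentence_tokenized = [token.text for token in doc]          # surface forms
sentence_tokenized_lemma = [token.lemma_ for token in doc]  # lemmatized forms

print(sentence_tokenized)
print(sentence_tokenized_lemma)
```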

In the `sentences_only` configuration:

- `sentence` (str): a Japanese sentence.

In the `sources.csv` file:

- `source` (str): the origin of a contiguous block of sentences; sentences whose indices fall between this row's index and the next row's index come from `source`.
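
A minimal sketch of mapping a sentence index back to its source under that interpretation (how the CSV stores its index column is an assumption):

```python
import bisect

import pandas as pd

# Assumption: the first CSV column holds the starting sentence index of each
# source's block, so it is read as the DataFrame index.
sources = pd.read_csv("sources.csv", index_col=0)
start_indices = sources.index.tolist()

def source_of(sentence_index: int) -> str:
    """Return the source whose block contains the given sentence index."""
    # Last block whose starting index is <= sentence_index.
    pos = bisect.bisect_right(start_indices, sentence_index) - 1
    return sources["source"].iloc[pos]

print(source_of(12345))
```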

## Sources

## Dataset construction

Processing filters (a sketch of the ratio checks follows the lists):

- Entries containing multiple sentences have been split into individual sentences
- Duplicates have been removed
- No more than 20% punctuation
- No more than 20% numbers

Additional filters applied only to Wikipedia sentences:

- No Latin, Cyrillic, or Arabic characters
- Sentences must end in punctuation, and the last word must be an adjective, verb, or auxiliary
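
A minimal sketch of the two ratio filters, assuming "punctuation" and "numbers" are counted per character via Unicode general categories (the exact character classes used by the authors are not specified):

```python
import unicodedata

def passes_ratio_filters(sentence: str) -> bool:
    """Reject sentences that are more than 20% punctuation or 20% numbers."""
    n = max(len(sentence), 1)  # guard against empty strings
    # Unicode general categories: P* = punctuation, N* = numbers.
    punct = sum(1 for ch in sentence if unicodedata.category(ch).startswith("P"))
    digits = sum(1 for ch in sentence if unicodedata.category(ch).startswith("N"))
    return punct / n <= 0.20 and digits / n <= 0.20

print(passes_ratio_filters("2024年に100人が来た。"))  # digit-heavy -> False
```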

## Some Stats

- 97% of sentences come from Japanese Wikipedia
- The average sentence length is 26 tokens
- The average Kanji ratio is 37%
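
A minimal sketch of how a per-sentence Kanji ratio could be computed (assuming "Kanji" means characters in the CJK Unified Ideographs block; the authors' exact definition is not stated):

```python
def kanji_ratio(sentence: str) -> float:
    """Share of characters in the CJK Unified Ideographs range (U+4E00-U+9FFF)."""
    if not sentence:
        return 0.0
    kanji = sum(1 for ch in sentence if "\u4e00" <= ch <= "\u9fff")
    return kanji / len(sentence)

print(kanji_ratio("日本語の文です。"))  # 4 kanji out of 8 characters -> 0.5
```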

## Licenses

The same licenses as the original works apply. [WIP]

## Cite

[WIP]