---
configs:
- config_name: tokenized_data
  data_files: "corpus_tokenized.csv"
- config_name: sources
  data_files: "sources.csv"
- config_name: sentences_only
  data_files: "corpus.csv"
- config_name: main_data
  data_files: "corpus_level.csv"
  default: true
license: cc-by-sa-4.0
license_name: license
license_link: LICENSE
size_categories:
- 10M<n<100M
---

### Summary

This dataset contains Japanese sentences collected from various online sources and processed to make them better suited as example sentences for L2 Japanese (Japanese as a second language) learners.

### Data Fields

In the `main_data` configuration:

- `sentence` (`str`): a Japanese sentence.
- `level` (`str`): the JLPT level of the sentence, assigned by a [text classifier](https://huggingface.co/bennexx/cl-tohoku-bert-base-japanese-v3-jlpt-classifier). Please keep in mind that the classifier has many limitations.
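
A minimal sketch of loading this configuration with the `datasets` library (the repository ID below is a placeholder, and the exact `level` label format, e.g. `"N5"`, is an assumption):

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual dataset ID.
ds = load_dataset("user/japanese-example-sentences", "main_data", split="train")

# Keep only sentences classified as JLPT N5 (label format assumed).
n5 = ds.filter(lambda row: row["level"] == "N5")
print(n5[0]["sentence"])
```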

In the `tokenized_data` configuration:

- `sentence` (`str`): a Japanese sentence.
- `sentence_tokenized` (`list`): the sentence's tokens, as tokenized by `ja_ginza` (see the sketch after this list).
- `sentence_tokenized_lemma` (`list`): same as above, but with every token lemmatized.
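
The tokenized columns correspond to GiNZA's `ja_ginza` model for spaCy. A minimal sketch of how equivalent output could be produced (the exact pipeline version and settings used to build the dataset are not documented here):

```python
import spacy

# Requires: pip install ginza ja_ginza
nlp = spacy.load("ja_ginza")

doc = nlp("猫が好きです。")
tokens = [token.text for token in doc]    # surface forms -> sentence_tokenized
lemmas = [token.lemma_ for token in doc]  # lemmatized forms -> sentence_tokenized_lemma
print(tokens, lemmas)
```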

In the `sentences_only` configuration:

- `sentence` (`str`): a Japanese sentence.

In the `sources` configuration (`sources.csv`):

- `source` (`str`): the corpus a block of sentences comes from; sentences whose index falls between this entry's index and the next entry's index belong to `source`.
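
A minimal sketch of how the `sources` configuration could be used to look up which corpus a given sentence came from. Only the `source` column is documented above, so the `index` column name (the starting sentence index of each block) is an assumption:

```python
from bisect import bisect_right
from datasets import load_dataset

# Placeholder repository ID; the "index" column name is assumed.
sources = load_dataset("user/japanese-example-sentences", "sources", split="train")
starts = sources["index"]  # assumed ascending starting indices of each source block

def source_of(sentence_index: int) -> str:
    """Return the source whose index range contains sentence_index."""
    return sources[bisect_right(starts, sentence_index) - 1]["source"]

print(source_of(0))
```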

### Sources

- Levels 0–4 of the Japanese Web as Corpus (jpWaC) 1.0. [jpWaC page](https://www.clarin.si/repository/xmlui/handle/11356/1047)
- Tatoeba sentences downloaded on 2023-12-20. [Tatoeba Project](https://tatoeba.org/en/downloads)
- Japanese Wikipedia dump of 2023-12-01. [jawiki-20231201-pages-articles-multistream.xml.bz2](https://dumps.wikimedia.org/jawiki/20231201/jawiki-20231201-pages-articles-multistream.xml.bz2)

### Dataset construction

Processing filters (the ratio-based filters are illustrated in the sketch after these lists):

- Entries containing multiple sentences were split into individual sentences
- Duplicates were removed
- Sentences with more than 20% punctuation were removed
- Sentences with more than 20% digits were removed

Additional filters applied only to Wikipedia sentences:

- Sentences containing Latin, Cyrillic, or Arabic characters were removed
- Sentences must end in punctuation, and their last word must be an adjective, verb, or auxiliary
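
A rough sketch of the ratio- and character-based filters above (thresholds taken from the lists; the exact punctuation set and character classes used for the released data, as well as the part-of-speech check on the final word, are not reproduced here):

```python
import re

PUNCT_RE = re.compile(r"[、。，．・「」『』（）！？.,!?()\[\]]")
DIGIT_RE = re.compile(r"[0-9０-９]")
FOREIGN_RE = re.compile(r"[A-Za-z\u0400-\u04FF\u0600-\u06FF]")  # Latin, Cyrillic, Arabic

def passes_filters(sentence: str, from_wikipedia: bool = False) -> bool:
    """Apply the ratio filters, plus the character filter for Wikipedia sentences."""
    n = len(sentence)
    if n == 0:
        return False
    if len(PUNCT_RE.findall(sentence)) / n > 0.20:  # no more than 20% punctuation
        return False
    if len(DIGIT_RE.findall(sentence)) / n > 0.20:  # no more than 20% digits
        return False
    if from_wikipedia and FOREIGN_RE.search(sentence):
        return False
    return True

print(passes_filters("これは例文です。"))  # True
```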

### Some Stats

- 97% of the sentences are from Japanese Wikipedia
- The average sentence length is 26 tokens
- The average kanji ratio is 37%
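
For reference, a minimal sketch of how a kanji ratio could be computed, assuming it is the share of characters in the CJK Unified Ideographs block (the exact definition behind the statistic above is not documented):

```python
import re

KANJI_RE = re.compile(r"[\u4E00-\u9FFF]")  # CJK Unified Ideographs

def kanji_ratio(sentence: str) -> float:
    """Fraction of characters in the sentence that are kanji."""
    return len(KANJI_RE.findall(sentence)) / len(sentence) if sentence else 0.0

print(kanji_ratio("日本語の文です。"))  # 0.5
```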

### Licenses

The same licenses as the original works apply. [WIP]

### Citation

[WIP]