---
configs:
- config_name: tokenized_data
  data_files: "corpus_tokenized.csv"
- config_name: sources
  data_files: "sources.csv"
- config_name: sentences_only
  data_files: "corpus.csv"
- config_name: main_data
  data_files: "corpus_level.csv"
  default: true
license: cc-by-sa-4.0
license_name: license
license_link: LICENSE
size_categories:
- 10M<n<100M
---
### Summary
This dataset contains Japanese sentences collected from various online sources and processed to make them better suited as example sentences for learners of Japanese as a second language (L2 learners).
### Data Fields
In the `main_data` configuration:
- `sentence` (`str`): a Japanese sentence.
- `level` (`str`): JLPT level of the sentence, assigned by a [text classifier](https://huggingface.co/bennexx/cl-tohoku-bert-base-japanese-v3-jlpt-classifier). Keep in mind that the classifier has significant limitations.
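For illustration, the `level` field can presumably be reproduced with the linked classifier through the standard `transformers` pipeline. This is a minimal sketch; the exact label strings the model returns are an assumption here:

```python
# Minimal sketch: assign a JLPT level with the linked classifier.
# The model id comes from the link above; the label format shown in the
# comment (e.g. "N5") is an assumption, not verified against the model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bennexx/cl-tohoku-bert-base-japanese-v3-jlpt-classifier",
)
print(classifier("猫が好きです。"))  # e.g. [{"label": "N5", "score": 0.97}]
```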
In the `tokenized_data` configuration:
- `sentence` (`str`): a Japanese sentence.
- `sentence_tokenized` (`list`): list of the sentence's tokens as tokenized by `ja_ginza`.
- `sentence_tokenized_lemma` (`list`): same as above, but all tokens are lemmatized.
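The tokenized fields above can be reproduced with GiNZA's spaCy pipeline. A minimal sketch, assuming a standard `ja_ginza` install (`pip install ginza ja_ginza`); the GiNZA version and pipeline settings used by the authors are not stated:

```python
# Sketch of producing `sentence_tokenized` and `sentence_tokenized_lemma`
# with ja_ginza. The authors' GiNZA version/settings are an assumption.
import spacy

nlp = spacy.load("ja_ginza")
doc = nlp("日本語の例文を解析します。")
sentence_tokenized = [token.text for token in doc]          # surface forms
sentence_tokenized_lemma = [token.lemma_ for token in doc]  # lemmatized forms
print(sentence_tokenized, sentence_tokenized_lemma)
```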
In the `sentences_only` configuration:
- `sentence` (`str`): a Japanese sentence.
In the `sources` configuration (`sources.csv`):
- `source` (`str`): the origin of a contiguous block of sentences; sentences whose index falls between this entry's index and the next entry's index come from `source`.
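The configurations above can be loaded with the `datasets` library. A minimal sketch; `user/dataset-name` is a placeholder for this dataset's actual Hub id:

```python
# Minimal loading sketch. "user/dataset-name" is a placeholder; the config
# names come from the YAML header of this card.
from datasets import load_dataset

main = load_dataset("user/dataset-name", "main_data", split="train")
tokenized = load_dataset("user/dataset-name", "tokenized_data", split="train")

print(main[0]["sentence"], main[0]["level"])
print(tokenized[0]["sentence_tokenized"])
```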
### Sources
- Levels 0~4 of the Japanese Web as Corpus 1.0. [jpWaC page](https://www.clarin.si/repository/xmlui/handle/11356/1047)
- Tatoeba sentences, downloaded on December 20, 2023. [Tatoeba Project](https://tatoeba.org/en/downloads)
- Japanese Wikipedia dump from December 1, 2023. [jawiki-20231201-pages-articles-multistream.xml.bz2](https://dumps.wikimedia.org/jawiki/20231201/jawiki-20231201-pages-articles-multistream.xml.bz2)
### Dataset construction
Processing filters (the ratio checks are sketched in code after these lists):
- Entries containing multiple sentences were split into individual sentences
- Duplicate sentences were removed
- Sentences with more than 20% punctuation characters were removed
- Sentences with more than 20% numeric characters were removed

Additionally, the following filters were applied only to Wikipedia sentences:
- No Latin, Cyrillic, or Arabic characters
- The sentence must end in punctuation, and its last word must be an adjective, verb, or auxiliary
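For concreteness, the two ratio filters might be implemented as follows. This is an illustrative re-implementation; the exact character classes the authors counted are an assumption:

```python
# Illustrative re-implementation of the punctuation/number ratio filters;
# the exact character classes used by the authors are an assumption.
import unicodedata

def char_ratio(sentence: str, predicate) -> float:
    """Fraction of characters in `sentence` satisfying `predicate`."""
    if not sentence:
        return 0.0
    return sum(predicate(c) for c in sentence) / len(sentence)

def keep(sentence: str) -> bool:
    is_punct = lambda c: unicodedata.category(c).startswith("P")
    return (char_ratio(sentence, is_punct) <= 0.20
            and char_ratio(sentence, str.isdigit) <= 0.20)

print(keep("今日は良い天気ですね。"))  # True
print(keep("1234567890です。"))        # False: over 20% digits
```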
### Some Stats
- 97% of sentences are from Japanese Wikipedia
- The average sentence length is 26 tokens
- The average Kanji ratio is 37% (one possible definition is sketched below)
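A plausible way to compute the Kanji ratio is shown below; the authors' exact definition (character ranges, denominator) is an assumption:

```python
# Hypothetical helper for the Kanji ratio statistic: counts CJK Unified
# Ideographs over all characters. May differ from the authors' definition.
def kanji_ratio(sentence: str) -> float:
    kanji = sum("\u4e00" <= c <= "\u9fff" for c in sentence)
    return kanji / len(sentence) if sentence else 0.0

print(kanji_ratio("日本語の文です。"))  # 0.5 (4 of 8 characters)
```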
### Licenses
The same licenses as the original works apply. [WIP]
### Cite
[WIP] |