Update README.md

README.md CHANGED
@@ -46,3 +46,34 @@ configs:
  - split: test
    path: data/test-*
---

This dataset is a detokenized version of the ECMT dataset, produced with the kiwipiepy library.

The script used to convert the dataset is here: https://gist.github.com/ianporada/a246ebf59696c6e16e1bc1873bc182a4

The library versions used are kiwipiepy==0.20.3 and kiwipiepy_model==0.20.0.
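
For context, here is a minimal sketch of the kind of detokenization kiwipiepy provides. The morpheme/tag pairs are invented for illustration, and `Kiwi.join` is only an assumption about the mechanism; the actual conversion logic is in the gist linked above.

```python
# Illustrative only: reconstruct a surface sentence from (morpheme, POS tag) pairs.
# The pairs below are made up; the real conversion script is in the gist above
# and may use a different part of the kiwipiepy API.
from kiwipiepy import Kiwi

kiwi = Kiwi()

# Hypothetical morpheme-level tokens with Sejong-style tags.
morphs = [
    ("프랑스", "NNP"), ("의", "JKG"),
    ("세계", "NNG"), ("적", "XSN"), ("이", "VCP"), ("ㄴ", "ETM"),
    ("의상", "NNG"), ("디자이너", "NNG"),
]

# join() merges the morphemes back into surface text and decides spacing,
# producing roughly "프랑스의 세계적인 의상 디자이너".
print(kiwi.join(morphs))
```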

The dataset schema is as follows (a short usage sketch follows the schema block):
```python
{
    # the original document filename
    "doc_id": str,

    # a list of sentences in the document
    "sentences": [
        {
            "index": int,  # the index of the sentence within the document
            "detokenized_text": str,  # the text of the sentence as a single string (detokenized using kiwipiepy)

            # a list of token positions, each a tuple of the form (start, end);
            # the token at index i corresponds to characters detokenized_text[start:end]
            "token_positions": [(int, int), ...],

            # the original values of each token from the dataset
            "tokens": [{"index": int, "text": str, "xpos": str}, ...],
        },
        ...
    ],

    # a list of coreference chains; each chain is a list of mentions, and each mention
    # is a list of the form [sentence_index, start_token_index, end_token_index],
    # where token indices are inclusive indices within the given sentence
    "coref_chains": [[[int, int, int], ...], ...]
}
```
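
A short usage sketch for the schema above, assuming the data is read with the Hugging Face `datasets` library; the repository id below is a placeholder, and the access pattern follows the nested structure exactly as written in the schema.

```python
# Sketch only: the dataset id is a placeholder; substitute the actual repository.
from datasets import load_dataset

ds = load_dataset("username/ecmt-detokenized", split="test")  # placeholder id

doc = ds[0]
sent = doc["sentences"][0]
text = sent["detokenized_text"]

# Recover each token's surface form from its (start, end) character span.
for tok, (start, end) in zip(sent["tokens"], sent["token_positions"]):
    print(tok["index"], tok["xpos"], text[start:end])

# Map the first mention of the first coreference chain back to text.
# Token indices are inclusive, so the mention covers tokens start..end.
if doc["coref_chains"]:
    sent_idx, tok_start, tok_end = doc["coref_chains"][0][0]
    mention_sent = doc["sentences"][sent_idx]
    char_start = mention_sent["token_positions"][tok_start][0]
    char_end = mention_sent["token_positions"][tok_end][1]
    print(mention_sent["detokenized_text"][char_start:char_end])
```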