Create README.md
---
configs:
- config_name: default
  data_files:
  - split: train
    path: "csqa-Train.parquet"
  - split: validation
    path: "csqa-Validation.parquet"
---

Commonsense-QA augmented with the chain-of-thought (CoT) few-shot exemplars introduced in "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (https://arxiv.org/pdf/2201.11903). The train split holds only the hand-annotated exemplars; the validation split holds the full CSQA validation set with an empty `rationale` column.
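
A minimal loading sketch; `user/csqa-cot` is a placeholder repo id, so substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# "user/csqa-cot" is a placeholder repo id; replace it with this
# dataset's actual path on the Hugging Face Hub.
exemplars = load_dataset("user/csqa-cot", split="train")      # CoT few-shot exemplars
eval_set = load_dataset("user/csqa-cot", split="validation")  # CSQA validation questions

print(exemplars[0]["question"])
print(exemplars[0]["rationale"])
```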

<details>
<summary>Generation Code</summary>

```python
import pandas as pd
from datasets import load_dataset

# The CoT few-shot exemplars from the paper, keyed by question text.
FEW_SHOT_EXEMPLARS = [
    {
        "question": "What do people use to absorb extra ink from a fountain pen?",
        "rationale": "The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink.",
    },
    {
        "question": "What home entertainment equipment requires cable?",
        "rationale": "The answer must require cable. Of the above choices, only television requires cable.",
    },
    {
        "question": "The fox walked from the city into the forest, what was it looking for?",
        "rationale": "The answer must be something in the forest. Of the above choices, only natural habitat is in the forest.",
    },
    {
        "question": "Sammy wanted to go to where the people were. Where might he go?",
        "rationale": "The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people.",
    },
    {
        "question": "Where do you put your grapes just before checking out?",
        "rationale": "The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items.",
    },
    {
        "question": "Google Maps and other highway and street GPS services have replaced what?",
        "rationale": "The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions.",
    },
    {
        "question": "Before getting a divorce, what did the wife feel who was doing all the work?",
        "rationale": "The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness.",
    },
]

# Validation split: every CSQA validation question, with an empty rationale column.
dataset = load_dataset("tau/commonsense_qa", split="validation").to_pandas()
dataset["rationale"] = ""
dataset.to_parquet("csqa-Validation.parquet", index=False)

# Train split: only the exemplar questions, matched by exact question text,
# each paired with its hand-written rationale from the paper.
examples = []
dataset = load_dataset("tau/commonsense_qa", split="train").to_pandas()
for example in FEW_SHOT_EXEMPLARS:
    row = dataset[dataset["question"] == example["question"]].iloc[0].copy()
    row["rationale"] = example["rationale"]
    examples.append(row)
pd.DataFrame(examples).to_parquet("csqa-Train.parquet", index=False)
```
</details>
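
For context, the train split is intended to serve as the few-shot prompt prefix. Below is a sketch of one way to assemble a CoT prompt from it; the exact prompt template is an assumption loosely following the paper, and `user/csqa-cot` is again a placeholder repo id:

```python
from datasets import load_dataset

def format_example(row, with_answer=True):
    # Render one CSQA row in a CoT prompt style loosely following the paper;
    # the template here is an assumption, not specified by this dataset.
    options = " ".join(
        f"({label.lower()}) {text}"
        for label, text in zip(row["choices"]["label"], row["choices"]["text"])
    )
    s = f"Q: {row['question']}\nAnswer Choices: {options}\nA:"
    if with_answer:
        s += f" {row['rationale']} So the answer is ({row['answerKey'].lower()})."
    return s

# "user/csqa-cot" is a placeholder repo id.
exemplars = load_dataset("user/csqa-cot", split="train")
target = load_dataset("user/csqa-cot", split="validation")[0]

# Exemplars with rationales first, then the unanswered target question.
prompt = "\n\n".join(format_example(r) for r in exemplars)
prompt += "\n\n" + format_example(target, with_answer=False)
print(prompt)
```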