Datasets: proper hf datasets schema
README.md CHANGED
@@ -6,36 +6,36 @@ pretty_name: HackerNews comments dataset
 dataset_info:
   config_name: default
   features:
-… (previous features list not captured in this view)
+  - name: id
+    dtype: int64
+  - name: deleted
+    dtype: bool
+  - name: type
+    dtype: string
+  - name: by
+    dtype: string
+  - name: time
+    dtype: int64
+  - name: text
+    dtype: string
+  - name: dead
+    dtype: bool
+  - name: parent
+    dtype: int64
+  - name: poll
+    dtype: int64
+  - name: kids
+    sequence: int64
+  - name: url
+    dtype: string
+  - name: score
+    dtype: int64
+  - name: title
+    dtype: string
+  - name: parts
+    sequence: int64
+  - name: descendants
+    dtype: int64
 configs:
 - config_name: default
   data_files:
@@ -45,7 +45,7 @@ configs:
 
 # Hackernews Comments Dataset
 
-A dataset of all [HN API](https://github.com/HackerNews/API) items from `id=0` till `id=…`
+A dataset of all [HN API](https://github.com/HackerNews/API) items from `id=0` till `id=41422887` (so from 2006 till 02 Sep 2024). The dataset is built by scraping the HN API according to its official [schema and docs](https://github.com/HackerNews/API). Scraper code is also available on GitHub: [nixiesearch/hnscrape](https://github.com/nixiesearch/hnscrape)
 
 ## Dataset contents
 
@@ -68,10 +68,14 @@ No cleaning, validation or filtering was performed. The resulting data files are
 
 You can directly load this dataset with a [Huggingface Datasets](https://github.com/huggingface/datasets/) library.
 
+```shell
+pip install datasets zstandard
+```
+
 ```python
 from datasets import load_dataset
 
-ds = load_dataset("nixiesearch/hackernews-comments")
+ds = load_dataset("nixiesearch/hackernews-comments", split="train")
 print(ds.features)
 
 ```
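The snippet in the updated README loads the full dataset into the local cache. As a minimal sketch (not part of this commit), the same data can be consumed lazily via `load_dataset`'s `streaming=True` option, filtering on the fields declared in the schema above:

```python
from datasets import load_dataset

# Stream shards lazily instead of downloading every .zst file up front.
ds = load_dataset(
    "nixiesearch/hackernews-comments", split="train", streaming=True
)

# Keep live comments only. Per the HN API schema, `deleted` and `dead`
# are absent (None) on ordinary items, so None is treated as False here.
comments = ds.filter(
    lambda item: item["type"] == "comment"
    and not (item["deleted"] or item["dead"])
)

for item in comments.take(3):
    print(item["id"], item["by"], (item["text"] or "")[:80])
```

Since the README notes that no cleaning, validation, or filtering was performed, dropping deleted and dead items like this is usually the first step before working with the comment text.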
example.py ADDED
@@ -0,0 +1,4 @@
+from datasets import load_dataset
+
+ds = load_dataset("nixiesearch/hackernews-comments", split="train")
+print(ds.features)
items/items_41422884_41522893_1728354080290.jsonl.zst DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:42c45ce243e181c442ee2f50719ef49b4e48dba07db8881e25239b7f396e062f
-size 14855888
items/items_41522879_41623041_1728354216260.jsonl.zst DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f64fdb355ac213d60d44ac95af190c9cf5bf1a9a1b1ff1a95347e3932bee404b
-size 14573672
items/items_41623031_41723169_1728354354981.jsonl.zst DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5b6e686cfc2c95718e5ccc3179382a09eddcfd2345ae6447d36996b08c991eba
-size 14676249
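Each deleted entry above is a Git LFS pointer rather than the shard itself: `oid sha256:` records the SHA-256 digest of the actual `.jsonl.zst` file and `size` its byte length. A minimal sketch for checking a previously downloaded shard against the first pointer (the local path is hypothetical):

```python
import hashlib
import os

# Digest and size recorded in the first deleted LFS pointer above.
EXPECTED_OID = "42c45ce243e181c442ee2f50719ef49b4e48dba07db8881e25239b7f396e062f"
EXPECTED_SIZE = 14855888

# Hypothetical local copy of the shard.
path = "items_41422884_41522893_1728354080290.jsonl.zst"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

print("size matches:", os.path.getsize(path) == EXPECTED_SIZE)
print("oid matches:", sha.hexdigest() == EXPECTED_OID)
```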