---
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama 627B
license: apache-2.0
---
## Getting Started

SlimPajama-627B consists of 59,166 jsonl files. It is a cleaned and deduplicated version of [Together Computer's RedPajama](https://github.com/togethercomputer/redpajama-data).

You can download the dataset using [Hugging Face datasets](https://huggingface.co/docs/datasets/load_hub):
```python
from datasets import load_dataset
ds = load_dataset("cerebras/SlimPajama-627B")
```
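
Because the full corpus is several hundred gigabytes on disk, you may prefer to stream it rather than download everything first. Below is a minimal sketch using the standard `streaming=True` option of `datasets`; the `split` name and `.take()` call follow the usual Hugging Face API:
```python
from datasets import load_dataset

# Stream the training split instead of materializing the full dataset locally.
ds = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

# Inspect a few examples without downloading the whole corpus.
for example in ds.take(3):
    print(example["meta"]["redpajama_set_name"], example["text"][:80])
```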

## Background

We release SlimPajama – the largest deduplicated, multi-corpora, open-source dataset for training large language models. SlimPajama was created by cleaning and deduplicating the RedPajama dataset from Together Computer via MinHashLSH. By filtering out low-quality data and duplicates, we removed 49.6% of bytes, slimming the dataset down from 1.21T to 627B tokens. We believe SlimPajama offers the highest-quality and most compute-efficient data for training runs under 627B tokens, and, when upsampled, we expect it to perform as well as or better than RedPajama-1T at trillion-token scale. This release was made possible with the support of our customer OpenTensor. We believe SlimPajama is currently the most attractive open-source dataset because it combines strict deduplication with curated data sources, and it can easily be upsampled to increase the number of tokens and precisely control the amount of duplication present.

Applying [MinHashLSH](http://infolab.stanford.edu/~ullman/mmds/book0n.pdf) deduplication to trillion-token datasets like RedPajama was not possible with off-the-shelf open-source code. We made several optimizations to existing solutions to produce infrastructure that can perform MinHashLSH deduplication on trillion-token datasets in a distributed, multi-threaded, and memory-efficient fashion. Today we are open-sourcing this infrastructure to enable the community to develop higher-quality, deduplicated datasets in the future.
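
To make the idea concrete, here is a minimal single-machine sketch of MinHashLSH near-duplicate detection using the third-party `datasketch` library. This is an illustration only, not our distributed implementation; the n-gram construction, `num_perm=128`, and the 0.8 similarity threshold are illustrative assumptions rather than the exact SlimPajama parameters.
```python
from datasketch import MinHash, MinHashLSH

def minhash_signature(text, num_perm=128, n=13):
    """Build a MinHash signature over lowercased word n-grams."""
    m = MinHash(num_perm=num_perm)
    tokens = text.lower().split()
    for i in range(max(len(tokens) - n + 1, 1)):
        m.update(" ".join(tokens[i:i + n]).encode("utf-8"))
    return m

docs = {
    "a": "the quick brown fox jumps over the lazy dog " * 3,
    "b": "the quick brown fox jumps over the lazy dog " * 3,
    "c": "an entirely different document about language models",
}

# LSH index that returns candidates above the chosen Jaccard-similarity threshold.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
signatures = {key: minhash_signature(text) for key, text in docs.items()}
for key, sig in signatures.items():
    lsh.insert(key, sig)

# Query each document for near-duplicates already in the index (excluding itself).
for key, sig in signatures.items():
    print(key, [k for k in lsh.query(sig) if k != key])
```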


### Our observations of the original dataset

1. RedPajama contains a portion of partially downloaded files.
2. Some (~2%) examples contain empty text. They were downloaded correctly, but do not contain useful content that a model can be trained on.
3. The data contains many (~50%) duplicates. The RedPajama team deduplicated some sources (Books, GitHub, Commoncrawl), but did not deduplicate all sources.


### Our contributions

1. SlimPajama 627B – the largest deduplicated, multi-corpora, open dataset for LLM training. We release it under the Apache 2.0 license.
2. Validation and test sets of ~500M tokens each, against which the training data has been decontaminated.
3. A library of methods to replicate our pre-processing or to pre-process other datasets from scratch. To the best of our knowledge, these are the first open-source tools that enable cleaning and MinHashLSH deduplication of text data at trillion-token scale.


The full set of scripts to recreate the dataset from the original RedPajama dataset is available in the Cerebras GitHub repository. Our cleaning and deduplication process is described in detail in the SlimPajama blog post.

## Dataset Summary

#### Comparison of dataset features
| Dataset         | Tokens | Open Source | Curated Data Sources | Deduplicated |
| --------------- | ------ | ----------- | -------------------- | ------------ |
| SlimPajama      | 627B   | **Yes**     | **Yes**              | **Yes**      |
| RedPajama       | 1.21T  | **Yes**     | **Yes**              | No           |
| RefinedWeb-600B | 600B   | **Yes**     | No                   | **Yes**      |
| RefinedWeb-5T   | 5T     | No          | No                   | **Yes**      |
| LLaMA           | 1.4T   | No          | **Yes**              | **Yes**      |
| MPT             | 1T     | No          | **Yes**              | No           |
| MassiveText     | 1.4T   | No          | **Yes**              | **Yes**      |


#### Document low-length filter rates

| Data source   | Document low-length filter rate |
| ------------- | ------------------------------- |
| Commoncrawl   | 0.02%                           |
| C4            | 4.70%                           |
| GitHub        | 0.00%                           |
| Books         | 0.00%                           |
| ArXiv         | 0.62%                           |
| Wikipedia     | 0.00%                           |
| StackExchange | 0.32%                           |
| Total         | 1.86%                           |
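
As an illustration of what such a low-length filter can look like, here is a minimal sketch; the 200-character cutoff and the normalization below are assumptions for the example, and the exact rule is defined in the released preprocessing code.
```python
import string

def passes_low_length_filter(text: str, min_chars: int = 200) -> bool:
    """Drop documents that are too short to be useful after normalization.

    The 200-character cutoff and the normalization here are illustrative,
    not the exact rule used to produce SlimPajama.
    """
    normalized = text.translate(str.maketrans("", "", string.punctuation))
    normalized = "".join(normalized.split())  # strip all whitespace
    return len(normalized) >= min_chars

print(passes_low_length_filter("too short"))             # False
print(passes_low_length_filter("long document " * 50))   # True
```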

#### Byte deduplication rates

| Data source    | Dedupe byte prune rate |
| -------------  | ---------------------- |
| Commoncrawl    | 63.76%                 |
| C4             | 6.85%                  |
| GitHub         | 46.16%                 |
| Books          | 2.01%                  |
| ArXiv          | 0.06%                  |
| Wikipedia      | 2.24%                  |
| StackExchange  | 0.20%                  |
| Total          | 49.60%                 |

#### Data source proportions for SlimPajama and RedPajama

| Data source   | SlimPajama | RedPajama |
| ------------- | ---------- | --------- |
| Commoncrawl   | 52.2%      | 72.6%    |
| C4            | 26.7%      | 14.4%    |
| GitHub        | 5.2%       | 4.9%     |
| Books         | 4.2%       | 2.1%     |
| ArXiv         | 4.6%       | 2.3%     |
| Wikipedia     | 3.8%       | 2.0%     |
| StackExchange | 3.3%       | 1.7%     |


### Languages

Primarily English, with some non-English files in Wikipedia.


### Dataset Structure

The dataset consists of jsonl files, with structure as follows:

```json
{
    "text": ...,
    "meta": {"redpajama_set_name": "RedPajamaCommonCrawl" | "RedPajamaC4" | "RedPajamaGithub" | "RedPajamaBook" | "RedPajamaArXiv" | "RedPajamaWikipedia" | "RedPajamaStackExchange"},
}
```
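
Using only the fields shown above, you can, for example, restrict a streamed copy of the dataset to a single source. A minimal sketch (the choice of `RedPajamaArXiv` is arbitrary):
```python
from datasets import load_dataset

ds = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

# Keep only ArXiv documents by matching the metadata field shown above.
arxiv_only = ds.filter(
    lambda ex: ex["meta"]["redpajama_set_name"] == "RedPajamaArXiv"
)

for example in arxiv_only.take(1):
    print(example["text"][:200])
```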

### Dataset Creation

SlimPajama was created by cleaning and deduplicating the [RedPajama dataset from Together Computer](https://github.com/togethercomputer/redpajama-data) via MinHashLSH. RedPajama is an open-source reproduction of the [LLaMA](https://arxiv.org/abs/2302.13971) data collection methodology.


### Source Data

The data sources composing RedPajama are explained in [its model card](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). 


To cite SlimPajama, please use:

```
@software{cerebras2023slimpajama,
  author = {Cerebras Systems},
  title = {SlimPajama: A 627B token cleaned and deduplicated version of RedPajama},
  month = {June},
  year = 2023,
  url = {TODO: Blog URL}
}
```

## License
Please refer to the licenses of the data subsets you use.

- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
- [C4 license](https://huggingface.co/datasets/allenai/c4#license)
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
- [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
- [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
- [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)