Update README.md
README.md
CHANGED
@@ -9,5 +9,7 @@ base_model:
SYNTHETIC-1-7B-SFT is an initial model trained on the SFT subset of SYNTHETIC-1, a collaboratively generated reasoning dataset from Deepseek-R1. The model largely outperforms other models based on Qwen-2.5-Instruct-7B that were trained with smaller reasoning datasets.

+ All SYNTHETIC-1 datasets can be found in our [🤗 SYNTHETIC-1 Collection](https://huggingface.co/collections/PrimeIntellect/synthetic-1-67a2c399cfdd6c9f7fae0c37).
+
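
Not part of the original diff: a minimal usage sketch for the model described above, using 🤗 Transformers. The repository id `PrimeIntellect/SYNTHETIC-1-SFT-7B` and the example prompt are assumptions (check the model page for the exact id); the snippet also assumes the tokenizer ships a chat template, as is typical for Qwen-2.5-Instruct derivatives.

```python
# Hedged sketch: load SYNTHETIC-1-7B-SFT and generate a reasoning response.
# The repo id below is an assumption, not confirmed by the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PrimeIntellect/SYNTHETIC-1-SFT-7B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; places weights on available GPUs
)

# Example prompt (hypothetical); the model was SFT-trained on reasoning traces.
messages = [
    {"role": "user", "content": "Prove that the sum of two even numbers is even."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```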