---
license: mit
train: false
inference: false
pipeline_tag: text-generation
---

This is a version of the <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B">DeepSeek-R1-Distill-Qwen-1.5B</a> model re-distilled for better performance.

## Performance

| Models              | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B">DeepSeek-R1-Distill-Qwen-1.5B</a> | DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1 |
|:-------------------:|:--------:|:----------------:|
| ARC (25-shot)       | 40.96 | <b>41.30</b> |
| HellaSwag (10-shot) | 44.00 | <b>45.22</b> |
| MMLU (5-shot)       | 39.27 | <b>42.01</b> |
| TruthfulQA-MC2      | 45.17 | <b>46.64</b> |
| Winogrande (5-shot) | 55.49 | <b>56.75</b> |
| GSM8K (5-shot)      | 69.90 | <b>73.24</b> |
| Average             | 49.13 | <b>50.86</b> |

| Models              | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B">DeepSeek-R1-Distill-Qwen-1.5B</a> | DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1 |
|:-------------------:|:--------:|:----------------:|
| GPQA (0-shot)       | 26.96 | <b>27.80</b> |
| MMLU PRO (5-shot)   | 16.74 | <b>19.44</b> |
| MUSR (0-shot)       | 35.93 | <b>35.94</b> |
| BBH (3-shot)        | <b>35.12</b> | 35.11 |
| Average             | 28.69 | <b>29.57</b> |
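
As a hedged sketch, one of these rows could be re-run with EleutherAI's lm-evaluation-harness. This is an illustration only, not necessarily the harness or settings used to produce the numbers above; scores shift with harness version, prompts, and sampling:

```Python
# Sketch only: evaluate GSM8K (5-shot) with lm-evaluation-harness (`pip install lm-eval`).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=5,
)
print(results["results"]["gsm8k"])  # per-task accuracy metrics
```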

## Usage
```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

compute_dtype = torch.bfloat16
attn_implementation = "sdpa"  # attention backend; "flash_attention_2" also works if installed
device = 'cuda'
model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1"

# Load the model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation=attn_implementation, device_map=device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build the prompt via the chat template, generate, and decode
chat = tokenizer.apply_chat_template([{"role":"user", "content":"What is 1.5+102.2?"}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(chat.to(device), max_new_tokens=1024, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
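
For interactive use, the same objects support token streaming. Below is a minimal sketch using transformers' `TextStreamer`, reusing `model`, `tokenizer`, and `device` from the example above; it is an optional addition, not part of the original snippet:

```Python
# Sketch only: print tokens to stdout as they are generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
chat = tokenizer.apply_chat_template([{"role":"user", "content":"What is 1.5+102.2?"}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
model.generate(chat.to(device), max_new_tokens=1024, do_sample=True, streamer=streamer)
```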