---
license: mit
train: false
inference: false
pipeline_tag: text-generation
---
This is a version of the DeepSeek-R1-Distill-Qwen-1.5B model re-distilled for better performance.
## Performance
| Models              | DeepSeek-R1-Distill-Qwen-1.5B | DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1 |
|:-------------------:|:-----------------------------:|:------------------------------------:|
| ARC (25-shot)       | 40.96                         | 41.30                                |
| HellaSwag (10-shot) | 44.00                         | 45.22                                |
| MMLU (5-shot)       | 39.27                         | 42.01                                |
| TruthfulQA-MC2      | 45.17                         | 46.64                                |
| Winogrande (5-shot) | 55.49                         | 56.75                                |
| GSM8K (5-shot)      | 69.90                         | 73.24                                |
| Average             | 49.13                         | 50.86                                |

| Models              | DeepSeek-R1-Distill-Qwen-1.5B | DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1 |
|:-------------------:|:-----------------------------:|:------------------------------------:|
| GPQA (0-shot)       | 26.96                         | 27.80                                |
| MMLU PRO (5-shot)   | 16.74                         | 19.44                                |
| MUSR (0-shot)       | 35.93                         | 35.94                                |
| BBH (3-shot)        | 35.12                         | 35.11                                |
| Average             | 28.69                         | 29.57                                |
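Scores like these can be reproduced with a standard evaluation harness. Below is a minimal sketch assuming EleutherAI's `lm-evaluation-harness` (`pip install lm-eval`); this card does not state which harness or settings produced the numbers above, so the task name and few-shot count are illustrative only.

```python
# Hedged sketch: assumes lm-evaluation-harness (pip install lm-eval).
# The exact harness/settings behind the tables are not documented here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1,dtype=bfloat16",
    tasks=["arc_challenge"],  # corresponds to the ARC (25-shot) row
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```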
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

compute_dtype = torch.bfloat16
device = 'cuda'
model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa", device_map=device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format the prompt with the model's chat template, then generate a response
chat = tokenizer.apply_chat_template([{"role":"user", "content":"What is 1.5+102.2?"}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(chat.to(device), max_new_tokens=1024, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
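Since R1-style models typically emit a long reasoning trace before the final answer, streaming tokens as they are generated can be useful. A minimal variant of the snippet above using `transformers`' `TextStreamer`, reusing the `model`, `tokenizer`, `chat`, and `device` objects already defined:

```python
from transformers import TextStreamer

# Print new tokens to stdout as they are generated, skipping the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(chat.to(device), max_new_tokens=1024, do_sample=True, streamer=streamer)
```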