---
license: apache-2.0
datasets:
- open-r1/codeforces-cots
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
---

# Model Card for OlympicCoder-32B

OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.

* Repository: https://github.com/huggingface/open-r1
* Blog post: https://huggingface.co/blog/open-r1/update-3

## Model description

- **Model type:** A 32B parameter model fine-tuned on a decontaminated version of the [CodeForces-CoTs](https://huggingface.co/datasets/open-r1/codeforces-cots) dataset.
- **Language(s) (NLP):** Primarily English
- **License:** apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)

## Evaluation

![Evaluation of OlympicCoder on the 2024 International Olympiad in Informatics](./ioi-evals.png)

## Usage
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install transformers
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="open-r1/OlympicCoder-32B", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```
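Note that generations include a long chain-of-thought before the final answer, so a generous `max_new_tokens` budget (8,000 tokens here) is needed to avoid truncating the solution.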

> [!IMPORTANT]
> To ensure that the model consistently outputs a long chain-of-thought, we have edited the chat template to prefill the first assistant turn with a `<think>` token. As a result, the outputs from this model will not show the opening `<think>` token if you use the model's `generate()` method. To apply reinforcement learning with a format reward, either prepend the `<think>` token to the model's completions or amend the chat template to remove the prefill. Check out our [blog post](https://huggingface.co/blog/open-r1/update-3#lesson-4-prefill-with-think-to-consistently-enable-long-cot) for more details.
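
For illustration, here is a minimal sketch of the prepend workaround, assuming a simple regex-based format reward; the `format_reward` function and the example completion are hypothetical, not part of any library:

```python
import re

def format_reward(completion: str) -> float:
    # Hypothetical reward: 1.0 if the completion contains a full <think>...</think> block.
    return 1.0 if re.search(r"<think>.*?</think>", completion, re.DOTALL) else 0.0

# Because the chat template prefills the assistant turn with <think>,
# generations begin *after* that token. Prepend it before scoring:
completions = ["Okay, let's work through the problem...</think>\nprint(fib(10))"]
rewards = [format_reward("<think>" + c) for c in completions]
print(rewards)  # [1.0]
```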


## Training procedure
### Training hyperparameters

The following hyperparameters were used during training on 16 H100 nodes:

- dataset: open-r1/codeforces-cots_decontaminated
- learning_rate: 4.0e-5
- train_batch_size: 1
- seed: 42
- packing: false
- distributed_type: fsdp
- num_devices: 128
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
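
As a rough illustration, these settings might map onto TRL's `SFTConfig` as follows. This is a sketch under stated assumptions, not the exact training script (see the [open-r1 repository](https://github.com/huggingface/open-r1) for that); `output_dir` is a placeholder and `bf16=True` is an assumption based on the H100 hardware:

```python
from trl import SFTConfig

config = SFTConfig(
    output_dir="olympiccoder-32b-sft",        # placeholder path
    learning_rate=4.0e-5,
    per_device_train_batch_size=1,            # train_batch_size above
    gradient_accumulation_steps=1,
    num_train_epochs=10.0,
    seed=42,
    packing=False,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},
    warmup_ratio=0.03,
    bf16=True,                                # assumption: bf16 training on H100s
)
# FSDP across the 128 devices would be configured separately,
# e.g. via an Accelerate config file, rather than through SFTConfig itself.
```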