---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B
language:
- en
- zh
---


> [!IMPORTANT]
> If you enjoy our model, please **give it a like on our Hugging Face repo**. Your support means a lot to us. Thank you!

> [!IMPORTANT]
> You can download the **GGUF files of Xwen-72B-Chat** at [xwen-team/Xwen-72B-Chat-i1-GGUF](https://huggingface.co/xwen-team/Xwen-72B-Chat-i1-GGUF) (weighted/imatrix quants) and [xwen-team/Xwen-72B-Chat-GGUF](https://huggingface.co/xwen-team/Xwen-72B-Chat-GGUF) (static quants).
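
If you want to run these GGUF quants locally, here is a minimal sketch using `llama-cpp-python`; the file name, quantization level, and context size are illustrative placeholders, not exact names from the repos above.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# The GGUF file name and settings below are illustrative; use whichever quant
# you downloaded from the repos linked above.
llm = Llama(
    model_path="Xwen-72B-Chat.i1-Q4_K_M.gguf",
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if supported
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```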

NEWS:
- Big thanks to @mradermacher for helping us build GGUFs for our Xwen-72B-Chat and Xwen-7B-Chat! The GGUF files have accumulated **over 2k downloads in one day** πŸš€ Our official GGUF repos: [**xwen-team/Xwen-72B-Chat-i1-GGUF**](https://huggingface.co/xwen-team/Xwen-72B-Chat-i1-GGUF) (weighted/imatrix quants) and [**xwen-team/Xwen-72B-Chat-GGUF**](https://huggingface.co/xwen-team/Xwen-72B-Chat-GGUF) (static quants).

# Xwen-72B-Chat

<img src="Xwen-Cartoon.jpg" alt="Xwen-Cartoon" style="zoom:35%;" />

## 1. Introduction

Xwen is a series of open-sourced large language models (currently including **[Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat)** and **[Xwen-7B-Chat](https://huggingface.co/xwen-team/Xwen-7B-Chat)**), post-trained from the pre-trained Qwen2.5 models (i.e., [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) and [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)) [1].

**πŸ† Top-1 chat performance!** To the best of our knowledge, at the time of Xwen models' release (February 1, 2025), **[Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat) and [Xwen-7B-Chat](https://huggingface.co/xwen-team/Xwen-7B-Chat) exhibit the best chat performance among open-sourced models below 100B and 10B, respectively**, based on evaluation results from widely-used benchmarks such as Arena-Hard-Auto [2], MT-Bench [3], and AlignBench [4]. Please view details in the [Evaluation Results](https://huggingface.co/xwen-team/Xwen-72B-Chat#3-evaluation-results) part.

**πŸš€ Xwen technical report is on the way!** While training the Xwen models, we accumulated many technical insights and lessons. To help democratize this technology, we are documenting them in a technical report, which will be released as soon as possible.



## 2. Usage

> [!CAUTION]
> For optimal performance, we did not fine-tune the model's self-identity. As a result, questions such as "Who are you?" or "Who developed you?" may yield arbitrary responses that are not necessarily accurate.

> [!CAUTION]
> This open-source model is provided "as is," without warranties or liabilities, and users assume all risks associated with its use; users are advised to comply with local laws, and the model's outputs do not represent the views or positions of the developers. 

Usage of the Xwen-Chat models mirrors that of the Qwen2.5-Instruct models; the tokenizer and chat template are identical.

Here is a Python script demonstrating how to load an Xwen model and generate responses:


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "xwen-team/Xwen-72B-Chat"   # Or "xwen-team/Xwen-7B-Chat" if you want to use the 7B model

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Xwen, created by Xwen Team. You are a helpful assistant."},   # This system prompt is not necessary, and you can put it as an empty string.
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens (drop the prompt tokens).
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
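
For interactive use, you can stream tokens as they are generated rather than waiting for the full completion. Below is a minimal sketch that reuses `model`, `tokenizer`, and `model_inputs` from the script above, together with the `TextStreamer` helper from `transformers`; the generation settings are illustrative.

```python
from transformers import TextStreamer

# Reuses `model`, `tokenizer`, and `model_inputs` from the script above.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Tokens are printed to stdout as soon as they are decoded.
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)
```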

## 3. Evaluation Results

> [!CAUTION]
> Results on other benchmarks will be updated soon! 😊

πŸ”‘: Open-sourced

πŸ”’: Proprietary

### 3.1 Arena-Hard-Auto-v0.1

All results below, except those for `Xwen-72B-Chat`, `DeepSeek-V3` and `DeepSeek-R1`, are sourced from [Arena-Hard-Auto](https://github.com/lmarena/arena-hard-auto) (accessed on February 1, 2025).

The results for `DeepSeek-V3` and `DeepSeek-R1` are taken from their official reports.

#### 3.1.1 No Style Control

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

|                                   | Score                    | 95% CIs     |
| --------------------------------- | ------------------------ | ----------- |
| **Xwen-72B-Chat** πŸ”‘               | **86.1** (Top-1 among πŸ”‘ below 100B) | (-1.5, 1.7) |
| Qwen2.5-72B-Instruct πŸ”‘            | 78.0                     | (-1.8, 1.8) |
| Athene-v2-Chat πŸ”‘                  | 85.0                     | (-1.4, 1.7) |
| DeepSeek-V3 **(671B >> 72B)** πŸ”‘  |  85.5   | N/A  |
| DeepSeek-R1 **(671B >> 72B)** πŸ”‘  |  **92.3** (Top-1 among πŸ”‘)  | N/A  |
| Llama-3.1-Nemotron-70B-Instruct πŸ”‘ | 84.9                     | (-1.7, 1.8) |
| Llama-3.1-405B-Instruct-FP8 πŸ”‘     | 69.3                     | (-2.4, 2.2) |
| Claude-3-5-Sonnet-20241022 πŸ”’      | 85.2                     | (-1.4, 1.6) |
| O1-Preview-2024-09-12 πŸ”’           | **92.0** (Top-1 among πŸ”’) | (-1.2, 1.0) |
| O1-Mini-2024-09-12 πŸ”’              | 90.4                     | (-1.1, 1.3) |
| GPT-4-Turbo-2024-04-09 πŸ”’          | 82.6                     | (-1.8, 1.5) |
| GPT-4-0125-Preview πŸ”’              | 78.0                     | (-2.1, 2.4) |
| GPT-4o-2024-08-06 πŸ”’               | 77.9                     | (-2.0, 2.1) |
| Yi-Lightning πŸ”’                    | 81.5                     | (-1.6, 1.6) |
| Yi-Large πŸ”’                        | 63.7                     | (-2.6, 2.4) |
| GLM-4-0520 πŸ”’                      | 63.8                     | (-2.9, 2.8) |


**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

|                         | Score    | 95% CIs     |
| ----------------------- | -------- | ----------- |
| **Xwen-7B-Chat** πŸ”‘      | **59.4** | (-2.4, 2.1) |
| Qwen2.5-7B-Instruct πŸ”‘   | 50.4     | (-2.9, 2.5) |
| Gemma-2-27B-IT πŸ”‘        | 57.5     | (-2.1, 2.4) |
| Llama-3.1-8B-Instruct πŸ”‘ | 21.3     | (-1.9, 2.2) |
| Llama-3-8B-Instruct πŸ”‘   | 20.6     | (-2.0, 1.9) |
| Starling-LM-7B-beta πŸ”‘   | 23.0     | (-1.8, 1.8) |
| DeepSeek-R1-Distill-Qwen-7B (only responses) πŸ”‘ | 17.2  | (-1.4, 1.7) |
| DeepSeek-R1-Distill-Qwen-7B (w/ thoughts and responses) πŸ”‘ |  13.6  | (-1.4, 1.8)  |  



#### 3.1.2 Style Control

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

|                                   | Score                    | 95% CIs     |
| --------------------------------- | ------------------------ | ----------- |
| **Xwen-72B-Chat** πŸ”‘               | **72.4** (Top-1 among πŸ”‘) | (-4.3, 4.1) |
| Qwen2.5-72B-Instruct πŸ”‘            | 63.3                     | (-2.5, 2.3) |
| Athene-v2-Chat πŸ”‘                  | 72.1                     | (-2.5, 2.5) |
| Llama-3.1-Nemotron-70B-Instruct πŸ”‘ | 71.0                     | (-2.8, 3.1) |
| Llama-3.1-405B-Instruct-FP8 πŸ”‘     | 67.1                     | (-2.2, 2.8) |
| Claude-3-5-Sonnet-20241022 πŸ”’      | **86.4** (Top-1 among πŸ”’) | (-1.3, 1.3) |
| O1-Preview-2024-09-12 πŸ”’           | 81.7                     | (-2.2, 2.1) |
| O1-Mini-2024-09-12 πŸ”’              | 79.3                     | (-2.8, 2.3) |
| GPT-4-Turbo-2024-04-09 πŸ”’          | 74.3                     | (-2.4, 2.4) |
| GPT-4-0125-Preview πŸ”’              | 73.6                     | (-2.0, 2.0) |
| GPT-4o-2024-08-06 πŸ”’               | 71.1                     | (-2.5, 2.0) |
| Yi-Lightning πŸ”’                    | 66.9                     | (-3.3, 2.7) |
| Yi-Large-Preview πŸ”’                | 65.1                     | (-2.5, 2.5) |
| GLM-4-0520 πŸ”’                      | 61.4                     | (-2.6, 2.4) |

**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

|                         | Score    | 95% CIs     |
| ----------------------- | -------- | ----------- |
| **Xwen-7B-Chat** πŸ”‘      | **50.3** | (-3.8, 2.8) |
| Qwen2.5-7B-Instruct πŸ”‘   | 46.9     | (-3.1, 2.7) |
| Gemma-2-27B-IT πŸ”‘        | 47.5     | (-2.5, 2.7) |
| Llama-3.1-8B-Instruct πŸ”‘ | 18.3     | (-1.6, 1.6) |
| Llama-3-8B-Instruct πŸ”‘   | 19.8     | (-1.6, 1.9) |
| Starling-LM-7B-beta πŸ”‘   | 26.1     | (-2.6, 2.0) |
| DeepSeek-R1-Distill-Qwen-7B (only responses) πŸ”‘ | 18.5  | (-1.6, 1.8) |
| DeepSeek-R1-Distill-Qwen-7B (w/ thoughts and responses) πŸ”‘ |  11.8  | (-1.6, 1.6)  |  


### 3.2 AlignBench-v1.1

> [!IMPORTANT]
> We replaced the original AlignBench judge model, `GPT-4-0613`, with the more powerful `GPT-4o-0513`. For fairness, all results below were judged by `GPT-4o-0513`; consequently, they may differ from AlignBench-v1.1 scores reported elsewhere.

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

|                               | Score                    |
| ----------------------------- | ------------------------ |
| **Xwen-72B-Chat** πŸ”‘           | **7.57** (Top-1 among πŸ”‘) |
| Qwen2.5-72B-Instruct πŸ”‘            | 7.51                     |
| Deepseek V2.5 πŸ”‘               | 7.38                     |
| Mistral-Large-Instruct-2407 πŸ”‘ | 7.10                     |
| Llama3.1-70B-Instruct πŸ”‘       | 5.81                     |
| Llama-3.1-405B-Instruct-FP8 πŸ”‘ | 5.56                     |
| GPT-4o-0513 πŸ”’                 | **7.59** (Top-1 among πŸ”’) |
| Claude-3.5-Sonnet-20240620 πŸ”’  | 7.17                     |
| Yi-Lightning πŸ”’                | 7.54                     |
| Yi-Large-Preview πŸ”’            | 7.20                     |


**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

|                    | Score    |
| ------------------ | -------- |
| **Xwen-7B-Chat** πŸ”‘ | **6.88** |
| Qwen2.5-7B-Instruct πŸ”‘ | 6.56     |

### 3.3 MT-Bench

> [!IMPORTANT]
> We replaced the original MT-Bench judge model, `GPT-4`, with the more powerful `GPT-4o-0513`. For fairness, all results below were judged by `GPT-4o-0513`; consequently, they may differ from MT-Bench scores reported elsewhere.

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

|                               | Score                    |
| ----------------------------- | ------------------------ |
| **Xwen-72B-Chat** πŸ”‘           | **8.64** (Top-1 among πŸ”‘) |
| Qwen2.5-72B-Instruct πŸ”‘            | 8.62                     |
| Deepseek V2.5 πŸ”‘               | 8.43                     |
| Mistral-Large-Instruct-2407 πŸ”‘ | 8.53                     |
| Llama3.1-70B-Instruct πŸ”‘       | 8.23                     |
| Llama-3.1-405B-Instruct-FP8 πŸ”‘ | 8.36                     |
| GPT-4o-0513 πŸ”’                 | 8.59                     |
| Claude-3.5-Sonnet-20240620 πŸ”’  | 6.96                     |
| Yi-Lightning πŸ”’                | **8.75** (Top-1 among πŸ”’) |
| Yi-Large-Preview πŸ”’            | 8.32                     |

**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

|                    | Score    |
| ------------------ | -------- |
| **Xwen-7B-Chat** πŸ”‘ | **7.98** |
| Qwen2.5-7B-Instruct πŸ”‘ | 7.71     |

## References

[1] Yang, An, et al. "Qwen2.5 Technical Report." arXiv preprint arXiv:2412.15115 (2024).

[2] Li, Tianle, et al. "From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline." arXiv preprint arXiv:2406.11939 (2024).

[3] Zheng, Lianmin, et al. "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena." Advances in Neural Information Processing Systems 36 (2023).

[4] Liu, Xiao, et al. "AlignBench: Benchmarking Chinese Alignment of Large Language Models." Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (2024).