Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Storm-7B - GGUF
- Model creator: https://huggingface.co/jieliu/
- Original model: https://huggingface.co/jieliu/Storm-7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Storm-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Storm-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Storm-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Storm-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Storm-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Storm-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Storm-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Storm-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Storm-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Storm-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Storm-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Storm-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Storm-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Storm-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Storm-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Storm-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Storm-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Storm-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Storm-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Storm-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Storm-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Storm-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/jieliu_-_Storm-7B-gguf/blob/main/Storm-7B.Q8_0.gguf) | Q8_0 | 7.17GB |

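These files use the standard GGUF format, so any llama.cpp-based runtime can load them. The snippet below is a minimal sketch of downloading one quant and running it with the `llama-cpp-python` bindings; the chosen file (`Storm-7B.Q4_K_M.gguf`), context size, and sampling settings are illustrative choices, not recommendations from the quantizer.

```python
# Minimal sketch: fetch one quant from this repo and run it locally with
# llama-cpp-python. File choice and generation settings are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/jieliu_-_Storm-7B-gguf",
    filename="Storm-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Storm-7B uses the OpenChat "GPT4 Correct" template (see the original
# model card below), so format prompts accordingly.
prompt = "GPT4 Correct User: How does a telescope work?<|end_of_turn|>GPT4 Correct Assistant:"
out = llm(prompt, max_tokens=512, stop=["<|end_of_turn|>"])
print(out["choices"][0]["text"])
```

Lower-bit quants (Q2_K, IQ3_*) trade quality for memory; Q4_K_M and Q5_K_M are common middle grounds, and Q8_0 is closest to the original weights.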

Original model description:
---
license: apache-2.0
library_name: transformers
tags:
- storm
- mistral
- openchat
- RLAIF
- reward model
language:
- en
base_model: openchat/openchat-3.5-0106
datasets:
- berkeley-nest/Nectar
---

# Storm-7B
- **Developed by**: [Jie Liu](https://jieliu.site/) \\(^{*1,2}\\), [Zhanhui Zhou](https://scholar.google.com/citations?user=SbACfYQAAAAJ&hl=zh-CN) \\(^{*2}\\), [Jiaheng Liu](https://liujiaheng.github.io/) \\(^{2}\\), [Xingyuan Bu](https://scholar.google.com.hk/citations?user=cqYaRhUAAAAJ&hl=zh-CN) \\(^{2}\\), [Chao Yang](https://scholar.google.com/citations?user=5KRbHPMAAAAJ&hl=zh-CN) \\(^{2}\\), [Han-Sen Zhong](https://scholar.google.com.hk/citations?user=X_ZfX8sAAAAJ&hl=zh-CN) \\(^{\dag 2}\\), [Wanli Ouyang](https://wlouyang.github.io/) \\(^{1,2}\\).
- \\(^{1}\\)MMLab, The Chinese University of Hong Kong   \\(^{2}\\)Shanghai AI Laboratory
- Paper: [Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level](https://arxiv.org/pdf/2406.11817)
- Finetuned from: [openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
- Dataset: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- Reward Model: [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B)

Please see our paper for more details.

## Introduction

We released Storm-7B, the first open-source language model comparable to the GPT-4 series on the [AlpacaEval 2.0](https://tatsu-lab.github.io/alpaca_eval/) leaderboard.

Recent studies show that DPO benefits from iterative training with online preferences labeled by a trained reward model. In this work, we identify a pitfall of vanilla iterative DPO: improved response quality can lead to increased verbosity. To address this, we introduce iterative length-regularized DPO (iLR-DPO) to penalize response length, sketched below. Our empirical results show that iLR-DPO can enhance a 7B model to perform on par with GPT-4 **without increasing verbosity**.

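As a rough sketch of the idea (the notation here is schematic; see the paper for the exact objective and hyperparameters), each iteration minimizes a DPO-style loss whose preference margin is offset by a length margin, so a preferred response that is merely longer earns a smaller effective margin:

\\[
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} - \alpha \left( |y_w| - |y_l| \right) \right) \right]
\\]

where \\(y_w, y_l\\) are the preferred and dispreferred responses labeled by the reward model, \\(|y|\\) denotes response length in tokens, \\(\alpha\\) controls the strength of the length regularization, and each iteration repeats the process starting from the previous policy.
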
## Performance
Our 7B model achieves a **50.5%** length-controlled win rate against GPT-4 Preview on AlpacaEval 2.0.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/Tj_a1QntAxkhy2SXbOdmT.png" width="60%">
</p>
Our model's LC win rate improves over iterations without significantly changing the response length, indicating better alignment with human values without length bias. The final trained model (iteration 3) achieves a 50.5% LC win rate, making it the first open-source model to surpass the baseline model GPT-4 Preview.

In addition to regular decoding, we also test beam search and best-of-n sampling on top of our trained model. Beam search over our trained model shows a 5% improvement over regular decoding, and best-of-n sampling with Starling-RM-34B achieves a 61.6% LC win rate, outperforming GPT-4 Omni (a schematic of best-of-n sampling follows the figure).
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/GGa28vaREaVq099MPdqcP.png" width="100%">
</p>

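For readers unfamiliar with best-of-n sampling, the sketch below shows the idea: draw several candidates and keep the one a reward model scores highest. The `score` function is a placeholder standing in for a real reward model such as Starling-RM-34B; this is an illustrative sketch, not the evaluation pipeline used in the paper.

```python
# Schematic best-of-n sampling. `score` is a placeholder for a real reward
# model (e.g. Starling-RM-34B); loading one is out of scope for this sketch.
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("jieliu/Storm-7B")
model = AutoModelForCausalLM.from_pretrained("jieliu/Storm-7B").to(device)

def score(prompt: str, response: str) -> float:
    # Placeholder: a real implementation returns the reward model's scalar score.
    return 0.0

def best_of_n(prompt: str, n: int = 8) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    candidates = []
    for _ in range(n):
        out = model.generate(
            **inputs,
            max_length=2048,
            do_sample=True,
            temperature=1.0,
            pad_token_id=tokenizer.pad_token_id,
        )
        # Keep only the newly generated tokens, not the echoed prompt.
        completion = tokenizer.decode(
            out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        candidates.append(completion)
    return max(candidates, key=lambda r: score(prompt, r))
```
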
We observe no significant degradation in traditional NLP tasks on the Hugging Face Open LLM Leaderboard.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/8KEm_Ladg7Kqko8mC63SN.png" width="100%">
</p>

## Uses

Our model uses the same chat template as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). A sample code snippet for inference using our model is provided below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the model and tokenizer, and freeze the model for inference.
model = AutoModelForCausalLM.from_pretrained("jieliu/Storm-7B").to(device)
tokenizer = AutoTokenizer.from_pretrained("jieliu/Storm-7B")
model.eval().requires_grad_(False)

def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    outputs = model.generate(
        input_ids,
        max_length=2048,
        do_sample=True,
        temperature=1.0,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    # Decode the full sequence (prompt + completion) without special tokens.
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text

# Format the prompt with the OpenChat "GPT4 Correct" chat template.
prompt = "How does a telescope work?"
input_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(input_prompt)
print("Response:", response_text)
```
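
Alternatively, if the Storm-7B tokenizer ships the OpenChat chat template in its config (true for the base model, but worth verifying for this checkpoint), the same prompt can be built with `apply_chat_template` instead of manual string formatting:

```python
# Equivalent prompt construction via the tokenizer's built-in chat template
# (assumes the template is present in this checkpoint's tokenizer config).
messages = [{"role": "user", "content": "How does a telescope work?"}]
input_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print("Response:", generate_response(input_prompt))
```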

## Scripts
You can reproduce our results on AlpacaEval 2.0 using the script provided below.
```bash
git clone https://github.com/tatsu-lab/alpaca_eval.git
cd alpaca_eval
pip install -e .
export OPENAI_API_KEY=<your_api_key>
alpaca_eval evaluate_from_model --model_configs 'Storm-7B'
```

## Limitations

Our work has several limitations:
(1) We focus on aligning with human preferences but use only GPT-4 as a proxy for human judgment when evaluating language models.
(2) We reduce verbosity with a length penalty, though verbosity and length are not necessarily correlated. Future work could train a dedicated reward model that directly penalizes verbosity, replacing the length margin with a verbosity margin, following the standard [MODPO pipeline](https://github.com/ZHZisZZ/modpo).

## Citation

```bibtex
@article{liu2024iterative,
  title   = {Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level},
  author  = {Liu, Jie and Zhou, Zhanhui and Liu, Jiaheng and Bu, Xingyuan and Yang, Chao and Zhong, Han-Sen and Ouyang, Wanli},
  journal = {arXiv preprint arXiv:2406.11817},
  year    = {2024}
}

@article{zhou2023beyond,
  title   = {Beyond One-Preference-for-All: Multi-Objective Direct Preference Optimization},
  author  = {Zhou, Zhanhui and Liu, Jie and Yang, Chao and Shao, Jing and Liu, Yu and Yue, Xiangyu and Ouyang, Wanli and Qiao, Yu},
  journal = {arXiv preprint arXiv:2310.03708},
  year    = {2023}
}
```