---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- storm
- mistral
- openchat
- RLAIF
- reward model
base_model: openchat/openchat-3.5-0106
datasets:
- berkeley-nest/Nectar
model-index:
- name: Storm-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 34.24
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jieliu/Storm-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.33
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jieliu/Storm-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.97
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jieliu/Storm-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.72
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jieliu/Storm-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.63
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jieliu/Storm-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.55
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jieliu/Storm-7B
      name: Open LLM Leaderboard
---
# Storm-7B
- **Developed by**: [Jie Liu](https://jieliu.site/) \\(^{*1,2}\\), [Zhanhui Zhou](https://scholar.google.com/citations?user=SbACfYQAAAAJ&hl=zh-CN) \\(^{*2}\\), [Jiaheng Liu](https://liujiaheng.github.io/) \\(^{2}\\), [Xingyuan Bu](https://scholar.google.com.hk/citations?user=cqYaRhUAAAAJ&hl=zh-CN) \\(^{2}\\), [Chao Yang](https://scholar.google.com/citations?user=5KRbHPMAAAAJ&hl=zh-CN) \\(^{2}\\), [Han-Sen Zhong](https://scholar.google.com.hk/citations?user=X_ZfX8sAAAAJ&hl=zh-CN) \\(^{\dag 2}\\), [Wanli Ouyang](https://wlouyang.github.io/) \\(^{1,2}\\).
- \\(^{1}\\)MMLab, The Chinese University of Hong Kong   \\(^{2}\\)Shanghai AI Laboratory
- Paper: [Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level](https://arxiv.org/pdf/2406.11817)
- Fine-tuned from: [openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
- Dataset: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- Reward model: [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B)

Please see our paper for more details.
## Introduction
We released Storm-7B, the first open-source language model comparable to the GPT-4 series on the [AlpacaEval 2.0](https://tatsu-lab.github.io/alpaca_eval/) leaderboard.
Recent studies show that DPO benefits from iterative training with online preferences labeled by a trained reward model. In this work, we identify a pitfall of vanilla iterative DPO: improved response quality can lead to increased verbosity. To address this, we introduce iterative length-regularized DPO (iLR-DPO) to penalize response length. Our empirical results show that iLR-DPO can enhance a 7B model to perform on par with GPT-4 **without increasing verbosity**.
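To make the length-regularization idea concrete, below is a minimal PyTorch sketch of a length-regularized DPO loss. The exact objective, the `beta` and `alpha` values, and the way the length margin enters the sigmoid are illustrative assumptions, not the precise formulation used for iLR-DPO; please refer to the paper for the exact loss.
```python
import torch
import torch.nn.functional as F

def length_regularized_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # summed log-probs of chosen responses under the policy
    policy_rejected_logps: torch.Tensor,  # summed log-probs of rejected responses under the policy
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    chosen_lengths: torch.Tensor,         # response lengths in tokens
    rejected_lengths: torch.Tensor,
    beta: float = 0.1,                    # DPO temperature (illustrative value)
    alpha: float = 0.01,                  # weight of the length penalty (illustrative value)
) -> torch.Tensor:
    # Standard DPO implicit-reward margin between chosen and rejected responses.
    margin = beta * (
        (policy_chosen_logps - ref_chosen_logps)
        - (policy_rejected_logps - ref_rejected_logps)
    )
    # Length regularization: subtract a margin that grows when the chosen response
    # is longer, so the policy cannot win preferences through verbosity alone.
    length_margin = alpha * (chosen_lengths - rejected_lengths).float()
    return -F.logsigmoid(margin - length_margin).mean()
```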
## Performance
Our 7B model achieves a **50.5%** length-controlled win rate against GPT-4 Preview on AlpacaEval 2.0.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/Tj_a1QntAxkhy2SXbOdmT.png" width="60%">
</p>
Our model's LC win rate improves over iterations without significantly changing the response length, indicating better alignment with human values without length bias. The final trained model (iteration 3) achieves a 50.5% LC win rate, making it the first open-source model to surpass the baseline model GPT-4 Preview.
In addition to regular decoding, we also test beam search and best-of-n sampling on top of our trained model. Beam search over our trained model shows a 5% improvement over regular decoding, and best-of-n sampling with Starling-RM-34B achieves a 61.6% LC win rate, outperforming GPT-4 Omni.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/GGa28vaREaVq099MPdqcP.png" width="100%">
</p>
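For reference, here is a minimal best-of-n sampling sketch on top of the released checkpoint. The `reward_fn` scorer is a hypothetical placeholder for a reward model such as Starling-RM-34B, and the sampling parameters are illustrative rather than the exact configuration behind the 61.6% result.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained("jieliu/Storm-7B").to(device)
tokenizer = AutoTokenizer.from_pretrained("jieliu/Storm-7B")

def best_of_n(prompt: str, reward_fn, n: int = 8) -> str:
    """Sample n candidate responses and return the one the reward model scores highest."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    outputs = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        temperature=1.0,
        num_return_sequences=n,          # n independent samples in a single call
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens so only the generated continuation is scored.
    candidates = [
        tokenizer.decode(seq[input_ids.shape[1]:], skip_special_tokens=True)
        for seq in outputs
    ]
    # reward_fn is a placeholder for a scorer such as Starling-RM-34B.
    scores = [reward_fn(prompt, c) for c in candidates]
    return candidates[scores.index(max(scores))]
```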
We observe no significant degradation on traditional NLP tasks from the Hugging Face Open LLM Leaderboard.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/639be86b59473c6ae02ef9c4/8KEm_Ladg7Kqko8mC63SN.png" width="100%">
</p>
## Uses
Our model uses the same chat template as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). A sample code snippet for inference using our model is provided below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the model and tokenizer, and freeze the weights for inference.
model = AutoModelForCausalLM.from_pretrained("jieliu/Storm-7B").to(device)
tokenizer = AutoTokenizer.from_pretrained("jieliu/Storm-7B")
model.eval().requires_grad_(False)

def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    outputs = model.generate(
        input_ids,
        max_length=2048,
        do_sample=True,
        temperature=1.0,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text

# OpenChat-style chat format: a single user turn followed by the assistant tag.
prompt = "How does a telescope work?"
input_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(input_prompt)
print("Response:", response_text)
```
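If the tokenizer ships a chat template inherited from openchat-3.5-0106 (an assumption; check `tokenizer.chat_template` before relying on it), the prompt can also be built with `apply_chat_template` instead of hand-writing the special tokens. The snippet below reuses the `model`, `tokenizer`, and `device` from the example above.
```python
# Assumes the tokenizer exposes the OpenChat chat template in its config.
messages = [{"role": "user", "content": "How does a telescope work?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant tag so the model starts its reply
    return_tensors="pt",
).to(device)
outputs = model.generate(input_ids, max_length=2048, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```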
## Scripts
You can reproduce our results on AlpacaEval 2.0 using the script provided below.
```bash
git clone https://github.com/tatsu-lab/alpaca_eval.git
cd alpaca_eval
pip install -e .
export OPENAI_API_KEY=<your_api_key>
alpaca_eval evaluate_from_model --model_configs 'Storm-7B'
```
## Limitations
Our work has several limitations:
(1) We focus on aligning with human preferences but only use GPT-4 as a proxy for human judgment to evaluate language models.
(2) We reduce verbosity with a length penalty, though verbosity and length are not necessarily correlated. Future work could train a specific reward model to directly penalize verbosity, replacing the length margin with a verbosity margin, following the standard [MODPO pipeline](https://github.com/ZHZisZZ/modpo).
## Citation
```
@article{liu2024iterative,
  title   = {Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level},
  author  = {Liu, Jie and Zhou, Zhanhui and Liu, Jiaheng and Bu, Xingyuan and Yang, Chao and Zhong, Han-Sen and Ouyang, Wanli},
  journal = {arXiv preprint arXiv:2406.11817},
  year    = {2024}
}
@article{zhou2023beyond,
  title   = {Beyond one-preference-for-all: Multi-objective direct preference optimization},
  author  = {Zhou, Zhanhui and Liu, Jie and Yang, Chao and Shao, Jing and Liu, Yu and Yue, Xiangyu and Ouyang, Wanli and Qiao, Yu},
  journal = {arXiv preprint arXiv:2310.03708},
  year    = {2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jieliu__Storm-7B).
| Metric |Value|
|-------------------|----:|
|Avg. |19.74|
|IFEval (0-Shot) |34.24|
|BBH (3-Shot) |32.33|
|MATH Lvl 5 (4-Shot)| 5.97|
|GPQA (0-shot) | 7.72|
|MuSR (0-shot) |14.63|
|MMLU-PRO (5-shot) |23.55|