---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
- mistral
- mistral-instruct
- instruct
base_model: mistralai/Mistral-Small-24B-Base-2501
---

# Finetune LLMs 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Mistral (7B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)


## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF or vLLM, or uploaded to Hugging Face. (A minimal local fine-tuning sketch follows the notebook list below.)

| Unsloth supports | Free Notebooks | Performance | Memory use |
|------------------|----------------|-------------|------------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)

- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we only use 1; due to overhead, 1x T4 is 5x faster.
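
For readers who prefer a script over a notebook, here is a minimal sketch of the same workflow using the publicly documented Unsloth API (`FastLanguageModel.from_pretrained` and `get_peft_model`). The repo name, dataset handling, and hyperparameters are illustrative assumptions, not the notebooks' exact settings:

```py
# Minimal Unsloth LoRA fine-tuning sketch (illustrative settings).
from unsloth import FastLanguageModel

# Load the model in 4-bit so it fits on a single T4-class GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Mistral-Small-24B-Instruct-2501",  # assumed repo name
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# From here, pass `model` and `tokenizer` to a TRL `SFTTrainer`
# together with your dataset, as the notebooks do.
```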

# Model Card for Mistral-Small-24B-Instruct-2501

Mistral Small 3 (2501) sets a new benchmark in the category of "small" Large Language Models below 70B, boasting 24B parameters and state-of-the-art capabilities comparable to larger models!
This model is an instruction-fine-tuned version of the base model [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501).

Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized.
Perfect for:
- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.

For enterprises that need specialized capabilities (increased context, particular modalities, domain-specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.

This release demonstrates our commitment to open source, serving as a strong base model.

Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/).

Model developer: Mistral AI Team

## Key Features
- **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON output.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 32k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size (a quick spot-check of this and the context window follows the list).

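The advertised context window and vocabulary size can be spot-checked from the published config; a minimal sketch, assuming the standard `transformers` auto classes work for this repo (the expected values in the comments come from the feature list above):

```py
from transformers import AutoConfig, AutoTokenizer

repo = "mistralai/Mistral-Small-24B-Instruct-2501"

# Spot-check the advertised context window and vocabulary size.
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
print(config.max_position_embeddings)  # expected 32768 (the 32k context window)
print(len(tokenizer))                  # expected on the order of 131k (Tekken)
```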
## Benchmark results

### Human-evaluated benchmarks

| Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | GPT-4o-mini |
|----------|-------------|--------------|---------------|-------------|
| Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 |
| Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 |
| Ties | 0.052 | 0.060 | 0.236 | 0.160 |
| Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 |
| Other is better | 0.156 | 0.172 | 0.296 | 0.312 |

**Note**:

- We conducted side-by-side evaluations with an external third-party vendor on a set of over 1k proprietary coding and generalist prompts.
- Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 versus another model.
- We are aware that in some cases the human-judgement benchmarks differ starkly from publicly available benchmarks, but we have taken extra caution to verify a fair evaluation. We are confident that the above benchmarks are valid.

### Publicly accessible benchmarks

**Reasoning & Knowledge**

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------------------------|-------------|---------------|-------------|------------------------|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |

**Math & Coding**

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------------------------|-------------|---------------|-------------|------------------------|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |

**Instruction following**

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------------------------|-------------|---------------|-------------|------------------------|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |

**Note**:

- Accuracy on all benchmarks was obtained through the same internal evaluation pipeline; as such, numbers may vary slightly from previously reported results ([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)).
- Judge-based evals such as WildBench, Arena-Hard, and MT-Bench used gpt-4o-2024-05-13 as the judge.

### Basic Instruct Template (V7-Tekken)

```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```

*`<system prompt>`, `<user message>`, and `<assistant response>` are placeholders.*

***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth.***
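
To see the rendered template without hand-assembling the control tokens, one option is the chat template bundled with the Hugging Face tokenizer; a small sketch (mistral-common remains the source of truth for exact tokenization):

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-24B-Instruct-2501")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Render the V7-Tekken template as a string instead of token ids.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```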

## Usage

The model can be used with the following frameworks:
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)

### vLLM

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.

**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.

**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:

```
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```

**_Installation_**

Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4):

```
pip install --upgrade vllm
```

Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed:

```
pip install --upgrade mistral_common
```

You can also make use of a ready-to-go [Docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
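
To confirm the installed versions meet these minimums, a small stdlib-only check (the PyPI package names are assumed to match the pip commands above):

```py
from importlib.metadata import version

# Both packages should satisfy the minimum versions stated above.
print("vllm:", version("vllm"))                      # want >= 0.6.4
print("mistral_common:", version("mistral_common"))  # want >= 1.5.2
```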

#### Server

We recommend that you use Mistral-Small-24B-Instruct-2501 in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice
```

**Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16 (24B parameters × 2 bytes/parameter ≈ 48 GB of weights, plus KV cache and activation overhead).

2. To query the server you can use a simple Python snippet:

```py
import requests
import json

url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Mistral-Small-24B-Instruct-2501"

messages = [
    {
        "role": "system",
        "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
    },
    {
        "role": "user",
        "content": "Give me 5 non-formal ways to say 'See you later' in French."
    },
]

data = {"model": model, "messages": messages}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])

# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```
```

### Function calling

Mistral-Small-24B-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:*

<details>
<summary>Example</summary>

```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta

url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Mistral-Small-24B-Instruct-2501"


def load_system_prompt(repo_id: str, filename: str) -> str:
    # Download the prompt template shipped with the model repo and fill in dates.
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to find the weather for, e.g. 'San Francisco'",
                    },
                    "state": {
                        "type": "string",
                        "description": "The state abbreviation, e.g. 'CA' for California",
                    },
                    "unit": {
                        "type": "string",
                        "description": "The unit for temperature",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["city", "state", "unit"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "rewrite",
            "description": "Rewrite a given text for improved clarity",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The input text to rewrite",
                    }
                },
            },
        },
    },
]

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
    },
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "bbc5b7ede",
                "type": "function",
                "function": {
                    "name": "rewrite",
                    "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
                },
            }
        ],
    },
    {
        "role": "tool",
        "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
        "tool_call_id": "bbc5b7ede",
        "name": "rewrite",
    },
    {
        "role": "assistant",
        "content": "---\n\nOpenAI is a FOR-profit company.",
    },
    {
        "role": "user",
        "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
    },
]

data = {"model": model, "messages": messages, "tools": tools}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```

</details>
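
Closing the loop on the example above: the returned tool call still has to be executed client-side and fed back as a `tool` message. A sketch with a hypothetical local implementation of `get_current_weather` (the function body and the `TOOLS` registry are illustrative, not part of the vLLM API):

```py
import json

# Hypothetical local implementation; a real one would call a weather API.
def get_current_weather(city: str, state: str, unit: str) -> str:
    return json.dumps({"city": city, "state": state, "temperature": 85, "unit": unit})

TOOLS = {"get_current_weather": get_current_weather}

# `response` is the vLLM reply from the snippet above.
tool_call = response.json()["choices"][0]["message"]["tool_calls"][0]
result = TOOLS[tool_call["function"]["name"]](**json.loads(tool_call["function"]["arguments"]))

# Append the result as a `tool` message, then POST again for the final answer.
messages.append({
    "role": "tool",
    "content": result,
    "tool_call_id": tool_call["id"],
    "name": tool_call["function"]["name"],
})
```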

#### Offline

```py
from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Mistral-Small-24B-Instruct-2501"
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."

messages = [
    {
        "role": "system",
        "content": SYSTEM_PROMPT
    },
    {
        "role": "user",
        "content": user_prompt
    },
]

# Note that running this model on GPU requires over 60 GB of GPU RAM.
llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8)

sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```
```

### Transformers

If you want to use Hugging Face transformers to generate text, you can do something like this:

```py
import torch
from transformers import pipeline

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)
```
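
If you need more control than the pipeline offers (device placement, sampling, decoding only the new tokens), here is a lower-level sketch using the standard `AutoModelForCausalLM` API; the `temperature=0.15` setting follows the vLLM note above, and the snippet assumes enough GPU memory for the bf16 weights:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
# Render the chat template and move the token ids to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.15)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```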