lemonilia committed (verified)
Commit ccf8079 · Parent(s): 3ea3318

Update README.md

Files changed (1)
  1. README.md +15 -11
README.md CHANGED
@@ -9,38 +9,42 @@ base_model:
  ---

  # Mistral-Small-3-Reasoner-s1
- A simple [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) finetune on the [s1K reasoning dataset by Muennighoff et al.](https://huggingface.co/datasets/simplescaling/s1K) to give the original model reasoning capabilities. Surprisingly, they appear to work even outside math/STEM subjects.
+ A simple [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) finetune on the [s1K reasoning dataset by Muennighoff et al.](https://huggingface.co/datasets/simplescaling/s1K) to give the original model basic reasoning capabilities similar to DeepSeek R1's. Surprisingly, they appear to work even outside math/STEM subjects.

  ## Usage notes
  Prepend the assistant response with `<think>` to make the model engage in a chain-of-thought. This should happen automatically with math questions on an empty context, but it needs to be forced in longer conversations. When done thinking, the model will generate `</think>` and then the final response.

  Make sure that the model's output length is long enough, and be prepared to make the model continue its response if it stops prematurely.

+ Low-depth instructions (perhaps at depth 0, just before the assistant's response) can be beneficial in steering how the model should think. An additional `[SYSTEM_PROMPT]` can be used there.
+
+ I advise removing old chains of thought from the chat history before the model generates a new one, but I haven't tested this in depth.
+
  ## Prompting format
  ```
  [INST]User question.[/INST]<think>
- Chain of thought
+ Chain of thought.
  </think>

  Model response.</s>
  ```

+ ## Known quirks and issues
+ - Very long chains of thought might confuse the model during multi-turn conversations.
+ - The model can override the requested formatting/style with that of the finetuning data.
+ - Some of the non-reasoning capabilities of the original `Mistral-Small-24B-Instruct-2501` model might have degraded.
+ - Most default guardrails apparently still work, but can be bypassed with a suitable prompt as with the original model.
+
  # What's in this repository
  - Checkpoints for epochs 1~5
  - LoRA adapter for the final model
- - Q4_K and Q6_K GGUF quantizations
-
- ## Known quirks
- - The model can ramble indefinitely with open-ended questions at the beginning of the conversation.
- - The model can override the requested formatting/style with that of the finetuning data.
- - Some of the non-reasoning capabilities of the original `Mistral-Small-24B-Instruct-2501` model might have degraded.
- - Most model default guardrails still work, but can be bypassed with a suitable prompt as with the original model.
+ - Some static GGUF quantizations

  ## Dataset
- Almost the entirety of the [s1K dataset](https://huggingface.co/datasets/simplescaling/s1K) was used with minimal modifications to make it properly work with Mistral-Small-3-Instruct, with the exception of 4 rows that didn't fit within the training sequence length of 8192 tokens and 16 of the shortest ones that have been used as the test set instead. No samples have been clipped and no system prompt was added. All samples are single-turn.
+ Almost the entirety of the [s1K dataset](https://huggingface.co/datasets/simplescaling/s1K) was used with minimal modifications to make it work properly with Mistral-Small-3-Instruct, with the exception of 4 rows that didn't fit within the training sequence length of 8192 tokens and 16 of the shortest ones, which were used as the test set instead. No samples were clipped and no system prompt was added. All samples were single-turn.

  ## Training hyperparameters
- I tried to more or less follow the indications in [appendix C in the paper](https://arxiv.org/abs/2501.19393) with the exception of using 4-bit LoRA finetuning and tuning the learning reate accordingly. The loss was not computed on questions within `[INST]...[/INST]` tags (including the tags), just on reasoning traces and solutions. The training sequence length was about the maximum I could use on one NVidia RTX3090 24GB GPU.
+ I tried to roughly follow the indications in [appendix C of the paper](https://arxiv.org/abs/2501.19393), with the notable exception of using 4-bit LoRA finetuning and tuning the learning rate accordingly. The loss was not computed on the questions within `[INST]...[/INST]` tags (including the tags), just on the reasoning traces and solutions. The training sequence length was about the maximum I could use on one NVIDIA RTX 3090 24GB GPU. The total training time was about 18 hours.

  Overfitting (increasing eval loss) occurred after 1 epoch, but the training loss behavior was similar to that observed in the paper.
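
As a practical illustration of the prompting format shown in the diff above (not part of the commit itself), a raw text-completion call is enough to drive the model, since the chain-of-thought is triggered simply by ending the prompt with `<think>`. Below is a minimal sketch using `llama-cpp-python` against one of the GGUF quantizations; the file name, context size, and sampling settings are illustrative assumptions rather than values documented in the repository.

```python
# Illustrative sketch only: the GGUF file name, context size, and sampling
# settings are assumptions, not values documented in this repository.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-Small-3-Reasoner-s1-Q4_K.gguf", n_ctx=16384)

# Ending the prompt with <think> makes the model open with a chain-of-thought,
# as described in the Usage notes.
prompt = "[INST]How many positive integers below 100 are divisible by 3 but not by 5?[/INST]<think>\n"

out = llm(prompt, max_tokens=4096, temperature=0.6, stop=["</s>"])
print(out["choices"][0]["text"])  # chain-of-thought, then </think>, then the final answer
```

If generation stops before `</think>` appears, the usage notes apply: raise the output length or make the model continue its response rather than treating the partial output as final.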
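The training-hyperparameters paragraph describes 4-bit LoRA finetuning with the loss masked over the `[INST]...[/INST]` question span. For readers unfamiliar with that kind of setup, here is a rough sketch using `transformers` and `peft`; the LoRA rank, alpha, dropout, and target modules are placeholders, since the commit does not state the actual values.

```python
# Rough sketch of a 4-bit (QLoRA-style) LoRA setup in the spirit of the
# description above. Rank/alpha/dropout and target modules are placeholders,
# not the values used for this finetune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

model = get_peft_model(
    model,
    LoraConfig(
        r=16,               # placeholder rank
        lora_alpha=32,      # placeholder alpha
        lora_dropout=0.05,  # placeholder dropout
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    ),
)

def build_labels(prompt_ids, completion_ids):
    """Mask the [INST]...[/INST] prompt (tags included) with -100 so the loss
    is computed only on the reasoning trace and solution, as described above."""
    return [-100] * len(prompt_ids) + list(completion_ids)
```

Per the dataset section, samples longer than the 8192-token training sequence length were dropped rather than clipped, so a real data pipeline would also filter by tokenized length before batching.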