# Mistral-Small-3-Reasoner-s1
A simple [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) finetune on the LIMA-like [s1K reasoning dataset by Muennighoff et al.](https://huggingface.co/datasets/simplescaling/s1K), intended to give the original model basic reasoning capabilities within `<think>` tags, in the style of DeepSeek R1. Surprisingly, the model can reason even outside math/STEM subjects.

## Usage notes
Prepend the assistant response with `<think>` to make the model engage in a chain-of-thought. This should happen automatically with math questions on an empty context, but it needs to be forced in longer conversations. When done thinking, the model will generate `</think>` and then the final response.

Make sure that the model's output length is long enough; be prepared to make the model continue generating if its response gets cut short.

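A minimal sketch of both points with Hugging Face `transformers` (the repo id below is a placeholder for wherever this model is hosted):

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mistral-Small-3-Reasoner-s1"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many primes are below 100?"}]

# Apply the chat template, then append "<think>" so the assistant's turn
# starts inside the thinking block.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
) + "<think>"

# The template already contains the BOS token, so don't add it twice.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
# Leave generous headroom: the chain-of-thought alone can run long.
out = model.generate(**inputs, max_new_tokens=4096)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:]))
```
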
Low-depth instructions (perhaps at depth 0, just before the assistant's response) can be beneficial in steering how the model should think. An additional `[SYSTEM_PROMPT]` could be used there (see the example after the prompting format below).

From tests, it seems beneficial to keep at least one previous chain-of-thought in the context in addition to the one being generated. More experimentation is required here.

## Prompting format
```
<s>[SYSTEM_PROMPT]System message.[/SYSTEM_PROMPT][INST]User message.[/INST]<think>
Chain-of-thought.
</think>

Model response.</s>
```

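For steering, a depth-0 instruction (the wording here is purely illustrative) can sit between the last `[/INST]` and the prefilled `<think>`:

```
<s>[SYSTEM_PROMPT]System message.[/SYSTEM_PROMPT][INST]User message.[/INST][SYSTEM_PROMPT]Keep the reasoning short and focused.[/SYSTEM_PROMPT]<think>
```
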
## Known quirks and issues
- Not really a true issue, but information in the system prompt that contradicts the chat history, or that is meant to be only temporarily valid, may cause coherency issues, since the model follows instructions very precisely.
- Without user control over chain-of-thought length, the model can ramble for several thousand tokens (a possible mitigation is sketched after this list).
- Besides multi-turn capabilities, other non-reasoning capabilities of the original `Mistral-Small-24B-Instruct-2501` model might have degraded.
- Most default guardrails apparently still work, but they can be very easily bypassed with a suitable prompt, as with the original model.

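One way to bound the rambling is a rough, s1-style "budget forcing" loop; this is an assumption about what works rather than a documented feature of the finetune. It reuses `model`, `tokenizer`, and `prompt` (ending in `<think>`) from the sketch above:

```
# Cap the chain-of-thought at a token budget; if the model is still thinking
# when the budget runs out, force "</think>" so it moves on to the answer.
budget = 1024  # maximum tokens to spend on thinking

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=budget)
text = tokenizer.decode(out[0], skip_special_tokens=False)

if "</think>" in text:
    print(text)  # thinking closed within budget; the answer follows it
else:
    text += "</think>\n\n"  # budget exhausted: close the thinking block
    inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
    out = model.generate(**inputs, max_new_tokens=1024)
    print(tokenizer.decode(out[0], skip_special_tokens=False))
```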