---
base_model: mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
datasets:
- BeaverAI/Nemo-Inst-Tune-ds
language:
- en
library_name: transformers
---

# mistral-doryV2-12b - EXL2 8bpw max

This is an 8bpw EXL2 quant of [BeaverAI/mistral-doryV2-12b](https://huggingface.co/BeaverAI/mistral-doryV2-12b).

This quant was made using exllamav2-0.1.8 with the default calibration dataset. I used a slightly modified quantization script that forces the highest-bpw method (usually "1:8b_128g s4") for every layer in the model, to ensure maximum quality.

I also added a small fix to the config file, setting the default max context to 128k, as the original Mistral-Nemo supports.
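The fix amounts to one key in `config.json`. A minimal sketch, assuming the relevant key is `max_position_embeddings` and using a stand-in original value (the real file ships with the quant):

```python
import json

# Stand-in config.json contents; the real file ships with the quant.
cfg = {"max_position_embeddings": 8192}

# The one-line fix: raise the default context window to 128k
# (131072 tokens), matching what the original Mistral-Nemo supports.
cfg["max_position_embeddings"] = 131072

with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```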

I briefly tested this quant in a few random RPs (including some past 8k context) and it seems to work fine.

## Prompt Templates

Uses the Alpaca-like format shown in the original readme below.

### Original readme below

---

# Dory 12b (v2)
(redone) instruct finetune of mistral nemo 12b's base. *not* (E)RP-focused, leave that to drummer.

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/BiBtgV_WEIha72WqETWfk.gif)

thanks to twisted again for the compute :3

## Prompting
alpaca-like:
```
### System:
[Optional system prompt]

### Instruction:
[Query]

### Response:
[Response]</s>

### Instruction:
[...]
```
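The template above can be assembled programmatically. A minimal sketch; the helper name and signature are illustrative, not part of the model release:

```python
def build_prompt(instruction, system=None, history=()):
    """Assemble the Alpaca-like prompt shown above.

    `history` is a sequence of (instruction, response) pairs from
    earlier turns; completed responses are closed with </s>.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}\n")
    for inst, resp in history:
        parts.append(f"### Instruction:\n{inst}\n")
        parts.append(f"### Response:\n{resp}</s>\n")
    parts.append(f"### Instruction:\n{instruction}\n")
    parts.append("### Response:\n")
    return "\n".join(parts)
```

The model then generates from the trailing `### Response:` header.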

## Training details
Rank 64 QDoRA, trained on the following data mix:
- All of [kalomaze/Opus_Instruct_3k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_3k)
- All conversations with a reward model rating above 5 in [Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered)
- 50k of [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
- All stories above 4.7 rating and published before 2020 in [Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered](https://huggingface.co/datasets/Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered)