Triangle104 committed
Commit ded0b78 · verified · 1 Parent(s): 124930a

Update README.md

Files changed (1):
  1. README.md (+66 -0)
README.md CHANGED
@@ -14,6 +14,72 @@ language:
  This model was converted to GGUF format from [`allura-org/Qwen2.5-32b-RP-Ink`](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink) for more details on the model.

+ ---
+ ## Model details
+ A roleplay-focused LoRA finetune of Qwen 2.5 32B Instruct. Methodology and hyperparams inspired by SorcererLM and Slush.
+ Yet another model in the Ink series, following in the footsteps of the Nemo one.
+
+ ## Testimonials
+
+ whatever I tested was crack [...] It's got some refreshingly good prose, that's for sure
+
+ - TheLonelyDevil
+
+ The NTR is fantastic with this tune, lots of good gooning to be had. [...] Description and scene setting prose flows smoothly in comparison to larger models.
+
+ - TonyTheDeadly
+
+ This 32B handles complicated scenarios well, compared to a lot of 70Bs I've tried. Characters are portrayed accurately.
+
+ - Severian
+
+ From the very limited testing I did, I quite like this. [...] I really like the way it writes. Granted, I'm completely shitfaced right now, but I'm pretty sure it's good.
+
+ - ALK
+
+ [This model portrays] my character card almost exactly the way that I write them. It's a bit of a dream to get that with many of the current LLM.
+
+ - ShotMisser64
+
+ ## Dataset
+ The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.
+
+ "this is like washing down an adderall with a bottle of methylated rotgut" - inflatebot
+
+ ## Recommended Settings
+ Chat template: ChatML
+
+ Recommended samplers (not the be-all-end-all, try some on your own!):
+ - Temp 0.85 / Top P 0.8 / Top A 0.3 / Rep Pen 1.03
+ - Your samplers can go here! :3
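
For illustration, here is roughly how the recommended template and samplers map onto llama-cpp-python. This is a minimal sketch: the GGUF filename and prompt text are placeholders, and Top A is not exposed by this API, so it is left to front ends that support it.

```python
# Minimal sketch using llama-cpp-python (any llama.cpp front end works just as well).
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-rp-ink-q4_k_m.gguf",  # placeholder: point at your downloaded quant
    n_ctx=8192,
    chat_format="chatml",  # the card recommends the ChatML template
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an example roleplay character."},  # placeholder card
        {"role": "user", "content": "Hello there!"},
    ],
    temperature=0.85,     # Temp 0.85
    top_p=0.8,            # Top P 0.8
    repeat_penalty=1.03,  # Rep Pen 1.03 (Top A is not available in this API)
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```
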
+ ## Hyperparams
+
+ ### General
+ - Epochs = 1
+ - LR = 6e-5
+ - LR Scheduler = Cosine
+ - Optimizer = Paged AdamW 8bit
+ - Effective batch size = 16
+
+ ### LoRA
+ - Rank = 16
+ - Alpha = 32
+ - Dropout = 0.25 (Inspiration: Slush)
+
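
As a sketch of how the listed values could be expressed with peft and transformers: the target modules, precision, and the per-device/accumulation split behind the effective batch size of 16 are assumptions, since the card does not state them, and this is not the authors' training script.

```python
# Rough reconstruction of the listed hyperparameters; only the commented values come from the card.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,               # Rank = 16
    lora_alpha=32,      # Alpha = 32
    lora_dropout=0.25,  # Dropout = 0.25
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qwen2.5-32b-rp-ink-lora",  # placeholder
    num_train_epochs=1,                    # Epochs = 1
    learning_rate=6e-5,                    # LR = 6e-5
    lr_scheduler_type="cosine",            # LR Scheduler = Cosine
    optim="paged_adamw_8bit",              # Optimizer = Paged AdamW 8bit
    per_device_train_batch_size=2,         # assumed split: 2 per device
    gradient_accumulation_steps=8,         # x 8 accumulation = effective batch size 16
)
```
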
+ ## Credits
+ Humongous thanks to the people who created the data. I would credit you all, but that would be cheating ;)
+ Big thanks to all Allura members for testing and emotional support ilya /platonic
+ especially to inflatebot, who made the model card's image :3
+ Another big thanks to all the members of the ArliAI Discord server for testing! All of the people featured in the testimonials are from there :3
+
+ ---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)