Update README.md
README.md CHANGED
@@ -28,7 +28,7 @@ One must have imagined him happy while pushing. Now, he is ecstatic.
 
 ## About
 
-This is a pretty generic finetune of the 24b base model for multiturn instruct. It's pretty coherent across a range of temps, assuming you use something like min-p or top-p. It also supports reasoning blocks.
+This is a pretty generic finetune of the 24b base model for multiturn ~~instruct~~ RP. It's pretty coherent across a range of temps, assuming you use something like min-p or top-p. It also supports reasoning blocks.
 
 ## System Prompts
 
@@ -52,4 +52,14 @@ v7-Tekken, same as the original instruct model.
 
 ## Dataset
 
-This model was trained on [allura-org/inkstructmix-v0.1](https://hf.co/datasets/allura-org/inkstructmix-v0.1)
+~~This model was trained on [allura-org/inkstructmix-v0.1](https://hf.co/datasets/allura-org/inkstructmix-v0.1).~~
+
+Okay. So.
+
+It was *supposed* to be trained on the above dataset.
+
+However, *my* dumb ass. Put. The roleplay data. Into the config. And didn't notice.
+
+So. Um. I guess this is supposed to be a roleplaying model? It sure doesn't act the part.
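For readers unfamiliar with the min-p sampler the README recommends, here is a rough NumPy sketch of the filtering rule. This is a generic illustration (the function name and default threshold are mine), not code from this repo or from any particular inference backend.

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.05) -> np.ndarray:
    """Keep only tokens whose probability is at least `min_p` times the
    probability of the most likely token, then renormalize.

    Generic sketch of min-p filtering; real samplers apply this to the
    logits at each decoding step before drawing a token.
    """
    # Softmax, shifted by the max logit for numerical stability.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Min-p rule: the cutoff scales with the top token's probability.
    keep = probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()
```

Because the cutoff scales with the top token's probability, min-p prunes the long tail hard when the model is confident but leaves many candidates when the distribution is flat, which is why it tends to stay coherent across a wider range of temperatures than a fixed cutoff.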