Update README.md
Using [LLaMA C++](https://github.com/ggerganov/llama.cpp) release b4722

Original model: [cognitivecomputations/Dolphin3.0-R1-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B)

From the model creator:

> Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases. The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset.

> Dolphin aims to be an **uncensored** general purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.
>
> - They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
> - They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
> - They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
> - They can see all your queries and they can potentially use that data in ways you wouldn't want. Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
> - Dolphin belongs to YOU, it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
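
In practice, "you set the system prompt" is literal when running a local GGUF build: the system message is entirely yours to supply. Below is a minimal sketch using the `llama-cpp-python` bindings; the binding choice, the GGUF filename, and the generation settings are illustrative assumptions, not something this repository mandates:

```python
# Minimal sketch: steering Dolphin with your own system prompt via
# llama-cpp-python. The model path is a placeholder; point it at any
# quantized GGUF file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,  # context window; raise it if you have the memory
)

response = llm.create_chat_completion(
    messages=[
        # You, not a hosted provider, decide the alignment and behaviour here.
        {"role": "system", "content": "You are a concise assistant embedded in a code-review tool."},
        {"role": "user", "content": "Summarise what an importance matrix does during quantization."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```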

From Eric Hartford, the creator of the Dolphin model series, in [Uncensored Models](https://erichartford.com/uncensored-models):

> Most of these models (for example, Alpaca, Vicuna, WizardLM, MPT-7B-Chat, Wizard-Vicuna, GPT4-X-Vicuna) have some sort of embedded alignment. For general purposes, this is a good thing. This is what stops the model from doing bad things, like teaching you how to cook meth and make bombs. But what is the nature of this alignment? And, why is it so?

> The reason these models are aligned is that they are trained with data that was generated by ChatGPT, which itself is aligned by an alignment team at OpenAI. As it is a black box, we don't know all the reasons for the decisions that were made, but we can observe it generally is aligned with American popular culture, and to obey American law, and with a liberal and progressive political bias.

All quantized versions were generated using an appropriate imatrix created from datasets available at [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration).
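
For readers who want to reproduce a similar pipeline, the general shape with the llama.cpp command-line tools is sketched below. The file names, calibration text, and quantization target are illustrative placeholders, not the exact settings used for this repository:

```python
# Rough sketch of an imatrix-based quantization pipeline driven from Python.
# Assumes the llama.cpp binaries (llama-imatrix, llama-quantize) are on PATH.
# All file names and the Q4_K_M target are placeholders.
import subprocess

model_f16 = "Dolphin3.0-R1-Mistral-24B-F16.gguf"  # full-precision GGUF conversion
calibration = "calibration.txt"                   # e.g. a file from eaddario/imatrix-calibration
imatrix = "imatrix.dat"

# 1. Measure activation statistics over the calibration text to build
#    the importance matrix.
subprocess.run(
    ["llama-imatrix", "-m", model_f16, "-f", calibration, "-o", imatrix],
    check=True,
)

# 2. Quantize, passing the imatrix so the more important weights retain
#    more precision.
subprocess.run(
    ["llama-quantize", "--imatrix", imatrix,
     model_f16, "Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```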