eaddario committed on
Commit 2345f28 · verified · 1 Parent(s): 74ab7af

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -22,9 +22,9 @@ Original model: [cognitivecomputations/Dolphin3.0-R1-Mistral-24B](https://huggin
 From the model creator:
 
 > Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases. The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset.
-
+ >
 > Dolphin aims to be an **uncensored** general purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.
-
+ >
 > - They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
 > - They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
 > - They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
@@ -34,7 +34,7 @@ From the model creator:
 From Eric Hartford's, the creator of the Dolphin model series, [Uncensored Models](https://erichartford.com/uncensored-models):
 
 > Most of these models (for example, Alpaca, Vicuna, WizardLM, MPT-7B-Chat, Wizard-Vicuna, GPT4-X-Vicuna) have some sort of embedded alignment. For general purposes, this is a good thing. This is what stops the model from doing bad things, like teaching you how to cook meth and make bombs. But what is the nature of this alignment? And, why is it so?
-
+ >
 > The reason these models are aligned is that they are trained with data that was generated by ChatGPT, which itself is aligned by an alignment team at OpenAI. As it is a black box, we don't know all the reasons for the decisions that were made, but we can observe it generally is aligned with American popular culture, and to obey American law, and with a liberal and progressive political bias.
 
 All quantized versions were generated using an appropriate imatrix created from datasets available at [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration).