mradermacher committed on
Commit cda1775 · verified · 1 Parent(s): 62f0ab2

auto-patch README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: ehartford/dolphin-2.5-mixtral-8x7b
+base_model: cognitivecomputations/dolphin-2.5-mixtral-8x7b
 datasets:
 - ehartford/dolphin
 - jondurbin/airoboros-2.2.1
@@ -22,7 +22,7 @@ quantized_by: mradermacher
 <!-- ### convert_type: hf -->
 <!-- ### vocab_type: -->
 <!-- ### tags: -->
-static quants of https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b
+static quants of https://huggingface.co/cognitivecomputations/dolphin-2.5-mixtral-8x7b
 
 <!-- provided-files -->
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.5-mixtral-8x7b-i1-GGUF
@@ -67,6 +67,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time.
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
 
 <!-- end -->
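
For context on the quant repositories the patched README points to, here is a minimal sketch of how one might list and fetch a GGUF file from the imatrix repo using huggingface_hub. This snippet is not part of the commit; the repo id is taken from the README, and the exact quant filename is deliberately not assumed but picked from the repo listing.

```python
# Hypothetical helper sketch, not part of this commit or of mradermacher's tooling.
# Assumes huggingface_hub is installed and network access is available.
from huggingface_hub import HfApi, hf_hub_download

# Repo id as referenced in the README (weighted/imatrix quants).
repo_id = "mradermacher/dolphin-2.5-mixtral-8x7b-i1-GGUF"

# List the GGUF files actually published in the quant repo rather than
# guessing a filename.
gguf_files = [f for f in HfApi().list_repo_files(repo_id) if f.endswith(".gguf")]
print("\n".join(gguf_files))

# Download one quant; in practice, choose a specific quant level from the
# listing above instead of blindly taking the first entry.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("downloaded to", local_path)
```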