Update README
README.md
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
# GGUF and Quantized versions of deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
This is a fork of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) in which the original safetensors weights have been converted to GGUF and quantized to BF16, Q8_0, and Q4_K.
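
As a quick reference, here is a minimal usage sketch using `huggingface_hub` and `llama-cpp-python`; the repo id and GGUF filename are placeholders, so substitute the actual repository name and the quantization you want (BF16, Q8_0, or Q4_K):

```python
# Minimal sketch (not official instructions): download one of the quantized
# GGUF files and run a chat prompt with llama-cpp-python. The repo_id and
# filename below are placeholders -- replace them with the real values.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="<your-username>/DeepSeek-R1-Distill-Qwen-14B-GGUF",  # placeholder
    filename="DeepSeek-R1-Distill-Qwen-14B-Q4_K.gguf",            # placeholder
)

# n_gpu_layers=-1 offloads all layers to the GPU if llama.cpp was built with GPU support
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain step by step why 0.999... equals 1."}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```

Of the three files, BF16 is essentially the unconverted weights in 16-bit floats, Q8_0 roughly halves that size, and Q4_K is the smallest and usually the first choice on limited VRAM.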
This model seems to perform very well on reasoning and text-generation tasks. Given how the [DeepSeek](https://www.deepseek.com/) team managed to create and train the R1 models in a remarkably cost-efficient way, it is a major achievement in the field!
From the original repo:
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.