Update README.md
README.md
CHANGED
@@ -21,7 +21,7 @@ Original model: [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co
 
 All quantized versions were generated using an appropriate imatrix created from datasets available at [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration).
 
-At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that
+At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled.
 
 The process to produce the quantized [GGUF](https://huggingface.co/docs/hub/en/gguf) models is roughly as follows:
 
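For readers less familiar with the term, the paragraph added in this commit can be illustrated with a short sketch. The snippet below is a conceptual illustration only, not the tooling actually used for this repository: it assumes that a weight column's "importance" is approximated by the mean squared activation it receives over a calibration set (the general idea behind llama.cpp-style imatrix generation), and every name in it (`accumulate_imatrix`, `model_layers`, `calibration_batches`, the example layer name) is hypothetical.

```python
# Conceptual sketch of an importance matrix: for each linear layer, accumulate
# the mean squared activation seen by every input column over a calibration
# set. Columns with larger scores are "more important" and can be given more
# precision when the weights are quantized.
# NOTE: illustration of the idea only, not the llama.cpp implementation;
# `model_layers` and `calibration_batches` are hypothetical inputs.

import numpy as np

def accumulate_imatrix(model_layers, calibration_batches):
    """Return {layer_name: per-column importance scores}."""
    stats = {name: np.zeros(in_dim) for name, (in_dim, _) in model_layers.items()}
    counts = {name: 0 for name in model_layers}

    for batch in calibration_batches:  # batch: {layer_name: activations [tokens, in_dim]}
        for name, acts in batch.items():
            stats[name] += np.square(acts).sum(axis=0)  # sum of squared activations per column
            counts[name] += acts.shape[0]               # number of tokens seen

    # Average over tokens so layers fed different amounts of text stay comparable.
    return {name: stats[name] / max(counts[name], 1) for name in stats}

# Toy usage: one layer with 4 input columns, two calibration batches of random activations.
rng = np.random.default_rng(0)
layers = {"blk.0.ffn_down": (4, 8)}  # (input dim, output dim)
batches = [{"blk.0.ffn_down": rng.normal(size=(16, 4))} for _ in range(2)]
print(accumulate_imatrix(layers, batches))
```

In practice the statistics are gathered by running the full-precision model over the calibration text and are then passed to the quantizer, which uses them to decide where extra precision matters most.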