Update README.md
quantized_by: bartowski
## Llamacpp imatrix Quantizations of Reflection-Llama-3.1-70B

<b>Yes, this is with the fix to the tokenizer!</b>

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3658">b3658</a> for quantization.

Original model: https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B
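For context, an imatrix quantization with llama.cpp is produced in two steps: first an importance matrix is computed from calibration text, then that matrix guides the quantization. The sketch below uses the llama.cpp tool names current as of release b3658 (`llama-imatrix`, `llama-quantize`); the file names and calibration data shown are illustrative, not the ones actually used for this repo.

```shell
# Illustrative sketch only -- input file names and calibration text are assumptions.
# 1. Compute an importance matrix from calibration text:
./llama-imatrix -m Reflection-Llama-3.1-70B-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize the f16 model, guided by that matrix (Q4_K_M shown as an example type):
./llama-quantize --imatrix imatrix.dat \
    Reflection-Llama-3.1-70B-f16.gguf \
    Reflection-Llama-3.1-70B-Q4_K_M.gguf Q4_K_M
```

The importance matrix biases the quantizer toward preserving the weights that matter most on the calibration set, which generally improves quality at a given bit width compared to a plain (non-imatrix) quant.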