Update README.md
README.md CHANGED

```diff
@@ -10,7 +10,11 @@ Converted for use with [llama.cpp](https://github.com/ggerganov/llama.cpp)
 - 4-bit quantized
 - Needs ~6GB of CPU RAM
 - Won't work with alpaca.cpp or old llama.cpp (new ggml format requires latest llama.cpp)
-- 7B parameter version
+- 7B parameter version
+
+---
+
+Bigger 13B version can be found here: https://huggingface.co/eachadea/ggml-vicuna-13b-4bit
 
 ---
 tags:
```