Update README.md
README.md CHANGED

@@ -27,7 +27,7 @@ Please note that these GGMLs are **not compatible with llama.cpp, or currently w
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GGML)
-* [
+* [WizardLM's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
 
 <!-- compatibility_ggml start -->
 ## Compatibilty