L3.1 GGUFs require KoboldCPP 1.71.1 or newer to run.
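Outside KoboldCPP, any llama.cpp-based runtime with Llama 3.1 support should load these files. Below is a minimal sketch using llama-cpp-python; the local file name and context size are assumptions, not part of this repo:

```python
# Minimal sketch: loading one of these GGUF quants with llama-cpp-python.
# The model_path below is hypothetical -- substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Sekhmet_Gimmel-L3.1-8B-v0.3-Q4_K_L.gguf",  # hypothetical local path
    n_ctx=8192,  # context window; raise it if you have the memory for longer contexts
)

out = llm("Write a one-line greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```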

Original Model: https://huggingface.co/ChaoticNeutrals/Sekhmet_Gimmel-L3.1-8B-v0.3

Made with https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script

The Q2_K_L, Q4_K_L, Q5_K_L, and Q6_K_L quants use Q8_0 output tensors and token embeddings.

Quantized using bartowski's imatrix dataset.
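For reference, tensor-type overrides of this kind are exposed directly by llama.cpp's llama-quantize tool. A hedged sketch of an equivalent invocation follows; the file names are hypothetical, and the linked GGUF-Quantization-Script may do this differently:

```python
# Sketch of how a quant with Q8_0 output/embedding tensors is typically produced
# with llama.cpp's llama-quantize (assumed to be on PATH). File names are assumptions.
import subprocess

subprocess.run([
    "llama-quantize",
    "--imatrix", "imatrix.dat",              # importance matrix from the imatrix dataset
    "--output-tensor-type", "q8_0",          # keep the output tensor at Q8_0
    "--token-embedding-type", "q8_0",        # keep token embeddings at Q8_0
    "Sekhmet_Gimmel-L3.1-8B-v0.3-F16.gguf",  # full-precision input model
    "Sekhmet_Gimmel-L3.1-8B-v0.3-Q4_K_L.gguf",
    "Q4_K_M",                                # base quant type for the remaining tensors
], check=True)
```

Keeping the output tensor and token embeddings at Q8_0 adds a little file size, but those tensors tend to be disproportionately sensitive to quantization error, which is the usual rationale for the "_L" variants.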

Model size: 8.03B params
Architecture: llama
Quantization levels available: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit

