---
library_name: transformers
pipeline_tag: text-generation
tags:
- 2-bit
- Q2_K
- gguf
- instruct
- llama
- llama-cpp
- text-generation
- uncensored
---
# roleplaiapp/Llama-3.2-3B-Instruct-uncensored-Q2_K-GGUF
**Repo:** `roleplaiapp/Llama-3.2-3B-Instruct-uncensored-Q2_K-GGUF`
**Original Model:** `Llama-3.2-3B-Instruct-uncensored`
**Quantized File:** `Llama-3.2-3B-Instruct-uncensored.Q2_K.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q2_K`
## Overview
This is a GGUF Q2_K quantization of `Llama-3.2-3B-Instruct-uncensored`, intended for use with llama.cpp and compatible runtimes.
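## Usage
Below is a minimal sketch of loading this quantized file with `llama-cpp-python`. The repo and file names match the fields above; the context size, thread count, and prompt are illustrative assumptions, not recommended settings.

```python
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q2_K GGUF file from this repository.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Llama-3.2-3B-Instruct-uncensored-Q2_K-GGUF",
    filename="Llama-3.2-3B-Instruct-uncensored.Q2_K.gguf",
)

# Load the quantized model; n_ctx and n_threads are example values.
llm = Llama(model_path=model_path, n_ctx=4096, n_threads=8)

# Run a simple chat-style generation.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short greeting."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```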
## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.