---
library_name: transformers
pipeline_tag: text-generation
tags:
- 6-bit
- Q6_K
- gguf
- instruct
- llama
- llama-cpp
- text-generation
- uncensored
---

# roleplaiapp/Llama-3.2-3B-Instruct-uncensored-Q6_K-GGUF

**Repo:** `roleplaiapp/Llama-3.2-3B-Instruct-uncensored-Q6_K-GGUF`
**Original Model:** `Llama-3.2-3B-Instruct-uncensored`
**Quantized File:** `Llama-3.2-3B-Instruct-uncensored.Q6_K.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`

## Overview
This is a GGUF Q6_K quantized version of Llama-3.2-3B-Instruct-uncensored.

## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
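
## Usage
The quantized file can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using llama-cpp-python; the library, the `Llama.from_pretrained` call, and parameters such as `n_ctx` are assumptions about a typical setup rather than instructions from the original model author.

```python
# Minimal sketch: load this Q6_K GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface-hub` has been run.
from llama_cpp import Llama

# Download the quantized file from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Llama-3.2-3B-Instruct-uncensored-Q6_K-GGUF",
    filename="Llama-3.2-3B-Instruct-uncensored.Q6_K.gguf",
    n_ctx=4096,  # context window; adjust to your memory budget
)

# Run a simple chat completion using the model's built-in chat template.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short greeting."}]
)
print(response["choices"][0]["message"]["content"])
```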