---
library_name: transformers
pipeline_tag: text-generation
tags:
- 6-bit
- Q6_K
- deepseek
- gguf
- llama-cpp
- text-generation
- v03
- veltha
---
# roleplaiapp/q-2.5-deepseek-r1-veltha-v0.3-Q6_K-GGUF

**Repo:** `roleplaiapp/q-2.5-deepseek-r1-veltha-v0.3-Q6_K-GGUF`
**Original Model:** `q-2.5-deepseek-r1-veltha-v0.3`
**Quantized File:** `q-2.5-deepseek-r1-veltha-v0.3.Q6_K.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`
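
If it helps, the quantized file can be fetched with the `huggingface_hub` Python client. The repo id and filename below are the ones listed above; the rest is a minimal sketch, not an official download script for this repo.

```python
from huggingface_hub import hf_hub_download

# Fetch the Q6_K GGUF file from the repo named above (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="roleplaiapp/q-2.5-deepseek-r1-veltha-v0.3-Q6_K-GGUF",
    filename="q-2.5-deepseek-r1-veltha-v0.3.Q6_K.gguf",
)
print(model_path)  # local path to the downloaded .gguf file
```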
## Overview
This is a GGUF Q6_K quantized version of q-2.5-deepseek-r1-veltha-v0.3.
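
Since the card is tagged `llama-cpp`, the file should load with any llama.cpp-based runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the context size, GPU offload setting, and prompt are illustrative values, not recommendations from the model author.

```python
from llama_cpp import Llama

# Load the local GGUF file; n_ctx and n_gpu_layers are illustrative, adjust for your hardware.
llm = Llama(
    model_path="q-2.5-deepseek-r1-veltha-v0.3.Q6_K.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support
)

# Simple text-generation call matching the pipeline_tag above.
output = llm("Write a one-sentence greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```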
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai.