roleplaiapp/Minerva-14b-V0.1-i1-IQ3_S-GGUF

Repo: roleplaiapp/Minerva-14b-V0.1-i1-IQ3_S-GGUF
Original Model: Minerva-14b-V0.1-i1
Quantized File: Minerva-14b-V0.1.i1-IQ3_S.gguf
Quantization: GGUF
Quantization Method: IQ3_S

Overview

This is a GGUF IQ3_S quantized version of Minerva-14b-V0.1-i1.
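As a minimal sketch of local use, the quantized file can be downloaded from this repo and run with llama-cpp-python. The repo id and filename below come from this card; the context size, GPU offload, and generation settings are illustrative assumptions rather than tested recommendations.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized GGUF file from this repo (filename as listed above)
model_path = hf_hub_download(
    repo_id="roleplaiapp/Minerva-14b-V0.1-i1-IQ3_S-GGUF",
    filename="Minerva-14b-V0.1.i1-IQ3_S.gguf",
)

# Load the model; n_ctx and n_gpu_layers are example values, adjust to your hardware
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple chat-style generation
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short greeting."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```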

Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Format: GGUF
Model size: 14.8B params
Architecture: qwen2
Precision: 3-bit
