ikawrakow/mistral-7b-quantized-gguf
Format: GGUF
License: apache-2.0
1 contributor (ikawrakow) · 2 commits
Latest commit: 685245d, "Adding 3-bit k-quants", about 1 year ago
File                        Size       Last commit              When
.gitattributes              1.56 kB    Adding 3-bit k-quants    about 1 year ago
README.md                   28 Bytes   initial commit           about 1 year ago
mistral-7b-q3km.gguf (LFS)  3.52 GB    Adding 3-bit k-quants    about 1 year ago
mistral-7b-q3ks.gguf (LFS)  3.16 GB    Adding 3-bit k-quants    about 1 year ago
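The two .gguf files are 3-bit k-quant variants of Mistral-7B in GGUF format (q3km is the larger, q3ks the smaller). A minimal sketch of how one of them could be downloaded and run locally, assuming the huggingface_hub and llama-cpp-python packages are available; neither package, the prompt, nor the context size comes from this repo:

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q3_K_M-style file (~3.52 GB) from this repo into the local HF cache.
model_path = hf_hub_download(
    repo_id="ikawrakow/mistral-7b-quantized-gguf",
    filename="mistral-7b-q3km.gguf",
)

# Load the quantized GGUF model with llama.cpp's Python bindings and run a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What is a 3-bit k-quant? A:", max_tokens=64)
print(out["choices"][0]["text"])

The same file can also be passed directly to a llama.cpp build via its model-path argument; the Python bindings are used here only to keep the example self-contained.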