# DeepSeek-R1-Distill-Qwen-7B Quantized Models
This repository contains Q4_K_M and Q5_K_M quantized GGUF versions of the DeepSeek-R1-Distill-Qwen-7B model, optimized for efficient deployment while maintaining strong performance.
Discover our full range of quantized language models by visiting our SandLogic Lexicon page on HuggingFace. To learn more about our company and services, check out the SandLogic website.
## Model Description
These models are quantized versions of DeepSeek-R1-Distill-Qwen-7B, a distilled 7B-parameter model based on the Qwen architecture. The model demonstrates that reasoning patterns from larger models can be distilled effectively into smaller architectures, yielding strong performance on reasoning benchmarks.
## Key Features
- Fine-tuned using DeepSeek-R1 generated reasoning data
- Modified configurations and tokenizer optimized for performance
- Maintains strong reasoning capabilities while reducing model size
- Suitable for research and production deployment
## Available Quantized Versions
### Q4_K_M Version
- 4-bit quantization using llama.cpp's k-quant method (medium variant)
- Approximately 4 GB model size
- Good balance between model size and output quality
- Recommended for resource-constrained environments
### Q5_K_M Version
- 5-bit quantization using llama.cpp's k-quant method (medium variant)
- Approximately 4.5 GB model size
- Higher precision than Q4_K_M while still offering a significant size reduction
- Recommended when higher accuracy is needed
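Both quantized files can also be pulled directly from the Hugging Face Hub. The sketch below uses `Llama.from_pretrained` from llama-cpp-python (which requires the `huggingface-hub` package); the glob patterns are assumptions based on common GGUF naming conventions, so check the repository's file list for the actual filenames.

```python
from llama_cpp import Llama

# Download a quantized file directly from the Hub (requires huggingface-hub).
# NOTE: the filename patterns below are assumptions based on common GGUF
# naming conventions; verify the actual filenames in the repository.
llm = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/DeepSeek-R1-Distill-Qwen-7B-GGUF",
    filename="*Q4_K_M.gguf",  # use "*Q5_K_M.gguf" for the 5-bit version
    verbose=False,
)
```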
## Usage

Install the llama-cpp-python bindings:

```bash
pip install llama-cpp-python
```
Please refer to the llama-cpp-python documentation to install with GPU support.
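For example, a CUDA-enabled build is typically produced by passing CMake flags through the `CMAKE_ARGS` environment variable. The flag name has changed between releases (older versions used `LLAMA_CUBLAS`), so treat this as a sketch and confirm against the documentation for your version:

```bash
# Build llama-cpp-python with CUDA support (recent releases use GGML_CUDA)
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
```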
### Basic Text Completion
Here's an example demonstrating how to use the high-level API for basic text completion:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="model/path/",  # replace with the path to the downloaded .gguf file
    verbose=False,
    # n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # n_ctx=2048,       # Uncomment to increase the context window
)

# Example of a reasoning task
output = llm(
    "Q: Explain the concept of natural selection in simple terms. A: ",
    max_tokens=256,
    stop=["Q:", "\n\n"],
    echo=False,
)
print(output["choices"][0]["text"])
```
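For multi-turn conversations, the high-level chat API applies the chat template embedded in the GGUF file automatically. A minimal sketch; the temperature of 0.6 follows DeepSeek's published usage recommendations for the R1 models, but tune it for your workload:

```python
# Multi-turn chat using the template stored in the GGUF metadata
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain natural selection in simple terms."}
    ],
    max_tokens=512,
    temperature=0.6,  # DeepSeek recommends 0.5-0.7 for R1-series models
)
print(response["choices"][0]["message"]["content"])
```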
## Model Configuration Changes
Please note that DeepSeek has made slight modifications to the original Qwen-7B configuration and tokenizer to optimize performance. When using these models, make sure you use the provided settings rather than the original Qwen-7B configuration.
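With GGUF files this is largely handled for you: the modified tokenizer and chat template travel inside the file, and the chat API above applies them automatically. If you want to inspect what a file actually ships, recent llama-cpp-python releases expose the GGUF metadata; this is a version-dependent attribute, so treat the snippet as a sketch:

```python
# Inspect the chat template embedded in the GGUF file.
# The `metadata` attribute is only available in recent llama-cpp-python releases.
template = llm.metadata.get("tokenizer.chat_template")
print(template if template else "No chat template found in GGUF metadata")
```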
## License

These models inherit the license of the original DeepSeek-R1-Distill-Qwen-7B model. Please refer to the original model's license for usage terms and conditions.
## Acknowledgments
We thank the DeepSeek AI team for open-sourcing their distilled models and demonstrating that smaller models can achieve impressive performance through effective distillation techniques. Special thanks also to the Qwen team for providing the base model architecture.