# 🦙 ALLaM-7B-Instruct-GGUF

This repository provides quantized GGUF versions of ALLaM-7B-Instruct, optimized for efficient inference with llama.cpp.
## ⚠️ Acknowledgment

The original model was developed by ALLaM-AI and is available here:
🔗 [ALLaM-7B-Instruct-Preview](https://huggingface.co/ALLaM-AI/ALLaM-7B-Instruct-preview)

This repository only provides quantized versions for improved performance on different hardware.
## ✨ Overview

ALLaM-7B-Instruct is an Arabic-centric, instruction-tuned model based on Meta's LLaMA architecture, designed for natural language understanding and generation in Arabic.
## 🚀 What's New?

- ✅ **GGUF format**: optimized for llama.cpp
- ✅ **Multiple quantization levels**: balance between precision and efficiency
- ✅ **Runs on CPUs and low-resource devices**: no need for high-end GPUs!
## 📦 Available Model Quantizations

| Model Variant | Precision | Size | Best For |
|---|---|---|---|
| ALLaM-7B-Instruct-f16.gguf | FP16 | Large | High-precision tasks |
| ALLaM-7B-Instruct-Q8_0.gguf | 8-bit | Medium | Balanced quality & speed |
| ALLaM-7B-Instruct-Q6_K.gguf | 6-bit | Small | Good trade-off |
| ALLaM-7B-Instruct-Q5_0.gguf | 5-bit | Small | Alternative quantization |
| ALLaM-7B-Instruct-Q5_K_M.gguf | 5-bit | Smaller | Fast inference |
| ALLaM-7B-Instruct-Q4_0.gguf | 4-bit | Very small | Legacy format |
| ALLaM-7B-Instruct-Q4_K_M.gguf | 4-bit | Very small | Low-memory devices |
| ALLaM-7B-Instruct-Q2_K.gguf | 2-bit | Smallest | Extreme efficiency |
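As a rough rule of thumb, a quantized model file weighs in at about `parameters × bits-per-weight / 8` bytes. The sketch below (a hypothetical helper, not part of this repository) estimates sizes from the nominal bit widths in the table; real GGUF files run somewhat larger because K-quants mix precisions per tensor and the file also carries metadata and the vocabulary.

```python
# Rough file-size estimate for a quantized ~7B-parameter model.
# Nominal bits-per-weight slightly understates actual GGUF sizes
# (K-quants mix precisions; files include metadata and vocab).

PARAMS = 7e9  # approximate parameter count of ALLaM-7B

def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate model file size in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("f16", 16), ("Q8_0", 8), ("Q6_K", 6),
                   ("Q5_K_M", 5), ("Q4_K_M", 4), ("Q2_K", 2)]:
    print(f"{name}: ~{approx_size_gb(bits):.1f} GB")
```

This is why the 4-bit variants fit comfortably on machines with 8 GB of RAM, while FP16 needs roughly four times as much memory.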
## 🚀 Installation & Setup

### 1️⃣ Install llama.cpp

Clone and build llama.cpp:

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
```

Note: recent llama.cpp releases have moved to CMake; if `make` fails, try `cmake -B build && cmake --build build --config Release` instead.
### 2️⃣ Download the Model

Choose and download a `.gguf` file from this repository.
### 3️⃣ Run Inference

Use llama.cpp to generate responses:

```bash
./main -m ALLaM-7B-Instruct-Q4_0.gguf -p "كيف أجهز كوب شاي؟"
```

(The prompt asks: "How do I prepare a cup of tea?")
Expected output:

> لتحضير كوب شاي، اغلي الماء، ضع الشاي في الكوب، واسكب الماء الساخن فوقه. اتركه لدقائق ثم استمتع بمذاقه!

(Translation: "To prepare a cup of tea, boil the water, put the tea in the cup, and pour the hot water over it. Leave it for a few minutes, then enjoy its taste!")
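The same invocation can also be scripted from Python via `subprocess`. A minimal sketch is below; the binary name (`./main`) and model filename match the command above but are assumptions about your build (newer llama.cpp builds name the binary `llama-cli`).

```python
# Sketch: driving the llama.cpp CLI from Python.
# "./main" and the model filename are assumptions; adjust to your setup.
import subprocess

def build_llama_args(model_path: str, prompt: str, n_predict: int = 128,
                     binary: str = "./main") -> list[str]:
    """Assemble the argument vector for a llama.cpp generation run."""
    return [
        binary,
        "-m", model_path,      # path to the .gguf model file
        "-p", prompt,          # prompt text
        "-n", str(n_predict),  # max number of tokens to generate
    ]

args = build_llama_args("ALLaM-7B-Instruct-Q4_0.gguf", "كيف أجهز كوب شاي؟")
# subprocess.run(args, check=True)  # uncomment once llama.cpp is built
```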
## 📊 Benchmarks & Performance

| Quantization Format | Model Size | CPU (tokens/sec) | GPU (tokens/sec) |
|---|---|---|---|
| FP16 | Large | ~2 | ~15 |
| Q8_0 | Medium | ~4 | ~30 |
| Q6_K | Smaller | ~6 | ~40 |
| Q5_0 | Small | ~7 | ~42 |
| Q5_K_M | Smaller | ~8 | ~45 |
| Q4_0 | Very small | ~9 | ~48 |
| Q4_K_M | Very small | ~10 | ~50 |
| Q2_K | Smallest | ~12 | ~55 |

*Performance may vary based on hardware and configuration.*
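Throughput translates directly into wall-clock latency for a response of a given length. A quick sketch, using the rough figures from the table above:

```python
# Convert a decode rate (tokens/sec) into estimated generation time.

def generation_seconds(n_tokens: int, tokens_per_sec: float) -> float:
    """Estimated seconds to generate n_tokens at a steady decode rate."""
    return n_tokens / tokens_per_sec

# e.g. a 200-token answer with Q4_K_M on CPU (~10 tokens/sec per the table):
print(f"~{generation_seconds(200, 10):.0f} s")  # prints "~20 s"
```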
## 📜 License

This model follows the ALLaM-AI license. Refer to their Hugging Face repository for details.
## ❤️ Acknowledgments

- **ALLaM-AI** for developing the original ALLaM-7B-Instruct model.
- **llama.cpp** by ggerganov for optimized inference.
## ⭐ Contributions & Feedback

If you find this quantized model useful, feel free to contribute, provide feedback, or share your results!