# AWQ 4-bit version of open-r1/OpenR1-Qwen-7B

Quantized with autoawq using the script below:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name = "open-r1/OpenR1-Qwen-7B"

# Load the FP16 base model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weights, group size 128, zero-point quantization, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Calibrate and quantize (autoawq falls back to its default calibration set when none is given)
model.quantize(tokenizer, quant_config=quant_config)
```
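
To run the quantized checkpoint, here is a minimal inference sketch, assuming autoawq is installed and a CUDA GPU is available (the AWQ GEMM kernels are GPU-only); transformers reads the AWQ `quantization_config` stored in the checkpoint, so no extra setup should be needed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_id = "MPWARE/OpenR1-Qwen-7B-AWQ-4bits-GEMM"

# transformers detects the AWQ quantization_config in the checkpoint
# and loads the 4-bit weights through the autoawq kernels
model = AutoModelForCausalLM.from_pretrained(quant_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quant_id)

prompt = "Solve: what is 7 * 8?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading via `AutoAWQForCausalLM.from_quantized` from autoawq is an alternative; the transformers route is shown here because it needs no code changes relative to the FP16 model.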
The quantized checkpoint is stored in Safetensors format: 1.96B parameters, FP16 and I32 tensor types.
