HQQ

Half-Quadratic Quantization (HQQ) supports fast on-the-fly quantization at 8, 4, 3, 2, and even 1-bit precision. It doesn't require calibration data, and it is compatible with any model modality (LLMs, vision, etc.).

HQQ further supports fine-tuning with PEFT and is fully compatible with torch.compile for even faster inference and training; sketches of both are shown below.

Install HQQ with the following command to get the latest version and to build its corresponding CUDA kernels.

pip install hqq

You can choose to either replace all the linear layers in a model with the same quantization config or dedicate a specific quantization config to specific linear layers.

Replace all layers

Quantize a model by creating an HqqConfig and specifying the nbits and group_size to use for all the linear layers (torch.nn.Linear) of the model.

import torch
from transformers import AutoModelForCausalLM, HqqConfig

quant_config = HqqConfig(nbits=8, group_size=64)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config,
)
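
Specific layers only

Alternatively, map each linear layer name tag to its own quantization config through the dynamic_config argument of HqqConfig. The layer names below assume a Llama-style architecture, and the choice of lower-bit MLP layers is purely illustrative; adjust both to your model.

import torch
from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit for the attention projections, 3-bit for the MLP layers
q4_config = {"nbits": 4, "group_size": 64}
q3_config = {"nbits": 3, "group_size": 32}

quant_config = HqqConfig(
    dynamic_config={
        "self_attn.q_proj": q4_config,
        "self_attn.k_proj": q4_config,
        "self_attn.v_proj": q4_config,
        "self_attn.o_proj": q4_config,
        "mlp.gate_proj": q3_config,
        "mlp.up_proj": q3_config,
        "mlp.down_proj": q3_config,
    }
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config,
)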

Backends

HQQ supports various backends, including pure PyTorch and custom dequantization CUDA kernels. These backends are suitable for older GPUs and PEFT/QLoRA training.

from hqq.core.quantize import HQQBackend, HQQLinear

HQQLinear.set_backend(HQQBackend.PYTORCH)

For faster inference, HQQ supports 4-bit fused kernels (torchao and Marlin) that are applied after a model is quantized. These can reach up to 200 tokens/sec on a single RTX 4090. The example below demonstrates enabling the torchao_int4 backend.

from hqq.utils.patching import prepare_for_inference

prepare_for_inference(model, backend="torchao_int4")
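
As mentioned above, HQQ is fully compatible with torch.compile. The sketch below follows the static KV cache plus compiled forward pass pattern from the Transformers generation docs; the prompt and generation settings are illustrative assumptions.

import torch
from transformers import AutoTokenizer

# Use a static KV cache and compile the forward pass for faster decoding;
# the first few generate() calls are slow while compilation warms up
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
inputs = tokenizer("Half-Quadratic Quantization is", return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))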

Refer to the Backend guide for more details.
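
HQQ-quantized models can also be fine-tuned with PEFT, as noted at the top of this page. Below is a minimal LoRA sketch using the peft library; the rank, alpha, and target_modules values are illustrative assumptions for a Llama-style model, not recommended settings.

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Attach trainable low-rank adapters on top of the frozen HQQ-quantized layers
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()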

Resources

Read the Half-Quadratic Quantization of Large Machine Learning Models blog post for more details about HQQ.
