Qwen2.5-VL-3B-Instruct-quantized.w4a16

Model Overview

  • Model Architecture: Qwen/Qwen2.5-VL-3B-Instruct
    • Input: Vision-Text
    • Output: Text
  • Model Optimizations:
    • Weight quantization: INT4
    • Activation quantization: FP16
  • Release Date: 2/24/2025
  • Version: 1.0
  • Model Developers: Neural Magic

Quantized version of Qwen/Qwen2.5-VL-3B-Instruct.

Model Optimizations

This model was obtained by quantizing the weights of Qwen/Qwen2.5-VL-3B-Instruct to the INT4 data type, ready for inference with vLLM >= 0.5.2.
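
In a w4a16 scheme, only the Linear weights are stored as 4-bit integers (here with one scale per group of 128 input channels, matching the recipe in the Creation section), while activations stay in 16-bit floating point. The sketch below is a minimal round-to-nearest illustration of symmetric group-wise INT4 quantization; it is not the llm-compressor/GPTQ implementation, which additionally minimizes layer-wise reconstruction error when choosing the quantized values.

import torch

def int4_groupwise_quantize(weight: torch.Tensor, group_size: int = 128):
    """Toy symmetric group-wise INT4 quantization (illustration only)."""
    out_features, in_features = weight.shape
    assert in_features % group_size == 0
    w = weight.float().reshape(out_features, in_features // group_size, group_size)
    # One scale per group of input channels; symmetric INT4 covers [-8, 7].
    scale = (w.abs().amax(dim=-1, keepdim=True) / 7.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    # The dequantized weights approximate the original FP16 weights.
    w_dq = (q.float() * scale).reshape(out_features, in_features).to(torch.float16)
    return q.reshape(out_features, in_features), scale.squeeze(-1), w_dq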

Deployment

Use with vLLM

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
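
For example, after launching a server with vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16, a request can be sent with the openai Python client. The snippet below is a minimal sketch; the server address and the image URL are placeholders.

from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default address shown).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)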

Creation

This model was created with llm-compressor by running the code snippet below, as part of a multimodal announcement blog.

Model Creation Code
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)
from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy, ActivationOrdering, QuantizationScheme

# Load model.
model_id = "Qwen/Qwen2.5-VL-3B-Instruct"

model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac=0.01

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

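# Recipe: weight-only INT4 GPTQ (symmetric, group size 128) on the language-model
# Linear layers; lm_head and the vision tower ("re:visual.*") are left unquantized.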
recipe = GPTQModifier(
    targets="Linear",
    config_groups={
        "config_group": QuantizationScheme(
            targets=["Linear"],
            weights=QuantizationArgs(
                num_bits=4,
                type=QuantizationType.INT,
                strategy=QuantizationStrategy.GROUP,
                group_size=128,
                symmetric=True,
                dynamic=False,
                actorder=ActivationOrdering.WEIGHT,
            ),
        ),
    },
    sequential_targets=["Qwen2_5_VLDecoderLayer"],
    ignore=["lm_head", "re:visual.*"],
    update_size=NUM_CALIBRATION_SAMPLES,
    dampening_frac=dampening_frac
)

SAVE_DIR=f"{model_id.split('/')[1]}-quantized.w4a16"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR
)
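
One optional follow-up, not part of the original snippet, is to save the processor alongside the quantized weights so that SAVE_DIR can be loaded directly for inference:

# Save the processor configuration next to the quantized weights (optional).
processor.save_pretrained(SAVE_DIR)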

Evaluation

The model was evaluated using mistral-evals for vision-related tasks and using lm_evaluation_harness for select text-based benchmarks. The evaluations were conducted using the following commands:

Evaluation Commands

Vision Tasks

  • vqav2
  • docvqa
  • mathvista
  • mmmu
  • chartqa
vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
        --model_name neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 \
        --url http://0.0.0.0:8000 \
        --output_dir ~/tmp \
        --eval_name <vision_task_name>

Text-based Tasks

MMLU

lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir

MGSM

lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir

Accuracy

| Category | Metric | Qwen/Qwen2.5-VL-3B-Instruct | Qwen2.5-VL-3B-Instruct-quantized.w4a16 | Recovery (%) |
|---|---|---|---|---|
| Vision | MMMU (val, CoT), explicit_prompt_relaxed_correctness | 44.56 | 41.56 | 93.28% |
| Vision | VQAv2 (val), vqa_match | 75.94 | 73.58 | 96.89% |
| Vision | DocVQA (val), anls | 92.53 | 91.58 | 98.97% |
| Vision | ChartQA (test, CoT), anywhere_in_answer_relaxed_correctness | 81.20 | 78.96 | 97.24% |
| Vision | Mathvista (testmini, CoT), explicit_prompt_relaxed_correctness | 54.15 | 45.75 | 84.51% |
| Vision | Average Score | 69.28 | 66.29 | 95.68% |
| Text | MGSM (CoT) | 52.49 | 35.82 | 68.24% |
| Text | MMLU (5-shot) | 65.32 | 62.80 | 96.14% |

Inference Performance

This model achieves up to 1.73x speedup in single-stream deployment and up to 3.87x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with vLLM version 0.7.2 and GuideLLM.

Benchmarking Command

```
guidellm --model neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=,generated_tokens=,images=,width=,height= --max-seconds 120 --backend aiohttp_server
```
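
GuideLLM benchmarks a running OpenAI-compatible endpoint, so a vLLM server hosting this model must be started first. A minimal sketch of such a serve command follows, with flags mirroring the evaluation setup above; values such as max_model_len are assumptions and may need tuning for your hardware and image sizes.

```
vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 --tensor_parallel_size 1 --max_model_len 8192 --gpu_memory_utilization 0.9 --limit_mm_per_prompt image=1
```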

Single-stream performance (measured with vLLM version 0.7.2)

Use case profiles**:

  • Document Visual Question Answering: 1680W x 2240H, 64/128
  • Visual Reasoning: 640W x 480H, 128/128
  • Image Captioning: 480W x 360H, 0/128

| Hardware | Model | Average Cost Reduction | Latency (s), Document VQA | QPD, Document VQA | Latency (s), Visual Reasoning | QPD, Visual Reasoning | Latency (s), Image Captioning | QPD, Image Captioning |
|---|---|---|---|---|---|---|---|---|
| A6000x1 | Qwen/Qwen2.5-VL-3B-Instruct | | 3.1 | 1454 | 1.8 | 2546 | 1.7 | 2610 |
| A6000x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8 | 1.27 | 2.6 | 1708 | 1.3 | 3340 | 1.3 | 3459 |
| A6000x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | 1.57 | 2.4 | 1886 | 1.0 | 4409 | 1.0 | 4409 |
| A100x1 | Qwen/Qwen2.5-VL-3B-Instruct | | 2.2 | 920 | 1.3 | 1603 | 1.2 | 1636 |
| A100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8 | 1.09 | 2.1 | 975 | 1.2 | 1743 | 1.1 | 1814 |
| A100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | 1.20 | 2.0 | 1011 | 1.0 | 2015 | 1.0 | 2012 |
| H100x1 | Qwen/Qwen2.5-VL-3B-Instruct | | 1.5 | 740 | 0.9 | 1221 | 0.9 | 1276 |
| H100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic | 1.06 | 1.4 | 768 | 0.9 | 1276 | 0.8 | 1399 |
| H100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | 1.24 | 0.9 | 1219 | 0.9 | 1270 | 0.8 | 1304 |

**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens

**QPD: Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).

Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

Use case profiles**:

  • Document Visual Question Answering: 1680W x 2240H, 64/128
  • Visual Reasoning: 640W x 480H, 128/128
  • Image Captioning: 480W x 360H, 0/128

| Hardware | Model | Average Cost Reduction | QPS, Document VQA | QPD, Document VQA | QPS, Visual Reasoning | QPD, Visual Reasoning | QPS, Image Captioning | QPD, Image Captioning |
|---|---|---|---|---|---|---|---|---|
| A6000x1 | Qwen/Qwen2.5-VL-3B-Instruct | | 0.5 | 2405 | 2.6 | 11889 | 2.9 | 12909 |
| A6000x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8 | 1.26 | 0.6 | 2725 | 3.4 | 15162 | 3.9 | 17673 |
| A6000x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | 1.39 | 0.6 | 2548 | 3.9 | 17437 | 4.7 | 21223 |
| A100x1 | Qwen/Qwen2.5-VL-3B-Instruct | | 0.8 | 1663 | 3.9 | 7899 | 4.4 | 8924 |
| A100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8 | 1.06 | 0.9 | 1734 | 4.2 | 8488 | 4.7 | 9548 |
| A100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | 1.10 | 0.9 | 1775 | 4.2 | 8540 | 5.1 | 10318 |
| H100x1 | Qwen/Qwen2.5-VL-3B-Instruct | | 1.1 | 1188 | 4.3 | 4656 | 4.3 | 4676 |
| H100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic | 1.15 | 1.4 | 1570 | 4.3 | 4676 | 4.8 | 5220 |
| H100x1 | neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 | 1.96 | 4.2 | 4598 | 4.1 | 4505 | 4.4 | 4838 |

**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens

**QPS: Queries per second.

**QPD: Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).
