Pixtral-Large-Instruct-2411-hf-quantized.w4a16
Model Overview
- Model Architecture: neuralmagic/Pixtral-Large-Instruct-2411-hf
- Input: Vision-Text
- Output: Text
- Model Optimizations:
  - Weight quantization: INT4
  - Activation quantization: FP16
- Release Date: 2/24/2025
- Version: 1.0
- Model Developers: Neural Magic
Quantized version of neuralmagic/Pixtral-Large-Instruct-2411-hf.
Model Optimizations
This model was obtained by quantizing the weights of neuralmagic/Pixtral-Large-Instruct-2411-hf to the INT4 data type. Only the weights of the linear operators within the language-model transformer blocks are quantized; the vision tower and multimodal projector are kept at their original precision. Reducing the quantized weights from 16 to 4 bits cuts their disk and GPU memory footprint by roughly 75%. The model is ready for inference with vLLM >= 0.5.2.
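As a rough sketch of what weight-only INT4 quantization buys in memory (the parameter split below is an assumption used for illustration, not an exact figure for this checkpoint):

```python
# Back-of-the-envelope weight-memory estimate for W4A16 quantization.
# Assumed split: ~123B decoder parameters quantized to 4-bit,
# ~1B vision-tower/projector parameters kept at 16-bit.
quantized_params = 123e9
unquantized_params = 1e9

def gib(num_params: float, bits_per_param: float) -> float:
    """Weight memory in GiB for a parameter count at a given precision."""
    return num_params * bits_per_param / 8 / 2**30

fp16_total = gib(quantized_params + unquantized_params, 16)
w4a16_total = gib(quantized_params, 4) + gib(unquantized_params, 16)
print(f"FP16 weights : ~{fp16_total:.0f} GiB")   # ~231 GiB
print(f"W4A16 weights: ~{w4a16_total:.0f} GiB")  # ~59 GiB
```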
Deployment
Use with vLLM
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
    model="neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<s>[INST]{question}\n[IMG][/INST]",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
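As a minimal sketch of OpenAI-compatible serving (the server flags, port, and image URL below are illustrative assumptions, not values prescribed by this card), the served model can be queried with the standard OpenAI Python client:

```python
# Assumes the server was started separately, e.g.:
#   vllm serve neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 \
#       --tensor-parallel-size 2 --max-model-len 4096
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the content of this image?"},
            # Placeholder image URL; replace with a real, reachable image.
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```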
Creation
This model was created with llm-compressor by running the code snippet below, as part of a multimodal announcement blog.
Model Creation Code
import requests
import torch
from PIL import Image
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableLlavaForConditionalGeneration
from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy, ActivationOrdering, QuantizationScheme
# Load model.
model_id = "neuralmagic/Pixtral-Large-Instruct-2411-hf"
model = TraceableLlavaForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
dampening_frac=0.01
# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {
        "input_ids": torch.LongTensor(batch[0]["input_ids"]),
        "attention_mask": torch.tensor(batch[0]["attention_mask"]),
        "pixel_values": torch.tensor(batch[0]["pixel_values"]),
    }
recipe = GPTQModifier(
    targets="Linear",
    config_groups={
        "config_group": QuantizationScheme(
            targets=["Linear"],
            weights=QuantizationArgs(
                num_bits=4,
                type=QuantizationType.INT,
                strategy=QuantizationStrategy.GROUP,
                group_size=128,
                symmetric=True,
                dynamic=False,
                actorder=ActivationOrdering.WEIGHT,
            ),
        ),
    },
    sequential_targets=["MistralDecoderLayer"],
    ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    update_size=NUM_CALIBRATION_SAMPLES,
    dampening_frac=dampening_frac,
)
SAVE_DIR=f"{model_id.split('/')[1]}-quantized.w4a16"
# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=DATASET_ID,
    splits=DATASET_SPLIT,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
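A quick, hedged sanity check on the exported checkpoint (assuming the directory produced by the oneshot call above) is to reload its config and confirm the compressed-tensors quantization metadata was written:

```python
# Reload the saved config and print the quantization metadata recorded by
# llm-compressor. SAVE_DIR is the output directory from the oneshot call above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(SAVE_DIR, trust_remote_code=True)
print(getattr(config, "quantization_config", None))
# Expected: a compressed-tensors config describing 4-bit, group-size-128 weights.
```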
Evaluation
The model was evaluated using mistral-evals for vision-related tasks and using lm_evaluation_harness for select text-based benchmarks. The evaluations were conducted using the following commands:
Evaluation Commands
Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
vllm serve neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 --tensor_parallel_size 2 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
Text-based Tasks
MMLU
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
MGSM
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
Accuracy
| Category | Metric | neuralmagic/Pixtral-Large-Instruct-2411-hf | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 | Recovery (%) |
|---|---|---|---|---|
| Vision | MMMU (val, CoT) explicit_prompt_relaxed_correctness | 63.56 | 60.56 | 95.28% |
| Vision | VQAv2 (val) vqa_match | 79.03 | 79.04 | 100.01% |
| Vision | DocVQA (val) anls | 89.55 | 89.00 | 99.39% |
| Vision | ChartQA (test, CoT) anywhere_in_answer_relaxed_correctness | 82.24 | 81.52 | 99.12% |
| Vision | Mathvista (testmini, CoT) explicit_prompt_relaxed_correctness | 67.30 | 66.60 | 98.96% |
| Vision | Average Score | 76.34 | 75.34 | 98.69% |
| Text | MGSM (CoT) | 76.05 | 75.09 | 98.74% |
| Text | MMLU (5-shot) | 82.80 | 82.25 | 99.33% |
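The recovery column is simply the quantized score expressed as a percentage of the unquantized baseline, for example:

```python
# Recovery (%) = quantized score / baseline score * 100, using the MMMU row above.
baseline, quantized = 63.56, 60.56
print(f"{quantized / baseline * 100:.2f}%")  # 95.28%
```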
Inference Performance
This model achieves up to 2.80x speedup in single-stream deployment and up to 1.70x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with vLLM version 0.7.2 and GuideLLM.
Benchmarking Command
```
guidellm --model neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```
Single-stream performance (measured with vLLM version 0.7.2)
| Hardware | Number of GPUs | Model | Average Cost Reduction | Document Visual Question Answering (1680W x 2240H, 64/128): Latency (s) | Document Visual Question Answering: Queries Per Dollar | Visual Reasoning (640W x 480H, 128/128): Latency (s) | Visual Reasoning: Queries Per Dollar | Image Captioning (480W x 360H, 0/128): Latency (s) | Image Captioning: Queries Per Dollar |
|---|---|---|---|---|---|---|---|---|---|
| A100 | 4 | neuralmagic/Pixtral-Large-Instruct-2411-hf | | 7.5 | 67 | 6.5 | 77 | 6.4 | 79 |
| A100 | 2 | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w8a8 | 1.86 | 8.1 | 124 | 7.1 | 142 | 6.8 | 148 |
| A100 | 2 | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 | 2.52 | 6.9 | 147 | 5.1 | 199 | 4.5 | 221 |
| H100 | 4 | neuralmagic/Pixtral-Large-Instruct-2411-hf | | 4.4 | 67 | 3.9 | 74 | 3.7 | 79 |
| H100 | 2 | neuralmagic/Pixtral-Large-Instruct-2411-hf-FP8-Dynamic | 1.82 | 4.7 | 120 | 4.1 | 137 | 3.9 | 145 |
| H100 | 2 | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 | 1.87 | 4.7 | 120 | 3.9 | 144 | 3.8 | 149 |
**Use case profiles:** Image Size (WxH) / prompt tokens / generation tokens
**QPD:** Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).
Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
| Hardware | Model | Average Cost Reduction | Document Visual Question Answering (1680W x 2240H, 64/128): Maximum throughput (QPS) | Document Visual Question Answering: Queries Per Dollar | Visual Reasoning (640W x 480H, 128/128): Maximum throughput (QPS) | Visual Reasoning: Queries Per Dollar | Image Captioning (480W x 360H, 0/128): Maximum throughput (QPS) | Image Captioning: Queries Per Dollar |
|---|---|---|---|---|---|---|---|---|
| A100x4 | neuralmagic/Pixtral-Large-Instruct-2411-hf | | 0.4 | 222 | 0.7 | 341 | 0.8 | 399 |
| A100x4 | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w8a8 | 1.70 | 1.6 | 766 | 2.2 | 1142 | 2.6 | 1348 |
| A100x4 | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 | 1.48 | 1.0 | 552 | 2.0 | 1010 | 2.8 | 1360 |
| H100x4 | neuralmagic/Pixtral-Large-Instruct-2411-hf | | 1.0 | 284 | 1.6 | 465 | 1.8 | 511 |
| H100x4 | neuralmagic/Pixtral-Large-Instruct-2411-hf-FP8-Dynamic | 1.61 | 3.4 | 905 | 5.2 | 1406 | 6.4 | 1759 |
| H100x4 | neuralmagic/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 | 1.33 | 2.8 | 761 | 4.4 | 1228 | 5.4 | 1480 |
**Use case profiles:** Image Size (WxH) / prompt tokens / generation tokens
**QPS:** Queries per second.
**QPD:** Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).
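For orientation, a queries-per-dollar figure can be derived from a measured throughput and an hourly GPU price roughly as follows (the price below is a hypothetical placeholder, not the Lambda Labs rate used for the tables above):

```python
# QPD = queries/second * 3600 / (number of GPUs * $ per GPU-hour).
qps = 2.8                   # example maximum throughput (queries per second)
num_gpus = 4                # GPUs in the deployment
price_per_gpu_hour = 1.00   # hypothetical placeholder $/GPU-hour
qpd = qps * 3600 / (num_gpus * price_per_gpu_hour)
print(f"~{qpd:.0f} queries per dollar")  # ~2520 with these placeholder numbers
```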