---
license: mit
language:
  - en
  - zh
base_model:
  - deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
library_name: transformers
---

# DeepSeek R1 AWQ

AWQ of DeepSeek R1.

This quant includes modifications to the model code that fix an overflow issue when running inference in float16.
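
The actual change lives in this repository's modeling code; purely as an illustration of the general technique (not the model's exact patch), keeping float16 activations inside the representable range looks roughly like this:

```python
import torch

def clamp_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    """Illustrative only: clamp float16 activations so large intermediate
    values do not overflow to inf and turn into NaNs downstream."""
    if hidden_states.dtype == torch.float16:
        limit = torch.finfo(torch.float16).max - 1000.0
        hidden_states = torch.clamp(hidden_states, min=-limit, max=limit)
    return hidden_states
```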

To serve using vLLM with 8x 80GB GPUs, use the following command:

```bash
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 12345 \
    --max-model-len 65536 \
    --max-num-batched-tokens 65536 \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --gpu-memory-utilization 0.97 \
    --dtype float16 \
    --served-model-name deepseek-reasoner \
    --model cognitivecomputations/DeepSeek-R1-AWQ
```
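
Once the server is up, it exposes the standard OpenAI-compatible API. A minimal request sketch reusing the port and `--served-model-name` from the command above (assumes the `openai` Python package is installed; adjust the host if the client runs on another machine):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not check the API key by default.
client = OpenAI(base_url="http://localhost:12345/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # must match --served-model-name
    messages=[{"role": "user", "content": "Explain AWQ quantization in one paragraph."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```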

You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking here.

Inference speed with batch size 1 and a short prompt (a rough reproduction sketch follows the list):

- 8x H100: 48 TPS
- 8x A100: 38 TPS
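
A simple way to approximate a single-request figure is to stream a completion and divide the number of streamed chunks (roughly one token each) by the wall-clock time. This is a hedged sketch reusing the client setup from the earlier example, not the exact benchmark behind the numbers above:

```python
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:12345/v1", api_key="EMPTY")

start = time.perf_counter()
stream = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Write a short poem about quantization."}],
    max_tokens=256,
    stream=True,
)

chunks = 0
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        chunks += 1  # each streamed chunk carries roughly one token

elapsed = time.perf_counter() - start
print(f"~{chunks / elapsed:.1f} TPS (includes prompt processing time)")
```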

Note:

- Inference speed is better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.