---
license: mit
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
library_name: transformers
---
# DeepSeek R1 AWQ

AWQ quantization of DeepSeek R1. This quant modifies some of the model code to fix an overflow issue when using float16.

To serve with vLLM on 8x 80GB GPUs, use the following command:

```sh
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-num-batched-tokens 65536 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.97 --dtype float16 --served-model-name deepseek-reasoner --model cognitivecomputations/DeepSeek-R1-AWQ
```

You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl).

Inference speed with batch size 1 and a short prompt:
- 8x H100: 48 TPS
- 8x A100: 38 TPS

Notes:
- Inference speed is better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.
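
For reference, here is a minimal sketch of querying the server started by the command above through its OpenAI-compatible API. It assumes the `openai` Python package is installed; since the example command does not pass `--api-key`, the API key below is just a placeholder.

```python
# Minimal client sketch for the vLLM server started above.
# Assumes: `pip install openai`; server running on localhost:12345 with no auth.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12345/v1",  # host/port from the serve command
    api_key="EMPTY",                       # placeholder; no --api-key was configured
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # matches --served-model-name
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```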