nm-research committed · Commit a9c8bbd (verified) · 1 Parent(s): 4c56500

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -281,14 +281,14 @@ lm_eval \
 ## Inference Performance
 
 
-This model achieves up to 1.6x speedup in both single-stream and multi-stream asynchronous deployment, depending on hardware and use-case scenario.
+This model achieves up to 2.6x speedup in single-stream deployment and up to 1.5x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
 The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
 
 <details>
 <summary>Benchmarking Command</summary>
 
 ```
-guidellm --model neuralmagic/DeepSeek-R1-Distill-Qwen-7B-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max-seconds 360 --backend aiohttp_server
+guidellm --model neuralmagic/DeepSeek-R1-Distill-Qwen-7B-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max-seconds 360 --backend aiohttp_server
 ```
 </details>
 
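For context, a concrete end-to-end run of the updated command might look like the sketch below. This is a hypothetical invocation, not part of the commit: it assumes a vLLM OpenAI-compatible server already running on localhost:8000, and the token counts (1000 prompt / 1000 generated) are illustrative values substituted for the `<prompt_tokens>`/`<generated_tokens>` placeholders.

```
# Hypothetical sketch, not from the commit; exact flags may vary across vLLM/GuideLLM versions.
# 1) Serve the quantized model with vLLM's OpenAI-compatible server on port 8000.
vllm serve neuralmagic/DeepSeek-R1-Distill-Qwen-7B-quantized.w4a16 --port 8000

# 2) Benchmark it with GuideLLM, using illustrative token counts in place of the placeholders.
guidellm --model neuralmagic/DeepSeek-R1-Distill-Qwen-7B-quantized.w4a16 \
  --target "http://localhost:8000/v1" \
  --data-type emulated \
  --data "prompt_tokens=1000,generated_tokens=1000" \
  --max-seconds 360 \
  --backend aiohttp_server
```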
294