Does DeepSeek-Llama-70B support tensor parallelism for multi-GPU inference?

#14 by Merk0701234

Hi everyone,

I am planning to run DeepSeek-Llama-70B on a system with 4x RTX 5090 GPUs (without NVLink). Since the model requires more VRAM than a single GPU can provide, I need to split it across multiple GPUs using tensor parallelism or another efficient method.

Does DeepSeek-Llama-70B natively support tensor parallelism with frameworks like DeepSpeed, Megatron-LM, or vLLM?

Has anyone successfully run this model on multiple GPUs without NVLink?

What would be the best approach to optimize inference speed and memory usage in this setup?

Thanks in advance for your insights!

From my experience you can run it with a similar configuration using vLLM (v0.6.5) without any issue (I run it on an EC2 instance with 4x A10G). Just adapt max_model_len and gpu_memory_utilization to your needs:

engine_args:
  model: DeepSeek-R1-Distill-Llama-70B-AWQ  # AWQ-quantized weights need far less VRAM than FP16
  tensor_parallel_size: 4                   # shard the model across all 4 GPUs
  max_num_batched_tokens: 8192              # cap on tokens processed per scheduler step
  max_num_seqs: 40                          # cap on concurrent sequences in a batch
  dtype: "float16"
  max_model_len: 80000                      # maximum context length; lower it to free KV-cache memory
  gpu_memory_utilization: 0.85              # fraction of each GPU's VRAM vLLM may allocate
  enable_prefix_caching: True               # reuse KV cache for shared prompt prefixes
  served_model_name: deepseek-llama-awq     # name exposed on the OpenAI-compatible endpoint
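
If you are not using a YAML-driven serving wrapper, here is a minimal sketch of the same settings through vLLM's offline Python API. The model path and the prompt are placeholders (assumptions, not from the config above); point `model` at your AWQ checkpoint directory or Hub repo id:

from vllm import LLM, SamplingParams

# Placeholder path: replace with your local AWQ checkpoint or Hub repo id.
llm = LLM(
    model="DeepSeek-R1-Distill-Llama-70B-AWQ",
    tensor_parallel_size=4,          # shard the weights across the 4 GPUs
    dtype="float16",
    max_model_len=80000,             # reduce this if you run out of KV-cache memory
    gpu_memory_utilization=0.85,
    enable_prefix_caching=True,
    max_num_batched_tokens=8192,
    max_num_seqs=40,
)

# Placeholder prompt, just to check the multi-GPU setup works.
outputs = llm.generate(
    ["Explain tensor parallelism in one paragraph."],
    SamplingParams(temperature=0.6, max_tokens=256),
)
print(outputs[0].outputs[0].text)

The same arguments map onto the vllm serve CLI flags (e.g. --tensor-parallel-size 4, --gpu-memory-utilization 0.85) if you want an OpenAI-compatible endpoint instead of offline generation.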
