Model Config Error on vLLM?

#3 by rodMetal

Anyone getting this error?

"ValueError: No supported config format found in nvidia/Llama-3.1-Nemotron-70B-Instruct"

I was getting the same error. I switched to this repo instead and it works just fine:

https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF

Not sure what the real difference between the two repos is, but the -HF one has the weights in Hugging Face format (safetensors plus a config.json), which seems to be what vLLM expects.
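For reference, a minimal sketch of loading the -HF repo with vLLM's offline API. The tensor_parallel_size, sampling settings, and prompt are assumptions; adjust them to your hardware (a 70B model won't fit on a single GPU without quantization).

```python
from vllm import LLM, SamplingParams

# Point vLLM at the -HF repo, which ships the Hugging Face-format config and safetensors.
llm = LLM(
    model="nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
    tensor_parallel_size=4,  # assumption: shard across 4 GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["How many r's are in the word strawberry?"], params)
print(outputs[0].outputs[0].text)
```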
