When deploying the DeepSeek-R1-Q4_K_M model with llama.cpp, garbled characters appear in the server's responses.

#31
by KAMING

(screenshot of the garbled server response)

How can I fix this?

Are you using the correct chat template for DeepSeek?

Also, set the temperature to 0.6.
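For reference, here is a sketch of how both suggestions can be passed when launching `llama-server`. The model path is a placeholder, and the built-in template name `deepseek3` is an assumption; check `llama-server --help` and the model card for the exact values available in your build (recent llama.cpp builds can also pick up the chat template embedded in the GGUF automatically):

```shell
# Sketch only: adjust the model path, and verify the template name
# against the built-in templates listed by your llama.cpp build.
./llama-server \
  -m ./DeepSeek-R1-Q4_K_M.gguf \
  --chat-template deepseek3 \
  --temp 0.6
```

If the template is wrong or missing, the model receives raw text instead of the prompt format it was trained on, which is a common cause of garbled or nonsensical output.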

I followed the instructions on this model's page and the GPU installation guide on GitHub. During installation I didn't modify anything except the model path.
