When using llama.cpp to deploy the DeepSeek-R1-Q4_K_M model, garbled characters appear in the server's response.
#31 opened by KAMING
How do I solve this?
Are you using the correct chat template for DeepSeek?
Also, try setting the temperature to 0.6.
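For reference, a minimal `llama-server` launch along those lines might look like the sketch below. The model path and port are assumptions; `--chat-template` and `--temp` are real llama.cpp flags, but check which template name matches your model (GGUF files usually embed their own chat template, so omitting `--chat-template` and letting the model's built-in one apply is often the safest choice; forcing a mismatched template is a common cause of garbled output).

```shell
# Sketch only: model filename and port are assumptions, adjust to your setup.
# --temp 0.6 is the sampling temperature suggested above for DeepSeek-R1.
# --chat-template selects one of llama.cpp's built-in templates; verify the
# right name for your model variant (or drop the flag to use the template
# embedded in the GGUF file).
./llama-server \
  -m ./models/DeepSeek-R1-Q4_K_M.gguf \
  --chat-template deepseek \
  --temp 0.6 \
  --port 8080
```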