Inconsistent token counts between the vLLM API and transformers.AutoTokenizer
#12 opened by liguoyu3564
Hello, I am using vLLM to deploy an inference service. The usage.prompt_tokens value returned by the /v1/chat/completions endpoint is inconsistent with the token count obtained with transformers.AutoTokenizer. The test procedure is as follows:
vLLM startup command:
docker run -itd \
--name deepseek-awq \
--network host \
--shm-size=1024m \
--gpus all \
-v $(pwd):/app \
--entrypoint "bash" \
docker.1ms.run/vllm/vllm-openai:v0.7.2 \
-c " python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 --enable_prefix_caching --max-model-len 65536 --trust-remote-code --tensor-parallel-size 8 --quantization moe_wna16 --gpu-memory-util 0.97 --kv-cache-dtype fp8_e5m2 --calculate-kv-scales --served-model-name deepseek-awq --enable-chunked-prefill --model /app/cognitivecomputations/DeepSeek-R1-awq"
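Once the container is up, a quick sanity check (a minimal sketch; it assumes the server is reachable at the same host, port, and key used in the request below) is to query the OpenAI-compatible /v1/models endpoint:
# pip3 install requests
import requests

# Ask the vLLM OpenAI-compatible server which models it serves;
# "deepseek-awq" from --served-model-name should appear in the "data" list.
resp = requests.get(
    "http://10.1.30.59:8000/v1/models",
    headers={"Authorization": "Bearer sk-ddddddddddddddd"},
)
print(resp.json())
The chat completion request: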
curl --request POST \
--url http://10.1.30.59:8000/v1/chat/completions \
--header 'Authorization: Bearer sk-ddddddddddddddd' \
--header 'Content-Type: application/json' \
--data '{
"messages": [
{
"role": "user",
"content": "what is you name"
}
],
"stream": false,
"model": "deepseek-awq",
"temperature": 0.5,
"presence_penalty": 0,
"frequency_penalty": 0,
"top_p": 1
}'
Response:
"usage": {
"prompt_tokens": 7,
"total_tokens": 120,
"completion_tokens": 113,
"prompt_tokens_details": null
},
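For reference, the same request can be issued with the openai Python client (a sketch assuming the openai package is installed; the base_url and key mirror the curl call above), which exposes the same usage object:
# pip3 install openai
from openai import OpenAI

client = OpenAI(base_url="http://10.1.30.59:8000/v1", api_key="sk-ddddddddddddddd")
response = client.chat.completions.create(
    model="deepseek-awq",
    messages=[{"role": "user", "content": "what is you name"}],
    temperature=0.5,
    top_p=1,
    presence_penalty=0,
    frequency_penalty=0,
)
# prompt_tokens / completion_tokens / total_tokens as counted by the server
print(response.usage)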
Python code (deepseek_tokenizer.py), run from the model directory:
# pip3 install transformers
# python3 deepseek_tokenizer.py
import transformers

# Tokenizer files are loaded from the model directory the script is run in.
chat_tokenizer_dir = "./"
tokenizer = transformers.AutoTokenizer.from_pretrained(
    chat_tokenizer_dir, trust_remote_code=True
)
result = tokenizer.encode("what is you name")
print(result)  # token ids for the raw string (add_special_tokens=True by default)
root@A100-GPU-59:/app/cognitivecomputations/DeepSeek-R1-awq# python3 deepseek_tokenizer.py
[0, 9602, 344, 440, 2329]
Both use the same tokenizer configuration, yet the API reports prompt_tokens = 7 while tokenizer.encode produces 5 ids.
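One way to make the comparison apples-to-apples (a sketch, assuming the tokenizer shipped with the model includes a chat template) is to tokenize the chat-templated prompt that /v1/chat/completions actually feeds to the model, rather than the raw user string:
# pip3 install transformers
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("./", trust_remote_code=True)

messages = [{"role": "user", "content": "what is you name"}]
# apply_chat_template wraps the message in the model's chat markup (role tags,
# generation prompt), which adds tokens beyond those of the raw text.
ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True)
print(ids, len(ids))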