apepkuss79 committed (verified)
Commit 8194ca6 · Parent: 2d04a04

Update README.md

Files changed (1)
  1. README.md (+8 −8)
README.md CHANGED
````diff
@@ -26,25 +26,25 @@ library_name: transformers
 
 <!-- - LlamaEdge version: [v0.14.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.3) -->
 
-<!-- - Prompt template
+- Prompt template
 
-  - Prompt type: `chatml`
+  - Prompt type: `deepseek-chat-25`
 
   - Prompt string
 
     ```text
-    <|begin▁of▁sentence|>{system_message}<|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end▁of▁sentence|><|User|>{user_message_2}
-    ``` -->
+    <|begin_of_sentence|>{system_message}<|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end_of_sentence|><|User|>{user_message_2}<|Assistant|>
+    ```
 
 - Context size: `128000`
 
-<!-- - Run as LlamaEdge service
+- Run as LlamaEdge service
 
   ```bash
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:DeepSeek-R1-Distill-Qwen-14B-Q5_K_M.gguf \
     llama-api-server.wasm \
     --model-name DeepSeek-R1-Distill-Qwen-14B \
-    --prompt-template chatml \
+    --prompt-template deepseek-chat-25 \
     --ctx-size 128000
   ```
 
@@ -53,9 +53,9 @@ library_name: transformers
   ```bash
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:DeepSeek-R1-Distill-Qwen-14B-Q5_K_M.gguf \
     llama-chat.wasm \
-    --prompt-template chatml \
+    --prompt-template deepseek-chat-25 \
     --ctx-size 128000
-  ``` -->
+  ```
 
 ## Quantized GGUF Models
 
````
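Once the `llama-api-server.wasm` command in the updated README is running, the service can be queried over HTTP. The sketch below is a minimal usage example, assuming the LlamaEdge API server listens on its default port `8080` and exposes the OpenAI-compatible `/v1/chat/completions` endpoint; the port, endpoint, and sample messages are assumptions for illustration and are not part of this commit.

```bash
# Query the running LlamaEdge API server (assumed default port 8080,
# OpenAI-compatible chat completions endpoint).
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "DeepSeek-R1-Distill-Qwen-14B",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```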