apepkuss79 committed
Commit 26affb3 · verified · Parent: e5b008f

Update README.md

Files changed (1): README.md (+16 −15)

README.md CHANGED
@@ -66,7 +66,7 @@ tags:
   --ctx-size 131072
   ```
 
-<!-- ## Quantized GGUF Models
+## Quantized GGUF Models
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
@@ -77,21 +77,22 @@ tags:
 | [Qwen2.5-72B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 41.2 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
 | [Qwen2.5-72B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 47.4 GB| medium, balanced quality - recommended |
 | [Qwen2.5-72B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 43.9 GB| small, greater quality loss |
-| [Qwen2.5-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 32.2 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Qwen2.5-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 18 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Qwen2.5-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 30.0 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Qwen2.5-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 20.2 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
 | [Qwen2.5-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 29.9 GB| large, very low quality loss - recommended |
 | [Qwen2.5-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 24.6 GB| large, very low quality loss - recommended |
-| [Qwen2.5-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB| large, low quality loss - recommended |
-| [Qwen2.5-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB| large, low quality loss - recommended |
-| [Qwen2.5-72B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 32.2 GB| very large, extremely low quality loss |
-| [Qwen2.5-72B-Instruct-Q6_K-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q6_K-00002-of-00002.gguf) | Q6_K | 6 | 32.2 GB| very large, extremely low quality loss |
-| [Qwen2.5-72B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
-| [Qwen2.5-72B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
-| [Qwen2.5-72B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
-| [Qwen2.5-72B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 31.9 GB| |
-| [Qwen2.5-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 32.1 GB| |
-| [Qwen2.5-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 32.1 GB| |
-| [Qwen2.5-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 32.1 GB| |
-| [Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 17.3 GB| | -->
+| [Qwen2.5-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 29.9 GB| large, low quality loss - recommended |
+| [Qwen2.5-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 21.4 GB| large, low quality loss - recommended |
+| [Qwen2.5-72B-Instruct-Q6_K-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q6_K-00001-of-00003.gguf) | Q6_K | 6 | 29.8 GB| very large, extremely low quality loss |
+| [Qwen2.5-72B-Instruct-Q6_K-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q6_K-00002-of-00003.gguf) | Q6_K | 6 | 29.8 GB| very large, extremely low quality loss |
+| [Qwen2.5-72B-Instruct-Q6_K-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q6_K-00003-of-00003.gguf) | Q6_K | 6 | 4.66 GB| very large, extremely low quality loss |
+| [Qwen2.5-72B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 29.8 GB| very large, extremely low quality loss - not recommended |
+| [Qwen2.5-72B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 29.8 GB| very large, extremely low quality loss - not recommended |
+| [Qwen2.5-72B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 17.6 GB| very large, extremely low quality loss - not recommended |
+| [Qwen2.5-72B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 29.8 GB| |
+| [Qwen2.5-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 29.7 GB| |
+| [Qwen2.5-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 29.5 GB| |
+| [Qwen2.5-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 29.8 GB| |
+| [Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 26.6 GB| |
 
 *Quantized with llama.cpp b3751*
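The split quants in the table follow llama.cpp's shard naming convention, `<base>-NNNNN-of-MMMMM.gguf`, with 1-based, zero-padded five-digit indices. As a minimal sketch of that convention (the helper below is hypothetical, not part of this repo), the full set of shard filenames for a quant can be enumerated so every part is fetched together:

```python
def shard_names(base: str, total: int) -> list[str]:
    """Expected filenames for a GGUF model split into `total` shards,
    following llama.cpp's NNNNN-of-MMMMM convention (1-based, zero-padded)."""
    return [f"{base}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]

# e.g. the two Q5_K_M shards listed in the table above:
for name in shard_names("Qwen2.5-72B-Instruct-Q5_K_M", 2):
    print(name)
```

All shards of a quant must sit in the same directory; llama.cpp is then pointed at the first shard (`...-00001-of-...gguf`) and picks up the rest by this naming pattern.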