Perplexity comparison results
#37 opened about 9 hours ago by inputout
Is the Q2_K_XL model the best? IQ2_XXS beats Q2_K_XL on the MMLU-Pro benchmark
3
#36 opened about 22 hours ago by albertchow
Long-form input takes too long
#35 opened 4 days ago by htkim27
Which is better, Q2_K_XL or Q4?
1
#34 opened 5 days ago by jializou
Is it uncensored?
2
#33 opened 6 days ago by Morrigan-Ship
Cannot Run `unsloth/DeepSeek-R1-GGUF` Model – Missing `configuration_deepseek.py`
2
#32 opened 10 days ago by syrys4750
When using llama.cpp to deploy the DeepSeek-R1-Q4_K_M model, garbled characters appear in the server's response.
3
#31 opened 11 days ago by KAMING
How do the various quantized versions perform on different evaluation datasets? Are there any concrete test results?
2
#29 opened 11 days ago by huanfa
When used with Ollama, does it support kv_cache_type=q4_0 and flash_attention=1?
2
#28 opened 13 days ago by leonzy04
How to handle multiple HTTP requests concurrently
4
#27 opened 13 days ago by 007hao
Poor inference quality after merging the IQ1_S model and deploying it on Ollama
2
#26 opened 13 days ago by gaozj
The model seems to have been fine-tuned
1
#25 opened 13 days ago by mogazheng
What is the base precision type (FP32/FP16) used in Q2/Q1 quantization?
#23 opened 15 days ago by ArYuZzz1
Any benchmark results?
2
#22 opened 16 days ago by Wei-Wu
Accuracy of the dynamic quants compared to the usual quants?
19
#21 opened 16 days ago by inputout
8-bit quantization
5
#20 opened 17 days ago by ramkumarkoppu
New research paper: R1-type reasoning models can be drastically improved in quality
2
#19 opened 20 days ago by krustik
MD5 / SHA-256 hashes, please
1
#18 opened 22 days ago by ivanvolosyuk
Is there a model with the non-shared MoE experts removed?
4
#17 opened 23 days ago by ghostplant
A step-by-step deployment guide with Ollama
4
#16 opened 24 days ago by snowkylin
No think tokens visible
6
#15 opened 24 days ago by sudkamath
Over 2 tok/sec aggregate, backed by an NVMe SSD, on a 96 GB RAM + 24 GB VRAM AM5 rig with llama.cpp
9
#13 opened 25 days ago by ubergarm
Running the model with vLLM does not actually work
8
#12 opened 25 days ago by aikitoria
DeepSeek-R1-GGUF not available on LM Studio
2
#11 opened 25 days ago by 32SkyDive
Where did the BF16 come from?
8
#10 opened 25 days ago by gshpychka
Inference speed
2
#9 opened 26 days ago by Iker
Running this model using vLLM Docker
4
#8 opened 26 days ago by moficodes
UD-IQ1_M models for distilled R1 versions?
3
#6 opened 27 days ago by SamPurkis
Llama.cpp server chat template
5
#4 opened 29 days ago by softwareweaver
Are the Q4 and Q5 models R1 or R1-Zero?
18
#2 opened about 1 month ago by gng2info
What is the VRAM requirement to run this?
5
#1 opened about 1 month ago by RageshAntony