Minzhi Huang
ChloeHuang1
AI & ML interests
None yet
Recent Activity
new activity
about 1 month ago
Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ: Can this model be used with vLLM?
Organizations
None yet
ChloeHuang1's activity
vLLM error: Blockwise quantization only supports 16/32-bit floats, but got torch.uint8
6
#3 opened about 1 month ago
by
ChloeHuang1
Can this model be used with vLLM?
3
#2 opened about 1 month ago
by
ChloeHuang1