https://huggingface.co/ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-F16
This is the latest GGUF in this repo; I hope you can improve it using i1 quantization. Thanks for your efforts! Thank you 🥰:
What a mess. I'll think about what I can do about this.
If at all possible, could you make a copy of that repo with just that file, so we have something to point at? I don't think Hugging Face supports this kind of set-up, and even if it did, our scripts don't :)
OK, I will make a copy and edit it.
@btaskel Thanks a lot. Please use https://huggingface.co/spaces/huggingface-projects/repo_duplicator and clone the model to your account once for every GGUF you want use to quantize. Take a look at the discussion in https://huggingface.co/mradermacher/model_requests/discussions/675#67b1563e1ccbf9111bcb46d8 if you want to know why this is required for us.
Oh, it's that guy... Yes, making a copy and deleting the extra files would be perfectly fine. It's still manual work for us, but we can point at that upstream repo, and it's clear what it is pointing at.
I understand your confusion. This is the repository I copied to. Thank you! 🤗
https://huggingface.co/btaskel/Tifa-DeepsexV2-7b-MGRPO-safetensors
Wow, you even went through the effort of converting the GGUF back to safetensors! Thanks a lot, that is awesome! It's queued! :D
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Tifa-DeepsexV2-7b-MGRPO-safetensors-GGUF for quants to appear.
Wow indeed