1.3B Model GGUF?
Hi Calcuis
Thank you very much for your work. The LTX GGUF r2 files are much better than before, and really good now. With the WAN model I struggled a lot with your GGUF version, but after mixing versions it is working now. My hardware is too weak for the big model, while the small WAN T2V model works really well. Normally there may be no need for a GGUF version, and perhaps it makes no sense, but I want to ask anyway: is there a plan for a GGUF version of the T2V 1.3B model? My hope is that if the model is smaller, I could increase the image size. But to be honest, I do not know whether that is true.
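For what it's worth, here is a back-of-the-envelope estimate of how much memory the 1.3B model's weights alone would take at different precisions. The fp16 figure is plain arithmetic; the q8_0/q4_0 bits-per-weight values are the approximate block-quantization costs used by GGUF-style formats, quoted here as assumptions. Real VRAM use also includes activations, the text encoder, the VAE, and framework overhead, so this is only a rough sketch of how much headroom quantization might free up for larger images.

```python
def weight_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N = 1.3e9  # parameter count of the 1.3B model

# q8_0 / q4_0 effective bit widths include the per-block scale factors
# (assumed values, typical for GGUF block quantization).
for name, bits in [("fp16", 16), ("q8_0", 8.5), ("q4_0", 4.5)]:
    print(f"{name}: ~{weight_size_gb(N, bits):.2f} GB")
```

So even in the best case, quantizing a 1.3B model saves on the order of one to two gigabytes of weight memory; whether that translates into a usefully larger image depends on how activation memory scales with resolution.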
1.3B seems to be a test model, not really ready for production; the tensors were already trimmed. You could try converting it with convertor zero first and see whether it works or not.
OK, I tested the zero convertor. It is really as simple as it could be, and fast too. The following video-generation run with the GGUF then finished with an error, and the GGUF file was slightly larger than the safetensors :-). But this converter is a really good thing when the returned file works. I was surprised that I could not set the quantization bit level; it chose one by itself. Anyway, thank you for your work.
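The "slightly larger than the safetensors" result is plausible when the conversion keeps the original precision: the tensor bytes are identical, but the GGUF container adds its own key-value metadata section and pads each tensor to an alignment boundary (32 bytes by default in the GGUF spec). The toy model below illustrates that overhead; the metadata size and tensor sizes are made-up numbers, and this is not the actual GGUF writer.

```python
def aligned(offset: int, alignment: int = 32) -> int:
    """Round offset up to the next alignment boundary."""
    return (offset + alignment - 1) // alignment * alignment

def toy_gguf_size(tensor_sizes: list[int], metadata_bytes: int) -> int:
    """Metadata section plus each tensor placed at an aligned offset."""
    total = metadata_bytes
    for size in tensor_sizes:
        total = aligned(total) + size
    return total

# Odd tensor sizes force alignment padding between tensors (illustrative).
tensors = [1_048_576 + 7, 524_288 + 3, 262_144 + 1]
payload = sum(tensors)
overhead = toy_gguf_size(tensors, metadata_bytes=4_096) - payload
print(f"container overhead: {overhead} bytes on a {payload}-byte payload")
```

The overhead is tiny relative to the payload, which matches the observation: without an actual bit-width reduction the GGUF comes out marginally larger, not smaller.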
it works for most of the common models, including encoders and decoders, but not all; some of them have a unique structure and compatibility problems, and might need extra editing work