flux compatibility

#10
by xi0v - opened

Seems to error out when using Flux models. Is there a recommended precision for the Flux-based model + text encoders (T5-XXL/CLIP)?
Also, to avoid generations timing out and eating into the quota, you should add an option to download the model (Diffusers format or a single-file .safetensors) before pressing the Run button, so it can be loaded automatically without timing out.
And I believe LoRAs are still broken, so if you manage to fix them, you should support Flux too.

to avoid generations timing out and eating into the quota, you should add an option to download the model (Diffusers format or a single-file .safetensors) before pressing the Run button, so it can be loaded automatically without timing out.

When the Zero GPU Spaces were at their buggiest, they used to consume quota while loading models, but model loading now runs on the CPU, so no worries there. If single-safetensors-file loading is going to happen, I think it would be better to have r3gm support it at the library level.
Right now the program is built on the assumption that models are loaded from an HF repo, so changing this would require a lot of branching, and if I allowed local models I'd have to worry about disk space in Spaces. I don't know when I'd call the cleanup process...
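For reference, the pre-download idea could be sketched as a simple local cache check, stdlib only. Every name and path here is hypothetical, and real Spaces code would still need the disk-space/cleanup handling mentioned above:

```python
# Sketch of the "download before Run" idea: cache the checkpoint locally
# so generation doesn't spend its GPU-time window (and quota) on the
# download. All names and paths here are hypothetical.
import os
import urllib.request

def ensure_local(url: str, cache_dir: str = "models") -> str:
    """Return a local path for `url`, downloading only on a cache miss."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):  # cache hit costs no network time
        urllib.request.urlretrieve(url, path)
    return path
```

Called before the Run button's handler, this would make the later load step purely local.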

And I believe LoRAs are still broken, so if you manage to fix them,

Oh, seriously? I have to debug.

Seems to error out when using Flux models,

Maybe 59 seconds isn't enough. Actually, there is an internal switching function, but I haven't exposed it in the GUI yet. If it doesn't work in DiffuseCraft either, it might be a bug in the original DiffuseCraft or in stablepy, since Flux support is still in beta.

This space is currently better suited for Flux, and it generates without problems even when no LoRA is specified.
https://huggingface.co/spaces/John6666/flux-lora-the-explorer

LoRAs are still broken

I fixed it!
The cause was terrible, man. I forgot to change some lowercase letters to uppercase when I changed the function signature.
A compiled language would have given me an error, but scripting languages let these things slip through...
(screenshot: image.png)
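For illustration, the class of bug described (a lowercase/uppercase mismatch that a compiler would have caught) looks like this in Python; this is a made-up example, not the actual DiffuseCraft code:

```python
# Illustration of the bug class described above: Python only notices a
# wrong-case name at runtime, where a compiled language would refuse to
# build. The function and key names are invented for the example.
def apply_lora(weights: dict, scale_key: str = "Scale") -> float:
    return weights[scale_key]  # oops: the dict actually uses lowercase keys

weights = {"scale": 0.8}

try:
    apply_lora(weights)  # KeyError('Scale') surfaces only when called
except KeyError as exc:
    print(f"runtime failure: {exc}")

print(apply_lora(weights, scale_key="scale"))  # works after the case fix
```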

This space is currently better suited for Flux, and it generates without problems even when no LoRA is specified.
https://huggingface.co/spaces/John6666/flux-lora-the-explorer

Oh great. Thank you!

LoRAs are still broken

I fixed it!

Thanks for the fix!

xi0v changed discussion status to closed

The Flux problem turned out to be a genuine issue in DiffuseCraft itself rather than my mistake, so I reported it to r3gm.

Still, I'm struggling to resolve SuperMerger's dependency on WebUI components.
At first I thought it was Python misresolving paths on import, as often happens, but maybe not.
Is there a conflict between the name of one particular module, modules.timer, and the name of some other module...?
I've limited sys.path to a single search path, and the error still occurs...
Do you know of any common mistakes or workarounds in these cases?
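One common way to debug this kind of shadowing is to ask importlib which file a name actually resolves to; a small stdlib-only sketch (the module names come from the discussion, the helper itself is mine):

```python
# Sketch: diagnosing a name collision like `modules.timer`.
# importlib reports which file a module name actually resolves to,
# which is usually the quickest way to spot a shadowed package.
import importlib.util
from typing import Optional

def where(name: str) -> Optional[str]:
    """Return the file a module name would be imported from, or None."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None) if spec else None

# For the WebUI case you would compare where("modules") and
# where("modules.timer") against the paths you expected, alongside the
# current sys.path order. Demonstrated here with a stdlib package:
print(where("json"))
```

If `where()` points at an unexpected `modules` package, that directory is shadowing the intended one and sys.path order is the culprit.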

Well, there is a way to get WebUI itself running somehow and use SuperMerger as a plugin, but then there are the issues of compatibility with the Zero GPU Spaces and ease of modification.

If the workaround doesn't pan out, I'll try to extract just the necessary code, but I have a busy schedule tomorrow, so it will be the day after tomorrow.

The Flux problem turned out to be a genuine issue in DiffuseCraft itself rather than my mistake, so I reported it to r3gm.

Great, hopefully he'll implement it in DiffuseCraft.

Do you know of any common mistakes or workarounds in these cases?

I did some light digging in A1111's source code and found the modules that are probably the reason this isn't working at all.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/82a973c04367123ae98bd9abdf80d9eda9b910e2/modules
Most of these modules are useless to us. The timer module is here https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/82a973c04367123ae98bd9abdf80d9eda9b910e2/modules/timer.py
Seems like we'll be able to do something with these?

Well, there is a way to get WebUI to work somehow and use SuperMerger as a plugin

If making SuperMerger standalone doesn't work, then we should do this instead.

but then there are the issues of compatibility with the Zero GPU Spaces and ease of modification.

Yeah, if only Zero GPU didn't require the decorator and a specific Gradio version.
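A pattern often used to cope with this, as a sketch: `spaces.GPU` is the real Zero GPU decorator, while the fallback and the function names here are illustrative:

```python
# Sketch: living with the Zero GPU decorator requirement. The `spaces`
# package exists on Hugging Face Spaces; elsewhere we fall back to a
# no-op decorator so the same code runs in both environments.
try:
    import spaces
    gpu = spaces.GPU  # the real Zero GPU decorator
except ImportError:
    def gpu(func=None, **kwargs):
        """No-op stand-in supporting both @gpu and @gpu(duration=...)."""
        if func is None:
            return lambda f: f
        return func

@gpu
def generate(prompt: str) -> str:
    # placeholder for the real generation call
    return f"generated: {prompt}"

@gpu(duration=60)
def generate_long(prompt: str) -> str:
    # placeholder for a longer-running job
    return prompt.upper()
```

This doesn't solve the Gradio version pinning, but it does keep the decorator from blocking local runs.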

If the workaround doesn't seem to work, I'll try to cut out the necessary code, but I have a busy schedule tomorrow, so it will be the day after tomorrow.

Sure!
Take your time.

It's a tough job for the library authors, but some problems would only get worse if they weren't left to the library authors.
Maybe this time some process or dependency inside the library conflicts with the Zero GPU Spaces specification, but it would be better for the library to support GPU-variable environments.

Most of these modules are useless to us.

Exactly. We just want to merge; we don't need the rest.
However, quite a few modules sit one step short of a circular import, so simply omitting them seems risky. I'm now thinking of exploring a folder structure that lets the imports succeed, using Forge's all-in-one package (the roughly 4 GB one) as a reference.

Seems like we'll be able to do something with these?

We agreed that the module itself isn't very important, so even in a worst-case scenario, if we just copy the methods out of modules and paste them into the appropriate .py files in the root, it would work. Setting aside whether the work is easy, it's not difficult to achieve in terms of logic.
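As a sketch of that copy-and-vendor approach, a minimal stand-in for the timer dependency might look like this; the interface is approximated from typical usage, not copied verbatim from A1111:

```python
# Sketch of the copy-and-vendor approach: a minimal stand-in for the
# WebUI `modules.timer` dependency. The interface below is approximated
# from typical usage, not a verbatim copy of A1111's timer.py.
import time

class Timer:
    """Tracks elapsed time per named phase (load, merge, save, ...)."""

    def __init__(self) -> None:
        self.start = time.time()
        self.records = {}

    def record(self, category: str) -> None:
        """Log the time elapsed since the last record under `category`."""
        now = time.time()
        self.records[category] = self.records.get(category, 0.0) + (now - self.start)
        self.start = now

    def summary(self) -> str:
        """Human-readable per-phase timing report."""
        return ", ".join(f"{name}: {secs:.2f}s" for name, secs in self.records.items())
```

Dropping such stand-ins into root-level .py files would sever the dependency on the real `modules` package entirely.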

If making supermerger standalone doesn't work then we should do this instead.

Yes. The stand-alone version is still Plan A.
