---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A person in a bustling cafe pepe
output:
url: samples/1737602345329__000001000_0.jpg
- text: A pepe
output:
url: samples/pepe1.webp
- text: A pepe
output:
url: samples/pepe2.webp
- text: A pepe
output:
url: samples/pepe3.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: pepe
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pepe
<Gallery />
## Trigger words
You should use `pepe` to trigger the image generation.
## Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/openfree/pepe/tree/main) them in the Files & versions tab.
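If you prefer to fetch the weights programmatically rather than from the Files & versions tab, here is a minimal sketch using `huggingface_hub` (it assumes the LoRA file is named `pepe.safetensors`, as in the diffusers example below):
```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights from this repository into the local Hugging Face cache
# and return the local file path (filename assumed from the diffusers example below).
lora_path = hf_hub_download(repo_id="openfree/pepe", filename="pepe.safetensors")
print(lora_path)
```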
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')

# Load this LoRA's weights on top of the base model
pipeline.load_lora_weights('openfree/pepe', weight_name='pepe.safetensors')

# Include the trigger word `pepe` in the prompt
image = pipeline('A person in a bustling cafe pepe').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
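As a minimal sketch of adjusting how strongly the LoRA influences the output, you can fuse the loaded weights into the base model at a reduced scale (this assumes the `pipeline` from the example above; see the linked docs for the full set of options):
```py
# Fuse the LoRA into the base weights at reduced strength
# (lora_scale=1.0 is full strength; lower values weaken the effect)
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('A person in a bustling cafe pepe').images[0]
image.save("my_image_fused.png")
```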