---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- concept
- screaming
- sad
- expressions
- helper
- tools
- toolkit
- happy
- emotions
- realistic
- angry
- cry
- pose
- tears
- real life
- surprised
- expression
- scared
- disgusted
- realisticlora
- tool lora
- concepts
- poses and concept
- photography
- expression helper
- expression helper 2.0
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: sad
widget:
- text: ' '
  output:
    url: >-
      38100980.jpeg
- text: ' '
  output:
    url: >-
      38101154.jpeg
- text: ' '
  output:
    url: >-
      38101139.jpeg
- text: ' '
  output:
    url: >-
      38101151.jpeg
- text: ' '
  output:
    url: >-
      38101150.jpeg
- text: ' '
  output:
    url: >-
      38101152.jpeg
- text: ' '
  output:
    url: >-
      38101155.jpeg
- text: ' '
  output:
    url: >-
      38101270.jpeg
- text: ' '
  output:
    url: >-
      38101273.jpeg
- text: ' '
  output:
    url: >-
      38101268.jpeg
- text: ' '
  output:
    url: >-
      38101271.jpeg
- text: ' '
  output:
    url: >-
      38101269.jpeg
- text: ' '
  output:
    url: >-
      38101274.jpeg
- text: ' '
  output:
    url: >-
      38101272.jpeg
- text: ' '
  output:
    url: >-
      38106684.jpeg
- text: ' '
  output:
    url: >-
      38106688.jpeg
- text: ' '
  output:
    url: >-
      38106685.jpeg
- text: ' '
  output:
    url: >-
      38106686.jpeg
---

# Expression Helper 2.0

## Model description


I implemented these facial expressions for you: sad, happy, angry, surprised, scared, and disgusted.

You can use tags and captions together or individually; my advice is to put the expression tag in parentheses before the caption.

You can also strengthen an expression by adding "very" to the tag in parentheses, and by appending extra tags after the caption, such as "open mouth", "scream", "smile", "laugh", "terrified", etc.


A tip: when prompting Flux (in general, not only with my LoRA), do not overdo it by using only tags. Tags must be used wisely and should only mark concepts that you have not managed to obtain with the caption alone.
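As a concrete illustration of the recommended structure, here is a minimal sketch of prompts built as "(expression tag) caption, extra tags"; the captions themselves are hypothetical examples I made up, not prompts from the training data:

```py
# Hypothetical prompts following the "(expression tag) caption, extra tags" pattern.
prompts = [
    "(sad) portrait photo of a young woman in a dim room, tears, cry",
    "(very angry) close-up photo of a middle-aged man shouting, open mouth, scream",
    "(very happy) candid photo of a girl at a birthday party, smile, laugh",
]
```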

### Does it work with anime?

Somewhat, but it was not designed for expressions in that style. If this model is appreciated and users want it, I can also make an Anime Expression Helper.

### Examples

Example images for each expression are shown in the gallery above: sad, angry, scared, happy, surprised, and disgusted.

## Trigger words

You should use `sad`, `very sad`, `happy`, `very happy`, `angry`, `very angry`, `surprised`, `open mouth`, `scared`, `very scared`, `disgusted`, `very disgusted` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Keltezaa/expression-helper-2-0/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to(device)
pipeline.load_lora_weights(
    'Keltezaa/expression-helper-2-0', weight_name='expression_helper2.0.safetensors'
)
# Put a trigger word in parentheses before the caption, per the usage notes above.
image = pipeline('(very sad) portrait photo of a woman, tears, cry').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
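For instance, if the expression comes out too strong or too weak, diffusers lets you rescale a loaded adapter before generating. A minimal sketch, assuming the `peft` integration is installed; the adapter name `expressions` and the 0.8 scale are illustrative choices, not values shipped with this repository:

```py
# Load the LoRA under an explicit adapter name so it can be rescaled later.
pipeline.load_lora_weights(
    'Keltezaa/expression-helper-2-0',
    weight_name='expression_helper2.0.safetensors',
    adapter_name='expressions',  # hypothetical name chosen for this example
)
# Scale the adapter to 80% strength (1.0 = full effect).
pipeline.set_adapters(['expressions'], adapter_weights=[0.8])
image = pipeline('(very scared) portrait photo of a man, open mouth').images[0]
```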