
Free image upscaler: how to use Facelift in Stable Diffusion

Improve image quality with upscalers

Our software offers a variety of ways to bump the quality beyond Stable Diffusion’s default 512×512, like Adetailer, HighDef, and Facelift Upscaler 2X and 4X with specialized photo and anime models.

If you’re using Stable Diffusion 1.5 models, why not try Adetailer?

After Detailer for SD15 is the most modern of the three upscale methods; it pinpoints faces and hands to boost trouble spots. Try Adetailer first, then follow up with the other upscaling techniques below.

How to use HighDef

To use HighDef, the input must be rendered at the standard 512×512. To boost it up to 1024×1024 and see two variations of the effect, simply reply to the image with /highdef

This can only be done once, and it will return two variations of the same image. It doesn’t work with 768×768 images, which are already upsampled. The longer way to do it is to type /more /size:1024x1024 /images:2, which does the same thing.
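For example, after a standard 512×512 render finishes, either of these replies produces the same pair of 1024×1024 variations:

/highdef

/more /size:1024x1024 /images:2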

You can then boost the pixels further with Facelift, explained below.

How to use Facelift

To use it, simply reply to an image with /facelift and you’re done. You can only upscale one image at a time, but the usage isn’t metered or limited in any way. You have unlimited facelifts, so feel free to upscale everything all day long.

The /facelift command fixes faces (glitches, imperfections, rough and oily skin) as well as upscales your picture. Sometimes you don’t want faces retouched, so there are two other modes:

/facelift /anime — for any kind of illustrated work, pass the /anime secondary command. It works great on drawings, paintings, and CGI, including semi-realistic CGI.

/facelift /photo — when you want upscaling but not face retouching, such as a landscape photo, or when you don’t want your character to have that “airbrushed” look.

FACELIFT STRENGTH

Control the amount of facelifting by adding the strength parameter. The strength is set to 1 by default.

/facelift /strength:0.5
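As a minimal sketch, assuming the mode and strength parameters can be combined in a single reply, a gentler photo-mode facelift would look like this:

/facelift /photo /strength:0.5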

 

Intentional loss of detail

Facelift uses technology similar to the “beauty face” mode on a smartphone camera to smooth fine details. This can sometimes result in overly smooth faces or the loss of unique facial details, like freckles or birthmarks. You can offset this by targeting an ideal render from the start (see the first paragraph).

But more often than not, the results are really impressive.

Typical render:

Upscaled image:

 

The “more” method

Reply with:

/more /size:1300x1300 /images:2

This feature is often misunderstood, so we recorded an entire one-hour video podcast on how to use it.

 


How to use After Detailer (Adetailer) in Stable Diffusion

How to use After Detailer (Adetailer)

After Detailer works well for both illustrations and realistic photographs.

IMPORTANT – IT ONLY WORKS WITH STABLE DIFFUSION 1.5 MODELS 

There are multiple tools in PirateDiffusion to boost your quality, and Adetailer is a great, easy one to start with. Also see the general upscaling methods covered in these videos: the /more tool, HighDef (high-res fix), and Facelift’s various modes.

Stable Diffusion “After Detailer”, commonly known as Adetailer, is an automatic touch-up tool similar to Facelift. As the name implies, the image is created first, and then the software hunts down trouble spots and automatically improves them. While Adetailer has many modes, in our testing we’ve found that the most useful and effective ones deal with faces and hands.

Usage

/render /adetailer and your prompt goes here

Simply add the /adetailer tag after /render in your prompt. At the time of writing, /adetailer is ignored on /remix, but we hope to add it to img2img functions like inpaint in time.

What’s going on technically

After Detailer is like a tiny program with its own mini-models. The Stable Diffusion community is creating bite-sized models that fix common trouble spots like hands, faces, and other body parts. In our first release, we are using the largest “YOLO FACE” and “YOLO HANDS” models to fix these two common problems. The software scans the image for faces and hands and redraws each problem spot when it is at least 70% confident that it has found a match. The confidence score is not adjustable at this time, but it works as intended almost every time, except with unusually complicated compositions.

We will continue to evolve our implementation of Adetailer and add more models because we’re here for the enthusiasts, and we know you want more control. We’re with you. Until then, enjoy this beta release.

When to use Adetailer

If you don’t like the face or hands of a result, do a /showprompt on the image and add /adetailer to the prompt.
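As a sketch of that workflow, reply to the image with /showprompt, then re-run what it reveals with the flag added (the placeholder below stands for the revealed prompt):

/showprompt

/render /adetailer and the revealed prompt goes here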

Use cases

Stable Diffusion generally has a hard time with faces that aren’t close-ups. You could use Adetailer on portraits, but it really shines on images where the faces and hands are lost, like a mid-range or distant group photo.

Before this tool existed, a person would create an image, then use a facelift or inpaint command to improve the output. Adetailer runs alongside the render command, potentially saving you multiple steps and time. It is a proactive way to boost the hands and faces in an image, especially when the image is not a close-up portrait. However, when elements are too far away, they may be ignored.

Before and After Example

To compare the effects of Adetailer, render two identical prompts with the same guidance, sampler, and seed values, and include /adetailer in only one of them.

Let’s start with a basic control prompt (no Adetailer) from PirateDiffusion podcast co-host Eggs Benadryl:

/render /seed:126 /sampler:dpm2m /guidance:7.0 a bunch of men sitting at a table playing poker, 1800 <photon>

We didn’t use any negative prompts, and the image is 512×512 “draft” quality, so a lot of the details are fuzzy.

This image has a lot of obvious problems:

We didn’t include negative inversions or negative prompts on purpose, so you’ll notice the guy on the right has a stump for a hand and the facial details are very rough. We can fix a lot of these things by simply adding Adetailer. Of course, you’ll want to use both negatives and Adetailer for the absolute best results, but for the purpose of this tutorial, we won’t use any negatives.

Now let’s add /adetailer. It can be added anywhere after /render in the prompt.
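The modified command is the control prompt from above with the flag added:

/render /adetailer /seed:126 /sampler:dpm2m /guidance:7.0 a bunch of men sitting at a table playing poker, 1800 <photon>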


As you can see, the hands and faces are much better defined. We can then upscale this image to near 4K quality and, with these improved features, get a much better result.

In some cases, Adetailer can have similar results to HDR color correction, the /vass command. Use both of these commands for a “second opinion” on what beauty is supposed to be. It is subjective!

Here’s Squibduck’s Saruman from the tail end of PirateDiffusion Show Episode 6 without Adetailer…

/render /seed:913252 /sampler:upms /guidance:4.8 /size:704x960 ((((((((((((( <sharpen:0.89> ((Saruman)), certain, unfailing, infallible, unchangeable, correct, exact, right, proper /size:960x704 ))))))))))))) (unparalleled masterpiece, flawless), closeup , /steps:35 <noise:0.3> <photon> [[ lowres, worst_quality, blurry, plain background, white background, simple background , <deep-negative:-1.5> <easy-negative:-1.5> <bad-hands-v5:-1> , ogre, corpse, zombie, caveman, Hagrid, sapien, monkey, ape, boar, bear, dog, ugly, hideous, witch, man, masculine, male, guy, yeti, troglodyte, bro, apelike, cavewoman, bearded, lowres, worst quality, blurry, monster, goblin, orc, devil, demon, evil, Satan, dead, ghost, creepypasta, horror, scary, robot, android, unibrow, haggard, beast, ]] /images:1

And the version with Adetailer is pictured below. Just add /adetailer to the original prompt.

When NOT to use Adetailer

If the image already has a clear face and hands, Adetailer can add many more details, which may be unwanted. The photo may end up looking overly HDR, or with too many hand veins or face wrinkles. Use it when you need it.

Consider this image, for example. It was already pleasant and had good hands. We’re manually setting the seed, sampler, guidance, and model to make a 1:1 comparison.

/render /seed:123 /sampler:dpm2m /guidance:7 #boost A charming mother holding her face <photon>

And now with /adetailer added:
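For reference, that is the same command as above with the flag added:

/render /adetailer /seed:123 /sampler:dpm2m /guidance:7 #boost A charming mother holding her face <photon>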

The results are interesting. Her eyes did become more beautiful (subjectively), but the hands are very overcooked. In this case, Adetailer is overkill. Her pinky became insanely telescopic!

It might have been better to request more variations with the /remix or /more command instead, to get different facial features and hand options.
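For instance (the size and image count here are just illustrative), replying to the original image with:

/more /size:1024x1024 /images:2

would return additional variations to choose from.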

 

Frequently Asked Questions

What is the difference between Facelift (upscalers), HighDef, and Adetailer?

  • Adetailer is a parameter of /render. This means you can complete a rendered image with touched-up hands and faces in a single command, whereas the other touch-up techniques require an additional step after the image is created. Adetailer uses a combination of detection scripts and feature-correcting mini-models such as yolo-face and yolo-hands, YOLO meaning “you only look once”, a reference to its one-shot scan-and-replace capabilities. Each mini-model is trained on many faces, hands, and articles of clothing to apply the corrective effects.
  • Facelift is a different upscaling technique, more similar to the beautification feature of your smartphone.
  • HighDef (or the /more technique) is a general pixel-boosting technique with no specialization or knowledge of context. All of these techniques can be used one after another to increase the quality and detail of an image; see the example sequence below.
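As a sketch of that stacking (the prompt and model here are just examples), you might run:

/render /adetailer a group of friends playing chess in the park <photon>

Then reply to the result with /highdef, and finally reply to the upscaled image with /facelift for a final polish.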

Does Adetailer have parameters?

The current implementation is all or nothing, with no additional parameters to set. This means that it tries to improve hands and faces on every detected match in the image, whether they are in the background or foreground.

We plan on adding more features to Adetailer, such as the ability to run it multiple times, to achieve the desired result.

Why did Adetailer change the face or the facial expression?

Adetailer is a model trained on thousands of faces, so it will very likely change your character. This can’t be helped. You can minimize the effect by training a model of your character and reinforcing the weights with a LoRA, or by using inpaint instead.

Why didn’t Adetailer fix the faces or hands perfectly?

While it can greatly improve the quality, Adetailer works best when other best practices are observed, such as using proper negative inversions, negative prompts, and so on.

There are some situations that are just harder than others. For example, a hand holding a tennis racket is easier to classify and define than a few fingers pressed into a fluffy sandwich. Certain kinds of prompts will require more work.

When Adetailer fails, try Inpaint and/or try rewriting the prompt.

HDR Color correction in Stable Diffusion with VASS

What is VASS and HDR color and composition correction?

VASS is an alternative color and composition correction method for SDXL. To use it, add the /vass parameter to your SDXL render.

Whether or not it improves the image is highly subjective. If you prefer a more saturated image, VASS is probably not for you. If the image on the right is more pleasing, this HDR “correction” may help you achieve your goals.

The parameter name comes from its creator: Timothy Alexis Vass is an independent researcher who has been exploring the SDXL latent space and has made some interesting observations. His aims are color correction and improving the content of images. We have adapted his published code to run in PirateDiffusion.

Example of it in action:

As you can see in the figure above, more than the colors changed despite the same seed value. The squirrel turned towards the viewer and, unfortunately, so did the cake. “Squirrel” is at the beginning of the prompt, so it carries more importance for the composition; the cake also shrank, and the additional cakes were removed. We would be lying if we said these are all definite benefits or compositional corrections that happen whenever /vass is used; rather, it’s an example of how it can impact an image.

Try it for yourself:

/render a cool cat <sdxl> /vass

Limitations: This only works in SDXL

To compare with and without Vass, lock the seed, sampler, and guidance to the same values as shown in the diagram above. Otherwise, the randomness from these parameters will give you an entirely different image every time.
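For example (the seed, sampler, and guidance values below are arbitrary; just keep them identical in both renders), the two commands differ only by the /vass flag:

/render /seed:777 /sampler:dpm2m /guidance:7 a cool cat <sdxl>

/render /seed:777 /sampler:dpm2m /guidance:7 a cool cat <sdxl> /vass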

Tip: You can also disable the SDXL refiner with /nofix to take upscaling into your own hands for more control, like this:

/render a cool cat <sdxl> /vass /nofix

Why and when to use Vass

Try it on SDXL images that are too yellow, off-center, or where the color range feels limited. You should see better vibrance and cleaned-up backgrounds.

Also see: VAE runtime swap

How to swap the VAE at runtime in Stable Diffusion

What is a VAE?

A VAE is a file that is often included alongside checkpoints (full models) and, to put it simply, impacts the color and noise cleanup. You may have seen models advertised as “VAE baked”, meaning this file is literally packed into the SafeTensors format. Your humble team at PirateDiffusion reads these guides and installs the VAE that the creator recommends, so 99.9% of the time you are already using a VAE and it is silently working as intended.

More specifically, a VAE is a special type of model that can be used to change the contrast, quality, and color saturation. If an image looks overly foggy and your guidance is set above 10, the VAE might be the culprit. VAE stands for “variational autoencoder”, a technique that compresses and reconstructs images, similar to how a zip file can compress and restore a file. If you’re a math buff, the technical writeups are really interesting.

The VAE “rehydrates” the image based on the data that it has been exposed to, instead of discrete values. If all of your rendered images appear desaturated, blurry, or have purple spots, changing the VAE is the best solution. (See troubleshooting below for more details about this bug.) 16-bit VAEs run fastest.

 

Most people won’t need to learn this feature, but we offer it for enthusiast users who want the most control over their images.

Which VAE is best?

It depends on how you feel about saturation and colorful particles in your images. We recommend trying different ones to find your groove.

Troubleshooting

Purple spots, unwanted bright green dots, and completely black images when doing /remix are the three most common VAE glitches. If you’re getting these, please let a moderator know and we’ll change the default VAE for the model. You can also correct it with a runtime VAE swap, as shown below.

Shown below: This model uses a very bright VAE, but is leaking green dots on the shirt, fingers, and hair.

The Fix: Performing a VAE swap

Our support team can change the VAE at the model level, so you don’t have to do this every time. But maybe you’d like to try a few different looks?  Here’s how to swap it at runtime:

/render #sdxlreal a hamster singing in the subway /vae:GraydientPlatformAPI__bright-vae-xl

Recommended for SDXL

/vae:GraydientPlatformAPI__bright-vae-xl

Recommended for SD15 photos (vae-ft-mse-840000-ema-pruned)

/vae:GraydientPlatformAPI__sd-vae-ft-ema

Recommended for SD15 illustration or anime

/vae:GraydientPlatformAPI__vae-klf8anime2
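For example, to try the anime VAE with an SD15 illustration model (replace #yourmodel with whatever model tag you normally render with; it is only a placeholder here):

/render #yourmodel a watercolor fox in a forest /vae:GraydientPlatformAPI__vae-klf8anime2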

 

Compatible VAEs

Using a different third-party VAE

Upload or find one on the Huggingface website with this folder structure:

https://huggingface.co/madebyollin/sdxl-vae-fp16-fix

Then replace the slashes and remove the front part of the URL, like this:

/render whatever /vae:madebyollin__sdxl-vae-fp16-fix

If you click that link, you’ll see that it doesn’t contain a bunch of folders or a checkpoint; it just contains the VAE files. Pointing at a VAE baked into a whole model will not load. In that case, just ask us to load that model for you.

The VAE folder must have the following characteristics:

  • A single VAE per folder, in a top-level folder of a Huggingface profile as shown above
  • The folder must contain a config.json
  • The file must be in .bin format
  • The bin file must be named “diffusion_pytorch_model.bin”

Where to find more: Huggingface and Civitai may have others, but they must be converted to the format above.

Other known working VAEs:

  • /vae:GraydientPlatformAPI__vae-blessed2 — less saturated than kofi2
  • /vae:GraydientPlatformAPI__vae-anything45 — less saturated than blessed2
  • /vae:GraydientPlatformAPI__vae-orange — medium saturation, but
  • /vae:GraydientPlatformAPI__vae-pastel — vivid colors like old Dutch masters