How to use After Detailer (Adetailer) in Stable Diffusion: our Adetailer tutorial

How to use After Detailer (Adetailer) – SDXL and SD15 supported

Stable Diffusion “After Detailer”, commonly known as Adetailer, is an automatic touch-up tool similar to Facelift. As the name implies, the image is created first, and then the software hunts down trouble spots and automatically improves them. While Adetailer has many modes, in our testing we’ve found that the most useful and effective ones deal with the face and hands.


/render /adetailer and your prompt goes here

Simply add the /adetailer tag after the /render command. At the time of this guide, /adetailer is ignored on /remix, but we hope to add it to img2img functions like inpaint in time.

What’s going on technically

After Detailer is like a tiny program with its own mini-models. The Stable Diffusion community is creating bite-sized models that target common problem areas like hands, faces, and other body parts. In our first release, we are using the largest “YOLO FACE” and “YOLO HANDS” models to fix these two common problems. The software scans the image for faces and hands and redraws each problem spot when it is at least 70% confident that it has found a match. The confidence score is not adjustable at this time, but it works as intended almost every time on all but the most complicated scenes.
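To make the detect-then-redraw idea concrete, here is a minimal Python sketch of the confidence filtering described above. It is purely illustrative, not the actual implementation: the detection list stands in for what a YOLO face/hand model would return, and the function names are our own.

```python
# Illustrative sketch of Adetailer's detect-then-redraw selection step.
# Each detection mimics a YOLO-style result: a label, a bounding box,
# and a confidence score between 0 and 1.

CONFIDENCE_THRESHOLD = 0.70  # fixed in the current release, not adjustable


def select_regions_to_redraw(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only the matches the detector is confident enough about.

    Each detection is a dict: {"label": str, "box": (x1, y1, x2, y2),
    "conf": float}. Regions below the threshold are left untouched,
    which is why very distant faces and hands may be skipped.
    """
    return [d for d in detections if d["conf"] >= threshold]


# Example: two confident matches get redrawn; the tiny, far-away face
# falls below 70% confidence and is ignored.
detections = [
    {"label": "face", "box": (120, 40, 220, 160), "conf": 0.93},
    {"label": "hand", "box": (300, 210, 360, 280), "conf": 0.78},
    {"label": "face", "box": (10, 10, 24, 26), "conf": 0.41},  # too far away
]

for region in select_regions_to_redraw(detections):
    # In the real pipeline, each surviving region would be masked,
    # re-rendered by the mini-model, and blended back into the image.
    print(region["label"], region["box"])
```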

We will continue to evolve our implementation of Adetailer and add more models because we’re here for the enthusiasts, and we know you want more control. We’re with you. Until then, enjoy this beta release.

When to use Adetailer

If you don’t like the face or hands of a result, do a /showprompt on the image and add /adetailer to the prompt.

Use cases

Stable Diffusion generally has a hard time with faces that aren’t close-ups. You could use Adetailer on portraits, but it really shines on images where the faces and hands are lost, like a mid-range or distant group photo.

Before this tool existed, a person would create an image, then use a facelift or inpaint command to improve the output. Adetailer runs alongside the render command instead, potentially saving you multiple steps and time. It is a proactive way to boost the hands and faces in an image, especially when the image is not a close-up portrait. However, when elements are too far away, they may be ignored.

Before and After Example

To compare the effects of Adetailer, render two identical prompts with the same seed, sampler, and guidance values, and include /adetailer in only one of them.

Let’s start with a basic control prompt, without Adetailer, from PirateDiffusion podcast co-host Eggs Benadryl:

/render /seed:126 /sampler:dpm2m /guidance:7.0 a bunch of men sitting at a table playing poker, 1800 <photon>

We didn’t use any negative prompts, and the image is 512×512 “draft” quality, so a lot of the details are fuzzy.

This image has a lot of obvious problems:

We didn’t include negative inversions or negative prompts on purpose, so you’ll notice the guy on the right has a stump for a hand and the facial details are very bad. We can fix a lot of this by simply adding Adetailer. Of course, you’ll want to do both for the absolute best results, but for the purposes of this tutorial, we won’t use any negatives.

Now let’s add /adetailer. It can be added anywhere after /render in the prompt.
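For example, the control prompt above becomes the following, with the same seed, sampler, and guidance, and only /adetailer added:

/render /adetailer /seed:126 /sampler:dpm2m /guidance:7.0 a bunch of men sitting at a table playing poker, 1800 <photon>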

As you can see, the hands and faces are much better defined. We can then upscale this image to near-4K quality with these good features locked in, and we’ll get a much better result.

In some cases, Adetailer can have similar results to HDR Latency correction, the /vass command. Use both of these commands for a “second opinion” of what beauty is supposed to be. It is subjective!

Here’s Squibduck’s Saruman from the tail end of PirateDiffusion Show Episode 6 without Adetailer…

/render /seed:913252 /sampler:upms /guidance:4.8 /size:704x960 ((((((((((((( <sharpen:0.89> ((Saruman)), certain, unfailing, infallible, unchangeable, correct, exact, right, proper /size:960x704 ))))))))))))) (unparalleled masterpiece, flawless), closeup , /steps:35 <noise:0.3> <photon> [[ lowres, worst_quality, blurry, plain background, white background, simple background , <deep-negative:-1.5> <easy-negative:-1.5> <bad-hands-v5:-1> , ogre, corpse, zombie, caveman, Hagrid, sapien, monkey, ape, boar, bear, dog, ugly, hideous, witch, man, masculine, male, guy, yeti, troglodyte, bro, apelike, cavewoman, bearded, lowres, worst quality, blurry, monster, goblin, orc, devil, demon, evil, Satan, dead, ghost, creepypasta, horror, scary, robot, android, unibrow, haggard, beast, ]] /images:1

And the version with Adetailer is pictured below. Just add /adetailer to the original prompt.

When NOT to use Adetailer

If the image already has a clear face and hands, Adetailer can add many more details, which can be unwanted. The photo may end up looking overly HDR, or with too many hand veins or face wrinkles. Use it when you need it.

Consider this image, for example. It was already pleasant and had good hands. We’re manually setting the seed, sampler, guidance, and model to make a 1:1 comparison.

/render /seed:123 /sampler:dpm2m /guidance:7 #boost A charming mother holding her face <photon>

And now with /adetailer added:
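As with the poker example, everything stays fixed and only /adetailer is added:

/render /adetailer /seed:123 /sampler:dpm2m /guidance:7 #boost A charming mother holding her face <photon>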

The results are interesting. Her eyes did become more beautiful (subjective), but the hands are very overcooked. In this case, Adetailer is overkill. Her pinky became insanely telescopic!

It might have been better to request more variations with the /remix or /more command instead, to get different facial features and hand options.


Frequently Asked Questions

Can I use Adetailer with SDXL and Stable Diffusion 1.5?

Yes, as of March 2024 it supports both.

What is the difference between Facelift (upscalers), HighDef, and Adetailer?

  • Adetailer is a parameter of /render. This means you can complete a rendered image with touched-up hands and faces in a single command, whereas the other touch-up techniques require an additional step after the image is created. Adetailer uses a combination of detection scripts and feature-correcting mini-models such as yolo-face and yolo-hands (“YOLO” means “you only look once”, a reference to their one-shot scan-and-replace capability). Each mini-model is trained on many faces, hands, and articles of clothing to apply the corrective effects.
  • Facelift is a different upscaling technique, more similar to the beautification feature of your smartphone.
  • HighDef (or the /more technique) is a general pixel-boosting technique with no specialization or knowledge of context. All of these techniques can be used one after another to increase the quality and detail of an image.

Does Adetailer have parameters?

The current implementation is all or nothing, with no additional parameters to set. This means that it tries to improve hands and faces on every detected match in the image, whether they are in the background or foreground.

We plan on adding more features to Adetailer, such as the ability to run it multiple times, to achieve the desired result.

Why did Adetailer change the face or the facial expression?

Adetailer is a model trained on thousands of faces, so it will very likely change your character. This can’t be helped. You can minimize the effect by training a model of your character and reinforcing its weights with a LoRA, or by using inpaint instead.

Why didn’t Adetailer fix the faces or hands perfectly?

While it can greatly improve the quality, Adetailer works best when other best practices are observed, such as using proper negative inversions, negative prompts, and so on.

There are some situations that are just harder than others. For example, a hand holding a tennis racket is easier to classify and define than a few fingers pressed into a fluffy sandwich. Certain kinds of prompts will require more work.

When Adetailer fails, try Inpaint and/or try rewriting the prompt.