HDR Color correction in Stable Diffusion with VASS

What is VASS and HDR color and composition correction?

VASS is an alternative color and composition correction method for SDXL. To use it, add the /vass parameter to your SDXL render.

Whether or not it improves the image is highly subjective. If you prefer a more saturated image, VASS is probably not for you. If the image on the right is more pleasing, this HDR “correction” may help you achieve your goals.

The parameter VASS comes from the name of its creator. Timothy Alexis Vass is an independent researcher who has been exploring the SDXL latent space and has made some interesting observations. His aims are color correction and improving the content of images. We have adapted his published code to run in PirateDiffusion.

Example of it in action:

As you can see in the figure above, more than the colors changed, despite the same seed value. The squirrel turned towards the viewer and, unfortunately, so did the cake. “Squirrel” is at the beginning of the prompt, so it carries more weight in the composition; the cake also shrank and the additional cakes were removed. We would be lying if we said these are all definitely benefits or compositional corrections that happen when /vass is used; rather, this is an example of how it can impact an image.

Try it for yourself:

/render a cool cat <sdxl> /vass

Limitations: This only works in SDXL

To compare with and without Vass, lock the seed, sampler, and guidance to the same values, as shown in the diagram above. Otherwise, a new random seed will give you an entirely different image every time.

Tip: You can also disable the SDXL refiner with /nofix to take upscaling into your own hands for more control, like this:

/render a cool cat <sdxl> /vass /nofix

Why and when to use Vass

Try it on SDXL images that look too yellow, off-center, or limited in color range. You should see better vibrance and cleaned-up backgrounds.

Also see: VAE runtime swap

How to swap the VAE at runtime in Stable Diffusion

What is a VAE?

A VAE is a file often included alongside checkpoints (full models) that, to put it simply, affects color and noise cleanup. You may have seen models advertised as “VAE baked” — literally packing this file into the SafeTensors format. Your humble team at PirateDiffusion reads these guides and installs the VAE that each creator recommends, so 99.9% of the time you are already using a VAE and it is silently working as intended.

More specifically, a VAE is a special type of model that can change the contrast, quality, and color saturation of an image. If an image looks overly foggy and your guidance is set above 10, the VAE might be the culprit. VAE stands for “variational autoencoder”: it encodes and decodes images, similar to how a zip file can compress and restore an image. If you’re a math buff, the technical writeups are really interesting stuff.

The VAE “rehydrates” the image based on the data it has been exposed to, rather than discrete values. If all of your rendered images appear desaturated, blurry, or have purple spots, changing the VAE is the best solution. (See troubleshooting below for more details about this bug.) 16-bit VAEs run the fastest.
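To make the compression idea concrete, here is a toy sketch in plain NumPy (an illustration, not a real VAE): Stable Diffusion’s VAE encodes an RGB image into a latent that is 1/8th the width and height, with 4 channels, so the rest of the pipeline works on roughly 48x fewer values.

```python
import numpy as np

# Toy illustration of the VAE's compression ratio (not a real VAE):
# the encoder shrinks width and height by 8x and uses 4 latent channels.
def latent_shape(height, width, channels=4, downscale=8):
    return (height // downscale, width // downscale, channels)

image = np.zeros((1024, 1024, 3), dtype=np.float32)            # an SDXL-sized render
latent = np.zeros(latent_shape(1024, 1024), dtype=np.float32)  # (128, 128, 4)

compression = image.size / latent.size  # ~48x fewer values to process
```

The decoder then runs this in reverse, “rehydrating” the small latent back into a full-resolution image, which is where the VAE’s influence on color and fine detail comes from.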


Most people won’t need to learn this feature, but we offer it for enthusiasts who want the most control over their images.

Which VAE is best?

It depends on how you feel about saturation and colorful particles in your images. We recommend trying different ones to find your groove.


Purple spots, unwanted bright green dots, and completely black images when doing /remix are the three most common VAE glitches. If you’re getting these, please let a moderator know and we’ll change the default VAE for the model. You can also correct it yourself with a runtime VAE swap, as shown below.

Shown below: This model uses a very bright VAE, but is leaking green dots on the shirt, fingers, and hair.

The Fix: Performing a VAE swap

Our support team can change the VAE at the model level, so you don’t have to do this every time. But maybe you’d like to try a few different looks?  Here’s how to swap it at runtime:

/render #sdxlreal a hamster singing in the subway /vae:GraydientPlatformAPI__bright-vae-xl

Recommended for SDXL


Recommended for SD15 photos (vae-ft-mse-840000-ema-pruned)


Recommended for SD15 illustration or anime



Compatible VAEs

Using a different third-party VAE

Upload or find one on the Huggingface website with this folder directory setup:

Then replace the slashes and remove the front part of the URL, like this:

/render whatever /vae:madebyollin__sdxl-vae-fp16-fix
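The renaming rule can be expressed as a short Python helper (the function name is our own, for illustration): drop the site prefix, then replace the slash between profile and folder with a double underscore.

```python
def to_vae_param(repo):
    """Turn a Huggingface repo URL or id into the /vae: parameter form:
    strip the site prefix and replace "/" with "__"."""
    repo_id = repo.split("huggingface.co/")[-1].strip("/")
    return "/vae:" + repo_id.replace("/", "__")

to_vae_param("https://huggingface.co/madebyollin/sdxl-vae-fp16-fix")
# -> "/vae:madebyollin__sdxl-vae-fp16-fix"
```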

If you click that file, you’ll see that it doesn’t contain a bunch of folders or a checkpoint; it just contains the VAE files. Pointing at a VAE bundled inside a whole model’s repository will not load. In that case, just ask us to load that model for you.

The vae folder must have the following characteristics:

  • A single VAE per folder, in a top-level folder of a Huggingface profile as shown above
  • The folder must contain a config.json
  • The file must be in .bin format
  • The bin file must be named “diffusion_pytorch_model.bin”
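If you’re preparing your own upload, a quick local check of those requirements might look like this (a sketch; the function name is ours):

```python
from pathlib import Path

REQUIRED_BIN = "diffusion_pytorch_model.bin"

def looks_like_vae_folder(folder):
    """Check the requirements above: a config.json plus a .bin file with
    the expected name, both sitting at the top level of the folder."""
    folder = Path(folder)
    return (folder / "config.json").is_file() and (folder / REQUIRED_BIN).is_file()
```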

Where to find more: Huggingface and Civitai may have others, but they must be converted to the format above.

Other known working vae:

  • /vae:GraydientPlatformAPI__vae-blessed2 — less saturated than kofi2
  • /vae:GraydientPlatformAPI__vae-anything45 — less saturated than blessed2
  • /vae:GraydientPlatformAPI__vae-orange — medium saturation
  • /vae:GraydientPlatformAPI__vae-pastel — vivid colors like old Dutch masters
  • /vae:LittleApple-fp16__vae-ft-mse-840000-ema-pruned — great for realism