Cheat Sheet

This is a quick lookup guide to every command. Beginners should start with the PirateDiffusion tutorials and our YouTube channel. If you prefer to create using your web browser, check out the Stable2go tutorials.

 

Changelog – April 2024: 

  • FaceSwaps just got easier: there is a new way to swap faces. Check out the Facepush tutorial. For members only, there’s also a new channel in the PirateDiffusion Telegram Playroom group for examples.
  • Better web editing: our Stable2go WebUI integration also has more options, including a VAE selector, Projects, and more. Reply to your Telegram images with /unified to use it.

 

Reply to an image with /unified to launch our new unified Stable2go editing experience, all on one page. The “other” field accepts all of PirateDiffusion’s inline commands, too!

The new /bots command and Adetailer now work with SDXL! Polly also now uses only your ‘Faved’ SDXL models.

 

Account Basics

NEW USER

/start

HOW TO GET STARTED

Log into my.graydient.ai, find the Setup icon in the dashboard, and follow the instructions.

If you need help, hit up our support channel.

EMAIL

/email [email protected]
If you have upgraded your account on Patreon, the email here and your Patreon email must match. If you cannot activate and unlock all features, please type /debug and send us that number.

USERNAME

The bot will use your Telegram name as your public community username (shown in places like recipes and featured images). You can change it at any time with the /username command, like this:

/username Tiny Dancer
WEBUI AND PASSWORD
PirateDiffusion comes with a website companion for operations that are tricky to do, like organizing your files and inpainting.  To access your archive, type:
/webui

It is private by default. To set your password, use a command like this:

 
/community /password:My-Super-Secure-Password123@#$%@#$

To disable a password, type:

/community /password:none

Model Basics

LIST OF MODELS
The term “AI model” can also mean chat models, so we call image models “concepts”. A concept can be a person, pose, effect, or art style. To bring those things into your creation, you call their “trigger word”, which appears in angled brackets like this: <sdxl>. Our platform has over 8 terabytes of AI concepts preloaded, and PirateDiffusion is never stuck in one style: we add new ones daily.
/concepts

The purpose of the /concepts command is to quickly look up trigger words for models as a list in Telegram. At the end of that list, you’ll also find a link to browse concepts visually on our website.

Using concepts

To create an image, pair a concept’s trigger word with the render command (explained below) and describe the photo.

To use one, add its trigger name anywhere in the prompt. For example, one of the most popular concepts of the moment is Realistic Vision 4 SDXL, a realistic “base model”: the basis of the overall image style. The trigger name for Realistic Vision 4 SDXL is <realvis4-xl>, so we would prompt:

/render a dog <realvis4-xl>
Tip: Choose one base model (like realvis4-xl), one to three LoRAs, and a few negative inversions to create a balanced image. Adding too many or conflicting concepts (like two poses) may cause artifacts. Try a lesson or make one yourself.

 

SEARCH MODELS

You can search right from Telegram.

/concept /search:emma

 

RECENT MODELS

Quickly recall the last ones you’ve tried:

/concept /recent

 

FAVORITE MODELS

Use the fave command to track and remember your favorite models in a personal list:

/concept /fave:concept-name

 

MY DEFAULT MODEL

Stable Diffusion 1.5 is the default model, but why not replace it with your absolute go-to model instead?

This will impact both your Telegram and Stable2go accounts. After you run this command, you don’t have to write that <concept-name> in every prompt. Of course, you can still override it with a different model in a /render.

/concept /default:concept-name

Image creation 

 

POLLY

Polly is your image creation assistant and the fastest way to write detailed prompts. When you ‘Fave a model in the concepts page, Polly will choose one of your favorites at random.

To use Polly in Telegram, talk to it like this:

/polly a silly policeman investigates a donut factory

The reply will contain a prompt that you can render right there, or copy to your clipboard to use with the /render command (explained below).

You can also have a regular chat conversation by starting with /raw

/polly /raw explain the cultural significance of japanese randoseru

You can also use it from your web browser by logging into my.graydient.ai and clicking the Polly icon. 

BOTS

Polly can be customized. You can train your own assistant:

/bots

Bots will display a list of assistants that you’ve created. For example, if my assistant is Alice, I can use it like /alicebot go make me a sandwich. The @piratediffusion_bot must be in the same channel as you.

To create a new one, type

/bots /new

 

RENDER 

If you don’t want prompt-writing help from Polly or your custom bots, switch to the /render command to go fully manual.  Try a quick lesson.

/render a close-up photo of a gorilla jogging in the park <sdxl>
The trigger word <sdxl> refers to a concept (explained at the top of this page)
Tips: How to make quick prompt edits:
After a prompt is submitted, recall it quickly by pressing the UP arrow in Telegram Desktop, or single-tap the echoed prompt confirmation that appears in an alt font.
 
Word order matters. Put the most important words toward the front of your prompt for best results: camera angle, who, what, where.
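For example, a hypothetical prompt following that order (swap in whichever base concept you prefer):

/render low-angle photo of 1 woman, astronaut, walking along a desert highway at sunset <realvis4-xl>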

 

COMPOSE

Compose allows multi-prompt, multi-regional creation. It works best when you specify a large canvas size and one image at a time. The available zones are: background, bottom, bottomcenter, bottomleft, bottomright, center, left, right, top, topcenter, topleft, topright. The format for each zone is x1, y1, x2, y2.

/compose /size:2000x700 /left:The tower of Thai food /center:Jungles of Thailand, tropical rainforest [blurry] /right: Castle made of stone, castle in the jungle, tropical vegetation

Another example

/compose /seed:1 /images:2 /size:384x960 /top:ZeldaKingdom GloomyCumulonimbus /bottom:1983Toyota Have a Cookie /guidance:5

Tip: Use a guidance of about 5 for better blending. You can also specify the model. Guidance applies to the entire image. We are working on adding regional guidance.

/compose /size:1000x2000 /top: Lion roaring, lion on top of building /center: Apartment building, front of building, entrance /bottom: Dirty city streets, New York city streets [[ugly]] [[blurry]] /images:1 /guidance:7

BLEND
(IP ADAPTERS)

Prerequisite: Please master ControlNet first.

Blend allows you to fuse multiple images stored in your ControlNet library. It is based on a technology called IP Adapters. That is a horribly confusing name for most people, so we just call it blend.

First, create some preset image concepts by pasting a photo into your bot and giving it a name, exactly how you’d do it for ControlNet. If you already have controls saved, those work, too.

/control /new:chicken

After you have two or more of those, you can blend them together.

/render /blend:chicken:1 /blend:zelda:-0.4

Tip: IP Adapters support negative images. Subtracting a noise image is recommended to get better pictures.

You can control the noisiness of this image with /blendnoise

The default is 0.25. You can disable it with /blendnoise:0.0

You can also set the strength of the IP Adapters effect with /blendguidance. The default is 0.7.
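For example, a hypothetical blend combining the presets saved above, assuming these parameters can be stacked in one render:

/render /blend:chicken:1 /blend:zelda:0.5 /blendnoise:0.1 /blendguidance:0.9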

Tip: Blend can also be used with inpainting and with SDXL models.

Quality Boosters

 

ASPECT RATIO & SIZE

You can change the shape of an image easily with these shorthand commands: /portrait /tall /landscape /wide 
/render /portrait a gorilla jogging in the park <sdxl>
You can also manually set the resolution with /size. By default, draft images are created at 512×512. These can look a little fuzzy, so using the size command will give you a clearer-looking result.
/render /size:768x768 a gorilla jogging in the park <sdxl>
Limitations: Stable Diffusion 1.5 is trained at 512×512, so going too high will result in double heads and other mutations. SDXL is trained at 1024×1024, so a size like 1200×800 will work better with an SDXL model than with an SD 1.5 model, as repetition is less likely. If you are getting duplicate subjects when using /size, try starting the prompt with “1 woman” or “1 man” and describe the background in more detail at the end of the prompt.
To achieve 2000×2000 and 4000×4000, use upscaling

 

POSITIVES, NEGATIVES

Positives tell the AI what to emphasize. This is expressed with (round brackets). Each pair of brackets represents a 1.1x multiple of positive reinforcement. Negatives do the opposite, using [square] brackets.

/render <sdxl> 

((high quality, masterpiece, masterwork))   [[low resolution, worst quality, blurry, mediocre, bad art, deformed, disfigured, elongated, disproportionate, anatomically incorrect, unrealistic proportions, mutant, mutated, melted, abstract, surrealism, sloppy, crooked]]

Takoyaki on a plate
Adding too many brackets may limit how much the AI can fill in, so if you are seeing glitches in your image, try reducing the number of brackets and inversions.

PROMPT LENGTH

The part of the software that ingests your prompt is called a Parser.  The parser has the biggest impact on prompt cohesion — how well the AI understands both what you’re trying to express and what to prioritize most.

PirateDiffusion has three parser modes: default, LPW, and Weights (parser “new”). Each has its strengths and disadvantages, so it really comes down to your prompting style and how you feel about syntax.

MODE 1 – DEFAULT PARSER (EASIEST)

The default parser offers the most compatibility and features, but can only pass 77 tokens (logical ideas or word parts) before Stable Diffusion stops paying attention to the long prompt. To convey what is important, you can add (positive) and [negative] reinforcement as explained in the section above (see positives). This works with SD 1.5 and SDXL.

MODE 2 – LPW PARSER  (INTERMEDIATE)

LPW stands for LONG PROMPT WEIGHTS and lets you go much further, but should not be combined with very heavy positive or negative prompting.  

LIMITATIONS OF LPW

  • Does not work well with Loras or Textual Inversions
  • May require a lower guidance value for best results
  • Don’t use heavy brackets:

    (((((This will break /lpw))))) and [[[[so will this]]]]

  • LPW is not 100% compatible with other features like /remix
  • Doesn’t work with LCM sampler
  • Doesn’t work with Parser:New (see “weights“) below
 
/render /lpw my spoon is too big, ((((big spoon)))) [small spoon], super big, massively big, you would not believe the size, and I've seen many spoons and let me tell you, this spoon in my hand, right here, is yuuuuge, the biggest spoon you'll ever see, and if anyone's said they've seen a bigger spoon, they're cheating, Big spoon, gigantic ladle, extra large serving bowl, oversized utensil, huge portion size, bulky kitchenware, impressive cooking tools, rustic table setting, hearty meals, heavyweight handle, strong grip, stylish design, handcrafted wooden piece, <coma2>

WORD WEIGHT

MODE 3 – PARSER “NEW” AKA PROMPT WEIGHTS (EXPERT)

Another strategy for better prompt cohesion is to give each word a “weight”. The weight range is 0-2 using decimals, similar to LoRAs. The syntax is a little tricky, but both positive and negative weights are supported for incredible precision.

/render /parser:new  (blue cat:0.1) (green bird:0.2) [(blue cat:2), (red dog:0.1)]

Pay special attention to the negatives, which use a combination [( )] pair to express negatives. In the example above, blue cat and red dog are the negatives. This feature cannot be mixed with /lpw (above)

 

CLIP SKIP

This feature is controversial, as it is very subjective and the effects will vary greatly from model to model. 

AI models are made up of layers, and the first layers are said to contain too much general information (some might say, junk), resulting in boring or repetitive compositions.

The idea behind Clip Skip is to ignore the noise generated in these layers and go straight into the meat.

/render /images:1 /guidance:8 /seed:123 /size:1024x1024 /sampler:dpm2m /karras <sdxl> huge diet coke /clipskip:2

In theory, it increases cohesion and prompt intention. However, in practice, clipping too many layers may result in a bad image. While the jury is still out on this one, a popular “safe” setting is clipskip 2.

 

GUIDANCE

CFG = Guidance, a value between 0-20 that controls how “strictly” or literally the AI should hang on to every word in your prompt. While it may be tempting to always smash guidance to 20, not allowing the AI any freedom may cause gaps or unwanted artifacts in the composition.

A guidance of 10 is about average, and 0 guidance is too low — things start to get fuzzy and surreal. When going above 10, images tend to be sharper as the AI is being asked to be more confident. 

Setting guidance too high (or too low) is the most common reason for glitches. 

/render <sdxl> [[<fastnegative-xl:-2>]]
/guidance:7
/size:1024x1024

Takoyaki on a plate
How high or how low the guidance should be set depends on the sampler that you are using. Samplers are explained below. The amount of steps allowed to “solve” an image can also play an important role.

 

INVERSIONS

The concepts system has special models called negative inversions, also known as negative embeddings. These models were intentionally trained on awful-looking images, to guide the AI on what not to do. By calling these models as a negative, they boost quality significantly. The weight of these models must be in double [[negative brackets]], between -0.01 and -2.

/render <sdxl> [[<fastnegative-xl:-2>]]

Takoyaki on a plate
You’re probably wondering, “Why aren’t these added by default?” and we do offer that feature, too. See: /brew

RECIPES

Recipes are prompt templates. A recipe can contain tokens, samplers, models, textual inversions, and more. This is a real time saver, and worth learning if you find yourself repeating the same things in your prompt over and over.

When we introduced negative inversions, many asked “why aren’t these turned on by default?”, and the answer is control: everyone likes slightly different settings. Our solution was recipes: hashtags that summon prompt templates.

To see a list of them, type /recipes or browse them on our website:  the master list of prompt templates 

/recipes

Most recipes were created by the community, and may change at any time as they are managed by their respective owners. A prompt is intended to have only 1 recipe, or things can get weird. 

There are two ways to use a recipe. You can call it by its name using a hashtag, like the “boost” recipe:

/render a cool dog #boost

Some popular recipes are #nfix and #eggs and #everythingbad and #sdxlreal.

Important: When creating a recipe, you must add $prompt somewhere in your positive prompt area, or the recipe can’t accept your text input. You can drop it in anywhere; it’s flexible.

In the “other commands” field, you can stack other quality-boosting parameters.
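As an illustration, a hypothetical recipe body (not a real recipe on the platform) might place $prompt in the positive area and stack parameters below it:

((masterpiece, high quality)) $prompt [[blurry, deformed]]
/guidance:7
/sampler:dpm2m /karras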

AFTER DETAILER

The /render command has many powerful parameters. A popular one is After Detailer, aka Adetailer

/render /adetailer a close-up photo of a gorilla jogging in the park <sdxl>
This will scan your image for bad hands, eyes, and faces immediately after the image is created, and fix them automatically. It works with SDXL and SD15 as of March ’24.


Limitations:
It may miss faces turned ~15 degrees, or may create two faces in that case.
 
After Detailer works best when used alongside good positive and negative prompts and inversions, explained above.

SDXL REFINER TOGGLE

PirateDiffusion supports the Stable Diffusion 1.5, 2.1, and SDXL families of open-source image models. SDXL has a secondary process called the “refiner” that cleans up artifacts, but that clean-up can lead to less saturated images. For a more vibrant, but perhaps flawed, image, disable the refiner like this:

/render a cool cat <sdxl> /nofix

 

Why and when to use it: when the image looks too washed out, or skin colors look dull. Add post-processing using one of the reply commands (below), like /highdef or /facelift, to make the image more finished.
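For example, a two-step sketch: render without the refiner, then reply to the result to post-process it.

/render a cool cat <sdxl> /nofix

Then reply to the image:

/highdef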

Limitations: This only works in SDXL. 

FREE U

FreeU (Free Lunch in Diffusion U-Net) is an experimental detailer that expands the guidance range at four separate intervals during the render. There are 4 possible values, each between 0-2.
b1: backbone factor of the first stage
b2: backbone factor of the second stage
s1: skip factor of the first stage
s2: skip factor of the second stage

/render  <av5> a drunk horse in Rome /freeu:1.1,1.2,0.9,0.2

SDXL HDR & COMPOSITION CORRECTION

Timothy Alexis Vass is an independent researcher who has been exploring the SDXL latent space and has made some interesting observations. His aim is color correction and improving the content of images. We have adapted his published code to run in PirateDiffusion.

/render a cool cat <sdxl> /vass
 

Why and when to use it: try it on SDXL images that are too yellow, off-center, or where the color range feels limited. You should see better vibrance and cleaned-up backgrounds.

Limitations: This only works in SDXL. 

VAE OVERRIDE

VAE is a special type of model that can be used to change the contrast, quality, and color saturation. If an image looks overly foggy and your guidance is set above 10, the VAE might be the culprit. VAE stands for “variational autoencoder”; it is a technique that reclassifies images, similar to how a zip file can compress and restore an image. The VAE “rehydrates” the image based on the data that it has been exposed to, instead of discrete values. If all of your rendered images appear desaturated, blurry, or have purple spots, changing the VAE is the best solution. (Also, please notify us so we can set the correct one by default.) 16-bit VAEs run fastest.

/render #sdxlreal a hamster singing in the subway /vae:GraydientPlatformAPI__bright-vae-xl

Available preset VAE options: 

  • /vae:GraydientPlatformAPI__bright-vae-xl
  • /vae:GraydientPlatformAPI__sd-vae-ft-ema
  • /vae:GraydientPlatformAPI__vae-klf8anime2
  • /vae:GraydientPlatformAPI__vae-blessed2
  • /vae:GraydientPlatformAPI__vae-anything45
  • /vae:GraydientPlatformAPI__vae-orange 
  • /vae:GraydientPlatformAPI__vae-pastel

Third party VAE:

Upload or find one on the Huggingface website with this folder directory setup:

https://huggingface.co/madebyollin/sdxl-vae-fp16-fix

Then remove the front part of the URL and replace the slash with a double underscore, like this:

/render whatever /vae:madebyollin__sdxl-vae-fp16-fix

The vae folder must have the following characteristics:

  • A single VAE per folder, in a top-level folder of a Huggingface profile as shown above
  • The folder must contain a config.json
  • The file must be in .bin format
  • The bin file must be named “diffusion_pytorch_model.bin”
    Where to find more: Huggingface and Civitai may have others, but they must be converted to the format above

 

TRANSLATION

Do you prefer to prompt in a different language?  You can render images in over 50 languages, like this:

/render /translate un perro muy guapo <sdxl>

It’s not required to specify which language; it just knows. When using translate, avoid using slang. If the AI didn’t understand your prompt, try rephrasing it in more obvious terms.

 

SEED

Seed is a random number that allows you to repeat the diffusion process and achieve a similar result every time.

/render /seed:123456 cats in space
Seed is a unique concept in AI creation; it isn’t like a photo ID in a database, but it can be used to repeat similar image patterns, as the diffusion process begins from a specific noise profile. To use an imperfect analogy: imagine opening your eyes after a long slumber; the very first microsecond might be similar to the seed value. What eventually comes into focus depends on your prompt and parameters. When sharing a prompt with others or trying to repeat an image, lock in the model, seed, guidance, and sampler to get the same (stable) result. Any deviation in these will change the image.
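For example, a fully pinned prompt (hypothetical values) that should reproduce closely across runs:

/render /seed:123456 /guidance:7 /sampler:dpm2m /size:768x768 cats in space <realvis4-xl>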

 

STEPS

Steps are like the passes on a printer; they attempt to add quality with each pass.

/render /steps:25 (((cool))) cats in space, heavens
Setting steps to 25 is about average. If you don’t specify steps, we set it to 50 by default, which is high. Steps range from 1 to 100 when set manually, and as high as 200 when used with a preset. The presets are:
  • waymore – 200 steps, two images – best for quality
  • more – 100 steps, three images
  • less – 25 steps, six images
  • wayless – 15 steps, nine images! – best for drafts
/render /steps:waymore (((cool))) cats in space, heavens
While it may be tempting to set /steps:waymore on every render, it just slows down your workflow as the compute time takes longer. Crank up the steps when you have crafted your best prompt. Alternatively, learn how to use the LCM sampler instead to get the highest quality images with the fewest number of steps. Too many steps can also fry an image.

 

SAMPLERS & LCM

To see a list of available samplers, simply type /samplers

Samplers are a popular tweaking feature among AI enthusiasts, and here’s what they do. They’re also called noise schedulers. The number of steps and the sampler you choose can make a big impact on an image. Even with low steps, you can get a terrific picture with a sampler like DPM 2++ with the optional “Karras” mode. See the samplers page for comparisons.

To use it, add this to your prompt

/render /sampler:dpm2m /karras  a beautiful woman <sdxl>

Karras is an optional mode that works with 4 samplers. In our testing, it can result in more pleasing results.

LCM in Stable Diffusion stands for Latent Consistency Models. Use it to get images back faster by stacking it with lower steps and guidance. The tradeoff is speed over quality, though it can produce stunning images very quickly in large batches.

/render /sampler:lcm /guidance:1.5 /steps:6 /images:9 /size:1024x1024 <realvis2-xl> /seed:469498 /nofix Oil painting, oil on board, painted picture Retro fantasy art by Brom by Gerald Brom ((high quality, masterpiece, masterwork)) [[low resolution, worst quality, blurry, mediocre, bad art, deformed, disfigured, elongated, disproportionate, anatomically incorrect, unrealistic proportions, melted, abstract, surrealism, sloppy, crooked, skull, skulls]] Closeup Portrait A wizard stands in an alien landscape desert wearing wizards robes and a magic hat

Tips: When using SDXL, add /nofix to disable the refiner; it may help boost quality, especially when doing /more.

It works with SD 1.5 and SDXL models. Try it with guidance between 2-4 and steps between 8-12. Please do experiment and share your findings in the VIP prompt engineering discussion group, found in your membership section on Patreon.

It will vary by model, but even /guidance:1.5 /steps:6 /images:9 is returning good SDXL results in under 10 seconds!

/render 
/size:1024x1024
<airtist-xl> [[<fastnegative-xl:-2>]]
/sampler:lcm
/guidance:1.5
/steps:6
/nofix 
/vae:madebyollin__sdxl-vae-fp16-fix  

((high quality, masterpiece,masterwork))   [[low resolution, worst quality, blurry, mediocre, bad art, deformed, disfigured, elongated, disproportionate, anatomically incorrect, unrealistic proportions, mutant, mutated, melted, abstract, surrealism, sloppy, crooked]]

Takoyaki on a plate
In the example above, the creator is using the special LCM sampler that allows for very low guidance and low steps, yet still creates very high-quality images. Compare this prompt with something like: /sampler:dpm2m /karras /guidance:7 /steps:35
Skipping ahead, the VAE command controls colors, and /nofix turns off the SDXL refiner. These work well with LCM.

“Reply” Image creation commands

Literally reply to an image, as if you’re talking to a person. Reply commands are used to manipulate images and inspect information, such as IMG2IMG or looking up the original prompt.

MORE

more is the most common Reply command. It gives you back similar images when replying to an image already generated by a prompt. The /more command does not work on an uploaded image. 

/more

The more command is more powerful than it seems. It also accepts Strength, Guidance, and Size, so you can use it as a second-phase upscaler as well, which is particularly useful for Stable Diffusion 1.5 models. Check out this video tutorial to master it.
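For example, a hypothetical second-phase pass (reply to one of your rendered images, assuming these parameters can be combined):

/more /strength:0.4 /guidance:7 /size:1024x1024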

REMIX (IMG2IMG)

Remix is the image-to-Image style transfer command, also known as the re-prompting command. 

Remix requires an image as the input, and will destroy every pixel in the image to create a completely new image. It is similar to the /more command, but you can pass it another model name and a prompt to change the art style of the image.

Remix is the second most popular “reply” command. Literally reply to a photo as if you were going to talk to it, then enter the command. This example switches whatever art style you started with to the base model called Level 4.

/remix a portrait <level4>

You can use your uploaded photos with /remix and it will alter them completely. To preserve pixels (such as not changing the face), consider drawing a mask with Inpaint instead.

HIGH DEF

The HighDef command (also known as High Res Fix) is a quick pixel doubler.  Simply reply to the picture to boost it.

/highdef

The highdef command doesn’t have parameters, because the /more command can do what HighDef does and more; it is here for convenience. For pros who want more control, scroll back up and check the /more tutorial video.

After you’ve used the /highdef or /more command, you can upscale it one more time as explained below.

UPSCALERS

Facelift is intended for realistic portrait photographs. 

You can use /strength between 0.1 and 1 to control the effect. The default is 1 if not specified.

/facelift
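For example, a lighter retouch (reply to the image):

/facelift /strength:0.5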

The /facelift command also gives you access to our library of upscalers.  Add these parameters to control the effect:

Facelift is a phase 2 upscaler; use HighDef first before using Facelift. This is the general upscaling command, and it does two things: boosts face details and quadruples the pixels. It works similar to beauty mode on your smartphone, which means it can sometimes sandblast faces, especially illustrations. Luckily, it has other modes of operation:

/facelift /photo

This turns off the face retouching, and is good for landscapes or natural-looking portraits

/facelift /anime

Despite the name, it’s not just for anime – use it to boost any illustrations

/facelift /size:2000x2000

Limitations: Facelift will try to 4x your image, up to 4000×4000. Telegram does not typically allow this size, and if your image is already HD, trying to 4X it will likely run out of memory. If the image is too large and doesn’t come back, type /history to grab it from the web UI, as Telegram has file size limits. Alternatively, use the size parameter as shown above to use less RAM when upscaling.

REFINE

Refine is for those times when you want to edit your prompt text outside of Telegram, in your web browser. A quality-of-life thing.

Your subscription comes with both Telegram and Stable2go WebUI.  The refine command lets you switch between Telegram and the web interface. This comes in handy for making quick text changes in your web browser, instead of copy/paste.

/refine

The WebUI will launch in Brew mode by default. Click “Advanced” to switch to Render.

SHOWPROMPT

& COMPARE

To see an image’s prompt, right click on it and type /showprompt

/showprompt

This handy command lets you inspect how an image was made. It will show you the last action that was taken on the image. To see the full history, write it like this:  /showprompt /history

There is an image comparison tool built into the showprompt output.  Click on that link and the tool will open in the browser.

DESCRIBE (CLIP)

The /showprompt command will give you the exact prompt of an AI image, but what about non-AI images? 

Our /describe command will use computer vision techniques to write a prompt about any photo you’ve uploaded.

/describe

The language /describe uses can sometimes be curious. For example “arafed” means a fun or lively person.

BACKGROUND REMOVE

To remove a realistic background, simply reply to it with /bg

/bg

For illustrations of any kind, add this anime parameter and the removal will become sharper

/bg /anime

You can also add the PNG parameter to download an uncompressed image. It returns a high res JPG by default.

/bg /format:png

You can also use a hex color value to specify the color of the background

/bg /anime /format:png /color:FF4433 

You can also download the mask separately

/bg /masks

Tip: What can I do with the mask? Prompt only the background! It can’t be done in one step, though. First reply to the mask with /showprompt to get the image code to use for inpainting, or pick it from the recent inpainting masks. Then add /maskinvert when making a render to affect the background instead of the foreground.

BACKGROUND REPLACE

You can also use ControlNet style commands to swap the background from a preset:
  1. Upload a background or render one
  2. Reply to the background photo with /control /new:Bedroom (or whatever room/area)
  3. Upload or render the target image, the second image that will receive the stored background
  4. Reply to the target with /bg /replace:Bedroom /blur:10
The blur parameter ranges from 0-255 and controls the feathering between the subject and the background. Background Replace works best when the whole subject is in view, meaning that parts of the body or object aren’t obstructed by another object. This prevents the image from floating or creating an unrealistic background wrap.

SPIN AN OBJECT

You can rotate any image as if it were a 3D object. It works much better after first removing the background with /bg.
/spin
The spin command supports /guidance; if you don’t specify one, our system randomly chooses a guidance between 2 and 10. For greater detail, also add /steps:100.
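For example (reply to the image, assuming guidance and steps can be combined):

/spin /guidance:7 /steps:100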

GENERATIVE FILL, AKA INPAINTING

Inpaint is useful for changing a specific area of a photo. Unlike After Detailer, the inpaint tool lets you select and mask the area you’d like to change.

This tool has a GUI – it is integrated with Stable2go. Inpaint opens a web browser where you will literally paint on the picture, creating a masked area that can be prompted. Inpainting is possible on an uploaded non-AI photo (change the hands, sky, haircut, clothes, etc.) as well as AI images.

/inpaint fireflies buzzing around at night

In the GUI, there are special, specific art styles (inpaint models) available in the pulldown of this tool, so don’t forget to select one. Use strength and guidance to control the effect, where strength refers to your inpaint prompt only.

/inpaint /size:512x768 /strength:1 /guidance:7 fireflies at (night)

Tip: The size is inherited from the original picture, but specifying one is recommended; 512×768 works well, and without it the default of 512×512 might squish your image. If a person is far away in the image, their face may change unless the image is higher fidelity.

You can also inverse inpaint: for example, first use the /bg command to automatically remove the background from an image, then prompt to change the background. To achieve this, copy the mask ID from the /bg results, then use the /maskinvert parameter:

/inpaint /mask:IAKdybW /maskinvert a majestic view of the night sky with planets and comets zipping by

OUTPAINT AKA CANVAS ZOOM AND PANNING

Expand any picture using the same rules as inpaint, but without a GUI, so you must specify a trigger word for the art style to use. The exact same inpaint models are used for outpainting. Learn their names by browsing the models page, or check what models are available with:

/concept /inpainting

/outpaint fireflies at night <sdxl-inpainting>

Outpaint has additional parameters. Use top, right, bottom, and left to control the direction in which the canvas should expand. If you leave out side, it will expand in all four directions evenly. You can also add background blur bokeh (1-100), a zoom factor (1-12), and contraction of the original area (0-256). Add strength to rein it in.

/outpaint /side:top /blur:10 /zoom:6 /contract:50 /strength:1 the moon is exploding, fireworks <sdxl-inpainting>

Optional parameters

/side:bottom/top/left/right – you can specify a direction to expand your picture or go in all four at once if you don’t add this command.

/blur:1-100 – blurs the edge between the original area and the newly added one.

/zoom:1-12 – affects the scale of the whole picture, by default it’s set to 4.

/contract:0-256 – makes the original area smaller in comparison to the outpainted one. By default, it’s set to 64.

TIP: For better results, change your prompts for each use of outpaint and include only the things you want to see in the newly expanded area. Copying original prompts may not always work as intended.

CONTROLNET

 

FACESWAP (ROOP)

 

FACEPUSH (new!)

ControlNet is a series of technologies that let you start with a photo and use the outlines, shape, pose, and other elements of an input image to create new images. At this time, the supported modes are contours, depth, edges, hands, pose, reference, segment, skeleton, and facepush, each of which has child parameters. We can’t possibly cram it all in here, so it’s better to take the tutorial.

Viewing your saved ControlNet presets

/control

Make a ControlNet Preset

First, upload an image. Then “reply” to that image with this command, giving it a name like “myfavoriteguy2”

/control /new:myfavoriteguy2

ControlNets are resolution sensitive, so the bot will respond with the image resolution as part of the name. If I upload a photo of Will Smith, the bot will respond with will-smith-1000x1000, or whatever my image size was. This is useful in helping you remember what size to target later.

Recall a ControlNet Preset

If you forgot what your presets do, type /control to see them all, or view a specific one:

/control /show:myfavoriteguy2

Using ControlNet Modes

We’re always adding modes, so this list may not have the latest.  

The (new) shorthand parameter for control guidance is /cg:0.1-2, which controls how much the render should stick to a given ControlNet mode. A sweet spot is 0.1-0.5. You can also write it out the old, long way as /controlguidance:1

Swapping Faces

FaceSwapping is also available as a reply command. You can use this handy faceswap (roop, insightface) to change out the faces in an uploaded pic. First create the control preset for the face, then add a second picture to swap the face onto.

As a reply command

/faceswap myfavoriteguy2

As render command

You can also use our FaceSwap technology inside a render (similar to a LoRA), as a time-saving way to create a face swap. It does not have weights or much flexibility: it will find every realistic face in the new render and swap in the one face. Use your saved ControlNet preset names with it.

/render a man eating a sandwich /facepush:myfavoriteguy2

Facepush Limitations

Facepush only works with Stable Diffusion 1.5 models, and the checkpoint must be realistic. It does not work with SDXL models, and may not work with the /more command or some high resolutions. This feature is experimental. If you’re having trouble with /facepush, try rendering your prompt and then doing /faceswap on the image. It will tell you if the image is not realistic enough. This can sometimes be fixed by applying /facelift to sharpen the target. /more and /remix may not work as expected (yet).
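If Facepush fails, a two-step fallback sketch using the preset saved earlier (any realistic checkpoint should do; the model here is illustrative):

/render a man eating a sandwich <realvis4-xl>

Then reply to the result:

/faceswap myfavoriteguy2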

File & Queue Management

 

CANCEL

Use /cancel to abort what you’re rendering

/cancel

DOWNLOAD

The images that you see in Telegram pass through Telegram’s built-in image compressor, which limits the file size. To get around it, reply to your image with /download to get an uncompressed RAW image.
/download
If the download requests a password, see the Private Gallery Sharing and Passwords section in this Cheat Sheet.

 

DELETE

There are two ways to delete images: locally from your device, or forever from your Cloud Drive.

To delete images from your local device but keep them on the Cloud Drive, use Telegram’s built in delete feature. Long press an image (or right click on it from PC) and choose delete.

Reply to an image with /delete to wipe it from your cloud drive. 

/delete
You can also type /webui to launch our file manager, and use the organize command to batch-delete images all at once.

 

HISTORY

See a list of your recently created images within a given Telegram channel. When used in public groups, it will only show images that you’ve created in that public channel, so /history is also sensitive about your privacy.

/history

PNG

By default, PirateDiffusion creates images in near-lossless JPG. You can also create images as higher-resolution PNGs instead of JPG. Warning: this will use 5-10X the storage space. There is also a catch: Telegram doesn’t display PNG files natively, so after the PNG is created, use the /download command (above) to see it.
/render a cool cat #everythingbad /format:png

VECTOR

IMAGE TO SVG / VECTOR! Reply to a generated or uploaded image to trace it as vectors, specifically SVG. Vector images can be infinitely zoomed since they aren’t rendered with rasterized pixels like regular images, so you get clean, sharp edges. Great for printing, products, etc. Use the /bg command first if you’re creating a logo or sticker.

/trace


All available options are listed below. We haven’t settled on the best values for the optional ones yet, so help us arrive at a good default by sharing your findings in VIP chat.

  • speckle – integer number – default 4 – range 1 .. 128 – Discard patches smaller than X px in size
  • color – default – make image color
  • bw – make image black and white
  • mode – polygon, spline, or none – default spline – Curve fitting mode
  • precision – integer number – default 6 – range 1 .. 8 – Number of significant bits to use in an RGB channel – i.e., more color fidelity at the cost of more “splotches”
  • gradient – integer number – default 16 – range 1 ..128 – Color difference between gradient layers
  • corner – integer number – default 60 – range 1 .. 180 – Minimum momentary angle (degree) to be considered a corner
  • length – floating point number – default 4 – range 3.5 .. 10 – Perform iterative subdivide smooth until all segments are shorter than this length

Example trace with optional fine-tuned parameters:

/trace /color /gradient:32 /corner:1

WEB UI FILE MANAGER

Handy for managing your files in a browser, and looking at a visual list of models quickly.

/webui
Check the accounts section at the top of this page for things like password commands.

 

PING

“Are we down, is Telegram down, or did the queue forget me?”  /ping tries to answer all of these questions at a glance

/ping
If /ping doesn’t work, try /start. This will give the bot a nudge.

 

SETTINGS

You can override the default configuration of your bot, such as always using a specific sampler or step count, and a very useful one: setting your favorite base model in place of Stable Diffusion 1.5.

/settings
Available settings:

/settings /concept:none
/settings /guidance:random
/settings /style:none
/settings /sampler:random
/settings /steps:default
/settings /silent:off

Pay close attention to the status messages when making a setting change, as they tell you how to roll the change back. Sometimes reverting requires the parameter off, none, or default. Before changing the defaults, you may want to save them as a loadout, as explained below, so you can roll back if you’re not happy with your configuration and want to start over.
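For example, assuming the concept setting accepts a trigger name the same way the reset examples above accept “none” (hypothetical value):

/settings /concept:realvis4-xl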

 

LOADOUTS

The /settings command also displays your loadouts at the bottom of the response.

Make switching workflows easier by saving loadouts. For example, you may have a preferred base model and sampler for anime, and a different one for realism, or some other settings for a specific project or client.  Loadouts allow you to replace all of your settings in a flash.

For example, if I wanted to save my settings as is, I can make up a name like Morgan’s Presets Feb 14:

/settings /save:morgans-presets-feb14
Manage your loadouts:

Save current settings: /settings /save:coolset1
Display current settings: /settings /show:coolset1
Retrieve a setting: /settings /load:coolset1

 

SILENT MODE

Are the debugging messages, like the confirmation of the prompt, too wordy? You can make the bot completely silent with this feature. Just don’t forget you turned it on, or you’ll think the bot is ignoring you: it won’t even tell you an estimated render time or any confirmations!

/settings /silent:on
To turn messages back on, just change on to off:

/settings /silent:off

AESTHETICS SCORING

Aesthetics is an option that enables an aesthetics evaluation model on a rendered image, as part of the rendering process. This is also now available in the Graydient API.

These machine learning models attempt to rate the visual quality / beauty of an image in a numerical way

/render a cool cat <sdxl> /aesthetics

 

It returns an aesthetics (“beauty”) score of 1-10 and an artifacts score of 1-5 (lower is better). To see what is considered good or bad, here is the data set. The score can also be seen in /showprompt.

Deprecated

These old commands are still supported, but have been replaced by newer techniques. These are hard to recommend for production, but still fun to use.

BREW

Brew was replaced by /polly, which uses LLM smarts to create better images. Before Polly, the /brew command loaded a random recipe.

/brew a cool dog

Though it’s fun to use, we deprecated it because it too often creates something very different from the desired prompt. Like a roll of the dice, brew is just for fun.

MEME

We added this one as a joke, but it still works and is quite hilarious. You can add huge Internet IMPACT-font meme text to the top and bottom of your image, one section at a time.

/meme /top:One does not simply

(next turn)

/meme /bottom:Walk into Mordor

INSTRUCT

PIX 2 PIX

Replaced by: Inpaint, Outpaint, and Remix 

This was the hot thing in its time: a technology called Instruct Pix2Pix, the /edit command. Reply to a photo like this:

/edit add fireworks to the sky

Ask “what if” style questions about images of landscapes and nature, in natural language, to see changes. While this technology is cool, Instruct Pix2Pix produces low-resolution results, so it is hard to recommend. It is also locked to its own art style, so it is not compatible with our concepts, LoRAs, embeddings, etc. It is locked at 512×512 resolution. If you’re working on lewds or anime, it’s the wrong tool for the job; use /remix instead. You can also control the effect with a strength parameter:

/edit /strength:0.5 What if the buildings are on fire?

STYLES

Styles were replaced by the more powerful recipes system. Styles are personal prompt shortcuts that are not intended for sharing. You can trade styles with another person using the copy command, but they will not work without knowing those codes. Our users found this confusing, so recipes are global. To explore this feature, type:

/styles