Cheat Sheet

This is not a lesson but a quick-lookup reference for the PirateDiffusion Telegram bot, meant as a supplement to our Telegram tutorials. If you prefer to use a Web UI, we also offer Stable2go web tutorials.

What’s new: LCM Sampler, VAE options, trace, spin, vass, compose, freeu

Account & Activation

NEW USER · USER ID · USERNAME

/start

If you have upgraded your account on Patreon, the email you use here and your Patreon email must match. If you cannot activate and unlock all features, type /debug and send that number to our support channel.
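For example:

/debug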

/email billy@crystal.com

The bot will use your Telegram name as your public community username (shown in places like recipes and featured images). You can change this at any time with the /username command, like this:

/username Tiny Dancer

The Basics: Create and Browse

LIST OF MODELS · SEARCH · FAVORITE · DEFAULT

Models = Concepts

A concept is a person, pose, effect, or art style. Unlike other AIs, PirateDiffusion is never stuck in one style. We add new concepts daily.

/concepts

The purpose of /concepts is to show you the trigger words for models, as shown on our website. To create an image, pair a concept’s trigger word with the render command (explained below) and describe the photo.

To use a concept, use its trigger name. Example: <sdxl>. You can also search right from Telegram:

/concept /search:emma

If you have favorite models, you can keep a short list. To add one, use /concept /fave:concept-name. To view your list:

/concept /fave

You can also set your favorite art style as your default, so you don’t have to type it every time. This command also sets it as the default in your Stable2go Web UI.

/concept /default:sdxl

CREATE IMAGES · STEPS · GUIDANCE (CFG) · SIZES · SAMPLERS · NEGATIVE EMBEDDINGS · CLIPSKIP · WORD WEIGHTS

The /render command creates images.  Try a quick lesson.

Render has parameters, such as aspect ratio: /portrait /tall /landscape /wide

Pair render with a concept’s trigger word (explained above) to get people, effects, poses, and art styles into your image. There are limitless combinations! Try a few common ones:

/render /portrait <sdxl> a gorilla jogging in the park, wearing a headband

You can also prompt in over 50 languages:

/render /translate un perro muy guapo <run-xl> /images:3 /size:768x768

If you’re coming from Auto1111, you’ll recognize these parameters. Seed is a number useful for repeating a result. CFG (guidance) is a value between 0-20 that controls how strict or creative the AI should be: 10 is about average, and 0 is very liberal. Steps affects how fine and detailed an image is: the default is 50, which is high; 25 is average, and too many steps can fry an image. Use /steps:waymore to hit the max, or /steps:wayless to create 9 quick drafts. Samplers are optional, for enthusiasts.

/render /sampler:dpm2m /steps:25 /guidance:7.5 /seed:123456 (((cool))) cats in space, heavens
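A quick-drafts variant using the steps shorthand (model and prompt are illustrative):

/render /steps:wayless a quick sketch of a fox <photon>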

When adding multiple trigger words for concepts, add weights to control them. Don’t mix SD 1.5 and XL models, or you’ll get a mismatch error.

/render <chun-li:0.2><masterpiece:0.5><realcartoon3d-6> a woman kicking a refrigerator #boost

To boost quality effortlessly, use negative embeddings, which have a funky [[<name:-1>]] syntax:

/render Andy Kaufman hated picnics [[<easy-negative:-2>]] <level4>

For experts, adding /parser:new enables per-word weights and clipskip (1-12). Clipskip and weights of 2 or less are recommended.

/render /parser:new /clipskip:2 abstract (blue cat:0.1) [(blue cat:2), (red dog:0.1)]

RECIPES · TEMPLATES

/recipes

Shows a list of prompt templates for beginners. A prompt is intended to have only 1 recipe, or things can get weird. Use a recipe by adding a hashtag and its name:

/render a cool dog #boost

Some popular recipes are #nfix and #eggs and #everythingbad and #sdxlreal

To load a random recipe, use brew:

/brew a cool dog

Because Brew is totally random, it may create something very different from your prompt. Like a roll of the dice, brew is just for fun. If you are not a gambling person, use /render instead for accuracy.

Important: You can create your own recipes, but a recipe needs to be told where the input prompt will go. Literally copy the token $prompt into the positive prompt when creating a recipe, or the recipe will completely ignore your prompt.
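For instance, a recipe’s positive prompt might look like this (a hypothetical recipe body; the only required part is the literal $prompt token):

$prompt, ultra detailed, masterpiece, sharp focus [[blurry, low quality]]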

CANCEL · PING · HISTORY · DELETE · WEB UI · MY IMAGES · RESET PASSWORD · SHARING IMAGES

Abort what you’re rendering

/cancel

To see what the bot is working on, there are two status commands:

/ping

Ping shows the latency between Telegram and the bot, along with everyone’s average render time.

/delete

Reply to an image to wipe it. (This removes it from our database, not from your local device.)

/history

View your personal history within the current channel. It works on a per-channel basis, so it does not expose your private renders in group situations; it only shows the things you did in that channel that everyone already saw. History is also how you pick up files that are too large to send to Telegram, or recover from errors in prompts, via a web browser, as explained below:

/webui

You can start in Telegram and move to WebUI using the refine command

/refine

PirateDiffusion is integrated with Stable2go, a web client that shares the same Stable Diffusion API. Many of the commands are interoperable, so you can write PirateDiffusion shorthand codes in the positive prompt box as well. This command cannot be used in a group; it only works when chatting directly with @piratediffusion_bot.

The /refine link is like a password: it logs you into your account automatically. Protect that link, and bookmark it so that you’re logged in every time.

/gallery

The gallery command creates a web page of your images. You can set a password and pin to limit who can access it.

/community /password:none
/community /password:abc123

Please note that this password is NOT your account password. This is only for sharing images.

“Reply” commands

Literally reply to an image, as if you’re talking to a person. Reply commands are used to manipulate images and inspect information, such as IMG2IMG or looking up the original prompt.

VARIATIONS
/more

Gives you more of the same image prompt. Literally reply to a photo as if you are going to talk to it, and then enter the command. You can also use /more as an upscaler. See the HighRes fix section below to learn how.

IMG2IMG · REPROMPT · STYLE TRANSFER

Remix is the style transfer command. Start in one style or idea and finish in another. Literally reply to a photo as if you are going to talk to it, then enter the command. This example switches whatever art style you started with into the base model called Level 4.

/remix a portrait <level4>

Use guidance and strength to control the effect: guidance refers to the original prompt, and strength refers to the new idea you’re introducing.

/remix Walter Mercado wearing a dress made out of tuna cans in the summer of 1642 <photon>
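A sketch combining both controls (the /strength and /guidance syntax here follows the same pattern as /inpaint and /more; values are illustrative):

/remix /strength:0.6 /guidance:7 a portrait <photon>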

HIGH RES FIX · HIGHDEF · CUSTOM HIGH RES FIX USING MORE · BOOST DETAIL

We recorded a whole hour on this, you should watch it

Doubles the pixels and brings out more detail in a finished AI image. This is a standalone reply command: it is an Img2Img function, not Text2Image, so it cannot be chained after /render.

/highdef

Tip: Clicking the little up arrows after an image is created is the shortcut for HighDef. HighDef is just a convenient shortcut that uses Img2Img to reprompt and boost the resolution; you can write your own version and control the effect with the /more command and some guiding parameters.

/more /size:1400x1400 /images:4 /strength:0.5 /steps:25

When going high resolution manually, lower the image count so the request doesn’t take longer than a minute. We are lending you the full power of the video card and server every time you create a picture, so we limit some batch sizes as a kindness to others.

UPSCALE

Before you use upscalers, make sure your drafts are crisp.

/facelift

Facelift is a phase 2 upscaler; use HighDef first. This is the general upscaling command, which does two things: boosts face details and 4Xes the pixels. It works similarly to beauty mode on your smartphone, meaning it can sometimes sandblast faces, especially illustrations. Luckily it has other modes of operation:

/facelift /photo

This turns off the face retouching, and is good for landscapes or natural-looking portraits

/facelift /anime

Despite the name, it’s not just for anime – use it to boost any illustrations

/facelift /size:2000x2000

Limitations: Facelift will try to 4X your image, up to 4000×4000. Telegram does not typically allow this size, and if your image is already HD, trying to 4X it will likely run out of memory. If the image is too large and doesn’t come back, type /history and grab it from the web UI, as Telegram has file size limits. Alternatively, use the size parameter as shown above to use less RAM when upscaling.

SHOW PROMPT · DESCRIBE (CLIP)

/showprompt

Reply to an image to see how it was made. This will show you the last action taken on the image.

/showprompt /history

Oftentimes an image goes through multiple steps, like upscaling. Adding /history will show you each of those steps.

/describe

The /describe command uses computer vision to take a best guess at what the image is. This can then be used as a prompt. It may sometimes use the curious word “arafed”, meaning a fun, lively, or wild person.

REMOVE BACKGROUND · REPLACE BACKGROUND · 3D SPIN AN OBJECT

/bg

Simply reply to the image to remove the background. This is intended for realistic images.  For drawings, add the anime parameter

/bg /anime

You can also add the PNG parameter to download an uncompressed image. It returns a high res JPG by default.

/bg /format:png

You can also use a hex color value to specify the color of the background

/bg /anime /format:png /color:FF4433 

You can also download the mask separately

/bg /masks

Tip: What can I do with the mask? Prompt only the background! It can’t be done in one step, though. First reply to the mask with /showprompt to get the image code to use for inpainting, or pick it from the recent inpainting masks. Then add /maskinvert when making a render to prompt the background instead of the foreground (see the /maskinvert example in the Inpaint section below).

/spin

Spin works best after the background is removed. The spin command supports /guidance; if you don’t specify it, our system randomly chooses a value between 2 and 10. For greater detail, try /steps:100
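A sketch combining both options (values are illustrative):

/spin /guidance:7 /steps:100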

DOWNLOAD

/download

Directly downloads a higher resolution file, bypassing Telegram’s file size limitations.

Power Toys for Experts

INPAINT · INVERSE INPAINT

Useful for changing a specific area of a photo. This tool has a GUI integrated with Stable2go: inpaint opens a web browser where you literally paint on the picture, creating a masked area that can be prompted. Inpainting works on uploaded non-AI photos (change the hands, sky, haircut, clothes, etc.) as well as AI images.

/inpaint fireflies buzzing around at night

In the GUI, there are special inpaint-specific art styles (inpaint models) available in the pulldown of this tool, so don’t forget to select one. Use strength and guidance to control the effect, where strength refers to your inpaint prompt only.

/inpaint /size:512x768 /strength:1 /guidance:7 fireflies at (night)

Tip: The size is inherited from the original picture, but specifying one is recommended (512×768 works well); otherwise it can default to 512×512, which might squish your image. If a person is far away in the image, their face may change unless the image is higher fidelity.

You can also inverse inpaint, such as first using the /bg command to automatically remove the background from an image, then prompt to change the background.  To achieve this, copy the mask ID from the /bg results.  Then use the /maskinvert property

/inpaint /mask:IAKdybW /maskinvert a majestic view of the night sky with planets and comets zipping by

OUTPAINT

Expands the picture using the same rules as inpaint, but without a GUI, so we must specify a trigger word for the art style to use. The exact same inpaint models are used for outpainting. Learn their names by browsing the models page, or check which inpainting models are available:

/concept /inpainting

/outpaint fireflies at night <sdxl-inpainting>

Outpaint has additional parameters. Use top, right, bottom, and left to control the direction the canvas should expand; if you leave out side, it will expand in all four directions evenly. You can also add background blur bokeh (1-100), a zoom factor (1-12), and contraction of the original area (0-256). Add strength to rein it in.

/outpaint /side:top /blur:10 /zoom:6 /contract:50 /strength:1 the moon is exploding, fireworks <sdxl-inpainting>

Optional parameters

/side:bottom/top/left/right – you can specify a direction to expand your picture or go in all four at once if you don’t add this command.

/blur:1-100 – blurs the edge between the original area and the newly added one.

/zoom:1-12 – affects the scale of the whole picture, by default it’s set to 4.

/contract:0-256 – makes the original area smaller in comparison to the outpainted one. By default, it’s set to 64.

TIP: For better results, change your prompts for each use of outpaint and include only the things you want to see in the newly expanded area. Copying original prompts may not always work as intended.

CONTROLNET · FACESWAP (ROOP) · MODES

First, upload an image with this command and give it a name. It is then stored as a preset. You can browse your list of presets with /control; the list is empty by default.

/control /new:make-up-a-name

If you forgot what your preset does, use the show commands to see them: type /control alone, or to see a specific one:

/control /show:the-name-you-made-up

The supported modes are contours, depth, edges, hands, pose, reference, segment, and skeleton, each of which has child parameters. We can’t possibly cram it all in here, so it’s better to take the tutorial.

The (new) shorthand parameter for control guidance is /cg:0.1-2, which controls how much the render should stick to a given ControlNet mode. A sweet spot is 0.1-0.5. You can also write it out the old long way as /controlguidance:1

FaceSwapping is also available as a reply command. You can use this handy faceswap (roop, insightface) to change out the faces in an uploaded pic. First create the control preset for the face image, then reply to a second picture to swap the face:

/faceswap made-up-name-I-made-above

— AREA 51 —

EXPERIMENTAL / COMING SOON

😅 If you cannot properly use these features yet, you’re doing it right. 

Note: These commands are not 100% finished; they are sneak previews, and they will go offline at random times while still in development. The information below is subject to change without notice. Please follow our announcements channel for news on when these features come out of beta, and please don’t file support requests for them yet; we know they’re not quite ready for primetime.

POLLY GPT BETA

/polly write a prompt about a drunk pirate woman

Polly GPT is a 30-billion parameter large language model trained on Stable Diffusion. This is an early open beta on a limited number of servers, so please don’t expect speeds like ChatGPT. It can make awesome images when it’s not drunk and cursing.

SDXL VASS LATENT CORRECTION

Timothy Alexis Vass is an independent researcher who has been exploring the SDXL latent space and has made some interesting observations. His aim is color correction and improving the content of images. We have adapted his published code to run in PirateDiffusion.

/render a cool cat <sdxl> /vass

Why and when to use it: Try it on SDXL images that are too yellow, off-center, or the color range feels limited. You should see better vibrance and cleaned up backgrounds.

Limitations: This only works in SDXL. 

DISABLE SDXL REFINER
/nofix

When rendering with an SDXL model, it is possible to disable the Refiner process. Adding this to the average person’s workflow is hard to recommend, but for those who prefer it, this is for you. As the name implies, Refiner is a clean-up step that can greatly improve grainy images, with the tradeoff of reduced color and detail. Expert users may prefer the raw unfinished image with brighter colors to then highdef or take into another command.

RENDER TO PNG
/render #boost a cool cat /format:png

You can now create images as higher-resolution PNGs instead of JPGs. Warning: this will use 5-10X the storage space.

SILENT MODE · SETTINGS · LOADOUTS

/silent

Reduces the number of status messages from the bot, like time estimates and render confirmations. Images will show up as soon as they’re ready without indication.

/settings

Displays your loadouts and current render defaults. For example, you can lock your sampler to always use DPM 2++ 2M Karras

/settings /sampler:dpm2m

You can save settings as loadouts and keep an unlimited number of them for different workflows by saving your favorite parameters for each.

save: /settings /save:coolset1
display: /settings /show:coolset1
use: /settings /load:coolset1

VECTOR TRACE

Image to SVG / vector! Reply to a generated or uploaded image to trace it as vectors, specifically SVG. Vector images can be infinitely zoomed because they aren’t rendered with rasterized pixels like regular images, so you get clear, sharp edges. Great for printing, products, etc. Use the /bg command first if you’re creating a logo or sticker.

/trace

All available options are listed below. We haven’t settled on the best optional values yet, so help us arrive at a good default by sharing your findings in VIP chat.

  • speckle – integer – default 4 – range 1 .. 128 – discard patches smaller than X px in size
  • color – default – keep the image in color
  • bw – make the image black and white
  • mode – polygon, spline, or none – default spline – curve fitting mode
  • precision – integer – default 6 – range 1 .. 8 – number of significant bits to use in an RGB channel, i.e., more color fidelity at the cost of more “splotches”
  • gradient – integer – default 16 – range 1 .. 128 – color difference between gradient layers
  • corner – integer – default 60 – range 1 .. 180 – minimum momentary angle (degrees) to be considered a corner
  • length – float – default 4 – range 3.5 .. 10 – perform iterative subdivide smooth until all segments are shorter than this length

Example trace with optional fine-tuned parameters:

/trace /color /gradient:32 /corner:1

MODEL MAKER

We are testing building LoRAs right in your browser, allowing them to instantly appear in your private concepts browser. This feature is cost-intensive and requires more development, as it locks a server for a few minutes to generate the model. We hope to release this to the public before 2024. It will be included for Plus subscribers first. We’re hoping to strike a perfect balance between low cost and high quality.

This feature requires at least 5 clear images. In the case of people LoRAs, clear cropped photos of the face work best, but we also auto-crop to the face during training.

INTERESTS · FEATURES

/interests

This feature is in early beta and is not supported at this time. While your images are rendering, we want to show you the very best prompts from the community, as well as relevant images based on your interests.

/featured

View a list of recently featured images and get their prompts.

/feature

Nominate an image to be featured. The image may be shown at random, with upvote buttons, to other users; if it receives enough likes, it will be featured. It’s OK to nominate NSFW images; we’ll sort out the nudity on/off switch for viewers later.

VAE OVERRIDE

VAE is a special type of model that can be used to change the contrast, quality, and color saturation. If an image looks overly foggy and your guidance is set above 10, the VAE might be the culprit. VAE stands for “variational autoencoder”, a technique that reclassifies images, similar to how a zip file can compress and restore an image. The VAE “rehydrates” the image based on the data it has been exposed to, instead of discrete values. If all of your rendered images appear desaturated, blurry, or have purple spots, changing the VAE is the best solution. (Also please notify us so we can set the correct one by default.) 16-bit VAEs run fastest.

How to use a different VAE at runtime (not fully supported yet)

/render #sdxlreal a hamster singing in the subway /vae:GraydientPlatformAPI__bright-vae-xl

Recommended for SDXL

/vae:GraydientPlatformAPI__bright-vae-xl

Recommended for SD15 photography

/vae:GraydientPlatformAPI__sd-vae-ft-ema

Recommended for SD15 illustration or anime

/vae:GraydientPlatformAPI__vae-klf8anime2

Using a different third-party VAE

Upload or find one on the Huggingface website with this folder directory setup:

https://huggingface.co/madebyollin/sdxl-vae-fp16-fix

Then replace the slashes and remove the front part of the URL, like this:

/render whatever /vae:madebyollin__sdxl-vae-fp16-fix

The vae folder must have the following characteristics:

  • A single VAE per folder, in a top-level folder of a Huggingface profile as shown above
  • The folder must contain a config.json
  • The file must be in .bin format
  • The bin file must be named “diffusion_pytorch_model.bin”

Where to find more:  Huggingface and Civitai may have others, but they must be converted to the format above

Other known working vae:

  • /vae:GraydientPlatformAPI__vae-blessed2 — less saturated than kofi2
  • /vae:GraydientPlatformAPI__vae-anything45 — less saturated than blessed2
  • /vae:GraydientPlatformAPI__vae-orange — medium saturation
  • /vae:GraydientPlatformAPI__vae-pastel — vivid colors like old Dutch masters

FREE U

FreeU (Free Lunch in Diffusion U-Net) is an experimental detailer that expands the guidance range at four separate intervals during the render. There are four possible values, each between 0-2:

b1: backbone factor of the first stage
b2: backbone factor of the second stage
s1: skip factor of the first stage
s2: skip factor of the second stage

/render <av5> a drunk horse in Rome /freeu:1.1,1.2,0.9,0.2

BLEND · IP ADAPTERS

Blend is an experimental image creation method that allows you to combine stored images from your ControlNet presets library. It is based on a technology called IP Adapters, which is a horribly confusing name for most people, so we just call it blend.

First, create some preset image concepts by pasting a photo into your bot and giving it a name, exactly as you’d do for ControlNet. If you already have controls saved, those work, too.

/control /new:chicken

After you have two or more of those, you can blend them together.

/render /blend:chicken:1 /blend:zelda:-0.4

Tip: IP Adapters supports negative images. It is recommended to subtract a noise image to get better pictures.

You can control the noisiness of this image with /blendnoise

Default is 0.25. You can disable with /blendnoise:0.0

You can also set the strength of the IP Adapters effect with /blendguidance – default is 0.7
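Putting these together, a sketch (preset names and values are illustrative):

/render /blend:chicken:0.6 /blend:zelda:0.4 /blendnoise:0.1 /blendguidance:0.8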

Tip: Blend can also be used when inpainting, and with SDXL models.

COMPOSE

Compose allows multi-prompt, multi-regional creation. It works best when specifying a large canvas size and 1 image at a time. The available zones are: background, bottom, bottomcenter, bottomleft, bottomright, center, left, right, top, topcenter, topleft, topright. The format for each zone is x1, y1, x2, y2.

/compose /size:2000x700 /left:The tower of Thai food /center:Jungles of Thailand, tropical rainforest [blurry] /right: Castle made of stone, castle in the jungle, tropical vegetation

Another example

/compose /seed:1 /images:2 /size:384x960 /top:ZeldaKingdom GloomyCumulonimbus /bottom:1983Toyota Have a Cookie /guidance:5 <photon>

Tip: Use a guidance of about 5 for better blending. You can also specify the model.  Guidance applies to the entire image. We are working on adding regional guidance.

/compose <realvis51> /size:1000x2000 /top: Lion roaring, lion on top of building /center: Apartment building, front of building, entrance /bottom: Dirty city streets, New York city streets [[ugly]] [[blurry]] /images:1 /guidance:7

LONG PROMPT WEIGHTS for SDXL

LPW offers unlimited-length prompts and weights, with better weight handling:

/render (((I AM SUDDENLY COLD)), closeup), breezy, fluent <sdxl> /lpw

Limitations: Small-but-positive weights and negative weights inside parentheses aren’t well understood yet. This feature is in beta and may go offline at random times. When it’s working well, it may be turned on at all times and just quietly work without the need for a command.

LCM SAMPLER

LCM in Stable Diffusion stands for Latent Consistency Models. Use it to get images back faster by stacking it with lower steps and guidance. The tradeoff is speed over quality, though it can produce stunning images very quickly in large batches.

/render /sampler:lcm /guidance:1.5 /steps:6 /images:9 /size:1024x1024 <realvis2-xl> /seed:469498 /nofix Oil painting, oil on board, painted picture Retro fantasy art by Brom by Gerald Brom ((high quality, masterpiece, masterwork)) [[low resolution, worst quality, blurry, mediocre, bad art, deformed, disfigured, elongated, disproportionate, anatomically incorrect, unrealistic proportions, melted, abstract, surrealism, sloppy, crooked, skull, skulls]] Closeup Portrait A wizard stands in an alien landscape desert wearing wizards robes and a magic hat

Tips: When using SDXL, add /nofix to disable the refiner; it may help boost quality, especially when doing /more

It works with SD 1.5 and SDXL models. Try it with guidance between 2-4 and steps between 8-12. Please do experiment and share your findings in the VIP prompt engineering discussion group, found in your membership section on Patreon.
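For example, a sketch in that recommended range (model and prompt are illustrative):

/render /sampler:lcm /guidance:3 /steps:10 /images:4 <level4> a cozy cabin in a snowy forest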

It will vary by model, but even /guidance:1.5 /steps:6 /images:9 is returning good SDXL results in under 10 seconds!

DEPRECATED

These old commands are still supported, but have been replaced by newer techniques. Unless you know what you’re doing or are just tinkering, it’s better to try the newer methods.

STYLES
(replaced by recipes)
Styles were replaced by recipes, but you can still use them. Styles are personal prompt shortcuts that are not intended for sharing. You can trade styles with another person using the copy command, but they will not work without knowing those codes. Our users found this confusing, so recipes are global.

/styles

INSTRUCT PIX2PIX

(higher quality is possible with inpaint, outpaint, and remix)

/edit add fireworks to the sky

Ask “what if” style questions about images of landscapes and nature, in natural language, to see changes. While this technology is cool, Instruct Pix2Pix produces low-resolution results, so it is hard to recommend. It is also locked to its own art style, so it is not compatible with our concepts, LoRAs, embeddings, etc., and it is locked at 512×512 resolution. If you’re working on lewds or anime, it’s the wrong tool for the job; use /remix instead. You can also control the effect with a strength parameter:

/edit /strength:0.5 What if the buildings are on fire?