<?xml version="1.0" encoding="utf-8" ?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:tt="http://teletype.in/" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"><title>Aleksander Yusup</title><author><name>Aleksander Yusup</name></author><id>https://teletype.in/atom/wisdomwizard</id><link rel="self" type="application/atom+xml" href="https://teletype.in/atom/wisdomwizard?offset=0"></link><link rel="alternate" type="text/html" href="https://teletype.in/@wisdomwizard?utm_source=teletype&amp;utm_medium=feed_atom&amp;utm_campaign=wisdomwizard"></link><link rel="next" type="application/rss+xml" href="https://teletype.in/atom/wisdomwizard?offset=10"></link><link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></link><updated>2026-04-05T13:52:42.395Z</updated><entry><id>wisdomwizard:beginners_guide-generate_ai_images_how_to</id><link rel="alternate" type="text/html" href="https://teletype.in/@wisdomwizard/beginners_guide-generate_ai_images_how_to?utm_source=teletype&amp;utm_medium=feed_atom&amp;utm_campaign=wisdomwizard"></link><title>Stable Diffusion WebUI AUTOMATIC1111: A Beginner’s Guide</title><published>2023-11-17T02:17:34.191Z</published><updated>2023-11-26T05:39:24.315Z</updated><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://img2.teletype.in/files/d6/c8/d6c842dd-75d7-47f1-bdf6-62c499c10835.png"></media:thumbnail><summary type="html">&lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/cover-Automatic11111-1200.jpg&quot;&gt;Stable Diffusion WebUI (AUTOMATIC1111 or A1111 for short) is the de facto GUI for advanced users. Thanks to the passionate community, most new features come to this free Stable Diffusion GUI first. But it is not the easiest software to use. Documentation is lacking. The extensive list of features it offers can be intimidating.</summary><content type="html">
  &lt;figure id=&quot;8tCX&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/cover-Automatic11111-1200.jpg&quot; width=&quot;1200&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;cHE6&quot;&gt;Stable Diffusion WebUI (AUTOMATIC1111 or A1111 for short) is the &lt;strong&gt;de facto GUI&lt;/strong&gt; for advanced users. Thanks to the passionate community, most new features come to this free Stable Diffusion GUI first. But it is not the easiest software to use. Documentation is lacking. The extensive list of features it offers can be intimidating.&lt;/p&gt;
  &lt;p id=&quot;WJQA&quot;&gt;This guide will teach you &lt;strong&gt;how to use the AUTOMATIC1111&lt;/strong&gt; GUI. You can use it as a tutorial: there are plenty of examples you can follow step by step.&lt;/p&gt;
  &lt;p id=&quot;uD5C&quot;&gt;You can also use this guide as a&lt;strong&gt; reference manual&lt;/strong&gt;. Skip through it and see what is there. Come back to it when you actually need to use a feature.&lt;/p&gt;
  &lt;p id=&quot;7uui&quot;&gt;You will see many examples to demonstrate the effect of a setting because I believe this is the only way to make it clear.&lt;/p&gt;
  &lt;h2 id=&quot;CaUS&quot;&gt;1) Get access to Web UI&lt;/h2&gt;
  &lt;section style=&quot;background-color:hsl(55, 86%, var(--autocolor-background-lightness, 95%));&quot;&gt;
    &lt;p id=&quot;r6u0&quot;&gt;Create an account on any platform that supports AI image generation with Stable Diffusion. I prefer Stadio because it supports many models and generates very quickly.&lt;br /&gt;&lt;br /&gt;You can start generating for free &lt;a href=&quot;https://stadio.ai/models?via=free-1h-gen&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;Once you have access to the panel, you can start generating images.&lt;/p&gt;
  &lt;/section&gt;
  &lt;h2 id=&quot;ltR1&quot;&gt;2) Text-to-image tab&lt;/h2&gt;
  &lt;p id=&quot;FYGf&quot;&gt;You will see the &lt;strong&gt;txt2img&lt;/strong&gt; tab when you first start the GUI. This tab does the most basic function of Stable Diffusion: &lt;strong&gt;turning a text prompt into images.&lt;/strong&gt;&lt;/p&gt;
  &lt;figure id=&quot;ZKnv&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-41-1024x531.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;h3 id=&quot;6QU6&quot;&gt;Basic usage&lt;/h3&gt;
  &lt;p id=&quot;dKgD&quot;&gt;These are the settings you may want to change if this is your first time using AUTOMATIC1111.&lt;/p&gt;
  &lt;figure id=&quot;kiKl&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-44-969x1024.png&quot; width=&quot;969&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;lw3D&quot;&gt;&lt;strong&gt;Stable Diffusion Checkpoint&lt;/strong&gt;: Select the &lt;a href=&quot;https://stable-diffusion-art.com/models&quot; target=&quot;_blank&quot;&gt;model&lt;/a&gt; you want to use. First-time users can use the &lt;a href=&quot;https://stable-diffusion-art.com/models/#Stable_diffusion_v15&quot; target=&quot;_blank&quot;&gt;v1.5 base model&lt;/a&gt;.&lt;/p&gt;
  &lt;p id=&quot;NPok&quot;&gt;&lt;strong&gt;Prompt&lt;/strong&gt;: Describe what you want to see in the images. Below is an example. See the complete guide on &lt;a href=&quot;https://stable-diffusion-art.com/prompt-guide/&quot; target=&quot;_blank&quot;&gt;prompt building&lt;/a&gt; for a tutorial.&lt;/p&gt;
  &lt;blockquote id=&quot;RzJT&quot;&gt;A surrealist painting of a cat by Salvador Dali&lt;/blockquote&gt;
  &lt;p id=&quot;8DVb&quot;&gt;&lt;strong&gt;Width and height&lt;/strong&gt;: The size of the output image. You should set at least one side to 512 pixels when using a v1 model. For example, set the width to 512 and the height to 768 for a portrait image with a 2:3 aspect ratio.&lt;/p&gt;
  &lt;p id=&quot;laSc&quot;&gt;&lt;strong&gt;Batch size&lt;/strong&gt;: Number of images to be generated each time. You want to generate at least a few when testing a prompt because each one will differ.&lt;/p&gt;
  &lt;p id=&quot;6Ft6&quot;&gt;Finally, hit the &lt;strong&gt;Generate&lt;/strong&gt; button. After a short wait, you will get your images!&lt;/p&gt;
  &lt;figure id=&quot;5EMR&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-45-1024x1003.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;dZpV&quot;&gt;By default, you will also get an extra image: a grid of thumbnails of all the images in the batch.&lt;/p&gt;
  &lt;p id=&quot;0n5J&quot;&gt;You can &lt;strong&gt;save an image&lt;/strong&gt; to your local storage. First, select the image using the thumbnails below the main image canvas. Right-click the image to bring up the context menu. You should have options to save the image or copy the image to the clipboard.&lt;/p&gt;
  &lt;p id=&quot;5Exm&quot;&gt;That’s all you need to know for the basics! The rest of this section explains each function in more detail.&lt;/p&gt;
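  &lt;p id=&quot;cx01&quot;&gt;If you launch the WebUI with the &lt;code&gt;--api&lt;/code&gt; flag, the same basic settings can also be driven programmatically through its local HTTP API. Below is a minimal Python sketch that builds a request body for the &lt;code&gt;/sdapi/v1/txt2img&lt;/code&gt; endpoint; the field names follow the AUTOMATIC1111 API, but treat the defaults and the URL as assumptions to adjust for your setup.&lt;/p&gt;

```python
import json
import urllib.request

def build_txt2img_payload(prompt, negative_prompt="", width=512, height=768,
                          steps=25, cfg_scale=7, batch_size=4, seed=-1,
                          sampler_name="DPM++ 2M Karras"):
    """Assemble a JSON body for AUTOMATIC1111's /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "width": width,            # at least one side should be 512 for v1 models
        "height": height,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "batch_size": batch_size,  # generate a few images per prompt while testing
        "seed": seed,              # -1 means a fresh random seed each run
        "sampler_name": sampler_name,
    }

def generate(payload, base_url="http://127.0.0.1:7860"):
    """POST to a locally running WebUI started with --api (URL is an assumption)."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the "images" key holds base64-encoded PNGs
```

  &lt;p id=&quot;cx02&quot;&gt;For example, &lt;code&gt;generate(build_txt2img_payload(&quot;A surrealist painting of a cat by Salvador Dali&quot;))&lt;/code&gt; mirrors the GUI workflow described above.&lt;/p&gt;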
  &lt;h3 id=&quot;Re3I&quot;&gt;Image generation parameters&lt;/h3&gt;
  &lt;figure id=&quot;MHk8&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-46-967x1024.png&quot; width=&quot;967&quot; /&gt;
    &lt;figcaption&gt;Txt2img tab.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;Ecam&quot;&gt;&lt;strong&gt;Stable Diffusion checkpoint&lt;/strong&gt; is a dropdown menu for selecting &lt;a href=&quot;https://stable-diffusion-art.com/lora/&quot; target=&quot;_blank&quot;&gt;models&lt;/a&gt;. You need to put model files in the folder &lt;code&gt;stable-diffusion-webui&lt;/code&gt; &amp;gt; &lt;code&gt;models&lt;/code&gt; &amp;gt; &lt;code&gt;Stable-diffusion&lt;/code&gt;. See more about &lt;a href=&quot;https://stable-diffusion-art.com/models/#How_to_install_and_use_a_model&quot; target=&quot;_blank&quot;&gt;installing models&lt;/a&gt;.&lt;/p&gt;
  &lt;p id=&quot;HEjx&quot;&gt;The &lt;strong&gt;refresh&lt;/strong&gt; button next to the dropdown menu is for refreshing the list of models. It is used when you have just put a new model in the model folder and wish to update the list.&lt;/p&gt;
  &lt;p id=&quot;rRu6&quot;&gt;&lt;strong&gt;Prompt&lt;/strong&gt; text box: Put what you want to see in the images. Be detailed and specific. Use some tried-and-true keywords. You can find a short list &lt;a href=&quot;https://stable-diffusion-art.com/how-to-come-up-with-good-prompts-for-ai-image-generation/#Some_good_keywords_for_you&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; or a more extensive list in the &lt;a href=&quot;https://andrewongai.gumroad.com/l/stable_diffusion_prompt_generator&quot; target=&quot;_blank&quot;&gt;prompt generator&lt;/a&gt;.&lt;/p&gt;
  &lt;p id=&quot;1hNY&quot;&gt;&lt;strong&gt;Negative Prompt&lt;/strong&gt; text box: Put what you &lt;strong&gt;don’t&lt;/strong&gt; want to see. You should use a negative prompt when using v2 models. You can use a universal negative prompt. See &lt;a href=&quot;https://stable-diffusion-art.com/how-to-use-negative-prompts/&quot; target=&quot;_blank&quot;&gt;this article&lt;/a&gt; for details.&lt;/p&gt;
  &lt;p id=&quot;1RB5&quot;&gt;&lt;strong&gt;Sampling method&lt;/strong&gt;: The algorithm for the denoising process. I use &lt;em&gt;DPM++ 2M Karras&lt;/em&gt; because it balances speed and quality well. See &lt;a href=&quot;https://stable-diffusion-art.com/know-these-important-parameters-for-stunning-ai-images/#Sampling_methods&quot; target=&quot;_blank&quot;&gt;this section&lt;/a&gt; for more details. You may want to &lt;strong&gt;avoid any ancestral samplers&lt;/strong&gt; (the ones with an &lt;em&gt;a&lt;/em&gt; in the name) because their images are unstable even at large sampling steps. This makes tweaking the image difficult.&lt;/p&gt;
  &lt;p id=&quot;gyJq&quot;&gt;&lt;strong&gt;Sampling steps&lt;/strong&gt;: Number of sampling steps for the denoising process. The more the better, but it also takes longer. 25 steps work for most cases.&lt;/p&gt;
  &lt;p id=&quot;lRBW&quot;&gt;&lt;strong&gt;Width and height&lt;/strong&gt;: The size of the output image. You should set at least one side to 512 pixels for v1 models. For example, set the width to 512 and the height to 768 for a portrait image with a 2:3 aspect ratio. Set at least one side to 768 when using the v2-768px model.&lt;/p&gt;
  &lt;p id=&quot;fTJc&quot;&gt;&lt;strong&gt;Batch count&lt;/strong&gt;: Number of times you run the image generation pipeline.&lt;/p&gt;
  &lt;p id=&quot;WrhO&quot;&gt;&lt;strong&gt;Batch size&lt;/strong&gt;: Number of images to generate each time you run the pipeline.&lt;/p&gt;
  &lt;p id=&quot;qBbi&quot;&gt;The total number of images generated equals the batch count times the batch size. You would usually increase the batch size because generating images in parallel is faster. Increase the batch count instead only if you run into memory issues.&lt;/p&gt;
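  &lt;p id=&quot;cx03&quot;&gt;As a quick sanity check, the count above can be expressed as a one-liner:&lt;/p&gt;

```python
def total_images(batch_count, batch_size):
    """Images produced per click of Generate: the pipeline runs batch_count
    times, and each run produces batch_size images in parallel."""
    return batch_count * batch_size

# e.g. batch count 2 with batch size 4 yields 8 images per click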
  &lt;p id=&quot;GIlp&quot;&gt;&lt;strong&gt;CFG scale&lt;/strong&gt;: &lt;strong&gt;Classifier Free Guidance scale&lt;/strong&gt; is a parameter to control how much the model should respect your prompt.&lt;/p&gt;
  &lt;p id=&quot;oePI&quot;&gt;1 – Mostly ignore your prompt.&lt;br /&gt;3 – Be more creative.&lt;br /&gt;7 – A good balance between following the prompt and freedom.&lt;br /&gt;15 – Adhere more to the prompt.&lt;br /&gt;30 – Strictly follow the prompt.&lt;/p&gt;
  &lt;p id=&quot;ujA0&quot;&gt;The images below show the effect of changing CFG with fixed seed values. You don’t want to set CFG values too high or too low. Stable Diffusion will ignore your prompt if the CFG value is too low. The color of the images will be saturated when it is too high.&lt;/p&gt;
  &lt;figure id=&quot;hEfu&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2022/11/cfg.png&quot; width=&quot;900&quot; /&gt;
  &lt;/figure&gt;
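  &lt;p id=&quot;cx04&quot;&gt;A comparison grid like the one above is easy to script: keep the prompt and seed fixed and vary only the CFG scale. A sketch of the request bodies (the field names follow the WebUI API; sending them is left out here):&lt;/p&gt;

```python
def cfg_sweep(prompt, seed, cfg_values=(1, 3, 7, 15, 30)):
    """One txt2img request body per CFG value; the fixed seed means only
    prompt adherence changes between the resulting images."""
    return [
        {"prompt": prompt, "seed": seed, "cfg_scale": cfg, "steps": 25}
        for cfg in cfg_values
    ]
```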
  &lt;h3 id=&quot;9AA3&quot;&gt;Seed&lt;/h3&gt;
  &lt;p id=&quot;k0b8&quot;&gt;&lt;strong&gt;Seed&lt;/strong&gt;: The seed value used to generate the initial random tensor in the latent space. Practically, it controls the content of the image. Each image generated has its own seed value. AUTOMATIC1111 will use a random seed value if it is set to -1.&lt;/p&gt;
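  &lt;p id=&quot;cx05&quot;&gt;The sketch below illustrates why a fixed seed fixes the content: the seed deterministically generates the starting Gaussian noise that the sampler then denoises. It is a toy stand-in using the standard library rather than the actual tensor code (SD v1 uses a 4x64x64 latent for a 512x512 image).&lt;/p&gt;

```python
import random

def initial_latent_noise(seed, shape=(4, 64, 64)):
    """Toy illustration: the seed fully determines the initial latent noise.
    SD v1 uses a 4x64x64 latent tensor for a 512x512 image."""
    rng = random.Random(seed)
    n = shape[0] * shape[1] * shape[2]
    return [rng.gauss(0.0, 1.0) for _ in range(n)]
```

  &lt;p id=&quot;cx06&quot;&gt;The same seed always yields the same starting noise, hence a similar image; seed -1 in the GUI simply draws a fresh seed for every run.&lt;/p&gt;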
  &lt;p id=&quot;ClFe&quot;&gt;A common reason to fix the seed is to fix the content of an image and tweak the prompt. Let’s say I generated an image using the following prompt.&lt;/p&gt;
  &lt;blockquote id=&quot;ZSws&quot;&gt;photo of woman, dress, city night background&lt;/blockquote&gt;
  &lt;figure id=&quot;yjL0&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-47.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;oooR&quot;&gt;I like this image and want to tweak the prompt to add bracelets to her wrists. You will set the seed to the value of this image. The seed value is in the log message below the image canvas.&lt;/p&gt;
  &lt;figure id=&quot;nRSM&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-48-1024x633.png&quot; width=&quot;1024&quot; /&gt;
    &lt;figcaption&gt;An image’s seed value (highlighted) is in the log message.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;z5P0&quot;&gt;Copy this value to the seed value input box. Or use the recycle button to copy the seed value.&lt;/p&gt;
  &lt;figure id=&quot;jthl&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-49-1024x103.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;F8zr&quot;&gt;Now add the term “bracelet” to the prompt:&lt;/p&gt;
  &lt;blockquote id=&quot;TbSx&quot;&gt;photo of woman, dress, city night background, bracelet&lt;/blockquote&gt;
  &lt;p id=&quot;P0z1&quot;&gt;You get a similar picture with bracelets on her wrists.&lt;/p&gt;
  &lt;figure id=&quot;2r3t&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/08407-912695504-photo-of-woman-dress-city-night-background-bracelet.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;CxBT&quot;&gt;The scene could completely change because some keywords are strong enough to alter the composition. You may experiment with &lt;a href=&quot;https://stable-diffusion-art.com/prompt-guide/#Poor_man8217s_prompt-to-prompt&quot; target=&quot;_blank&quot;&gt;swapping in a keyword at a later sampling step&lt;/a&gt;.&lt;/p&gt;
  &lt;p id=&quot;fNl4&quot;&gt;Use the &lt;strong&gt;dice icon&lt;/strong&gt; to set the seed back to -1 (random).&lt;/p&gt;
  &lt;figure id=&quot;N4iv&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-51-1024x114.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;h3 id=&quot;axGs&quot;&gt;Extra seed options&lt;/h3&gt;
  &lt;p id=&quot;Jeuk&quot;&gt;Checking the &lt;strong&gt;Extra&lt;/strong&gt; option will reveal the Extra Seed menu.&lt;/p&gt;
  &lt;figure id=&quot;0KMW&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-53-1024x286.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;0R3y&quot;&gt;&lt;strong&gt;Variation seed&lt;/strong&gt;: An additional seed value you want to use.&lt;/p&gt;
  &lt;p id=&quot;2cHP&quot;&gt;&lt;strong&gt;Variation strength:&lt;/strong&gt; Degree of interpolation between the &lt;strong&gt;seed&lt;/strong&gt; and the &lt;strong&gt;variation seed&lt;/strong&gt;. Setting it to 0 uses the &lt;strong&gt;seed&lt;/strong&gt; value. Setting it to 1 uses the &lt;strong&gt;variation seed &lt;/strong&gt;value.&lt;/p&gt;
  &lt;p id=&quot;oXNE&quot;&gt;Here’s an example. Let’s say you have generated 2 images from the same prompt and settings. They have their own seed values, 1 and 3.&lt;/p&gt;
  &lt;figure id=&quot;Pjwg&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/seed-1-08481-1-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;First image: Seed value is 1.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;WA2z&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/seed-3-08482-3-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Second image: Seed value is 3.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;EJcQ&quot;&gt;You want to generate a blend of these two images. You would set the seed to 1, the variation seed to 3, and adjust the variation strength between 0 and 1. In the experiment below, variation strength allows you to produce a transition of image content between the two seeds. The girl’s pose and background change gradually when the variation strength increases from 0 to 1.&lt;/p&gt;
  &lt;figure id=&quot;Nhg1&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/xyz_grid-0003-1-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3-1024x452.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
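  &lt;p id=&quot;cx07&quot;&gt;Under the hood, the blend interpolates between the noise tensors of the two seeds. A common way to do this is spherical interpolation (slerp), which keeps the blended noise at a Gaussian-like magnitude. The sketch below works on plain Python lists; treat it as an illustration of the idea, not the WebUI's exact code.&lt;/p&gt;

```python
import math
import random

def noise(seed, n=8):
    """Toy seed-derived noise vector."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def slerp(t, a, b):
    """Spherical interpolation: t=0 returns a (the seed's noise),
    t=1 returns b (the variation seed's noise)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    omega = math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))
    so = math.sin(omega)
    if so == 0.0:  # vectors parallel: fall back to a plain linear blend
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    return [
        (math.sin((1 - t) * omega) / so) * x + (math.sin(t * omega) / so) * y
        for x, y in zip(a, b)
    ]

# Variation strength 0.5 blends the two seeds' noise half-way
blend = slerp(0.5, noise(1), noise(3))
```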
  &lt;p id=&quot;lspf&quot;&gt;&lt;strong&gt;Resize seed from width/height: &lt;/strong&gt;Images will change dramatically if you change the image size, even with the same seed. This setting tries to preserve the content of the image when resizing. Put the new size in the &lt;strong&gt;width&lt;/strong&gt; and &lt;strong&gt;height&lt;/strong&gt; sliders, and the width and height of the original image here. Put the original seed value in the seed input box, and set the variation strength to 0 to ignore the variation seed.&lt;/p&gt;
  &lt;p id=&quot;cJqv&quot;&gt;Let’s say you like this image, which is 512×800 with a seed value of 3.&lt;/p&gt;
  &lt;figure id=&quot;tXxS&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/seed-3-08482-3-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;512×800&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;y0sp&quot;&gt;The composition will change drastically when you change the image size, even when keeping the same seed value.&lt;/p&gt;
  &lt;figure id=&quot;3364&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/size-512x600-08495-3-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;512×600&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3363&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/size-512x744-08499-3-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;512×744&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;daem&quot;&gt;Setting a different size changes the image dramatically.&lt;/p&gt;
  &lt;p id=&quot;EbTf&quot;&gt;You will get something much closer to the original one with the new size when you turn on the &lt;strong&gt;resize seed from height and width settings&lt;/strong&gt;. They are not perfectly identical, but they are close.&lt;/p&gt;
  &lt;figure id=&quot;3365&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/size-512x600-resize-seed-08497-3-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;512×600&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3366&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/size-512x744-resize-512x800.-08499-3-photo-of-woman-hoodies-jeans-in-a-spaceship-with-windows-overlooking-a-planetulzzang-6500-v1.1_0.3.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;512×744&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;vnAA&quot;&gt;Images are much closer to the original one with the resize seed option.&lt;/p&gt;
  &lt;h3 id=&quot;WnCU&quot;&gt;Restore faces&lt;/h3&gt;
  &lt;p id=&quot;UGkW&quot;&gt;&lt;strong&gt;Restore faces&lt;/strong&gt; applies an additional model trained for restoring defects on faces. Below are before and after examples.&lt;/p&gt;
  &lt;figure id=&quot;3369&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-55.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Original&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3368&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-54.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Face Restore&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;2FBf&quot;&gt;You must specify which face restoration model to use before using &lt;strong&gt;Restore Faces&lt;/strong&gt;. First, visit the &lt;strong&gt;Settings&lt;/strong&gt; tab. Navigate to the &lt;strong&gt;Face restoration&lt;/strong&gt; section. Select a face restoration model. &lt;strong&gt;CodeFormer&lt;/strong&gt; is a good choice. Set CodeFormer weight to 0 for maximal effect. Remember to click the &lt;strong&gt;Apply settings&lt;/strong&gt; button to save the settings!&lt;/p&gt;
  &lt;figure id=&quot;H3aL&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-56-1024x524.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;UEai&quot;&gt;Go back to the &lt;strong&gt;txt2img&lt;/strong&gt; tab. Check &lt;strong&gt;Restore Faces&lt;/strong&gt;. The face restoration model will be applied to every image you generate.&lt;/p&gt;
  &lt;p id=&quot;Z6ZK&quot;&gt;You may want to turn off face restoration if you find that the application affects the style on the faces. Alternatively, you can increase the CodeFormer weight parameter to reduce the effect.&lt;/p&gt;
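  &lt;p id=&quot;cx08&quot;&gt;If you drive the WebUI through its API, face restoration can be requested per call instead of globally. A sketch, assuming the &lt;code&gt;restore_faces&lt;/code&gt; field and the setting names shown (they mirror the options on the Settings tab, but verify them against your WebUI version):&lt;/p&gt;

```python
def with_face_restore(payload, weight=0.0):
    """Copy a txt2img request body and enable face restoration.
    CodeFormer weight 0 gives the strongest restoration effect."""
    out = dict(payload)
    out["restore_faces"] = True
    out["override_settings"] = {
        "face_restoration_model": "CodeFormer",
        "code_former_weight": weight,
    }
    return out
```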
  &lt;h3 id=&quot;4elW&quot;&gt;Tiling&lt;/h3&gt;
  &lt;p id=&quot;VMiA&quot;&gt;You can use Stable Diffusion WebUI to create a repeating pattern like a wallpaper.&lt;/p&gt;
  &lt;p id=&quot;jZs2&quot;&gt;Note: The Tiling checkbox is now on the &lt;strong&gt;Settings&lt;/strong&gt; page.&lt;/p&gt;
  &lt;p id=&quot;brxM&quot;&gt;Use the &lt;strong&gt;Tiling&lt;/strong&gt; option to produce a periodic image that can be tiled. Below is an example.&lt;/p&gt;
  &lt;blockquote id=&quot;6f9R&quot;&gt;flowers pattern&lt;/blockquote&gt;
  &lt;figure id=&quot;4Mq3&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-57.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;mFfz&quot;&gt;This image can be tiled like wallpaper.&lt;/p&gt;
  &lt;figure id=&quot;ayZg&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-58-1024x1024.png&quot; width=&quot;1024&quot; /&gt;
    &lt;figcaption&gt;2×2 tiled.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;SPM0&quot;&gt;The real power of Stable Diffusion here is that it lets you turn almost any image into a tile, not just traditional patterns. All you need is a text prompt.&lt;/p&gt;
  &lt;figure id=&quot;Na12&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-59-1024x1024.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;h3 id=&quot;xtqM&quot;&gt;Hires. fix.&lt;/h3&gt;
  &lt;p id=&quot;wiII&quot;&gt;The &lt;strong&gt;high-resolution fix&lt;/strong&gt; option applies an &lt;a href=&quot;https://stable-diffusion-art.com/ai-upscaler/&quot; target=&quot;_blank&quot;&gt;upscaler&lt;/a&gt; to enlarge your image. You need this because the native resolution of Stable Diffusion is 512 pixels (or 768 pixels for certain v2 models). The image is too small for many uses.&lt;/p&gt;
  &lt;p id=&quot;guZM&quot;&gt;Why can’t you just set the width and height to higher, like 1024 pixels? Deviating from the native resolution would affect compositions and create problems like generating images with &lt;a href=&quot;https://stable-diffusion-art.com/common-problems-in-ai-images-and-how-to-fix-them/&quot; target=&quot;_blank&quot;&gt;two heads&lt;/a&gt;.&lt;/p&gt;
  &lt;p id=&quot;LnLM&quot;&gt;So you first generate a small image with 512 pixels on at least one side, then scale it up to a bigger one.&lt;/p&gt;
  &lt;figure id=&quot;MNAP&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/11/image-81-1024x262.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;6Dqe&quot;&gt;Click &lt;strong&gt;Hires. fix&lt;/strong&gt; to enable the high-resolution fix.&lt;/p&gt;
  &lt;p id=&quot;gJnS&quot;&gt;&lt;strong&gt;Upscaler&lt;/strong&gt;: Choose an upscaler to use. See &lt;a href=&quot;https://stable-diffusion-art.com/ai-upscaler/&quot; target=&quot;_blank&quot;&gt;this article&lt;/a&gt; for a primer.&lt;/p&gt;
  &lt;p id=&quot;Frnn&quot;&gt;The various &lt;em&gt;Latent&lt;/em&gt; upscaler options scale the image in the &lt;a href=&quot;https://stable-diffusion-art.com/how-stable-diffusion-work/#Latent_diffusion_model&quot; target=&quot;_blank&quot;&gt;latent space&lt;/a&gt;. It is done after the sampling steps of the text-to-image generation. The process is similar to &lt;a href=&quot;https://stable-diffusion-art.com/how-to-use-img2img-to-turn-an-amateur-drawing-to-professional-with-stable-diffusion-image-to-image/&quot; target=&quot;_blank&quot;&gt;image-to-image&lt;/a&gt;.&lt;/p&gt;
  &lt;p id=&quot;Dvj8&quot;&gt;Other options are a mix of traditional and AI upscalers. See the AI upscaler article for details.&lt;/p&gt;
  &lt;p id=&quot;C1wY&quot;&gt;&lt;strong&gt;Hires steps&lt;/strong&gt;: Only applicable to latent upscalers. It is the number of sampling steps after upscaling the latent image.&lt;/p&gt;
  &lt;p id=&quot;FJMQ&quot;&gt;&lt;strong&gt;Denoising strength&lt;/strong&gt;: Only applicable to latent upscalers. This parameter has the same meaning as in image-to-image. It controls the noise added to the latent image before performing the Hires sampling steps.&lt;/p&gt;
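  &lt;p id=&quot;cx09&quot;&gt;Through the API, the same hires-fix controls map onto a handful of request fields. A sketch (the field names follow the AUTOMATIC1111 API; the defaults here are illustrative, not recommendations):&lt;/p&gt;

```python
def with_hires_fix(payload, scale=2, upscaler="Latent",
                   hires_steps=0, denoising_strength=0.7):
    """Copy a txt2img request body and enable the high-resolution fix.
    With latent upscalers, keep denoising_strength above 0.5 to avoid blur,
    but not so high that the upscaled image drifts from the original."""
    out = dict(payload)
    out.update({
        "enable_hr": True,
        "hr_scale": scale,
        "hr_upscaler": upscaler,
        "hr_second_pass_steps": hires_steps,  # 0 reuses the sampling step count
        "denoising_strength": denoising_strength,
    })
    return out
```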
  &lt;p id=&quot;EUg6&quot;&gt;Now, let’s look at the effect of upscaling the image below to 2x, using &lt;em&gt;latent&lt;/em&gt; as the upscaler.&lt;/p&gt;
  &lt;figure id=&quot;hWM0&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/08864-3152202324-photo-of-woman-wearing-fantastic-hand-dyed-cotton-clothes-embellished-beaded-feather-decorative-fringe-knots-colorful-pigtail.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Original image&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3379&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/ds-0.4-08867-3152202324-photo-of-woman-wearing-fantastic-hand-dyed-cotton-clothes-embellished-beaded-feather-decorative-fringe-knots-colorful-pigtail-683x1024.png&quot; width=&quot;683&quot; /&gt;
    &lt;figcaption&gt;0.4&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3380&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/ds0.65-08863-3152202324-photo-of-woman-wearing-fantastic-hand-dyed-cotton-clothes-embellished-beaded-feather-decorative-fringe-knots-colorful-pigtail-683x1024.png&quot; width=&quot;683&quot; /&gt;
    &lt;figcaption&gt;0.65&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3381&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/ds-0.9-08865-3152202324-photo-of-woman-wearing-fantastic-hand-dyed-cotton-clothes-embellished-beaded-feather-decorative-fringe-knots-colorful-pigtail-683x1024.png&quot; width=&quot;683&quot; /&gt;
    &lt;figcaption&gt;0.9&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;x21i&quot;&gt;The denoising strength of the latent upscaler must be higher than 0.5. Otherwise, you will get blurry images.&lt;/p&gt;
  &lt;p id=&quot;ZUX6&quot;&gt;Setting it too high, on the other hand, will change the image substantially.&lt;/p&gt;
  &lt;p id=&quot;ZNj6&quot;&gt;The benefit of using a latent upscaler is the lack of upscaling artifacts that other upscalers, such as ESRGAN, may introduce. Because the image is produced by Stable Diffusion’s own decoder, the style stays consistent. The drawback is that it changes the image to some extent, depending on the denoising strength.&lt;/p&gt;
  &lt;p id=&quot;c8QS&quot;&gt;The &lt;strong&gt;upscale factor&lt;/strong&gt; controls how many times larger the image will be. For example, setting it to 2 scales a 512-by-768 pixel image to 1024-by-1536 pixels.&lt;/p&gt;
  &lt;p id=&quot;JkvH&quot;&gt;Alternatively, you can specify the values of &lt;strong&gt;“resize width to”&lt;/strong&gt; and &lt;strong&gt;“resize height to”&lt;/strong&gt; to set the new image size.&lt;/p&gt;
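  &lt;p id=&quot;cx10&quot;&gt;The arithmetic is simple either way; a one-liner for the factor form:&lt;/p&gt;

```python
def upscaled_size(width, height, factor):
    """Output size from the upscale factor: 2x turns 512x768 into 1024x1536."""
    return int(width * factor), int(height * factor)
```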
  &lt;p id=&quot;mVqv&quot;&gt;You can avoid the trouble of tuning the denoising strength by using an AI upscaler such as ESRGAN. In general, separating txt2img and upscaling into two steps gives you more flexibility. I don’t use the high-resolution fix option myself; I do the upscaling on the Extras page instead.&lt;/p&gt;
  &lt;h3 id=&quot;8z55&quot;&gt;Buttons under the Generate button&lt;/h3&gt;
  &lt;figure id=&quot;RI71&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-61.png&quot; width=&quot;678&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;oNth&quot;&gt;From left to right:&lt;/p&gt;
  &lt;ol id=&quot;kS1u&quot;&gt;
    &lt;li id=&quot;LKK1&quot;&gt;&lt;strong&gt;Read the last parameters&lt;/strong&gt;: It will populate all fields so that you will generate the same images when pressing the Generate button. Note that the seed and the model override will be set. If this is not what you want, set the seed to -1 and remove the override.&lt;/li&gt;
  &lt;/ol&gt;
  &lt;figure id=&quot;Ap9w&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-62-1024x439.png&quot; width=&quot;1024&quot; /&gt;
    &lt;figcaption&gt;Seed value and Model override are highlighted.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;qWQn&quot;&gt;2. &lt;strong&gt;Trash icon&lt;/strong&gt;: Delete the current prompt and the negative prompt.&lt;/p&gt;
  &lt;p id=&quot;AO47&quot;&gt;3. &lt;strong&gt;Model icon&lt;/strong&gt;: Show extra networks. This button is for inserting hypernetworks, &lt;a href=&quot;https://stable-diffusion-art.com/embedding/&quot; target=&quot;_blank&quot;&gt;embeddings&lt;/a&gt;, and &lt;a href=&quot;https://stable-diffusion-art.com/lora/&quot; target=&quot;_blank&quot;&gt;LoRA&lt;/a&gt; phrases into the prompt.&lt;/p&gt;
  &lt;p id=&quot;Insj&quot;&gt;You can use the following two buttons to load and save a prompt and a negative prompt. The set is called a style. It can be a short phrase like an artist’s name, or it can be a full prompt.&lt;/p&gt;
  &lt;p id=&quot;VSt8&quot;&gt;4. &lt;strong&gt;Load style&lt;/strong&gt;: You can select multiple styles from the style dropdown menu below. Use this button to insert them into the prompt and the negative prompt.&lt;/p&gt;
  &lt;p id=&quot;KOkg&quot;&gt;5. &lt;strong&gt;Save style&lt;/strong&gt;: Save the prompt and the negative prompt. You will need to name the style.&lt;/p&gt;
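A saved style is just a named prompt/negative-prompt pair. AUTOMATIC1111 stores styles in a CSV file and, when loading one, substitutes your prompt into the style wherever the literal token `{prompt}` appears; otherwise it appends the style text. A minimal sketch of that merging logic (the function name `apply_style` is mine, not A1111's):

```python
def apply_style(user_prompt: str, style_prompt: str) -> str:
    """Merge a user prompt with a saved style prompt.

    If the style contains the literal token "{prompt}", the user prompt
    is substituted at that position; otherwise the style is appended.
    """
    if "{prompt}" in style_prompt:
        return style_prompt.replace("{prompt}", user_prompt)
    if not style_prompt:
        return user_prompt
    return f"{user_prompt}, {style_prompt}"


print(apply_style("a dragon", "oil painting of {prompt}, masterpiece"))
# -> oil painting of a dragon, masterpiece
print(apply_style("a dragon", "watercolor, soft lighting"))
# -> a dragon, watercolor, soft lighting
```

This is why a style can be either a short phrase (appended) or a full prompt template (substituted).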
  &lt;h3 id=&quot;EQA3&quot;&gt;Image file actions&lt;/h3&gt;
  &lt;figure id=&quot;KpfF&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-63-1024x923.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;Qeda&quot;&gt;You will find a row of buttons for performing various functions on the images generated. From left to right…&lt;/p&gt;
  &lt;p id=&quot;070V&quot;&gt;&lt;strong&gt;Open folder&lt;/strong&gt;: Open the image output folder. It may not work on all systems.&lt;/p&gt;
  &lt;p id=&quot;ROON&quot;&gt;&lt;strong&gt;Save&lt;/strong&gt;: Save an image. After clicking, it will show a download link below the buttons. It will save all images if you select the image grid.&lt;/p&gt;
  &lt;p id=&quot;jeKF&quot;&gt;&lt;strong&gt;Zip&lt;/strong&gt;: Zip up the image(s) for download.&lt;/p&gt;
  &lt;p id=&quot;gXkS&quot;&gt;&lt;strong&gt;Send to img2img:&lt;/strong&gt; Send the selected image to the img2img tab.&lt;/p&gt;
  &lt;p id=&quot;D1b2&quot;&gt;&lt;strong&gt;Send to inpainting&lt;/strong&gt;: Send the selected image to the inpainting tab in the img2img tab.&lt;/p&gt;
  &lt;p id=&quot;ftjV&quot;&gt;&lt;strong&gt;Send to extras&lt;/strong&gt;: Send the selected image to the Extras tab.&lt;/p&gt;
  &lt;h2 id=&quot;F85z&quot;&gt;Img2img tab&lt;/h2&gt;
  &lt;p id=&quot;eddQ&quot;&gt;The img2img tab is where you use the image-to-image functions. Most users would visit this tab for inpainting and turning an image into another.&lt;/p&gt;
  &lt;h3 id=&quot;eZiV&quot;&gt;Image-to-image&lt;/h3&gt;
  &lt;p id=&quot;hbh0&quot;&gt;An everyday use case in the img2img tab is to do… image-to-image. You can create new images that follow the composition of the base image.&lt;/p&gt;
  &lt;p id=&quot;yPWC&quot;&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Drag and drop the base image to the &lt;strong&gt;img2img tab&lt;/strong&gt; on the &lt;strong&gt;img2img page&lt;/strong&gt;.&lt;/p&gt;
  &lt;figure id=&quot;HkUN&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-64-1024x799.png&quot; width=&quot;1024&quot; /&gt;
    &lt;figcaption&gt;Base Image.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;TIiD&quot;&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Adjust width or height, so the new image has the same aspect ratio. You should see a rectangular frame in the image canvas indicating the aspect ratio. In the above landscape image, I set the &lt;strong&gt;width&lt;/strong&gt; to 760 while keeping the &lt;strong&gt;height&lt;/strong&gt; at 512.&lt;/p&gt;
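The width in Step 2 can be computed from the base image's aspect ratio. Stable Diffusion dimensions should be multiples of 8, so the exact value is rounded to the nearest multiple. A hedged helper (`width_for_height` is a name I made up for illustration):

```python
def width_for_height(base_w: int, base_h: int, target_h: int = 512) -> int:
    """Pick an img2img width that keeps the base image's aspect ratio.

    Stable Diffusion dimensions should be multiples of 8, so the exact
    value is rounded to the nearest multiple of 8.
    """
    exact = base_w / base_h * target_h
    return int(round(exact / 8)) * 8


# A 1200x800 landscape photo at height 512 keeps its 3:2 aspect ratio:
print(width_for_height(1200, 800))  # -> 768
```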
  &lt;p id=&quot;msuL&quot;&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Set the &lt;strong&gt;sampling method&lt;/strong&gt; and &lt;strong&gt;sampling steps&lt;/strong&gt;. I typically use DPM++ 2M Karras with 25 steps.&lt;/p&gt;
  &lt;p id=&quot;JlmE&quot;&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Set batch size to 4.&lt;/p&gt;
  &lt;p id=&quot;tr8z&quot;&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Write a prompt for the new image. I will use the following prompt.&lt;/p&gt;
  &lt;blockquote id=&quot;eaD2&quot;&gt;A photorealistic illustration of a dragon&lt;/blockquote&gt;
  &lt;p id=&quot;R4ut&quot;&gt;&lt;strong&gt;Step 6&lt;/strong&gt;: Press the Generate button to generate images. Adjust denoising strength and repeat. Below are images with varying denoising strengths.&lt;/p&gt;
  &lt;figure id=&quot;3392&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/ds-0.4-02133-3040689655-A-photorealistic-illustration-of-a-dragon.png&quot; width=&quot;760&quot; /&gt;
    &lt;figcaption&gt;0.4&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3393&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/ds-0.6-02140-1675576514-A-photorealistic-illustration-of-a-dragon.png&quot; width=&quot;760&quot; /&gt;
    &lt;figcaption&gt;0.6&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3394&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/ds-0.8-02144-592193208-A-photorealistic-illustration-of-a-dragon.png&quot; width=&quot;760&quot; /&gt;
    &lt;figcaption&gt;0.8&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;dW46&quot;&gt;Images produced by img2img with various denoising strengths.&lt;/p&gt;
  &lt;p id=&quot;Czqs&quot;&gt;Many settings are shared with txt2img. I am only going to explain the new ones.&lt;/p&gt;
  &lt;p id=&quot;xuUI&quot;&gt;&lt;strong&gt;Resize mode&lt;/strong&gt;: If the aspect ratio of the new image is not the same as that of the input image, there are a few ways to reconcile the difference.&lt;/p&gt;
  &lt;ul id=&quot;wHRp&quot;&gt;
    &lt;li id=&quot;iWt4&quot;&gt;“&lt;strong&gt;Just resize&lt;/strong&gt;” scales the input image to fit the new image dimension. It will stretch or squeeze the image.&lt;/li&gt;
    &lt;li id=&quot;PMwR&quot;&gt;“&lt;strong&gt;Crop and resize&lt;/strong&gt;” fits the new image canvas into the input image. The parts that don’t fit are removed. The aspect ratio of the original image will be preserved.&lt;/li&gt;
    &lt;li id=&quot;gPTc&quot;&gt;“&lt;strong&gt;Resize and fill&lt;/strong&gt;” fits the input image into the new image canvas. The extra part is filled with the average color of the input image. The aspect ratio will be preserved.&lt;/li&gt;
    &lt;li id=&quot;iZEi&quot;&gt;“&lt;strong&gt;Just resize (latent upscale)&lt;/strong&gt;” is similar to “Just resize”, but the scaling is done in latent space. Use a denoising strength larger than 0.5 to avoid blurry images.&lt;/li&gt;
  &lt;/ul&gt;
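The four modes differ only in how they reconcile the two aspect ratios. The geometry of “Crop and resize” (scale until the canvas is covered, then crop the overhang) versus “Resize and fill” (scale until the image fits inside, then pad) can be sketched as follows; the function names are mine:

```python
def crop_and_resize(src_w, src_h, dst_w, dst_h):
    """'Crop and resize': scale the source so it covers the canvas,
    then report how many pixels are cropped off each axis."""
    scale = max(dst_w / src_w, dst_h / src_h)
    scaled_w, scaled_h = src_w * scale, src_h * scale
    return round(scaled_w - dst_w), round(scaled_h - dst_h)


def resize_and_fill(src_w, src_h, dst_w, dst_h):
    """'Resize and fill': scale the source so it fits inside the canvas,
    then report how many pixels of padding are filled in on each axis."""
    scale = min(dst_w / src_w, dst_h / src_h)
    scaled_w, scaled_h = src_w * scale, src_h * scale
    return round(dst_w - scaled_w), round(dst_h - scaled_h)


# A 760x512 landscape image onto a 512x512 canvas:
print(crop_and_resize(760, 512, 512, 512))  # -> (248, 0): crops 248 px of width
print(resize_and_fill(760, 512, 512, 512))  # -> (0, 167): pads 167 px of height
```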
  &lt;figure id=&quot;3396&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-65.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Just resize&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3397&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-66.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Crop and resize&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3398&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-67.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Resize and fill&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3399&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-68.png&quot; width=&quot;512&quot; /&gt;
    &lt;figcaption&gt;Just resize (latent upscale)&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;6rDm&quot;&gt;Resize mode&lt;/p&gt;
  &lt;p id=&quot;nq2s&quot;&gt;&lt;strong&gt;Denoising strength&lt;/strong&gt;: Controls how much the image will change. Nothing changes if it is set to 0. The new images don’t follow the input image at all if it is set to 1. 0.75 is a good starting point that gives a good amount of change.&lt;/p&gt;
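Under the hood, denoising strength works by starting the diffusion from a partially noised copy of the input image: at strength 0.75 with 25 sampling steps, only roughly 0.75 × 25 ≈ 18 steps are actually run (A1111 has a setting to force the full step count instead). A rough sketch of that relationship, assuming the default behavior:

```python
def effective_steps(sampling_steps: int, denoising_strength: float) -> int:
    """Approximate number of sampling steps actually run in img2img.

    The input image is noised to the level matching `denoising_strength`,
    and only the remaining fraction of the schedule is denoised.
    """
    return int(min(denoising_strength, 0.999) * sampling_steps)


print(effective_steps(25, 0.75))  # -> 18
print(effective_steps(25, 0.0))   # -> 0 (image unchanged)
print(effective_steps(25, 1.0))   # -> 24 (almost a fresh generation)
```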
  &lt;p id=&quot;xYuU&quot;&gt;You can use the built-in script &lt;strong&gt;poor man’s outpainting&lt;/strong&gt; to extend an image. See the &lt;a href=&quot;https://stable-diffusion-art.com/outpainting/&quot; target=&quot;_blank&quot;&gt;outpainting guide&lt;/a&gt;.&lt;/p&gt;
  &lt;h3 id=&quot;4lt3&quot;&gt;Sketch&lt;/h3&gt;
  &lt;p id=&quot;CZWT&quot;&gt;Instead of uploading an image, you can sketch the initial picture. You should enable the color sketch tool by using the following argument when starting the webui. (It is already enabled in the &lt;a href=&quot;https://stable-diffusion-art.com/automatic1111-colab/&quot; target=&quot;_blank&quot;&gt;Google Colab notebook&lt;/a&gt; in the &lt;a href=&quot;https://andrewongai.gumroad.com/l/stable_diffusion_quick_start&quot; target=&quot;_blank&quot;&gt;Quick Start Guide&lt;/a&gt;.)&lt;/p&gt;
  &lt;pre id=&quot;Qqlu&quot;&gt;--gradio-img2img-tool color-sketch&lt;/pre&gt;
  &lt;p id=&quot;EwmH&quot;&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Navigate to &lt;strong&gt;sketch&lt;/strong&gt; tab on the img2img page.&lt;/p&gt;
  &lt;p id=&quot;LQRx&quot;&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Upload a background image to the canvas. You can use the black or white backgrounds below.&lt;/p&gt;
  &lt;p id=&quot;QEMY&quot;&gt;&lt;a href=&quot;https://stable-diffusion-art.com/wp-content/uploads/2022/12/512x512_black.png&quot; target=&quot;_blank&quot;&gt;Black background&lt;/a&gt;&lt;/p&gt;
  &lt;p id=&quot;2X1N&quot;&gt;&lt;a href=&quot;https://stable-diffusion-art.com/wp-content/uploads/2022/12/512x512.png&quot; target=&quot;_blank&quot;&gt;White background&lt;/a&gt;&lt;/p&gt;
  &lt;p id=&quot;yS2M&quot;&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Sketch your creation. With color sketch tool enabled, you should be able to sketch in color.&lt;/p&gt;
  &lt;p id=&quot;aJWf&quot;&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Write a prompt.&lt;/p&gt;
  &lt;blockquote id=&quot;Wh6C&quot;&gt;award winning house&lt;/blockquote&gt;
  &lt;p id=&quot;iszX&quot;&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Press &lt;strong&gt;Generate&lt;/strong&gt;.&lt;/p&gt;
  &lt;figure id=&quot;uAyw&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-69-1024x461.png&quot; width=&quot;1024&quot; /&gt;
    &lt;figcaption&gt;Sketch your own picture for image-to-image.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;5E3t&quot;&gt;You don’t have to draw something from scratch. You can use the sketch function to modify an image. Below is an example of removing the braids by painting them over and doing a round of image-to-image. Use the eye dropper tool to pick a color from the surrounding areas.&lt;/p&gt;
  &lt;figure id=&quot;EYIc&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-70-1024x494.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;h3 id=&quot;fTOR&quot;&gt;Inpainting&lt;/h3&gt;
  &lt;p id=&quot;XDI4&quot;&gt;Perhaps the most used function in the &lt;strong&gt;img2img&lt;/strong&gt; tab is inpainting. You generated an image you like in the txt2img tab, but there’s a minor defect, and you want to regenerate that part.&lt;/p&gt;
  &lt;p id=&quot;ZaXV&quot;&gt;Let’s say you have generated the following image in the &lt;strong&gt;txt2img&lt;/strong&gt; tab. You want to regenerate the face because it is garbled. You can use the &lt;strong&gt;Send to inpaint&lt;/strong&gt; button to send an image from the &lt;strong&gt;txt2img&lt;/strong&gt; tab to the &lt;strong&gt;img2img&lt;/strong&gt; tab.&lt;/p&gt;
  &lt;figure id=&quot;5GbG&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/02/image-86.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;C4T5&quot;&gt;You should see your image when switching to the Inpaint tab of the img2img page. Use the paintbrush tool to create a &lt;strong&gt;mask&lt;/strong&gt; over the area to be regenerated.&lt;/p&gt;
  &lt;figure id=&quot;4yJj&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/02/image-87-1024x870.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;NBSa&quot;&gt;Parameters like image size have been set correctly because you used the “&lt;strong&gt;Send to inpaint&lt;/strong&gt;” function. You would usually adjust:&lt;/p&gt;
  &lt;ul id=&quot;MM0U&quot;&gt;
    &lt;li id=&quot;Agui&quot;&gt;Denoising strength: Start at 0.75. Increase to change more; decrease to change less.&lt;/li&gt;
    &lt;li id=&quot;Pplh&quot;&gt;Mask content: original&lt;/li&gt;
    &lt;li id=&quot;kUUP&quot;&gt;Mask Mode: Inpaint masked&lt;/li&gt;
    &lt;li id=&quot;qgXN&quot;&gt;Batch size: 4&lt;/li&gt;
  &lt;/ul&gt;
  &lt;p id=&quot;6JaL&quot;&gt;Press the &lt;strong&gt;Generate&lt;/strong&gt; button. Pick the one you like.&lt;/p&gt;
  &lt;figure id=&quot;kKlP&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/02/image-88.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
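Conceptually, inpainting only regenerates the pixels under the mask and composites them back over the original, which is why the unmasked area stays untouched. A toy, per-pixel sketch of that compositing step (the function name `composite` is mine):

```python
def composite(original, generated, mask):
    """Blend per-pixel: keep the original where mask == 0,
    take the newly generated pixel where mask == 1."""
    return [g if m else o for o, g, m in zip(original, generated, mask)]


original  = [10, 20, 30, 40]
generated = [99, 98, 97, 96]
mask      = [0, 1, 1, 0]      # inpaint only the middle two pixels
print(composite(original, generated, mask))  # -> [10, 98, 97, 40]
```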
  &lt;h3 id=&quot;nY3G&quot;&gt;Zoom and pan in inpainting&lt;/h3&gt;
  &lt;figure id=&quot;UG63&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/08/image-51-1024x624.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;GN2g&quot;&gt;Do you have difficulty inpainting a small area? Hover over the &lt;em&gt;information icon&lt;/em&gt; in the top left corner to see keyboard shortcuts for &lt;strong&gt;zoom and pan&lt;/strong&gt;.&lt;/p&gt;
  &lt;ul id=&quot;Nmwa&quot;&gt;
    &lt;li id=&quot;oFFg&quot;&gt;Alt + Wheel / Opt + Wheel: Zoom in and out.&lt;/li&gt;
    &lt;li id=&quot;IQuj&quot;&gt;Ctrl + Wheel: Adjust the &lt;strong&gt;brush size&lt;/strong&gt;.&lt;/li&gt;
    &lt;li id=&quot;afZ7&quot;&gt;R: &lt;strong&gt;Reset&lt;/strong&gt; zoom.&lt;/li&gt;
    &lt;li id=&quot;wNXf&quot;&gt;S: Enter/Exit &lt;strong&gt;full screen&lt;/strong&gt;.&lt;/li&gt;
    &lt;li id=&quot;Ffg7&quot;&gt;Hold F and move the cursor to &lt;strong&gt;pan&lt;/strong&gt;.&lt;/li&gt;
  &lt;/ul&gt;
  &lt;p id=&quot;0f2U&quot;&gt;These shortcuts also work in &lt;strong&gt;Sketch&lt;/strong&gt; and &lt;strong&gt;Inpaint&lt;/strong&gt; &lt;strong&gt;Sketch&lt;/strong&gt;.&lt;/p&gt;
  &lt;h3 id=&quot;DDAs&quot;&gt;Inpaint sketch&lt;/h3&gt;
  &lt;p id=&quot;Vg9H&quot;&gt;Inpaint sketch combines inpainting and sketch. It lets you paint like in the sketch tab but only regenerates the painted area. The unpainted area is unchanged. Below is an example.&lt;/p&gt;
  &lt;figure id=&quot;IHxT&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-71-1024x865.png&quot; width=&quot;1024&quot; /&gt;
    &lt;figcaption&gt;Inpaint sketch.&lt;/figcaption&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3406&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-74.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3405&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-73.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;figure id=&quot;3404&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-72.png&quot; width=&quot;512&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;OHSX&quot;&gt;Results from inpaint sketch.&lt;/p&gt;
  &lt;h3 id=&quot;FzQn&quot;&gt;Inpaint upload&lt;/h3&gt;
  &lt;p id=&quot;wfdC&quot;&gt;Inpaint upload lets you upload a separate mask file instead of drawing it.&lt;/p&gt;
  &lt;h3 id=&quot;wqG6&quot;&gt;Batch&lt;/h3&gt;
  &lt;p id=&quot;vSLD&quot;&gt;Batch lets you inpaint or perform image-to-image for multiple images.&lt;/p&gt;
  &lt;h3 id=&quot;LZgJ&quot;&gt;Get prompt from an image&lt;/h3&gt;
  &lt;p id=&quot;iaCC&quot;&gt;AUTOMATIC1111’s &lt;strong&gt;Interrogate CLIP&lt;/strong&gt; button takes the image you upload to the img2img tab and guesses the prompt. It is useful when you want to work on images whose prompt you don’t know. To get a guessed prompt from an image:&lt;/p&gt;
  &lt;p id=&quot;cR35&quot;&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Navigate to the &lt;strong&gt;img2img page&lt;/strong&gt;.&lt;/p&gt;
  &lt;p id=&quot;h5eN&quot;&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Upload an image to the &lt;strong&gt;img2img tab&lt;/strong&gt;.&lt;/p&gt;
  &lt;p id=&quot;wWoM&quot;&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Click the &lt;strong&gt;Interrogate CLIP&lt;/strong&gt; button.&lt;/p&gt;
  &lt;figure id=&quot;pdvo&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-75-1024x683.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;LuDq&quot;&gt;A prompt will show up in the prompt text box.&lt;/p&gt;
  &lt;p id=&quot;C8MH&quot;&gt;The &lt;strong&gt;Interrogate DeepBooru&lt;/strong&gt; button offers a similar function, except it is designed for &lt;a href=&quot;https://github.com/KichangKim/DeepDanbooru&quot; target=&quot;_blank&quot;&gt;anime images&lt;/a&gt;.&lt;/p&gt;
  &lt;h2 id=&quot;jcHq&quot;&gt;Upscaling&lt;/h2&gt;
  &lt;p id=&quot;YpE1&quot;&gt;You will go to the &lt;strong&gt;Extras&lt;/strong&gt; page to scale up an image. Why use AUTOMATIC1111 to enlarge an image? It gives you access to &lt;a href=&quot;https://stable-diffusion-art.com/ai-upscaler/&quot; target=&quot;_blank&quot;&gt;AI upscalers&lt;/a&gt; that are usually unavailable on your PC. Instead of paying for an AI upscaling service, you can do it for free here.&lt;/p&gt;
  &lt;h3 id=&quot;q5ze&quot;&gt;Basic Usage&lt;/h3&gt;
  &lt;p id=&quot;3Qdh&quot;&gt;Follow these steps to upscale an image.&lt;/p&gt;
  &lt;p id=&quot;lQUB&quot;&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Navigate to the &lt;strong&gt;Extras&lt;/strong&gt; page.&lt;/p&gt;
  &lt;p id=&quot;zFYR&quot;&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Upload an image to the image canvas.&lt;/p&gt;
  &lt;p id=&quot;dlEL&quot;&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Set the &lt;strong&gt;Scale by&lt;/strong&gt; factor under the &lt;strong&gt;resize&lt;/strong&gt; label. The new image will be this many times larger on each side. For example, a 200×400 image will become 800×1600 with a scale factor of 4.&lt;/p&gt;
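The arithmetic of Scale by is straightforward: each side is multiplied by the factor. A one-liner confirming the 200×400 example:

```python
def scale_by(width: int, height: int, factor: float):
    """New image dimensions after upscaling each side by `factor`."""
    return int(width * factor), int(height * factor)


print(scale_by(200, 400, 4))  # -> (800, 1600)
```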
  &lt;p id=&quot;UuAG&quot;&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Select Upscaler 1. A popular general-purpose AI upscaler is R-ESRGAN 4x+.&lt;/p&gt;
  &lt;p id=&quot;nhq4&quot;&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Press &lt;strong&gt;Generate&lt;/strong&gt;. You should get a new image on the right.&lt;/p&gt;
  &lt;figure id=&quot;tU8s&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-76-1024x512.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;TynK&quot;&gt;Make sure to &lt;strong&gt;inspect the new image at full resolution&lt;/strong&gt;. For example, you can open the new image in a new tab and disable auto-fit. Upscalers can produce artifacts that you might overlook if the image is shrunk to fit the screen.&lt;/p&gt;
  &lt;p id=&quot;VOkc&quot;&gt;Even if you don’t need the image to be 4x larger, you can still enlarge it to 4x and resize it down later. This can help improve sharpness.&lt;/p&gt;
  &lt;p id=&quot;WvnG&quot;&gt;&lt;strong&gt;Scale to&lt;/strong&gt;: Instead of setting a scale factor, you can specify the dimensions to resize in the “&lt;strong&gt;scale to&lt;/strong&gt;” tab.&lt;/p&gt;
  &lt;h3 id=&quot;8h67&quot;&gt;Upscalers&lt;/h3&gt;
  &lt;p id=&quot;Woia&quot;&gt;AUTOMATIC1111 offers a few upscalers by default.&lt;/p&gt;
  &lt;p id=&quot;OGxj&quot;&gt;&lt;strong&gt;Upscalers&lt;/strong&gt;: The Upscaler dropdown menu lists several built-in options. You can also install your own. See the &lt;a href=&quot;https://stable-diffusion-art.com/ai-upscaler/&quot; target=&quot;_blank&quot;&gt;AI upscaler article&lt;/a&gt; for instructions.&lt;/p&gt;
  &lt;p id=&quot;iu06&quot;&gt;&lt;strong&gt;Lanczos&lt;/strong&gt; and &lt;strong&gt;Nearest&lt;/strong&gt; are old-school upscalers. They are not as powerful but the behavior is predictable.&lt;/p&gt;
  &lt;p id=&quot;FudH&quot;&gt;&lt;strong&gt;ESRGAN&lt;/strong&gt;, &lt;strong&gt;R-ESRGAN&lt;/strong&gt;, &lt;strong&gt;ScuNet&lt;/strong&gt;, and &lt;strong&gt;SwinIR&lt;/strong&gt; are AI upscalers. They can literally make up content to increase resolution. Some are trained for a particular style. The best way to find out if they work for your image is to test them. I may sound like a broken record now, but make sure to look at the image closely at full resolution.&lt;/p&gt;
  &lt;p id=&quot;qDbA&quot;&gt;&lt;strong&gt;Upscaler 2&lt;/strong&gt;: Sometimes, you want to combine the effect of two upscalers. This option lets you combine the results of two upscalers. The amount of blending is controlled by the &lt;strong&gt;Upscaler 2 Visibility&lt;/strong&gt; slider. A higher value shows upscaler 2 more.&lt;/p&gt;
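The blending is a plain linear interpolation between the two upscaled results, weighted by the visibility slider. Shown here on a single pixel value for illustration (A1111 applies it across the whole image):

```python
def blend(pixel_1: float, pixel_2: float, visibility: float) -> float:
    """Blend the outputs of Upscaler 1 and Upscaler 2.

    visibility = 0.0 shows only upscaler 1; 1.0 shows only upscaler 2.
    """
    return pixel_1 * (1.0 - visibility) + pixel_2 * visibility


print(blend(100.0, 200.0, 0.25))  # -> 125.0
```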
  &lt;p id=&quot;QKIs&quot;&gt;Can’t find the upscaler you like? You can install additional upscalers from the &lt;a href=&quot;https://upscale.wiki/wiki/Model_Database&quot; target=&quot;_blank&quot;&gt;model library&lt;/a&gt;. See &lt;a href=&quot;https://stable-diffusion-art.com/ai-upscaler/#Installing_new_upscaler&quot; target=&quot;_blank&quot;&gt;installation instructions&lt;/a&gt;.&lt;/p&gt;
  &lt;h3 id=&quot;ssZZ&quot;&gt;Face Restoration&lt;/h3&gt;
  &lt;p id=&quot;hv3c&quot;&gt;You can optionally restore faces in the upscaling process. Two options are available: (1) GFPGAN, and (2) CodeFormer. Set the visibility of either one of them to apply the correction. As a rule of thumb, use the lowest value you can get away with so that the style of the image is not affected.&lt;/p&gt;
  &lt;figure id=&quot;ZhrX&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-78-1024x517.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;h2 id=&quot;t1xm&quot;&gt;PNG Info&lt;/h2&gt;
  &lt;figure id=&quot;bfen&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/03/image-79-1024x336.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;nUP7&quot;&gt;Many Stable Diffusion GUIs, including AUTOMATIC1111, write generation parameters to the image png file. This is a convenient function to get back the generation parameters quickly.&lt;/p&gt;
  &lt;p id=&quot;KpUu&quot;&gt;If AUTOMATIC1111 generates the image, you can use the &lt;strong&gt;Send to&lt;/strong&gt; buttons to quickly copy the parameters to various pages.&lt;/p&gt;
  &lt;p id=&quot;RJHY&quot;&gt;It is useful when you find an image on the web and want to see if the prompt is left in the file.&lt;/p&gt;
  &lt;p id=&quot;EvsV&quot;&gt;This function can be helpful even for an image that was not AI-generated: you can quickly send the image and its dimensions to another page.&lt;/p&gt;
  &lt;h2 id=&quot;xON7&quot;&gt;Installing extensions&lt;/h2&gt;
  &lt;figure id=&quot;O3zn&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/08/image-71-1024x304.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;20j4&quot;&gt;To install an extension:&lt;/p&gt;
  &lt;ol id=&quot;cfdJ&quot;&gt;
    &lt;li id=&quot;OLfv&quot;&gt;Start AUTOMATIC1111 Web-UI normally.&lt;/li&gt;
    &lt;li id=&quot;ujAW&quot;&gt;Navigate to the &lt;strong&gt;Extension&lt;/strong&gt; page.&lt;/li&gt;
    &lt;li id=&quot;gMKc&quot;&gt;Click the &lt;strong&gt;Install from URL&lt;/strong&gt; tab.&lt;/li&gt;
    &lt;li id=&quot;cE2K&quot;&gt;Enter the extension’s URL in the &lt;strong&gt;URL for extension’s git repository&lt;/strong&gt; field.&lt;/li&gt;
    &lt;li id=&quot;OhkO&quot;&gt;Wait for the confirmation message that the installation is complete.&lt;/li&gt;
    &lt;li id=&quot;MRCU&quot;&gt;Restart AUTOMATIC1111. (Tip: Don’t use the Apply and Restart button; it sometimes doesn’t work. Close and restart Stable Diffusion WebUI completely.)&lt;/li&gt;
  &lt;/ol&gt;
  &lt;h2 id=&quot;etOk&quot;&gt;Applying Styles in Stable Diffusion WebUI&lt;/h2&gt;
  &lt;p id=&quot;PIhg&quot;&gt;A common question is applying a style to the AI-generated images in Stable Diffusion WebUI. There are a few ways.&lt;/p&gt;
  &lt;h3 id=&quot;Lu3l&quot;&gt;Prompts&lt;/h3&gt;
  &lt;p id=&quot;e01h&quot;&gt;Using &lt;a href=&quot;https://stable-diffusion-art.com/prompt-guide/&quot; target=&quot;_blank&quot;&gt;prompts&lt;/a&gt; alone can achieve amazing styles, even using a base model like Stable Diffusion &lt;a href=&quot;https://stable-diffusion-art.com/models/#Stable_diffusion_v15&quot; target=&quot;_blank&quot;&gt;v1.5&lt;/a&gt; or &lt;a href=&quot;https://stable-diffusion-art.com/sdxl-model/&quot; target=&quot;_blank&quot;&gt;SDXL&lt;/a&gt;. For example, see over &lt;a href=&quot;https://stable-diffusion-art.com/sdxl-styles/&quot; target=&quot;_blank&quot;&gt;a hundred styles&lt;/a&gt; achieved using prompts with the SDXL model.&lt;/p&gt;
  &lt;p id=&quot;6dry&quot;&gt;If you prefer a more automated approach to applying styles with prompts, you can use the &lt;a href=&quot;https://github.com/ahgsql/StyleSelectorXL&quot; target=&quot;_blank&quot;&gt;SDXL Style Selector&lt;/a&gt; extension to add style keywords to your prompt.&lt;/p&gt;
  &lt;h3 id=&quot;EPST&quot;&gt;Checkpoint Models&lt;/h3&gt;
  &lt;p id=&quot;pWqT&quot;&gt;Thousands of &lt;a href=&quot;https://stable-diffusion-art.com/models/&quot; target=&quot;_blank&quot;&gt;custom checkpoint models&lt;/a&gt; fine-tuned to generate various styles are freely available. Go find them on Civitai or Huggingface.&lt;/p&gt;
  &lt;h3 id=&quot;2Mzf&quot;&gt;Lora, LyCORIS, embedding and hypernetwork&lt;/h3&gt;
  &lt;p id=&quot;yOzx&quot;&gt;&lt;a href=&quot;https://stable-diffusion-art.com/lora/&quot; target=&quot;_blank&quot;&gt;Lora&lt;/a&gt;, &lt;a href=&quot;https://stable-diffusion-art.com/lycoris/&quot; target=&quot;_blank&quot;&gt;LyCORIS&lt;/a&gt;, &lt;a href=&quot;https://stable-diffusion-art.com/embedding/&quot; target=&quot;_blank&quot;&gt;embedding&lt;/a&gt;, and &lt;a href=&quot;https://stable-diffusion-art.com/hypernetwork/&quot; target=&quot;_blank&quot;&gt;hypernetwork&lt;/a&gt; models are small files that modify a checkpoint model. They can be used to achieve different styles. Again, find them on Civitai or Huggingface.&lt;/p&gt;
  &lt;h2 id=&quot;jkRq&quot;&gt;Checkpoint merger&lt;/h2&gt;
  &lt;p id=&quot;hQGL&quot;&gt;AUTOMATIC1111’s checkpoint merger is for combining two or more models. You can combine up to 3 models to create a new model. It is usually for mixing the styles of two or more models. However, the merge result is not guaranteed. It could sometimes produce undesirable artifacts.&lt;/p&gt;
  &lt;p id=&quot;LXez&quot;&gt;&lt;strong&gt;Primary model (A, B, C)&lt;/strong&gt;: The input models. The merging will be done according to the formula displayed. The formula will change according to the interpolation method selected.&lt;/p&gt;
  &lt;p id=&quot;AEY3&quot;&gt;&lt;strong&gt;Interpolation methods&lt;/strong&gt;:&lt;/p&gt;
  &lt;ul id=&quot;AaH3&quot;&gt;
    &lt;li id=&quot;fe8z&quot;&gt;&lt;strong&gt;No interpolation&lt;/strong&gt;: Use model A only. This is for file conversion or replacing the &lt;a href=&quot;https://stable-diffusion-art.com/how-to-use-vae/&quot; target=&quot;_blank&quot;&gt;VAE&lt;/a&gt;.&lt;/li&gt;
    &lt;li id=&quot;je9H&quot;&gt;&lt;strong&gt;Weighted sum&lt;/strong&gt;: Merge two models A and B, with multiplier weight M applying to B. The formula is A * (1 – M) + B * M.&lt;/li&gt;
    &lt;li id=&quot;qO9H&quot;&gt;&lt;strong&gt;Add difference&lt;/strong&gt;: Merge three models using the formula A + (B – C) * M.&lt;/li&gt;
  &lt;/ul&gt;
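Both formulas are applied weight-by-weight across the models' tensors. On toy one-number "models" they look like this:

```python
def weighted_sum(a: float, b: float, m: float) -> float:
    """Weighted sum: A * (1 - M) + B * M."""
    return a * (1 - m) + b * m


def add_difference(a: float, b: float, c: float, m: float) -> float:
    """Add difference: A + (B - C) * M."""
    return a + (b - c) * m


# With M = 0.5, the weighted sum lands halfway between A and B:
print(weighted_sum(1.0, 3.0, 0.5))         # -> 2.0
# Add difference transplants (B - C), e.g. a fine-tune minus its base model:
print(add_difference(1.0, 3.5, 3.0, 1.0))  # -> 1.5
```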
  &lt;p id=&quot;sQa8&quot;&gt;&lt;strong&gt;Checkpoint format&lt;/strong&gt;&lt;/p&gt;
  &lt;ul id=&quot;bWIe&quot;&gt;
    &lt;li id=&quot;2ymu&quot;&gt;&lt;strong&gt;ckpt&lt;/strong&gt;: The original checkpoint model format.&lt;/li&gt;
    &lt;li id=&quot;8Pzj&quot;&gt;&lt;strong&gt;safetensors&lt;/strong&gt;: &lt;a href=&quot;https://github.com/huggingface/safetensors&quot; target=&quot;_blank&quot;&gt;SafeTensors&lt;/a&gt; is a newer model format developed by Hugging Face. It is safe because, unlike ckpt models, loading a safetensors model won’t execute any malicious code embedded in the file.&lt;/li&gt;
  &lt;/ul&gt;
  &lt;p id=&quot;W7yM&quot;&gt;&lt;strong&gt;Bake in VAE&lt;/strong&gt;: Replace the &lt;a href=&quot;https://stable-diffusion-art.com/how-to-use-vae/&quot; target=&quot;_blank&quot;&gt;VAE decoder&lt;/a&gt; with the one selected. It is for replacing the original one with a better one released by Stability.&lt;/p&gt;
  &lt;h2 id=&quot;vkE1&quot;&gt;Train&lt;/h2&gt;
  &lt;p id=&quot;iaZM&quot;&gt;The Train page is for training models. It currently supports &lt;a href=&quot;https://stable-diffusion-art.com/embedding/&quot; target=&quot;_blank&quot;&gt;textual inversion&lt;/a&gt; (embedding) and hypernetwork. I have not had good luck using AUTOMATIC1111 for training, so I will not cover this section.&lt;/p&gt;
  &lt;h2 id=&quot;VP8f&quot;&gt;Settings&lt;/h2&gt;
  &lt;p id=&quot;gQM3&quot;&gt;There is an extensive list of settings on AUTOMATIC1111’s Settings page. I won’t be able to go through them individually in this article, but here are some you will want to check.&lt;/p&gt;
  &lt;p id=&quot;JDMh&quot;&gt;Make sure to click &lt;strong&gt;Apply settings&lt;/strong&gt; after changing any settings.&lt;/p&gt;
  &lt;h3 id=&quot;pJxR&quot;&gt;Face Restoration&lt;/h3&gt;
  &lt;p id=&quot;ZCFn&quot;&gt;Make sure to select the default face restoration method. &lt;strong&gt;CodeFormer&lt;/strong&gt; is a good one.&lt;/p&gt;
  &lt;h3 id=&quot;NiTn&quot;&gt;Stable Diffusion&lt;/h3&gt;
  &lt;p id=&quot;W7VD&quot;&gt;Download and select a &lt;a href=&quot;https://stable-diffusion-art.com/how-to-use-vae/&quot; target=&quot;_blank&quot;&gt;VAE&lt;/a&gt; released by Stability to improve eyes and faces in v1 models.&lt;/p&gt;
  &lt;h3 id=&quot;RDgr&quot;&gt;Quick Settings&lt;/h3&gt;
  &lt;p id=&quot;sIyT&quot;&gt;You can enable custom setting shortcuts at the top of the page.&lt;/p&gt;
  &lt;p id=&quot;1ENo&quot;&gt;On the &lt;strong&gt;Settings&lt;/strong&gt; page, click &lt;strong&gt;Show All Pages&lt;/strong&gt; on the left panel.&lt;/p&gt;
  &lt;p id=&quot;jew2&quot;&gt;Searching for the word &lt;strong&gt;Quicksettings&lt;/strong&gt; gets you to the Quick Settings field.&lt;/p&gt;
  &lt;p id=&quot;bllO&quot;&gt;There are a lot of settings available for selection. For example, the following enables shortcuts for Clip Skip and custom image output directories.&lt;/p&gt;
  &lt;figure id=&quot;BWE8&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/06/image-54-1024x87.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;y5kT&quot;&gt;After saving the settings and reloading the Web-UI, you will see the new shortcuts at the top of the page.&lt;/p&gt;
  &lt;figure id=&quot;cHHh&quot; class=&quot;m_custom&quot;&gt;
    &lt;img src=&quot;https://stable-diffusion-art.com/wp-content/uploads/2023/06/image-55-1024x180.png&quot; width=&quot;1024&quot; /&gt;
  &lt;/figure&gt;
  &lt;p id=&quot;cqDb&quot;&gt;The custom output directories come in handy for organizing the images.&lt;/p&gt;
  &lt;p id=&quot;iNqD&quot;&gt;Here is a list of Quick Settings that are useful to enable:&lt;/p&gt;
  &lt;ul id=&quot;YF48&quot;&gt;
    &lt;li id=&quot;ihX4&quot;&gt;CLIP_stop_at_last_layers&lt;/li&gt;
    &lt;li id=&quot;tBRp&quot;&gt;sd_vae&lt;/li&gt;
    &lt;li id=&quot;ekOE&quot;&gt;outdir_txt2img_samples&lt;/li&gt;
    &lt;li id=&quot;Uj99&quot;&gt;outdir_img2img_samples&lt;/li&gt;
  &lt;/ul&gt;

</content></entry></feed>