Trying a local deployment of Stable Diffusion.

I previously tried Midjourney through Discord, but I quickly ran out of free trial sessions, so I went to Taobao to buy a shared account for one month to continue testing. However, the novelty wore off after two or three days, and I hardly touched it. These past few days, I regained some interest and tried deploying Stable Diffusion locally.

I mainly referred to the article "Local Deployment of Stable Diffusion Tutorial, Tested and Successfully Installed" by Pancras Wen.

I basically followed the process described in the article and ran into only one minor issue.

First, I downloaded Git and Python. The article recommends installing Python 3.10.9 specifically, without giving a reason; I don't know the reason either.
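Before running the installer scripts, it can help to confirm the interpreter version matches what the tutorial expects. A minimal sketch (the helper name `python_version_ok` is my own, not part of any tool):

```python
import sys

# The tutorial recommends a Python 3.10 release for stable-diffusion-webui.
# This checks the running interpreter's version against that expectation.
def python_version_ok(major=sys.version_info.major, minor=sys.version_info.minor):
    """Return True if the given version is a 3.10 release."""
    return (major, minor) == (3, 10)
```

Running `python_version_ok()` under the recommended 3.10.9 install returns `True`.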

Then I downloaded the stable-diffusion-webui repository: I created a new folder and cloned it with Git:

git clone

After the download was complete, I ran the webui-user.bat batch file, which automatically downloads and installs the remaining components.

If pip prompts for an update during the process, open a new cmd window and run the command it provides.
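The upgrade command pip typically suggests can also be driven from Python. A small sketch (assuming the usual `python -m pip install --upgrade pip` form; `pip_upgrade_command` is an illustrative helper name):

```python
import sys

# Build the pip self-upgrade command as an argument list, suitable for
# passing to subprocess.run(). Uses the current interpreter by default.
def pip_upgrade_command(python_exe=sys.executable):
    return [python_exe, "-m", "pip", "install", "--upgrade", "pip"]
```

Using an argument list rather than one shell string avoids quoting problems when the Python path contains spaces.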

Cloning taming-transformers and CodeFormer failed with error code 128; there is an existing issue report on this problem.

For taming-transformers, restarting webui-user.bat eventually resolved it, though it took quite a while.

For CodeFormer, the clone kept failing even after reopening webui-user.bat several times, so I cloned it manually with Git Bash and then ran webui-user.bat again.
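The manual workaround above can be sketched as a retry loop that clones the failing dependencies directly into webui's `repositories/` folder. This is a sketch under assumptions: the repository URLs below are the ones AUTOMATIC1111's webui uses at the time of writing, and `clone_missing` is an illustrative helper, not part of webui.

```python
import subprocess
from pathlib import Path

# The two dependencies that failed with git exit code 128 in my install.
REPOS = {
    "taming-transformers": "https://github.com/CompVis/taming-transformers.git",
    "CodeFormer": "https://github.com/sczhou/CodeFormer.git",
}

def clone_missing(webui_dir, repos=REPOS, retries=3, runner=subprocess.run):
    """Try each clone up to `retries` times; return names that still failed."""
    target = Path(webui_dir) / "repositories"
    failed = []
    for name, url in repos.items():
        dest = target / name
        if dest.exists():
            continue  # already cloned on an earlier attempt
        for _ in range(retries):
            result = runner(["git", "clone", url, str(dest)])
            if result.returncode == 0:
                break
        else:
            failed.append(name)
    return failed
```

The `runner` parameter exists so the git call can be swapped out; by default it shells out to `git clone` via `subprocess.run`.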

Afterwards, there were no further issues, and the remaining downloads completed.

We can then access Stable Diffusion in a browser at the local address shown in the console output.

The interface is quite convenient to use: each parameter has an explanation that appears when you hover the mouse over it.

I tried calling it a few times with the default settings, and the results were quite poor.


Figure 1: Result with default settings, prompt is "a man is standing in front of a tank with the muzzle pointing at the man"

To generate high-quality images, appropriate settings are necessary, and the choice of sampler matters a lot. A guide worth consulting is "Stable Diffusion Samplers: A Comprehensive Guide," which makes these recommendations:

  1. If you want to use relatively new models for quick generation with good quality, you can choose:
    • DPM++ 2M Karras, with 20-30 steps
    • UniPC, with 20-30 steps
  2. If you want high-quality images but don't care about convergence, you can choose:
    • DPM++ SDE Karras, with 8-12 steps (note: slower speed)
    • DDIM, with 10-15 steps
  3. If you want stable and reproducible results, avoid ancestral samplers such as Euler a, DPM2 a, DPM++ 2S a, DPM++ 2S a Karras.
  4. If you prefer simple choices, you can use Euler and Heun; keep Heun's step count low to save time.
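The recommendations above can be condensed into a small lookup table. A sketch (the preset names and the table itself are my own organization, not part of webui or the guide; step ranges are the ones quoted above):

```python
# Sampler suggestions from the guide, keyed by goal.
# Values are (sampler name, (min steps, max steps)) pairs; None means
# the guide gives no step range for that sampler.
SAMPLER_PRESETS = {
    "fast_good_quality": [("DPM++ 2M Karras", (20, 30)), ("UniPC", (20, 30))],
    "high_quality_no_convergence": [("DPM++ SDE Karras", (8, 12)), ("DDIM", (10, 15))],
    "simple": [("Euler", None), ("Heun", None)],  # keep Heun's steps low
}

def pick_sampler(goal):
    """Return the guide's first (sampler, step range) suggestion for a goal."""
    return SAMPLER_PRESETS[goal][0]
```

The sampler names match the labels in webui's sampler dropdown, so a preset can be dropped straight into an API payload or the UI.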


Figure 2: Result of UniPC sampler with 22 steps, prompt is "a crying Chinese woman with iron chain around the neck"

I also looked at two other tutorials for reference.
