# Image Generation Tutorial
This feature allows you to generate images using diffusers models like Tongyi-MAI/Z-Image-Turbo directly within the web UI.
- Clone the repository with `git clone https://github.com/oobabooga/text-generation-webui`, or download it from here and unzip it.
- Use the one-click installer:
  - Windows: Double click on `start_windows.bat`
  - Linux: Run `./start_linux.sh`
  - macOS: Run `./start_macos.sh`
Note: Image generation does not work with the portable builds (.zip files on the Releases page). You need the "full" version of the web UI.
- Once installation ends, browse to `http://127.0.0.1:7860/`.
- Click on "Image AI" on the left.
- Click on "Model" at the top.
- In the "Download model" field, paste `https://huggingface.co/Tongyi-MAI/Z-Image-Turbo` and click "Download".
- Wait for the download to finish (it's 31 GB).
- Select the quantization option in the "Quantization" menu and click "Load".
The memory usage for Z-Image-Turbo for each option is:
| Quantization Method | VRAM Usage |
|---|---|
| None (FP16/BF16) | 25613 MiB |
| bnb-8bit | 16301 MiB |
| bnb-8bit + CPU Offload | 16235 MiB |
| bnb-4bit | 11533 MiB |
| bnb-4bit + CPU Offload | 7677 MiB |
The torchao options support torch.compile for faster image generation, with float8wo specifically providing native hardware acceleration for RTX 40-series and newer GPUs.
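As a rough comparison, the savings implied by the table above can be computed directly (the numbers are copied from the table; this is a sketch for comparison, not a measurement):

```python
# VRAM usage for Z-Image-Turbo per quantization option (MiB), from the table above.
vram = {
    "None (FP16/BF16)": 25613,
    "bnb-8bit": 16301,
    "bnb-8bit + CPU Offload": 16235,
    "bnb-4bit": 11533,
    "bnb-4bit + CPU Offload": 7677,
}

baseline = vram["None (FP16/BF16)"]
for method, mib in vram.items():
    saving = 100 * (1 - mib / baseline)
    print(f"{method}: {mib} MiB ({saving:.0f}% less than unquantized)")
```

For example, bnb-4bit uses about 55% less VRAM than the unquantized model, and bnb-4bit with CPU offload about 70% less.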
Note: The next time you launch the web UI, the model will be loaded automatically with your last settings when you try to generate an image. You do not need to return to the Model tab and click "Load" each time.
- While still in the "Image AI" page, go to the "Generate" tab.
- Type your prompt and click on the Generate button.
- For Z-Image-Turbo, make sure to keep CFG Scale at 0 and Steps at 9. Do not write a Negative Prompt, as it is ignored at this CFG Scale value.
To use the "LLM Prompt Variations" feature, you need to load an LLM in the main "Model" page on the left.
If you have no idea what to use, do this to get started:
- Download Qwen3-4B-Q3_K_M.gguf to your `text-generation-webui/user_data/models` folder.
- Select the model in the dropdown menu in the "Model" page.
- Click Load.
Then go back to the "Image AI" page and check "LLM Prompt Variations".
After that, your prompts will be automatically updated by the LLM each time you generate an image. If you use a "Sequential Count" value greater than 1, a new prompt will be created for each sequential batch.
The improvement in creativity is striking (example prompt: "Photo of a beautiful woman at night under moonlight").
It is possible to generate images using the project's API. Just make sure to start the server with `--api`, either by:
- Passing the `--api` flag to your start script, like `./start_linux.sh --api`, or
- Writing `--api` to your `user_data/CMD_FLAGS.txt` file and relaunching the web UI.
Here is an API call example:

```shell
curl http://127.0.0.1:5000/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "an orange tree",
    "steps": 9,
    "cfg_scale": 0,
    "batch_size": 1,
    "batch_count": 1
  }'
```
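The same request can be made from Python using only the standard library. This is a sketch: it assumes the response follows the OpenAI images format, with the image returned as base64 in `data[0]["b64_json"]` — adjust if your server returns a different shape.

```python
import base64
import json
import urllib.request

def build_payload(prompt: str, steps: int = 9, cfg_scale: int = 0) -> dict:
    """Build the request body. Defaults match the Z-Image-Turbo settings above."""
    return {
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "batch_size": 1,
        "batch_count": 1,
    }

def generate_image(prompt: str, host: str = "http://127.0.0.1:5000") -> bytes:
    """POST to /v1/images/generations and return the decoded image bytes."""
    req = urllib.request.Request(
        f"{host}/v1/images/generations",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape: {"data": [{"b64_json": "..."}]}
    return base64.b64decode(body["data"][0]["b64_json"])
```

With the server running, `generate_image("an orange tree")` returns raw image bytes that you can write to a file with `open("out.png", "wb").write(...)`.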