
feat: Add Qwen-Image generation support and remove GLM-4.7-Flash backend #1

Open

XiaoKuge wants to merge 1 commit into OminiX-ai:main from XiaoKuge:feat/qwen-image-support


Conversation

@XiaoKuge XiaoKuge commented Feb 5, 2026

Summary

  • Add Qwen-Image 8-bit quantized image generation pipeline (DiT + 3D VAE) with classifier-free guidance, dynamic flow matching schedule, and configurable steps/guidance scale
  • Replace glm47-flash-mlx dependency with qwen-image-mlx; remove temporarily disabled GLM-4.7-Flash LLM backend
  • Add steps and guidance_scale fields to ImageGenerationRequest for per-request control
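The new per-request fields can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the shape of `ImageGenerationRequest` beyond the `steps` and `guidance_scale` fields, the helper name, and the default values are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ImageGenerationRequest:
    """Hypothetical sketch of the extended request type.

    Only `steps` and `guidance_scale` are named in the PR; the other
    fields and defaults are placeholders for illustration.
    """
    prompt: str
    model: str = "qwen-image-8bit"
    steps: Optional[int] = None            # None -> use pipeline default
    guidance_scale: Optional[float] = None  # None -> use pipeline default


def resolve_sampling_params(
    req: ImageGenerationRequest,
    default_steps: int = 20,
    default_guidance: float = 4.0,
) -> tuple[int, float]:
    """Fall back to pipeline defaults when per-request values are absent."""
    steps = req.steps if req.steps is not None else default_steps
    guidance = (
        req.guidance_scale if req.guidance_scale is not None else default_guidance
    )
    return steps, guidance
```

A request that omits both fields would then resolve to the pipeline defaults, while a request supplying them overrides per call.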

Test plan

  • Verify Qwen-Image model loads from qwen-image-8bit config entry
  • Test text-to-image generation with various prompts, step counts, and guidance scales
  • Confirm FLUX and Z-Image pipelines still work correctly after the Option refactor
  • Verify img2img correctly rejects Qwen-Image model type
  • Check that GLM-4.7-Flash model references are fully removed

🤖 Generated with Claude Code

Replace glm47-flash-mlx dependency with qwen-image-mlx, adding full
Qwen-Image 8-bit quantized pipeline (DiT + 3D VAE) with classifier-free
guidance, dynamic flow matching schedule, and configurable steps/guidance
scale. Remove temporarily disabled GLM-4.7-Flash LLM backend.
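The classifier-free guidance and dynamic flow-matching schedule mentioned above can be illustrated with a minimal sketch. The shift-based sigma remapping shown is one common form (as used by SD3/Flux-style flow-matching models), and the `shift` value is an assumption, not the actual Qwen-Image configuration:

```python
def cfg_combine(uncond: list[float], cond: list[float],
                guidance_scale: float) -> list[float]:
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by `guidance_scale`."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]


def shifted_sigmas(num_steps: int, shift: float = 3.0) -> list[float]:
    """A common 'dynamic' flow-matching schedule: linearly spaced sigmas
    in (0, 1], remapped by a shift factor so more denoising effort is
    spent at high noise levels. `shift=3.0` is a placeholder value."""
    sigmas = [(num_steps - i) / num_steps for i in range(num_steps)]
    return [shift * s / (1 + (shift - 1) * s) for s in sigmas]
```

At each denoising step the model would be run twice (with and without the text conditioning), the two predictions combined via `cfg_combine`, and the result integrated along the `shifted_sigmas` schedule.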

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
