Apply screentone to line drawings or colored illustrations with diffusion models.
Sketch2Manga - Drag and drop into ComfyUI to load the workflow
(Source @ini_pmh)
Illustration2Manga - Drag and drop into ComfyUI to load the workflow
(Source @curecu8)
Download a diffusion model for colorization (this demo used meinapastel for ComfyUI) and control_v11p_sd15_lineart.
Download the finetuned VAE and diffusion model for screening.
Install ComfyUI.
Clone this repo to the ComfyUI directory and install dependencies:

```bash
git clone https://github.com/dmMaze/sketch2manga [ComfyUI Directory]/custom_nodes/sketch2manga
cd [ComfyUI Directory]/custom_nodes/sketch2manga
pip install -r requirements.txt
```
Launch ComfyUI, then drag and drop the figure above to load the workflow.
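If you prefer to queue the workflow programmatically instead of via drag and drop, here is a minimal Python sketch against ComfyUI's HTTP API. It assumes ComfyUI is running at its default address (127.0.0.1:8188) and that the workflow was exported with ComfyUI's "Save (API Format)" option; the file name `sketch2manga_workflow_api.json` is hypothetical.

```python
# Minimal sketch: queue a workflow through ComfyUI's /prompt endpoint.
# Assumes ComfyUI runs at the default 127.0.0.1:8188 and the workflow
# JSON was exported with "Save (API Format)".
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"

def queue_workflow(workflow_path: str) -> dict:
    """POST a workflow (API-format JSON) to ComfyUI and return the queue response."""
    with open(workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains the prompt_id of the queued job

# Hypothetical file name; export your own workflow in API format first.
# print(queue_workflow("sketch2manga_workflow_api.json"))
```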
Prepare the environment:

```bash
conda env create -f conda_env.yaml
pip install git+https://github.com/openai/CLIP.git
```
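As a quick sanity check that the CLIP dependency installed correctly, you can load one of its stock checkpoints (a sketch; `ViT-B/32` is just an example and downloads on first use):

```python
# Sanity check for the CLIP install; ViT-B/32 is one of the standard checkpoints.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
print("CLIP loaded on", device)
```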
Download a diffusion model for colorization (this demo used anything-v4.5 for sd-webui) and control_v11p_sd15_lineart.
Download the finetuned VAE and diffusion model for screening.
We're using stable-diffusion-webui @ bef51aed and sd-webui-controlnet @ aa2aa81; other versions might not work. For convenience, you can use this hard fork. Put the models mentioned above into the corresponding sd-webui directories, then launch the webui:

```bash
python webui.py --api
```
Finally, launch the Gradio demo:

```bash
python gradio_demo/launch.py
```
There is an example notebook, webuiapi_demo.ipynb, showcasing inference through the SD-WebUI API; it is a bit outdated, but the logic is the same.
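For reference, here is a minimal sketch of such an API call using plain `requests`, assuming the webui is running at 127.0.0.1:7860 with `--api` and the ControlNet extension installed. The prompt, file names, and the exact ControlNet model name are placeholders and will differ locally.

```python
# Minimal sketch of an img2img call against SD-WebUI's REST API (enabled by --api).
# The server address, prompt, file names, and ControlNet model name are placeholders.
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

sketch_b64 = encode_image("sketch.png")  # placeholder input image
payload = {
    "prompt": "manga, screentone",        # placeholder prompt
    "init_images": [sketch_b64],
    "denoising_strength": 0.75,
    "steps": 28,
    # The ControlNet extension is driven through the alwayson_scripts hook.
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": sketch_b64,
                "module": "lineart",
                "model": "control_v11p_sd15_lineart",  # local name may include a hash suffix
            }]
        }
    },
}
resp = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
resp.raise_for_status()
images = resp.json()["images"]  # list of base64-encoded result images
with open("output.png", "wb") as f:
    f.write(base64.b64decode(images[0]))
```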
Our illustration-to-manga method compared with Mimic Manga (considered the SOTA):
| Illustration (Input) | Mimic Manga | Ours |
|---|---|---|
| ![]() | ![]() | ![]() |