Run ComfyUI with OneDiff
Run the following commands to pull the pre-installed ComfyUI + OneDiff image and start the container:
docker pull oneflowinc/comfyui-onediff:latest
docker run -it --shm-size=8G -P --privileged --runtime=nvidia --rm \
--gpus all --network host \
-e ONEDIFF_INITIAL_PACKAGE_NAMES_FOR_CLASS_PROXIES=diffusers,/app/ComfyUI/comfy \
-v /path/to/comfyui/models/:/app/ComfyUI/models/ \
oneflowinc/comfyui-onediff python /app/ComfyUI/main.py
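With --network host, the ComfyUI web UI is served directly on the host at ComfyUI's default port 8188. Before launching the UI, a quick sanity check is to run nvidia-smi inside the same image to confirm the container can see the GPUs; this is a minimal sketch that assumes the NVIDIA container runtime is already configured on the host:

```bash
# Confirm the NVIDIA runtime exposes the GPUs inside the container
docker run --rm --runtime=nvidia --gpus all oneflowinc/comfyui-onediff nvidia-smi

# After main.py starts, the UI should be reachable from the host
# (ComfyUI listens on port 8188 by default; --network host exposes it directly)
curl -I http://127.0.0.1:8188
```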
NOTE
- If you would like to run the graph by VM, set the environment variable ONEFLOW_RUN_GRAPH_BY_VM=1.
- The environment variable ONEDIFF_INITIAL_PACKAGE_NAMES_FOR_CLASS_PROXIES=diffusers,/app/ComfyUI/comfy is required.
- The directory structure of /path/to/comfyui/models should follow the structure of ComfyUI/models (a setup sketch is shown after the tree below), which means:
models/
├── checkpoints
├── clip
├── clip_vision
├── configs
├── controlnet
├── diffusers
├── embeddings
├── gligen
├── hypernetworks
├── loras
├── style_models
├── unet
├── upscale_models
├── vae
└── vae_approx
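If the host directory is still empty, a sketch like the following creates the expected layout before it is mounted into the container; /path/to/comfyui/models matches the mount path used in the docker run command above, and the checkpoint filename is a placeholder for your own model file:

```bash
# Create the directory layout ComfyUI expects (host-side placeholder path)
MODELS_DIR=/path/to/comfyui/models
mkdir -p "$MODELS_DIR"/{checkpoints,clip,clip_vision,configs,controlnet,diffusers,embeddings,gligen,hypernetworks,loras,style_models,unet,upscale_models,vae,vae_approx}

# Place at least one Stable Diffusion checkpoint where the Load Checkpoint node looks for it
cp /path/to/your-model.safetensors "$MODELS_DIR/checkpoints/"
```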
In the menu section "Add Node/utils", there is a "Model Speed Up" node that takes a MODEL as input and outputs an accelerated model.
Load the screenshot images below into ComfyUI to try the workflow.
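If the screenshot workflows are not at hand, one way to check that the speed-up node is registered is to query the running ComfyUI server's node list; this assumes the container was started with --network host so the server is reachable at 127.0.0.1:8188, and "Speed" is only a guessed search pattern since the node's registered class name may differ from its display name:

```bash
# List registered node types and search for the OneDiff speed-up node
curl -s http://127.0.0.1:8188/object_info | grep -o '"[^"]*Speed[^"]*"' | sort -u
```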