
Commit

Refine
lixiang007666 committed Jul 6, 2024
1 parent 89efe18 commit 8b57a88
Showing 3 changed files with 25 additions and 7 deletions.
9 changes: 8 additions & 1 deletion benchmarks/text_to_image.py
@@ -36,6 +36,7 @@
 from diffusers.utils import load_image
 
 from onediffx import compile_pipe, quantize_pipe  # quantize_pipe currently only supports the nexfort backend.
+from onediff.infer_compiler import oneflow_compile
 
 
 def parse_args():
@@ -244,7 +245,13 @@ def main():
         pass
     elif args.compiler == "oneflow":
         print("Oneflow backend is now active...")
-        pipe = compile_pipe(pipe)
+        # Note: compile_pipe() with the oneflow backend is incompatible with T5EncoderModel.
+        # pipe = compile_pipe(pipe)
+        if hasattr(pipe, "unet"):
+            pipe.unet = oneflow_compile(pipe.unet)
+        if hasattr(pipe, "transformer"):
+            pipe.transformer = oneflow_compile(pipe.transformer)
+        pipe.vae.decoder = oneflow_compile(pipe.vae.decoder)
     elif args.compiler == "nexfort":
         print("Nexfort backend is now active...")
         if args.quantize:
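The change above replaces whole-pipeline compilation with per-submodule compilation, guarded by `hasattr` so the same code path serves UNet-based pipelines (SD/SDXL) and DiT-based ones (PixArt). A minimal standalone sketch of that pattern follows; `FakePixArtPipe` and `fake_compile` are illustrative stand-ins invented here, not the real diffusers pipeline or onediff's `oneflow_compile`:

```python
# Sketch of the per-submodule compilation pattern from the diff above.
# Stub classes stand in for a real diffusers pipeline; fake_compile
# stands in for onediff's oneflow_compile, which wraps an nn.Module.

class FakeVAE:
    def __init__(self):
        self.decoder = "vae_decoder"

class FakePixArtPipe:
    """PixArt-style pipelines expose a DiT `transformer`, not a `unet`."""
    def __init__(self):
        self.transformer = "dit"
        self.vae = FakeVAE()

def fake_compile(module):
    # Stand-in: the real call returns a compiled wrapper around the module.
    return f"compiled({module})"

def compile_submodules(pipe):
    # Compile only the compute-heavy denoiser and the VAE decoder,
    # leaving the T5 text encoder untouched (whole-pipeline compilation
    # is incompatible with it, per the note in the diff).
    if hasattr(pipe, "unet"):
        pipe.unet = fake_compile(pipe.unet)
    if hasattr(pipe, "transformer"):
        pipe.transformer = fake_compile(pipe.transformer)
    pipe.vae.decoder = fake_compile(pipe.vae.decoder)
    return pipe

pipe = compile_submodules(FakePixArtPipe())
print(pipe.transformer)   # compiled(dit)
print(pipe.vae.decoder)   # compiled(vae_decoder)
```

The `hasattr` guards are the key design point: the benchmark script stays agnostic to which denoiser architecture a given model uses.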
20 changes: 16 additions & 4 deletions onediff_diffusers_extensions/examples/pixart/README.md
@@ -1,4 +1,4 @@
-# Run PixArt with nexfort backend(Beta Release)
+# Run PixArt with nexfort backend (Beta Release)
 
 
 1. [Environment Setup](#environment-setup)
@@ -27,8 +27,8 @@ https://github.com/siliconflow/onediff/tree/main/src/onediff/infer_compiler/backends/nexfort

 HF model:
 
-- PixArt-sigma: https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
 - PixArt-alpha: https://huggingface.co/PixArt-alpha/PixArt-XL-2-1024-MS
+- PixArt-sigma: https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
 
 HF pipeline: https://huggingface.co/docs/diffusers/main/en/api/pipelines/pixart

@@ -44,7 +44,7 @@ Compared to PixArt-alpha, PixArt-sigma extends the token length of the text encoder
 cd onediff
 ```
 
-### Run 1024*1024 without compile(the original pytorch HF diffusers pipeline)
+### Run 1024*1024 without compile (the original pytorch HF diffusers pipeline)
 ```
 # To test sigma, specify the --model parameter as `PixArt-alpha/PixArt-Sigma-XL-2-1024-MS`.
 python3 ./benchmarks/text_to_image.py \
@@ -56,7 +56,19 @@ python3 ./benchmarks/text_to_image.py \
 --prompt "product photography, world of warcraft orc warrior, white background"
 ```
 
-### Run 1024*1024 with compile
+### Run 1024*1024 with oneflow backend compile
+
+```
+python3 ./benchmarks/text_to_image.py \
+--model PixArt-alpha/PixArt-XL-2-1024-MS \
+--scheduler none \
+--steps 20 \
+--compiler oneflow \
+--output-image ./pixart_alpha_compile.png \
+--prompt "product photography, world of warcraft orc warrior, white background"
+```
+
+### Run 1024*1024 with nexfort backend compile
 ```
 python3 ./benchmarks/text_to_image.py \
 --model PixArt-alpha/PixArt-XL-2-1024-MS \
3 changes: 1 addition & 2 deletions src/onediff/infer_compiler/backends/nexfort/README.md
@@ -42,8 +42,7 @@ python3 -m nexfort.utils.clear_inductor_cache
 Advanced cache functionality is currently in development.
 
 ### Dynamic shape
-Onediff's nexfort backend also supports out-of-the-box dynamic shape inference. You just need to enable `dynamic` during compilation, as in `'{"mode": "max-autotune
-", "dynamic": true}'`. To understand how dynamic shape support works, please refer to the <https://pytorch.org/docs/stable/generated/torch.compile.html> and <https://github.com/pytorch/pytorch/blob/main/docs/source/torch.compiler_dynamic_shapes.rst> page. To avoid over-specialization and re-compilation, you need to initially call your model with a non-typical shape. For example: you can first call your Stable Diffusion model with a shape of 512x768 (height != width).
+Onediff's nexfort backend also supports out-of-the-box dynamic shape inference. You just need to enable `dynamic` during compilation, as in `'{"mode": "max-autotune", "dynamic": true}'`. To understand how dynamic shape support works, please refer to the <https://pytorch.org/docs/stable/generated/torch.compile.html> and <https://github.com/pytorch/pytorch/blob/main/docs/source/torch.compiler_dynamic_shapes.rst> pages. To avoid over-specialization and re-compilation, you need to initially call your model with a non-typical shape. For example, you can first call your Stable Diffusion model with a shape of 512x768 (height != width).
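The "call first with a non-typical shape" advice can be mimicked with a toy, pure-Python "compiler" (no nexfort or torch involved; `toy_compile` and its shape-keyed cache are invented for this sketch, as a rough stand-in for a backend that re-traces on every unseen input shape):

```python
# Toy model of shape specialization, illustrating why the first call's
# shape matters. A specializing backend caches one "graph" per input
# shape and re-traces (recompiles) on unseen shapes; dynamic=True keys
# every shape to a single generic graph instead.

compile_count = 0  # how many times the fake backend had to (re)trace

def toy_compile(fn, dynamic=False):
    cache = {}
    def wrapper(shape):
        global compile_count
        key = None if dynamic else shape  # dynamic: one generic graph
        if key not in cache:
            compile_count += 1  # a real backend would re-trace here
            cache[key] = fn
        return cache[key](shape)
    return wrapper

model = toy_compile(lambda s: s[0] * s[1], dynamic=True)
model((512, 768))    # first call with height != width: one generic graph
model((1024, 1024))  # reuses the cached graph, no recompilation
print(compile_count)  # 1
```

Without `dynamic=True`, each new shape would miss the cache and bump `compile_count`, which is the over-specialization the paragraph above warns about.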

 Test SDXL:
 ```
