ControlNet result very bad. #203

Open
ShenZheng2000 opened this issue Jun 20, 2024 · 2 comments
Comments

@ShenZheng2000

Here is the inference script I used for ControlNet image-to-image translation. Note that I have already downloaded your config.json and diffusion_pytorch_model.safetensors and put them into a local controlnet directory.

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch

base_model_path = "runwayml/stable-diffusion-v1-5"
controlnet_path = "controlnet"

controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove the following line if xformers is not installed or when using Torch 2.0.
# pipe.enable_xformers_memory_efficient_attention()  # NOTE: commented out for now because torch < 2.0
# memory optimization.
pipe.enable_model_cpu_offload()

control_image = load_image("bdd100k/images/100k/train_day/0a0a0b1a-7c39d841.jpg")
# prompt = "turn this into a night driving scene"
prompt = "day to night"

# generate image
generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=control_image
).images[0]
image.save("./output.png")

However, the result is very bad (screenshot below).
[screenshot: output image]

@starrywintersky commented Jun 21, 2024

It seems that you need to compute a depth map rather than feed the raw image directly into the ControlNet pipeline:
from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from PIL import Image
import numpy as np
import torch
from diffusers.utils import load_image

# Estimate a depth map from the input image
depth_estimator = pipeline('depth-estimation')

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")

image = depth_estimator(image)['depth']
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)  # single channel -> 3-channel
image = Image.fromarray(image)

# Depth-conditioned ControlNet on top of SD 1.5
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

# The depth map is passed as the ControlNet conditioning image
image = pipe("Stormtrooper's lecture", image, num_inference_steps=20).images[0]

image.save('./images/stormtrooper_depth_out.png')
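
If your goal is to restyle the original photo (day to night) rather than synthesize a new scene from the depth map alone, the img2img variant of the ControlNet pipeline may be closer to what you want. Below is a rough, untested sketch along those lines; the prompt and strength values are placeholders you would need to tune:

from transformers import pipeline
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
from PIL import Image
import numpy as np
import torch

# Original photo to be restyled (same BDD frame as above)
init_image = load_image("bdd100k/images/100k/train_day/0a0a0b1a-7c39d841.jpg")

# Depth map used as the ControlNet conditioning image
depth = pipeline('depth-estimation')(init_image)['depth']
depth = np.array(depth)[:, :, None].repeat(3, axis=2)
control_image = Image.fromarray(depth)

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# `image` is the photo to restyle, `control_image` is the depth conditioning;
# `strength` controls how far the result may drift from the original photo (placeholder value).
result = pipe(
    "a night driving scene, photorealistic",
    image=init_image,
    control_image=control_image,
    strength=0.6,
    num_inference_steps=20,
    generator=torch.manual_seed(0),
).images[0]
result.save("./output_img2img.png")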

@ShenZheng2000 (Author) commented Jun 22, 2024

Thank you for your response. My current code isn't working. Could you update the README for ControlNet with a working example (code and images) to help me debug?

Here is my code

from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from PIL import Image
import numpy as np
import torch
from diffusers.utils import load_image

# Paths
base_model_path = "runwayml/stable-diffusion-v1-5"
controlnet_path = "controlnet"  # Use a ControlNet model suited for depth

# Step 1: Estimate depth of the input image
depth_estimator = pipeline('depth-estimation')

# Load the input image
input_image_path = "bdd100k/images/100k/train_day/0a0a0b1a-7c39d841.jpg"
control_image = load_image(input_image_path)

# Estimate depth
depth_map = depth_estimator(control_image)['depth']
depth_map = np.array(depth_map)
depth_map = depth_map[:, :, None]  # Add a channel dimension
depth_map = np.concatenate([depth_map, depth_map, depth_map], axis=2)  # Make it 3-channel
depth_map = Image.fromarray(depth_map)

# Step 2: Set up the ControlNet pipeline with the depth model
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# Optimize the pipeline
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

# Define the prompt
prompt = "day to night"

# Step 3: Generate the image using the depth map as the control image
generator = torch.manual_seed(0)
output_image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=depth_map
).images[0]

# Save the output image
output_image.save("./output.png")

Here are my original image (left) and result image (right):
[screenshot: original vs. result]

I have also tried changing controlnet_path to this

controlnet_path = "lllyasviel/sd-controlnet-depth"

And got the following result, which is still bad.
[screenshot: result with lllyasviel/sd-controlnet-depth]
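
One more thing I plan to check (just a guess on my part): the BDD frames are 1280x720, while SD 1.5 and these ControlNet checkpoints were trained around 512x512, so I may also try resizing the depth map before calling the pipeline. Continuing from the script above:

# Just a guess: resize the control image toward the 512-pixel range SD 1.5 was
# trained on, keeping the 16:9 aspect ratio (both sides must be multiples of 8).
control_512 = depth_map.resize((512, 288), Image.BILINEAR)
output_image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=control_512
).images[0]
output_image.save("./output_512.png")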
