An easy-to-use web app for text-guided image inpainting. (Replace any object or scene you want!)
The easiest setup is to run in Colab:
- Click the link
- Click Runtime -> Change runtime type and set the hardware accelerator to GPU for faster generation
- Run the cells in sequence
- Once the cell labeled "Run web app" is running, enter your Hugging Face access token
- Click the URL generated by the previous cell
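Under the hood the app wraps Hugging Face's `diffusers` inpainting pipeline with the `runwayml/stable-diffusion-inpainting` checkpoint. A minimal sketch of the same call outside the web UI — file names and the prompt are placeholders, and the size-snapping helper is ours, reflecting Stable Diffusion's requirement that width and height be multiples of 8:

```python
# Minimal sketch of the diffusers inpainting pipeline the web app wraps.
# Requires: pip install diffusers transformers torch pillow
# File names and the prompt below are placeholders.

def snap_to_eight(n: int) -> int:
    """Stable Diffusion operates on latents downsampled 8x, so image
    width/height must be multiples of 8; round down (minimum 8)."""
    return max(8, (n // 8) * 8)

def main():
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = Image.open("input.png").convert("RGB")   # image to edit
    mask = Image.open("mask.png").convert("RGB")     # white = region to replace
    size = (snap_to_eight(image.width), snap_to_eight(image.height))
    image, mask = image.resize(size), mask.resize(size)

    result = pipe(
        prompt="a vase of flowers on a table",        # placeholder prompt
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("output.png")

if __name__ == "__main__":
    main()
```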
Name | Description | Default |
---|---|---|
Steps | Number of denoising steps used to generate the image. More steps usually yield a higher-quality image at the expense of slower inference. | 50 |
Guidance | A higher guidance scale encourages images more closely linked to the text prompt, usually at the expense of lower image quality. | 7.5 |
Brush / Eraser | Use the brush to draw a mask over the region to replace; use the eraser to remove the mask. | |
Seed | Changing the seed gives a different result for the same prompt; the same seed and prompt reproduce the same result. | 0 |
Negative prompt | The prompt or prompts not to guide the image generation. | |
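These controls map directly onto keyword arguments of the `diffusers` inpainting pipeline. A small sketch of that mapping — the helper function name is ours, but the dictionary keys are the real pipeline arguments:

```python
# Sketch of how the web-app controls translate into diffusers pipeline
# keyword arguments (helper name is illustrative, keys are real).

def build_pipeline_kwargs(steps=50, guidance=7.5, negative_prompt=None):
    """Translate UI settings into StableDiffusionInpaintPipeline kwargs."""
    kwargs = {
        "num_inference_steps": steps,   # Steps: more = slower, usually higher quality
        "guidance_scale": guidance,     # Guidance: how strongly to follow the prompt
    }
    if negative_prompt:
        kwargs["negative_prompt"] = negative_prompt
    # The seed is passed separately in the real call, e.g.:
    #   generator = torch.Generator("cuda").manual_seed(seed)
    #   pipe(prompt=..., image=..., mask_image=..., generator=generator, **kwargs)
    return kwargs
```

Using the same seed (via the `generator` argument) with the same prompt makes generation reproducible, which is why the app exposes it as a control.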
- Add support for Stable Diffusion v2 from Stability AI
- Support generating multiple images at once
- Optimize inference performance with xformers & TensorRT
- Support exemplar guidance from Paint-by-Example
Huge thanks to runway-ml for the inpainting model and to Hugging Face for providing an easy-to-use inpainting pipeline.