
Edit Vector #20

Open
baopmessi opened this issue Dec 4, 2023 · 14 comments

Comments

@baopmessi

Thanks for the awesome work!
How can I obtain an attribute editing vector (e.g. smile, glasses, ...)?

@williamyang1991
Owner

Please refer to #17

@baopmessi
Author

baopmessi commented Dec 4, 2023

Thanks for the quick answer!
I loaded a vector from InterfaceGAN and ran inference, but it isn't working: the face does not change as expected.

@williamyang1991
Owner

The vector used in our age model has been saved within the model ckpt: ckpt['editing_w']

# imports as in the StyleGANEX repo
from argparse import Namespace
import torch
from huggingface_hub import hf_hub_download
from models.psp import pSp

def load_model(path, device):
    local_path = hf_hub_download('PKUWilliamYang/StyleGANEX', path)
    ckpt = torch.load(local_path, map_location='cpu')
    opts = ckpt['opts']
    opts['checkpoint_path'] = local_path
    opts['device'] = device
    opts = Namespace(**opts)
    pspex = pSp(opts).to(device).eval()
    pspex.latent_avg = pspex.latent_avg.to(device)
    if 'editing_w' in ckpt.keys():
        return pspex, ckpt['editing_w'].clone().to(device)
    return pspex

If you want to use another vector, you need to train a new model on that vector.
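For reference, once `editing_w` is extracted, applying it is just a vector addition in w+ space. A minimal sketch with random stand-in tensors (in practice `w_plus` comes from the StyleGANEX encoder and `editing_w` from `ckpt['editing_w']`; the shapes here are assumptions for illustration):

```python
import torch

# Stand-ins for illustration only.
w_plus = torch.randn(1, 18, 512)   # inverted latent: (batch, num_layers, 512)
editing_w = torch.randn(1, 512)    # editing direction in w space
scale = 3.0                        # editing strength; negative reverses the edit

# Broadcast the direction across all layers of the w+ code.
w_edited = w_plus + scale * editing_w.unsqueeze(1)
print(w_edited.shape)              # torch.Size([1, 18, 512])
```

The edited `w_edited` is then fed to the generator in place of the original inversion result.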

@baopmessi
Author

baopmessi commented Dec 5, 2023

I have a few questions:

  • The ckpt in styleganex_edit_age.pt is a model fine-tuned for video face editing: https://github.com/williamyang1991/StyleGANEX#video-editing-1
  • The results of editing the face (to a younger face) with the video face editing model are much better than inversion + adding editing vectors to w+. Can I use inversion + editing vectors instead of training a model for each editing vector?

@williamyang1991
Owner

  • inversion + editing vectors: for image editing
  • training a model for each editing vector: for video editing, because this way we can use skip connections and a temporal consistency loss to improve the quality and coherence of the video. You also need no per-frame optimization for inversion, which saves time during inference.

@baopmessi
Author

Thank you for your help!
I have succeeded with inversion + editing vector (from the ckpt in your pretrained_models/styleganex_edit_age.pt).
However, I couldn't find where the editing vector comes from. I tried using editing vectors from the InterfaceGAN repo as in #17, but they didn't work. Could you please share how you obtained the editing vector in pretrained_models/styleganex_edit_age.pt?

@williamyang1991
Owner

This is how I obtained the editing vector: #17 (comment)
I have no idea why your case fails.

On Hugging Face, we also provide the editing vectors:

self.editing_dicts = torch.load(hf_hub_download('PKUWilliamYang/StyleGANEX', 'direction_dics.pt'))
editing_w = self.editing_dicts['smile'].to(self.device)
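For anyone reproducing this: the file is a plain dict mapping attribute names to direction tensors, so the available edits can be listed directly. A sketch with a local stand-in dict (the real dict is loaded from direction_dics.pt on the Hub; the attribute names and shapes here are assumptions):

```python
import torch

# Local stand-in for direction_dics.pt.
editing_dicts = {
    'smile': torch.randn(1, 512),
    'reduce_age': torch.randn(1, 512),
}
print(sorted(editing_dicts.keys()))   # list the available editing directions
editing_w = editing_dicts['smile']    # pick one direction tensor
```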

@baopmessi
Author

baopmessi commented Dec 6, 2023

Oh! I see.

  • The smile direction in this snippet

        self.editing_dicts = torch.load(hf_hub_download('PKUWilliamYang/StyleGANEX', 'direction_dics.pt'))
        editing_w = self.editing_dicts['smile'].to(self.device)

    is the same as the one in the LowRankGAN repo link, not the one in InterfaceGAN.

  • However, I couldn't find the age direction in the LowRankGAN repo, and the age vectors in InterfaceGAN don't match the "reduce_age" direction in your edit_direction. Can you help me obtain it? Did you select the vector after running link ?

Big thanks!

@williamyang1991
Owner

Your InterfaceGAN vector is based on StyleGAN-CelebAHQ.
You should check StyleGAN-FFHQ instead.

@baopmessi
Author

baopmessi commented Dec 6, 2023

I have tried all the age vectors in 1, but none of them work as well as yours. Since I want to retrain with some new expressions, I need to trace the origin of the edit vectors.

@williamyang1991
Owner


I checked my code. I use the editing vector from anycostgan.

@nlcefn

nlcefn commented Jan 4, 2024

Can I edit an image using the StyleGANEX inversion model, similar to an image-to-image model with skip connections, by modifying the vector to achieve various styles, such as adding glasses? Additionally, when using the StyleGANEX model to transform images, can I edit a picture into multiple styles with a single model by computing the editing vectors, and does this process involve skip connections?

@williamyang1991
Owner

Two ways of editing:

  1. StyleGANEX inversion + an arbitrary editing vector.
  2. Train an encoder with skip connections to build an image-to-image model on the editing vector for adding glasses. Once trained, this encoder with skip connections will only work for that editing vector, not for other editing vectors.

Both ways can handle arbitrary face images.

@nlcefn

nlcefn commented Feb 2, 2024

If I want to train a model with skip connections for another editing style, how should I start the training process, how much data should I prepare, and how should I specify the command-line parameters for training?
