Confusion about the formula of content adversarial loss #56

Open
Victarry opened this issue Jun 1, 2021 · 1 comment

Victarry commented Jun 1, 2021

In the original paper, the content adversarial loss is:
[image: the content adversarial loss formula from the paper]

However, according to the code:

def backward_contentD(self, imageA, imageB):
    pred_fake = self.disContent.forward(imageA.detach())
    pred_real = self.disContent.forward(imageB.detach())
    for it, (out_a, out_b) in enumerate(zip(pred_fake, pred_real)):
      out_fake = nn.functional.sigmoid(out_a)
      out_real = nn.functional.sigmoid(out_b)
      all1 = torch.ones((out_real.size(0))).cuda(self.gpu)
      all0 = torch.zeros((out_fake.size(0))).cuda(self.gpu)
      ad_true_loss = nn.functional.binary_cross_entropy(out_real, all1)
      ad_fake_loss = nn.functional.binary_cross_entropy(out_fake, all0)
    loss_D = ad_true_loss + ad_fake_loss
    loss_D.backward()
    return loss_D
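
Reading the snippet (writing c_A and c_B for the two inputs, which I take to be the content features of the two domains), the two binary cross-entropy terms amount to the standard GAN discriminator objective with target 1 for c_B and target 0 for c_A:

$$\mathcal{L}_{D^c} = -\,\mathbb{E}\big[\log \sigma(D^c(c_B))\big] - \mathbb{E}\big[\log\big(1 - \sigma(D^c(c_A))\big)\big]$$

which does not match the formula quoted from the paper above.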

I think this formula should be written as:
[image: proposed corrected formula]
After all, under the original formula the discriminator is optimal when it outputs 0.5 for every input.
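
To spell this out (assuming the paper's formula applies the half-weighted term ½ log D^c(c) + ½ log(1 − D^c(c)) to each content feature c, which is how I read it): for a single output D ∈ (0, 1),

$$\frac{d}{dD}\Big[\tfrac{1}{2}\log D + \tfrac{1}{2}\log(1 - D)\Big] = \frac{1}{2D} - \frac{1}{2(1 - D)} = 0 \iff D = \tfrac{1}{2},$$

and the term is concave, so it is maximized exactly at D = 0.5 regardless of the encoders. The code above instead trains the discriminator with the usual 1/0 targets. A minimal numeric check of the 0.5 claim (just a sketch, not code from the repo; the helper name half_weighted_term is mine):

import torch

def half_weighted_term(d):
    # 0.5 * log(D) + 0.5 * log(1 - D), the per-output term as I read the paper's formula
    return 0.5 * torch.log(d) + 0.5 * torch.log(1 - d)

d = torch.tensor([0.1, 0.3, 0.5, 0.7, 0.9])
print(half_weighted_term(d))  # the largest value (about -0.693) occurs at d = 0.5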

@Wolfybox

Duplicate Issue: #17
