Hello, I have a question about the loss in the train_gcmc() function in transD_movielens.py. The code is below. Why do you accumulate l_penalty_2 over all discriminators (the line where l_penalty_2 is summed)? In my opinion, each discriminator could be trained separately with its own loss; it has nothing to do with the other discriminators. A sketch of the variant I have in mind follows the snippet.
for k in range(0, args.D_steps):
    l_penalty_2 = 0
    for fairD_disc, fair_optim in zip(masked_fairD_set,
                                      masked_optimizer_fairD_set):
        if fairD_disc is not None and fair_optim is not None:
            fair_optim.zero_grad()
            # accumulates the penalties of all discriminators seen so far
            l_penalty_2 += fairD_disc(filter_l_emb.detach(),
                                      p_batch[:, 0], True)
            if not args.use_cross_entropy:
                fairD_loss = -1 * (1 - l_penalty_2)
            else:
                fairD_loss = l_penalty_2
            fairD_loss.backward(retain_graph=True)
            fair_optim.step()
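For clarity, here is a minimal sketch of the per-discriminator variant I have in mind, where each discriminator is updated on its own penalty only. This is my hypothetical rewrite, not the repository's code; it assumes each fairD_disc call returns a scalar loss tensor.

for k in range(0, args.D_steps):
    for fairD_disc, fair_optim in zip(masked_fairD_set,
                                      masked_optimizer_fairD_set):
        if fairD_disc is not None and fair_optim is not None:
            fair_optim.zero_grad()
            # penalty of this discriminator only, no accumulation
            l_penalty = fairD_disc(filter_l_emb.detach(),
                                   p_batch[:, 0], True)
            if not args.use_cross_entropy:
                fairD_loss = -1 * (1 - l_penalty)
            else:
                fairD_loss = l_penalty
            # no graph is shared across discriminators here, so
            # retain_graph=True should no longer be needed
            fairD_loss.backward()
            fair_optim.step()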