Hello, regarding the code in distill_mtt.py: I noticed that `net_eval` is initialized from a buffer of checkpoints pre-trained on the entire dataset. If evaluation starts from weights already trained on the complete dataset, doesn't that undermine an independent assessment of how well the synthetic data alone can train a model, and potentially overestimate the effectiveness of the synthetic data?
```python
net_eval = get_network(model_eval, channel, num_classes, im_size).to(args.device)  # get a random model
for b, p in zip(buffer[it_eval][0], net_eval.state_dict().items()):
    # load the same weights
    p[1].copy_(b.data)

eval_labs = label_syn
with torch.no_grad():
    image_save = image_syn
image_syn_eval, label_syn_eval = copy.deepcopy(image_save.detach()), copy.deepcopy(eval_labs.detach())  # avoid any unaware modification

saved_epoch_eval_train = args.epoch_eval_train
args.epoch_eval_train = args.override_epoch_eval_train
teacher_net, acc_train, acc_test = evaluate_synset(it_eval, net_eval, image_syn_eval, label_syn_eval, testloader, args, texture=args.texture)
args.epoch_eval_train = saved_epoch_eval_train
```
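One way to probe this concern empirically is to compare two evaluation modes: the warm start used above (copying the buffer checkpoint into `net_eval` via the `state_dict` loop) versus a truly random initialization, which tests the synthetic data from scratch. Below is a minimal, self-contained sketch of that comparison on a toy network; `build_eval_net` and the toy `nn.Sequential` model are hypothetical stand-ins for the repo's `get_network` and `buffer[it_eval][0]`, introduced only for illustration.

```python
import torch
import torch.nn as nn

def build_eval_net(buffer_params=None, seed=0):
    """Return an evaluation network.

    With buffer_params=None the net keeps its random initialization, so the
    synthetic data is tested from scratch; otherwise the expert checkpoint
    (trained on the full dataset) is copied in first, mirroring the
    state_dict copy loop from distill_mtt.py.
    (Hypothetical helper; the real code uses get_network / buffer.)
    """
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    if buffer_params is not None:
        # same pattern as the repo: copy buffer tensors into the state dict
        for b, (_, p) in zip(buffer_params, net.state_dict().items()):
            p.copy_(b.data)
    return net

# a toy "expert" checkpoint standing in for buffer[it_eval][0]
torch.manual_seed(1)
expert = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
buffer_params = [p.detach().clone() for p in expert.state_dict().values()]

scratch_net = build_eval_net()            # random init: independent check
warm_net = build_eval_net(buffer_params)  # buffer init: as in the repo

# the warm-started net matches the expert exactly before any training
same = all(torch.equal(a, b) for a, b in
           zip(warm_net.state_dict().values(), expert.state_dict().values()))
print(same)  # True
```

If training `scratch_net` on the synthetic images still reaches the reported accuracy, the synthetic data carries the signal on its own; if only `warm_net` does, the buffer checkpoint is doing part of the work.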