
The reverse sampling results are not ideal #9

Open
2000lf opened this issue Dec 14, 2024 · 0 comments
2000lf commented Dec 14, 2024

```python
noise = torch.randn_like(im).to(device)
t = torch.full((im.shape[0],), diffusion_config['num_timesteps'] - 1, device=device)
# t = torch.randint(0, diffusion_config['num_timesteps'], (im.shape[0],)).to(device)
xt = scheduler.add_noise(im, noise, t)

for i in tqdm(reversed(range(diffusion_config['num_timesteps']))):
    # Get prediction of noise (unused below: the fixed forward noise is
    # deliberately passed to the scheduler in its place)
    noise_pred = model(xt, torch.as_tensor(i).unsqueeze(0).to(device))

    # Use scheduler to get xt-1 and x0, substituting the known forward
    # noise for the model's prediction
    xt, x0_pred = scheduler.sample_prev_timestep(xt, noise, torch.as_tensor(i).to(device))
```

To validate the reverse sampling process, I replaced the initial random noise with the xt produced by the forward process, and I passed the scheduler the noise that was actually added during the forward pass in place of the model's prediction. However, the results are not ideal. Do you have any insights on this?
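One possible explanation: the fixed forward noise is only the correct "total noise" estimate at the very first reverse step. After one stochastic step, xt-1 no longer corresponds to that same epsilon relative to x0, so reusing it for all remaining steps drifts away from the image. A cleaner sanity check of the forward/known-noise relationship is the one-shot closed-form inversion. The sketch below is illustrative only: it assumes the standard DDPM forward equation and a linear beta schedule, and the names (`num_timesteps`, `alpha_bars`) are placeholders, not this repo's API.

```python
import torch

num_timesteps = 1000
betas = torch.linspace(1e-4, 0.02, num_timesteps)   # illustrative linear schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)      # cumulative product of (1 - beta)

x0 = torch.randn(2, 3, 8, 8)                        # stand-in for a real image batch
noise = torch.randn_like(x0)
t = num_timesteps - 1

# Forward process closed form: xt = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise
xt = alpha_bars[t].sqrt() * x0 + (1 - alpha_bars[t]).sqrt() * noise

# Inverting the same closed form with the *known* noise recovers x0 in one step,
# with no reverse chain involved -- if this fails, add_noise itself is suspect.
x0_rec = (xt - (1 - alpha_bars[t]).sqrt() * noise) / alpha_bars[t].sqrt()
print(torch.allclose(x0_rec, x0, atol=1e-3))
```

If this one-shot check passes but the iterative loop still degrades, the issue is the fixed-noise substitution inside the loop, not the forward process.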