
Could not reproduce the results demonstrated in the paper #11

Open
bananaman1983 opened this issue Jan 5, 2024 · 3 comments

bananaman1983 commented Jan 5, 2024

First off, I am very intrigued by the approach this project is taking. I think utilizing polarized photography is a valid way to tackle specular highlights, since it should, in theory, remove the specular reflections and leave only the diffuse component.

However, there seem to be some inconsistencies between the paper and the code or dataset available on this GitHub repository, which cause the output to differ from the published results.

I managed to redirect the dataset references by modifying train_specularitynet.py and obtained checkpoints after around 100 epochs of training, but in some instances they do not yield outputs equivalent to the results demonstrated by the authors.
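
For reference, the change amounted to pointing the dataset constructor at a local copy, roughly like this. This is a sketch only: the path is a placeholder and the exact call site in train_specularitynet.py may differ; SpecDataset's signature is taken from the spec.py excerpt further down.

# Sketch only: redirect training data to a local copy of the dataset.
# 'opt' stands for the parsed options object the script already builds;
# '/path/to/local/dataset' is a placeholder, not the repository's value.
train_dataset = SpecDataset(opt, datadir='/path/to/local/dataset',
                            dirA='spec', dirB='nospec')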

For instance, wider areas of specular reflection seem to be missed by the removal entirely. Take this example:
(attached images: input24_input, the input photograph, and input24_result_refined_iters1, the refined result after one iteration)

I would be pleased to hear from the authors about the training details for the dataset, and whether the code on GitHub is the up-to-date version used to produce the final results in the paper.

Regarding the dataset: of the 2210 training scenes reported in the paper, a substantial number seem to be missing, leaving around 900 scenes at most.

As for the code, I noticed that some lines in spec.py are commented out so that the diffuse (nospec) file list is replaced by the specular one, which leaves the calculated masks blank. I wonder whether this was intentional. If so, I would be glad to know why it was necessary, and whether you think reverting it would improve the results.

class SpecDataset(torch.utils.data.Dataset):
    def __init__(self, opt, datadir, dirA='spec', dirB='nospec', imgsize=None):
        super(SpecDataset, self).__init__()
        self.opt = opt
        self.datadir = datadir
        self.dirA = dirA
        self.dirB = dirB
        self.fnsA = sorted(os.listdir(join(datadir, dirA)))
        # The 'nospec' listing below is commented out, so fnsB merely
        # aliases fnsA; if the A/B pairs are built from these lists, any
        # mask derived from their difference comes out blank.
        #self.fnsB = sorted(os.listdir(join(datadir,dirB)))
        self.fnsB = self.fnsA
        self.imgsize = imgsize
        # np.random.seed(0)
        print('Load {} items in {} ...'.format(len(self.fnsA), datadir))
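
For completeness, the reverted version I experimented with changes the relevant lines of __init__ roughly as follows. This is a sketch under the assumption that both directories hold aligned, identically named image pairs, which I could not fully verify given the missing scenes; the assert is my own addition, not part of the original code.

        self.fnsA = sorted(os.listdir(join(datadir, dirA)))
        # Restore the independent 'nospec' listing so A/B pairs differ again.
        self.fnsB = sorted(os.listdir(join(datadir, dirB)))
        # My addition: pairing appears to rely on positional alignment,
        # so mismatched listings would silently corrupt the pairs.
        assert len(self.fnsA) == len(self.fnsB), \
            'spec/nospec listings differ: {} vs {}'.format(
                len(self.fnsA), len(self.fnsB))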

I could upload the fixed version in a pull request, but I see that this project has no license file, and I would feel more comfortable knowing under what license the current code is distributed.

I admire the work you have published and look forward to hearing from you.

@JovinLeong

I'm facing the same issue - @bananaman1983 would it be possible to upload your fixed version? Thanks so much!!!

@bananaman1983 (Author)

> I'm facing the same issue - @bananaman1983 would it be possible to upload your fixed version? Thanks so much!!!
Sorry. Without a clear declaration from the author of the license this project is distributed under, I don't think I am allowed to publicly post any code modified from the source.

@JovinLeong

Understood, let's just wait on @jianweiguo then.
