I ran both the train and test code on the Laval Indoor dataset, and the test code produces a result like the image shown above. It appears to be the Gaussian map mentioned in the paper, and I wonder how to reconstruct the environment map from this image.
Thank you so much for your excellent work and I look forward to any reply.
A generation network, GenProjector, then translates the Gaussian map into illumination maps.
The generation network is strongly biased toward the scenes it was trained on, so I recommend training it on your own data.
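For readers unfamiliar with the pipeline, the data flow is roughly: a spherical Gaussian map marking likely light sources goes into a generator that predicts an HDR illumination (environment) map. The snippet below is a toy stand-in, not the repository's GenProjector (which is a learned network); the function name and the exponential mapping are purely illustrative assumptions to show the tensor shapes involved.

```python
# Toy sketch of the gaussian-map -> illumination-map step.
# NOTE: gen_projector_stub is hypothetical; the real GenProjector is a
# trained generative network, not this hand-written mapping.

def gen_projector_stub(gaussian_map):
    """Map per-pixel light likelihoods in [0, 1] to fake HDR radiance.

    A real generator would predict plausible HDR values; here we just
    exponentiate so bright Gaussian peaks become bright light sources.
    """
    return [[2.718281828 ** (4.0 * v) - 1.0 for v in row]
            for row in gaussian_map]

# A tiny 2x4 "gaussian map": one strong peak, the rest dark.
g = [[0.0, 0.1, 0.9, 0.0],
     [0.0, 0.0, 0.2, 0.0]]
env = gen_projector_stub(g)   # same spatial layout, HDR-like intensities
print(len(env), len(env[0]))  # spatial size is preserved
```

The key point is only that the Gaussian map and illumination map share the same panoramic layout; the generator changes the per-pixel content, not the geometry of the image.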
Traceback (most recent call last):
File "train.py", line 97, in
dist_emloss = Sam_Loss(dist_pred, dist_gt).sum() * 1000.0
File "/venv/py37_zero-XRWy4lKA/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'geometry'
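The traceback says `Sam_Loss.forward()` takes a third input, `geometry`, that the call site in `train.py` does not supply. A minimal reproduction of this failure mode (with a hypothetical stand-in class, not the repo's actual `Sam_Loss` implementation) looks like this:

```python
# Hypothetical stand-in illustrating the TypeError; the real Sam_Loss
# is a torch.nn.Module and its loss computation differs.
class SamLossStub:
    def forward(self, pred, gt, geometry):
        # Placeholder computation: the real loss presumably weights the
        # pred/gt discrepancy using the per-pixel geometry.
        return sum(abs(p - q) for p, q in zip(pred, gt)) * len(geometry)

loss_fn = SamLossStub()

try:
    loss_fn.forward([1.0, 2.0], [0.5, 1.5])        # geometry omitted, as in train.py
except TypeError as e:
    print("fails like the traceback:", e)

print(loss_fn.forward([1.0, 2.0], [0.5, 1.5], [0, 1, 2]))  # passing geometry fixes it
```

So the fix on the user side is to pass the geometry tensor as the third argument at the call site in `train.py`, assuming the training data provides it.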
So you used the geometry during training?