
requirements to run the code #4

Open
weberhen opened this issue Apr 1, 2021 · 19 comments

weberhen commented Apr 1, 2021

Hi again!

Could you please share your running environment so I can try to run the code with little to no changes?
I'm running into some silly problems like numpy-to-torch conversion errors, which I suspect happen because you use a different PyTorch version than mine; otherwise you would hit the same problem. One example of the errors I'm getting:

```
$ Illumination-Estimation/RegressionNetwork/train.py
Number of params: 9.50M
0 optim: 0.001
Traceback (most recent call last):
  File "Illumination-Estimation/RegressionNetwork/train.py", line 82, in <module>
    dist_emloss = GMLoss(dist_pred, dist_gt, depth_gt).sum() * 1000.0
  File "/miniconda3/envs/pt110/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/samples_loss.py", line 43, in forward
    scaling=self.scaling, geometry=geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/samples_loss.py", line 72, in sinkhorn_tensorized
    self.distance = distance(batchsize=B, geometry=geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/utils.py", line 79, in __init__
    anchors = geometric_points(self.N, geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/utils.py", line 70, in geometric_points
    points[:, 0] = radius * np.cos(theta)
TypeError: mul(): argument 'other' (position 1) must be Tensor, not numpy.ndarray
```

Thanks!
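The failing line multiplies a torch tensor (radius) by a numpy array (np.cos(theta)), which older PyTorch versions refuse. Converting the numpy operand to a tensor first sidesteps the version dependence; a minimal sketch, assuming theta is a numpy array and radius a tensor, as in gmloss/utils.py:

```python
import numpy as np
import torch

# stand-ins for the values built inside geometric_points
theta = np.linspace(0, 2 * np.pi, 10)
radius = torch.ones(10)

points = torch.zeros(10, 2)
# convert the numpy operand before mixing it with a tensor
points[:, 0] = radius * torch.from_numpy(np.cos(theta)).to(radius.dtype)
points[:, 1] = radius * torch.from_numpy(np.sin(theta)).to(radius.dtype)
```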

goddice commented Apr 5, 2021

How did you preprocess the raw Laval HDR dataset to make training runnable? Thanks!

weberhen commented Apr 6, 2021

Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py

Please let me know if you find any bugs :)

goddice commented Apr 6, 2021

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>
> Please let me know if you find any bugs :)

Thank you for your reply. Yes, I have access to the depth and HDR data. Where is the script that calls the depth_for_anchors_calc function? Or could you kindly tell me which scripts need to run to train the model from scratch, assuming I only have the raw depth and HDR data? Thanks!

weberhen commented Apr 7, 2021

The code I used to preprocess the dataset is in this old commit; I erased it by mistake later, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py

This code creates the pkl files needed during training: it takes the GT panorama and the depth and writes a pickle file with that information.

Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder using my forked code: https://github.com/weberhen/Illumination-Estimation

But to be honest I'm not sure it's working: it's on epoch 38 and has output the same prediction since epoch 3.
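For anyone reproducing this step, here is a minimal sketch of the kind of pickle the preprocessing writes; the field names and exact representation are assumptions, the real logic lives in the distribution_representation.py linked above:

```python
import pickle
import numpy as np
import ezexr  # EXR I/O from skylibs, discussed later in this thread

# hypothetical input files: a GT HDR panorama and its per-pixel depth
pano = ezexr.imread('pano.exr').astype(np.float32)    # H x W x 3
depth = np.load('pano_depth.npy').astype(np.float32)  # H x W

# bundle both into the per-scene pickle consumed by train.py
with open('pano.pkl', 'wb') as f:
    pickle.dump({'hdr': pano, 'depth': depth}, f)
```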

@LeoDarcy

> The code I used to preprocess the dataset is in this old commit; I erased it by mistake later, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth and writes a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder using my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and has output the same prediction since epoch 3.

EMLight needs the image warping operation from Gardner's work to warp the raw HDR panoramas and get input images, but I am not sure whether the warping code is released here. How do you sample images and warp panoramas? Thanks in advance.
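For reference, the warp in question maps an equirectangular (lat-long) panorama to a limited-FOV perspective view. A self-contained sketch, with nearest-neighbour sampling for brevity; the function name and conventions here are illustrative, not the repo's actual API:

```python
import numpy as np

def extract_view(pano, azimuth, elevation, vfov_deg=60.0, out_h=192, out_w=256):
    """Sample a pinhole-camera view from an equirectangular HDR panorama."""
    H, W, _ = pano.shape
    f = 0.5 * out_h / np.tan(0.5 * np.radians(vfov_deg))  # focal length in pixels
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(np.float64)
    # camera-space ray directions: x right, y up, z forward
    dirs = np.stack([xs - 0.5 * out_w, 0.5 * out_h - ys, np.full((out_h, out_w), f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    ce, se = np.cos(elevation), np.sin(elevation)  # positive elevation looks up
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    Rx = np.array([[1, 0, 0], [0, ce, se], [0, -se, ce]])  # pitch
    Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])  # yaw
    d = dirs @ (Ry @ Rx).T
    # ray direction -> lat-long pixel coordinates
    u = (np.arctan2(d[..., 0], d[..., 2]) / (2 * np.pi) + 0.5) * W
    v = np.arccos(np.clip(d[..., 1], -1.0, 1.0)) / np.pi * H
    return pano[np.clip(v, 0, H - 1).astype(int), np.clip(u, 0, W - 1).astype(int)]
```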

fnzhan commented May 20, 2021

There is a trick to the training: I first overfit the model on a small dataset, then train it on the full dataset.
Due to the intellectual-property terms of the Laval dataset, I am not sure the trained model can be released.
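As an illustration, the two-stage schedule could look like this; all names here are placeholders, not the repo's actual training code:

```python
from torch.utils.data import DataLoader, Subset

def two_stage_train(model, dataset, train_one_epoch,
                    overfit_size=64, overfit_epochs=20, full_epochs=200):
    # stage 1: overfit a tiny subset to verify the model and loss can fit at all
    small = DataLoader(Subset(dataset, range(overfit_size)), batch_size=8, shuffle=True)
    for _ in range(overfit_epochs):
        train_one_epoch(model, small)
    # stage 2: continue from those weights on the full dataset
    full = DataLoader(dataset, batch_size=8, shuffle=True)
    for _ in range(full_epochs):
        train_one_epoch(model, full)
```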

@weberhen

Hello @fnzhan,

I work with Prof. Jean-François Lalonde, one of the creators of the dataset. You can share the trained model; the only restriction is that it can only be used for research purposes :)

cyjouc commented Jun 9, 2021

> The code I used to preprocess the dataset is in this old commit; I erased it by mistake later, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth and writes a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder using my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and has output the same prediction since epoch 3.

Hi, would you share the code that generates the pkl files, or the dataset preprocessing code? Looking forward to your reply!

cyjouc commented Jun 15, 2021

>> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>> Please let me know if you find any bugs :)
>
> Thank you for your reply. Yes, I have access to the depth and HDR data. Where is the script that calls the depth_for_anchors_calc function? Or could you kindly tell me which scripts need to run to train the model from scratch, assuming I only have the raw depth and HDR data? Thanks!

Hi, would you share the process for preparing the depth and HDR data from the Laval HDR dataset? Looking forward to your reply!

xjsxjs commented May 12, 2022

> The code I used to preprocess the dataset is in this old commit; I erased it by mistake later, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth and writes a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder using my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and has output the same prediction since epoch 3.

Hello, this link no longer seems to work. How should the original Laval dataset be processed? If you can tell me, I will be very grateful!

@weberhen

Hi!

You can try my fork: https://github.com/weberhen/Illumination-Estimation-1

jxl0131 commented Sep 27, 2022

> Hi!
>
> You can try my fork: https://github.com/weberhen/Illumination-Estimation-1

Hello!

I am trying to follow your code to crop an FOV from the HDR panoramas using 'gen_hdr_crops.py', but I found that it uses modules like 'envmap' and 'ezexr' that I am not familiar with, and I wonder how to pip install them. After googling, I guess these modules come from 'skylibs' (https://github.com/soravux/skylibs), but I still get errors after 'pip install --upgrade skylibs'. Could you share your requirements.txt?

Thanks!

weberhen commented Oct 4, 2022

Hi @jxl0131!

It's indeed skylibs. I just tested 'pip install --upgrade skylibs' and it worked. If you cannot install it, I suggest asking the original developer, since that will be the easiest way to get gen_hdr_crops.py working.

Good luck!
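Once the install succeeds, a quick import check confirms the modules gen_hdr_crops.py needs are present (module paths as in the skylibs README):

```python
# after: pip install --upgrade skylibs
from envmap import EnvironmentMap  # panorama handling
import ezexr                       # EXR read/write

print("skylibs imports OK:", EnvironmentMap, ezexr.imread)
```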

jxl0131 commented Oct 9, 2022

> Hi @jxl0131!
>
> It's indeed skylibs. I just tested 'pip install --upgrade skylibs' and it worked. If you cannot install it, I suggest asking the original developer, since that will be the easiest way to get gen_hdr_crops.py working.
>
> Good luck!

Thanks!

My environment is OK now. I am following your edited GenProjector code, and I found that in 'Illumination-Estimation-1/GenProjector/data.py', in the __getitem__ function, you load envmap_exr from a '/crop' directory. I don't understand this; I think it should be another directory containing the full panoramas instead of crops. Looking forward to your reply!

@weberhen

Hi!

Sorry about that: the folder is called 'crop' but it actually contains envmaps; I can't recall why I named it that way. As you can see, the code goes on to create the actual crop from that envmap, so just replace the 'crop' folder name to match your dataset structure and it should work.
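The tweak amounts to a one-line change in data.py's __getitem__; 'envmaps' and the names below are placeholders for whatever your dataset layout uses:

```python
import os

root, name = '/path/to/LavalIndoor', 'some_pano'
# __getitem__ reads the panorama from a folder literally named 'crop';
# point it at the folder that holds your full panoramas instead:
envmap_path = os.path.join(root, 'envmaps', name + '.exr')
```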

jxl0131 commented Oct 11, 2022

> Hi!
>
> Sorry about that: the folder is called 'crop' but it actually contains envmaps; I can't recall why I named it that way. As you can see, the code goes on to create the actual crop from that envmap, so just replace the 'crop' folder name to match your dataset structure and it should work.

Haha, I understand now! Thanks!

jxl0131 commented Oct 17, 2022

> Hi!
>
> Sorry about that: the folder is called 'crop' but it actually contains envmaps; I can't recall why I named it that way. As you can see, the code goes on to create the actual crop from that envmap, so just replace the 'crop' folder name to match your dataset structure and it should work.

Hi!
I'm back again.
I found that you multiply the crop by 'reexpose_scale_factor' and save the result as the crop in your 'gen_hdr_crops.py'. Is this a step to convert the crop from an HDR image to an LDR image? Why don't you save the '_' returned from 'genLDRimage(..)' directly?

EMLight's author tonemaps the crops and envmaps in his 'data.py' code, which I think converts them from HDR to LDR. So why do you use 'genLDRimage(..)' in your 'gen_hdr_crops.py' (repeating the operation)?

The code:

```python
# extract a limited-FOV HDR crop from the panorama
crop = extractImage(envmap_data.data, [elevation, azimuth], cropHeight, vfov=vfov, output_width=cropWidth)
# compute the multiplier that puts the crop's median intensity at 0.45
# (the LDR image itself is discarded, only the multiplier is kept)
_, reexpose_scale_factor = genLDRimage(crop, putMedianIntensityAt=0.45, returnIntensityMultiplier=True, gamma=gamma)
# save the re-exposed (still HDR) crop
imwrite(os.path.join(output_folder, os.path.basename(input_file)), crop * reexpose_scale_factor)
```

Looking forward to your reply!

@weberhen

Hi!

Re-exposing is not the same as tonemapping: re-exposing simply maps the range of a given HDR image to another range, which makes the images more similar to one another. I do that because the dataset has some pretty dark HDR images and some super bright ones, so I apply this (invertible) operation to bring them into a similar range. That is what this script is for: generating (re-exposed) HDR crops :)
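A minimal sketch of that re-exposure: scale the HDR crop so its median intensity lands at a fixed target (0.45 in gen_hdr_crops.py). This approximates what genLDRimage's returnIntensityMultiplier provides; the exact formula in the real helper (e.g. whether the median is taken after gamma correction) may differ:

```python
import numpy as np

def reexpose(hdr, target_median=0.45):
    intensity = hdr.mean(axis=-1)                 # per-pixel grey-level intensity
    scale = target_median / np.median(intensity)  # multiplier putting the median at the target
    return hdr * scale, scale                     # invertible: original == reexposed / scale
```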

AplusX commented Nov 2, 2022

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>
> Please let me know if you find any bugs :)

Hi!
Could you share the depth maps from the Laval HDR dataset? I found that the download link (http://indoor.hdrdb.com/UlavalHDR-depth.tar.gz) is broken. Thanks!
