Depth map creation for monocular depth estimation #146
base: master
Conversation
Cool! Thanks for your contribution. We'll take a look at this. |
Thanks for the quick reply! This was a quick implementation; let me know if you are interested in integrating it, and I can clean it up and follow CONTRIBUTING.md to submit a proper pull request. |
Hi @TilakD, thanks very much for this contribution. This looks like something we would like to integrate. Do you mind adding comments and type hints, and also following CONTRIBUTING.md? I'm also curious about the visualization format. Do you think there is a clearer way to show the depth map? It would also be nice to make the RGB and depth images identical in size. |
Hi @TilakD, a few other small comments:
Thanks very much @TilakD. |
Hello @johnwlambert @James-Hays, I have made some improvements and cleanup and addressed a few of your code comments; please take a look at the notebook. @johnwlambert I am following the KITTI dataset format and have replicated how they save their depth images and RGB images. To your comment on precision: I am multiplying the depth value (meters) by 256. |
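For readers of this thread, a minimal sketch of that KITTI-style encoding, assuming OpenCV for 16-bit PNG I/O (the helper names are illustrative, not this PR's actual code):

```python
import numpy as np
import cv2  # OpenCV assumed here for 16-bit PNG I/O

def save_depth_png(depth_m: np.ndarray, path: str) -> None:
    """Store metric depth (meters) as a uint16 PNG; precision is 1/256 m."""
    # 0 encodes "no measurement"; depths above ~255.99 m would overflow uint16
    depth_u16 = (depth_m * 256.0).astype(np.uint16)
    cv2.imwrite(path, depth_u16)  # path should end in .png to keep 16 bits

def load_depth_png(path: str) -> np.ndarray:
    """Invert the encoding: uint16 PNG back to metric depth in meters."""
    depth_u16 = cv2.imread(path, cv2.IMREAD_ANYDEPTH)
    return depth_u16.astype(np.float32) / 256.0
```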
Hi @TilakD , thanks very much for these improvements. A few more small comments:
Thanks again for this contribution! |
Fixed typos. PEP 8 import standard. A few more comments.
Hi @johnwlambert, I have fixed 1 through 4 of your suggestions. Visualize - I intended to keep it as an independent entity, so that users will be able to provide the image name and log ID of their interest and see the depth maps. Generate rgb-to-depth mapping for model training - In monocular depth estimation model training, people normally use a text file for model training and validation. Say your depth data is at |
Thanks @TilakD, looking good 👍! Ok I understand now about the last cell. Having a txt file with the paths is certainly useful for a Pytorch dataloader. Maybe we can just clarify the language a bit further: For your training dataloader, you will likely find it helpful to read image paths from a .txt file. In this final cell, we explicitly write to a .txt file all rgb image paths that have a corresponding sparse ground truth depth file. Just a few more miscellaneous things:
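Something like the following sketch captures that final cell's intent (the directory layout and file extensions here are hypothetical, not the notebook's actual paths):

```python
from pathlib import Path

# hypothetical layout: one rgb image per frame, matching depth PNGs by stem
rgb_dir = Path("train/rgb")
depth_dir = Path("train/depth")

with open("train_files.txt", "w") as f:
    for rgb_path in sorted(rgb_dir.glob("*.jpg")):
        depth_path = depth_dir / (rgb_path.stem + ".png")
        if depth_path.exists():  # keep only frames with ground-truth depth
            f.write(f"{rgb_path} {depth_path}\n")
```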
Thanks again. 👍 |
A few more tweaks for the text:
|
Another text tweak: |
|
I noticed we are not catching |
I slightly prefer to make closer objects more "hot" in the colormap, i.e. I would prefer the "inferno" colormap in matplotlib instead of "jet", and to show inverse depth after 30 iterations of dilation:
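A rough sketch of that visualization idea (not this PR's exact code; the helper name and the 3x3 structuring element are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import grey_dilation

def show_inverse_depth(depth_m: np.ndarray) -> None:
    """Dilate sparse depth so points are visible, then plot inverse depth."""
    dilated = depth_m.copy()
    for _ in range(30):  # the 30 dilation iterations suggested above
        dilated = grey_dilation(dilated, size=(3, 3))
    inv = np.zeros_like(dilated)
    np.divide(1.0, dilated, out=inv, where=dilated > 0)  # closer -> larger value
    plt.imshow(inv, cmap="inferno")  # closer objects show up "hotter"
    plt.axis("off")
    plt.show()
```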
|
This is an amazing set of suggestions! I have modified the code accordingly and improved on top of it. I'll commit the changes in some time. inferno does look good! Thanks again for the amazing review! |
@johnwlambert Updated notebook based on your suggestions, please take a look... Note: I have integrated txt generation in Lidar2depth class. |
Thanks very much @TilakD. Looking good. Inferno visualizations look great on that log, much clearer.
Instead of:
the following is preferred (with the type hint in the function signature):
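As a purely hypothetical illustration of that kind of change (the function name and parameters are invented for the example):

```python
import numpy as np

# instead of an untyped signature like this:
def create_depth_map(img_path, log_id):
    ...

# a hinted signature makes the contract explicit at every call site:
def create_depth_map_typed(img_path: str, log_id: str) -> np.ndarray:
    ...
```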
|
hi @johnwlambert Regarding the repeated logs: I verified and re-verified (extracted each one of them to totally different directories). The logs below are still repeated. I don't know what the issue is...
|
Hey @TilakD, the log splits of Argoverse 1.0 and Argoverse 1.1 are different. We didn't see the issue in the Argoverse 1.1 data. Maybe you extracted 1.0 and 1.1 into the same directory? |
Hi @alliecc you are absolutely right! I had downloaded 1.0 Training part 3 instead of 1.1 🤦♂️ Sorry for the trouble @johnwlambert @alliecc |
hi @johnwlambert sorry for the delay, I was a little busy with work. Changes and fixes:
Let me know your thoughts. |
@johnwlambert I was experimenting with the above-created dataset on a sample monocular depth estimation model. I stitched together results from the val and test datasets and created videos; you can check out the front center cam and the other ring cameras. It is amazing! PS: There seems to be a halo effect on nearby car rooftops. I don't know if this is a dataset issue or a model issue; I need to investigate it... |
Hi @TilakD, those videos look great, thanks for sharing. No worries about the dataset version mixup. The code is looking really good. Thanks for your patience here, and for checking the max range in the val set. Just a few more small changes before we merge:
Many thanks @TilakD. |
@johnwlambert Done 😊 I'm learning a lot through these suggestions, thanks a lot! And thanks for your patience!!! Regarding saving the focal length: I thought it would be helpful to have the accurate focal length corresponding to each image, in case it is needed while training (there are a few papers which use the focal length for model training). If the user doesn't find it useful, they can edit that function. |
Fantastic, looks great @TilakD. Ok, good to know about the focal length being useful for mono-depth methods. I will admit I liked the previous images in the notebook a bit more (the ones you put in earlier). A few more type hint suggestions: |
I think this type annotation should be changed; the suggested before/after snippets were attached as inline edits.
Can you do one last check with |
@johnwlambert Done. |
Hi @TilakD, thanks very much. Ok I ran the tutorial by a few others and we have just a few more requests for revision:
Thanks again for this great contribution! |
Will be replacing this with an updated notebook (argoverse_monocular_depth_map_tutorial.ipynb).
@johnwlambert Done! Looking forward to the stereo tutorial! Thanks😀. |
Updated notebook path
This looks great! I was wondering why it has not been merged to master. Is this no longer a feature of interest? It seems to me that this would be useful for the community. |
Hi @TilakD! Thanks again for the great contribution and sorry for the slow response.
```python
# invert only valid (nonzero) depths; missing pixels stay at 0
inv_depth_map = np.divide(1.0, depth_map, out=np.zeros_like(depth_map), where=depth_map != 0)
inv_depth_map_scaled = np.uint16(inv_depth_map * 256.0)
```
Change the text to accommodate the inverse depth representation. Also added some comments for the README file. Please let us know if you have any questions or suggestions :). |
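For completeness, a small round-trip sketch of that representation (toy values; same 256 scaling assumption as above):

```python
import numpy as np

depth_map = np.array([[0.0, 2.0], [10.0, 0.0]], dtype=np.float32)  # toy depths (m)
inv = np.divide(1.0, depth_map, out=np.zeros_like(depth_map), where=depth_map != 0)
scaled = np.uint16(inv * 256.0)

# decoding: undo the scaling, then invert back to metric depth
inv_restored = scaled.astype(np.float32) / 256.0
depth_restored = np.divide(
    1.0, inv_restored, out=np.zeros_like(inv_restored), where=inv_restored != 0
)  # 0 still marks missing pixels; quantization makes far depths slightly lossy
```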
modification done based on the PR
updated notebook hyperlink
Hi @jhonykaesemodel I have provided changes to the notebook and the readme. Please take a look... |
Hi,
I have created a tutorial notebook which shows how to create depth maps for frames from the ring cameras using the API. This will help researchers train and test their monocular depth estimation models.
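For context, the heart of such a tutorial is projecting LiDAR returns into a camera image; here is a generic pinhole-model sketch in plain NumPy (illustrative only, not the argoverse API itself; all names are invented):

```python
import numpy as np

def lidar_to_depth_map(pts_cam: np.ndarray, K: np.ndarray, h: int, w: int) -> np.ndarray:
    """pts_cam: (N, 3) LiDAR points already in the camera frame (z = forward)."""
    depth = np.zeros((h, w), dtype=np.float32)
    pts = pts_cam[pts_cam[:, 2] > 0]      # keep points in front of the camera
    uvw = (K @ pts.T).T                   # pinhole projection through intrinsics K
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # z is metric depth; pixel collisions keep the last point written,
    # whereas a production version would typically keep the nearest one
    depth[v[ok], u[ok]] = pts[ok, 2]
    return depth
```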