Repository for the Deep Learning course project:
- Milad Samimifar
- Pooria Ashrafian
An object detection method that simultaneously estimates the positions and depths of objects in images, trained on the NYU-Depth V2 dataset.
The NYU-Depth V2 dataset consists of video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.
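At a high level, the joint model runs YOLOv3 to obtain 2D bounding boxes and the Unet to obtain a dense depth map, then reads off a depth value for each detected object from its box. A minimal sketch of that last step (the function name and the use of the median are illustrative assumptions, not the repository's exact code):

import numpy as np

def per_object_depth(boxes, depth_map):
    """Assign each detected box a single depth estimate.

    boxes: list of (x, y, w, h) in pixel coordinates from the detector.
    depth_map: H x W array of per-pixel depth predicted by the Unet.
    Returns one depth value per box.
    """
    depths = []
    for x, y, w, h in boxes:
        region = depth_map[y:y + h, x:x + w]
        # The median is robust to background pixels inside the box.
        depths.append(float(np.median(region)))
    return depths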
Joint object detection and depth estimation web app using Streamlit (recommended): Object_detection_web_app_streamlit repository
- Download the yolov3 weights and put them in the Object_detection_web_app_streamlit folder:
wget -nc https://pjreddie.com/media/files/yolov3.weights -P Object_detection_web_app_streamlit
- Download our Unet depth model weights and put them in the ./models_depth/Unet folder (a Keras loading sketch follows this section):
  - Weights: https://drive.google.com/file/d/1fvzPVqKj46WjaYw6OUr1a38KblWkGC4W/view?usp=sharing
  - Model (JSON file): https://drive.google.com/file/d/1g685-v1qGv6NBE7nhY0LSQDgUemJXarc/view?usp=sharing
Shell commands:
gdown --id 1fvzPVqKj46WjaYw6OUr1a38KblWkGC4W -O ./models_depth/Unet/2.h5 # weights
gdown --id 1g685-v1qGv6NBE7nhY0LSQDgUemJXarc -O ./models_depth/Unet/2.json # model (JSON file)
- Install requirements:
pip install streamlit opencv-python black
- Execute:
cd Object_detection_web_app_streamlit
streamlit run ./src/app.py
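For reference, the 2.json / 2.h5 pair downloaded above is the standard Keras split between architecture and weights. A minimal loading sketch, assuming a TensorFlow/Keras environment (the app in ./src/app.py does its own loading and may pass custom_objects if the model contains custom layers):

from tensorflow.keras.models import model_from_json

# Rebuild the Unet architecture from the JSON description ...
with open("./models_depth/Unet/2.json") as f:
    depth_model = model_from_json(f.read())

# ... and restore the trained weights from the HDF5 file.
depth_model.load_weights("./models_depth/Unet/2.h5")

# depth_model.predict(batch) then returns a dense depth map for each input image.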
Joint object detection and depth estimation web app using Django: Object_detection_web_app repository
- Download the yolov3 weights and put them in the Object_detection_web_app folder:
wget -nc https://pjreddie.com/media/files/yolov3.weights -P Object_detection_web_app
- Install requirements:
pip install -r ./Object_detection_web_app/requirements.txt
- Execute the following once (collectstatic copies the app's static assets into STATIC_ROOT; see the settings sketch after this section):
cd Object_detection_web_app
python manage.py collectstatic
- Execute (the development server listens on http://127.0.0.1:8000/ by default):
python manage.py runserver
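As background for the collectstatic step above: Django copies static assets into the directory named by STATIC_ROOT in settings.py. An illustrative snippet (the repository ships its own settings; the names and paths below are assumptions):

# settings.py (illustrative only; the real project defines its own values)
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = "/static/"
# `python manage.py collectstatic` gathers all static assets into this folder.
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")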
Joint_notebook.ipynb:
- Dataset download and preprocessing
- Train a Unet model for depth estimation [1] (see the sketch below)
- YOLOv3 object detection
- Joint model and test
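The exact architecture, input size, loss, and hyperparameters are defined in Joint_notebook.ipynb; the sketch below only illustrates the Unet-style encoder-decoder idea for dense depth regression (layer widths and the plain MAE loss are assumptions; [1] trains with a richer loss combining L1, gradient, and SSIM terms):

from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(480, 640, 3)):
    # Encoder: two downsampling stages; skip connections are kept for the decoder.
    inputs = layers.Input(shape=input_shape)
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate the matching encoder features.
    u2 = layers.concatenate([layers.UpSampling2D()(b), c2])
    d2 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.concatenate([layers.UpSampling2D()(d2), c1])
    d1 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    # One-channel output: per-pixel depth.
    depth = layers.Conv2D(1, 3, padding="same", activation="linear")(d1)
    return Model(inputs, depth)

model = tiny_unet()
model.compile(optimizer="adam", loss="mae")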
[1] Ibraheem Alhashim and Peter Wonka. High quality monocular depth estimation via transfer learning. arXiv preprint arXiv:1812.11941, 2018.