Merge pull request #3 from SeokjuLee/SeokjuLee-patch-2
Update README.md
SeokjuLee authored Mar 11, 2021
2 parents ad1b11b + 4eb5218 commit 95fac7f
Showing 2 changed files with 8 additions and 8 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -6,19 +6,19 @@

This is the official PyTorch implementation for the system proposed in the paper :

->Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency
+>**Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency**
>
->[Seokju Lee](https://sites.google.com/site/seokjucv/), Sunghoon Im, Stephen Lin, In So Kweon
+>[**Seokju Lee**](https://sites.google.com/site/seokjucv/), [Sunghoon Im](https://sunghoonim.github.io/), [Stephen Lin](https://www.microsoft.com/en-us/research/people/stevelin/), and [In So Kweon](http://rcv.kaist.ac.kr/index.php?mid=rcv_faculty)
>
->**AAAI-21** [[PDF](https://sites.google.com/site/seokjucv/)] [[Project](https://sites.google.com/site/seokjucv/)]
+>**AAAI-21** [[PDF](https://arxiv.org/abs/2102.02629)] [[Project](https://sites.google.com/site/seokjucv/home/instadm)]

<p align="center">
<img src="./misc/demo_sample.gif"/>
</p>

<p align="center">
-&Longrightarrow; <strong><em>Unified Visual Odometry</em></strong> : Our holistic visualization of depth and motion estimation from self-supervised monocular training.
+&Longrightarrow; <strong><em>Unified Visual Odometry</em></strong> : Our holistic visualization of depth and motion estimation from self-supervised monocular training.
</p>


@@ -28,7 +28,7 @@ This is the official PyTorch implementation for the system proposed in the paper
@inproceedings{lee2021learning,
title={Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency},
author={Lee, Seokju and Im, Sunghoon and Lin, Stephen and Kweon, In So},
-booktitle= {Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
+booktitle={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
year={2021}
}
```
@@ -80,7 +80,7 @@ pip3 install torch-scatter torch-sparse -f https://pytorch-geometric.com/whl/tor

## Datasets

-We provide our KITTI-VIS and Cityscapes-VIS dataset ([download link](https://bosch.frameau.xyz/index.php/s/JQ7PFjGsfJABdYk)), which is composed of pre-processed images, auto-annotated instance segmentation, and optical flow.
+We provide our KITTI-VIS and Cityscapes-VIS dataset ([download link](https://drive.google.com/drive/folders/1tgQKHDj3tf97LZoQqXxcOF4LVHoIKXcs?usp=sharing)), which is composed of pre-processed images, auto-annotated instance segmentation, and optical flow.
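
One possible way to fetch the linked Google Drive folder from the command line is the third-party `gdown` tool; this is only a convenience sketch and is not part of the README (any Drive client works):

```bash
# Assumption: gdown is installed separately; the folder URL is the download link above.
pip3 install gdown
gdown --folder "https://drive.google.com/drive/folders/1tgQKHDj3tf97LZoQqXxcOF4LVHoIKXcs"
```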

- Images are pre-processed with [SC-SfMLearner](https://github.com/JiawangBian/SC-SfMLearner-Release/blob/master/scripts/run_prepare_data.sh).
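
The pre-processing step could be reproduced with something like the sketch below; the repository URL and script path come from the link above, but the invocation details are an assumption, so check the linked script for its actual arguments:

```bash
# Sketch only: clone SC-SfMLearner and run its data-preparation script.
# The script's expected input paths are assumptions; inspect it before running.
git clone https://github.com/JiawangBian/SC-SfMLearner-Release
cd SC-SfMLearner-Release
sh scripts/run_prepare_data.sh
```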

@@ -123,7 +123,7 @@ sh scripts/train_resnet_256_cs.sh

Please indicate the location of the dataset with `$TRAIN_SET`.

-The hyperparameters (batch size, learning rate, loss weight, etc.) are defined in each script file and [default arguments](train.py) in `train.py`. Please also check our [main paper](https://sites.google.com/site/seokjucv/).
+The hyperparameters (batch size, learning rate, loss weight, etc.) are defined in each script file and [default arguments](train.py) in `train.py`. Please also check our [main paper](https://arxiv.org/abs/2102.02629).

During training, checkpoints will be saved in `checkpoints/`.
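
A minimal launch sketch, assuming the dataset path is a placeholder for your local copy; `TRAIN_SET` is defined at the top of each script, as the script diff below shows:

```bash
# Edit TRAIN_SET inside the script to point at the pre-processed dataset
# (/path/to/kitti_256 is a placeholder), then launch training.
sed -i 's|^TRAIN_SET=.*|TRAIN_SET=/path/to/kitti_256|' scripts/train_resnet_256_kt.sh
sh scripts/train_resnet_256_kt.sh
# Checkpoints accumulate under checkpoints/.
```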

2 changes: 1 addition & 1 deletion scripts/train_resnet_256_kt.sh
@@ -6,7 +6,7 @@
TRAIN_SET=/seokju/KITTI/kitti_256


-### Cityscapes model ###
+### KITTI model ###
PRETRAINED=checkpoints/models-release/KITTI
# PRETRAINED=checkpoints/models-release/CS+KITTI

