- Pull the SegNeRF repo.

  ```
  git clone https://github.com/Jason-YJ/SegNeRF.git
  ```

  You need to manually create a `colmap` folder, `git clone https://github.com/Jason-YJ/colmap` into it, and then install COLMAP following the official steps.
- Install Python packages with Anaconda.

  ```
  conda create -n SegNeRF python=3.7
  conda activate SegNeRF
  conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 -c pytorch
  pip install -r requirements.txt
  ```
- Install MSeg.

  `cd` to the root path first, then run the following in `ipython`:

  ```
  !git clone https://github.com/mseg-dataset/mseg-api.git
  !cd mseg-api && sed -i '12s/.*/MSEG_DST_DIR="\/dummy\/path"/' mseg/utils/dataset_config.py
  !cd mseg-api && pip install -e .
  !cd mseg-api && wget --no-check-certificate -O "mseg-3m.pth" "https://github.com/mseg-dataset/mseg-semantic/releases/download/v0.1/mseg-3m-1080p.pth"
  !cd mseg-api && git clone https://github.com/mseg-dataset/mseg-semantic.git
  !cd mseg-api/mseg-semantic && pip install -r requirements.txt
  !cd mseg-api/mseg-semantic && pip install -e .
  ```
- We use COLMAP to compute poses and sparse depths. However, the original COLMAP does not provide a fusion mask for each view, so we add masks to COLMAP and include it as a submodule. Please follow https://colmap.github.io/install.html to install COLMAP in the `./colmap` folder. (Note: do not overwrite the `colmap` folder with the original version.)
- Download the 8 ScanNet scenes used in the paper here and put them under the `./data` folder.
- Run SegNeRF.

  ```
  sh run.sh $scene_name
  ```

  The whole procedure takes about 3.5 hours on one NVIDIA GeForce RTX 2080 GPU, including COLMAP, depth priors training, NeRF training, filtering and evaluation. COLMAP can be accelerated with multiple GPUs. You will get per-view depth maps in `./logs/$scene_name/filter`. Note that these depth maps have been aligned with COLMAP poses. COLMAP results will be saved in `./data/$scene_name`, while the other outputs will be preserved in `./logs/$scene_name`.
- Place your data with the following structure:

  ```
  NerfingMVS
  |───data
  |   |──────$scene_name
  |   |      |   train.txt
  |   |      |──────images
  |   |      |      |   001.jpg
  |   |      |      |   002.jpg
  |   |      |      |   ...
  |───configs
  |   $scene_name.txt
  |   ...
  ```
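  The layout above can be created, and `train.txt` generated from whatever images you place under `images/`, with a short shell sketch (the scene name `scene_demo` and the image names below are placeholders, not scenes from the paper):

  ```shell
  # Create a placeholder scene folder and list its images into train.txt.
  scene=data/scene_demo
  mkdir -p "$scene/images"
  touch "$scene/images/001.jpg" "$scene/images/002.jpg" "$scene/images/003.jpg"
  ls "$scene/images" | sort > "$scene/train.txt"
  cat "$scene/train.txt"
  ```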
  `train.txt` contains the names of all the images. Images can be renamed arbitrarily; '001.jpg' is just an example. You also need to imitate the ScanNet scenes to create a config file in `./configs`. Note that the `factor` parameter controls the resolution of the output depth maps. You should also adjust `depth_N_iters, depth_H, depth_W` in `options.py` accordingly.
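  As a rough sketch of creating such a config, the snippet below writes a minimal file; the exact keys should be copied from one of the provided ScanNet configs in `./configs` (every key and value here, and the scene name `scene_demo`, are placeholders except `factor`, which is the parameter mentioned above):

  ```shell
  # Hypothetical minimal config sketch; copy real keys from a provided ScanNet config.
  mkdir -p configs
  cat > configs/scene_demo.txt <<'EOF'
  expname = scene_demo
  datadir = ./data/scene_demo
  factor = 2
  EOF
  cat configs/scene_demo.txt
  ```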
Our code is based on the PyTorch implementation of NeRF: NeRF-pytorch. We also refer to the mannequin challenge.