Compositional Scene Representation Toolbox

This is the accompanying toolbox for the survey article: Compositional Scene Representation Learning via Reconstruction: A Survey [Yuan et al., IEEE TPAMI 2023]. The toolbox contains code for synthesizing multiple datasets that can be used to benchmark compositional scene representation learning methods, and collects implementations of the following methods: AIR, N-EM, IODINE, GMIOO, MONet, SPACE, Slot Attention, EfficientMORL, GENESIS, and GENESIS-V2.

The README.md file in each folder contains the instructions on how to run the code.

Submodules

Initialize submodules using the following command.

git submodule update --init --recursive
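
Alternatively (a standard Git option, not required by the toolbox), the submodules can be fetched at clone time; the URL below is inferred from the repository name FudanVI/compositional-scene-representation-toolbox.

git clone --recurse-submodules https://github.com/FudanVI/compositional-scene-representation-toolbox.git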

Create Benchmark Datasets

Change the current working directory to compositional-scene-representation-datasets and follow the instructions described in README.md to create benchmark datasets.
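
For example (a minimal sketch; the actual generation scripts and their arguments are documented in that README.md and are not assumed here):

cd compositional-scene-representation-datasets
# follow the dataset-specific instructions in README.md
cd ..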

Evaluate Performance on Benchmark Datasets

AIR

Change the current working directory to air-unofficial/experiments_benchmark and run run.sh and run_nc.sh.

cd air-unofficial/experiments_benchmark
./run.sh
./run_nc.sh
cd ../..

Run air-unofficial/experiments_benchmark/evaluate.ipynb to evaluate the trained models.
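
The toolbox does not prescribe how to launch the notebook; one possible way (assuming Jupyter is installed in the same environment) is to execute it headlessly with nbconvert:

jupyter nbconvert --to notebook --execute --inplace air-unofficial/experiments_benchmark/evaluate.ipynb

The same pattern applies to the evaluate.ipynb notebooks of the other methods below; opening them interactively with jupyter notebook works as well.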

N-EM

Change the current working directory to nem-unofficial/experiments_benchmark and run run.sh and run_nc.sh.

cd nem-unofficial/experiments_benchmark
./run.sh
./run_nc.sh
cd ../..

Run nem-unofficial/experiments_benchmark/evaluate.ipynb to evaluate the trained models.

IODINE

Change the current working directory to iodine-unofficial/experiments_benchmark and run run.sh and run_nc.sh.

cd iodine-unofficial/experiments_benchmark
./run.sh
./run_nc.sh
cd ../..

Run iodine-unofficial/experiments_benchmark/evaluate.ipynb to evaluate the trained models.

GMIOO

Change the current working directory to infinite-occluded-objects/experiments_benchmark and run run.sh and run_nc.sh.

cd infinite-occluded-objects/experiments_benchmark
./run.sh
./run_nc.sh
cd ../..

Run infinite-occluded-objects/experiments_benchmark/evaluate.ipynb to evaluate the trained models.

MONet

Change the current working directory to monet-unofficial/experiments_benchmark and run run.sh.

cd monet-unofficial/experiments_benchmark
./run.sh
cd ../..

Change the current working directory to monet-unofficial_nc/experiments_benchmark and run run.sh.

cd monet-unofficial_nc/experiments_benchmark
./run.sh
cd ../..

Run monet-unofficial/experiments_benchmark/evaluate.ipynb to evaluate the trained models.

SPACE

Change the current working directory to SPACE/src and run run.sh.

cd SPACE/src
./run.sh
cd ../..

Change the current working directory to SPACE_nc/src and run run.sh.

cd SPACE_nc/src
./run.sh
cd ../..

Run SPACE/evaluate.ipynb to evaluate the trained models.

Slot Attention

Change the current working directory to slot-attention-unofficial/experiments_benchmark and run run.sh and run_nc.sh.

cd slot-attention-unofficial/experiments_benchmark
./run.sh
./run_nc.sh
cd ../..

Run slot-attention-unofficial/experiments_benchmark/evaluate.ipynb to evaluate the trained models.

EfficientMORL

Change the current working directory to EfficientMORL and run run.sh and run_nc.sh.

cd EfficientMORL
./run.sh
./run_nc.sh
cd ..

Run EfficientMORL/evaluate.ipynb to evaluate the trained models.

GENESIS and GENESIS-V2

Change the current working directory to genesis and run run.sh.

cd genesis
./run.sh
cd ..

Change the current working directory to genesis_nc and run run.sh.

cd genesis_nc
./run.sh
cd ..

Run genesis/evaluate.ipynb to evaluate the trained models.
