This is a fork of BlenderProc, used to generate the BL30K dataset. Note that my (hacky) modifications to this repo break most of the original BlenderProc functionality.
BL30K is a synthetic dataset rendered with Blender using ShapeNet models. The dataset is split into six segments, each containing approximately 5K videos. The videos are organized in a format similar to DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768×512. Every video contains 3-5 objects, each following a random smooth trajectory; the trajectories are optimized greedily to reduce object intersection (not guaranteed), and occlusions still occur frequently. See MiVOS for details.
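Because the layout mirrors DAVIS/YouTubeVOS, an existing dataloader should work with little change. The sketch below (not part of this repo) iterates over one downloaded segment, assuming a DAVIS-style layout of `JPEGImages/<video>/*.jpg` paired with `Annotations/<video>/*.png`; the folder names, the `iterate_segment` helper, and the segment path are illustrative assumptions, so adjust them to match your download.

```python
# Minimal sketch for iterating over a BL30K segment (not part of this repo).
# Assumes a DAVIS/YouTubeVOS-style layout:
#   <segment>/JPEGImages/<video>/xxxxx.jpg
#   <segment>/Annotations/<video>/xxxxx.png
# Adjust the folder names if your download is organized differently.
import os

import numpy as np
from PIL import Image


def iterate_segment(segment_root):
    image_root = os.path.join(segment_root, 'JPEGImages')
    anno_root = os.path.join(segment_root, 'Annotations')
    for video in sorted(os.listdir(image_root)):
        for frame in sorted(os.listdir(os.path.join(image_root, video))):
            image = Image.open(os.path.join(image_root, video, frame)).convert('RGB')
            # In the DAVIS convention, masks are palettized PNGs whose palette
            # indices are the object IDs (0 = background).
            mask_name = os.path.splitext(frame)[0] + '.png'
            mask = Image.open(os.path.join(anno_root, video, mask_name)).convert('P')
            yield video, np.array(image), np.array(mask)


if __name__ == '__main__':
    # 'path/to/segment' is a placeholder for one downloaded BL30K segment.
    for video, image, mask in iterate_segment('path/to/segment'):
        print(video, image.shape, np.unique(mask))
        break
```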
You can download it manually using the links below. Note that each segment is roughly 115 GB, about 700 GB in total.
Google Drive has been much faster in my experience; your mileage may vary.
Manual download: [Google Drive] [OneDrive]
Examples:
| Image | Annotation |
| --- | --- |
- First download all required data and generate a list of yaml files. Instructions here.
- Run the following command:
```bash
python pool_run.py --models <path_to/ShapeNetCore.v2> --textures <path_to/Texture> --yaml <path_to/yaml> --output <output directory> -d <GPU ID> -N <Number of parallel processes>
```
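For example, a hypothetical invocation might look like the following; the paths, GPU ID, and process count are placeholders, not defaults:

```bash
# Render using GPU 0 with 4 parallel processes (example paths only).
python pool_run.py --models /data/ShapeNetCore.v2 --textures /data/Texture \
    --yaml /data/yaml --output ./bl30k_out -d 0 -N 4
```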
Please cite our paper (and the original BlenderProc) if you find this repo/data useful!
```bibtex
@inproceedings{MiVOS_2021,
  title={Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion},
  author={Cheng, Ho Kei and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={CVPR},
  year={2021}
}
```
Contact: [email protected]