sdu-huanhq/linemod_dataset_processing
Introduction

The Linemod dataset is a classic object pose estimation dataset, but until now there has been no preprocessing pipeline for it comparable to the one for the COCO dataset, so I built a Linemod preprocessing method based on PVNet. It can be used to preprocess the Linemod dataset for pose estimation algorithms, and it generates COCO-format annotation files for the training, test, and validation sets.
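To make "COCO format" concrete, here is a minimal sketch of the annotation layout such a script can emit. The field names follow the COCO detection format; the file name, bbox, and category values are illustrative placeholders, not the repository's actual output.

```python
import json

# Minimal COCO-style annotation skeleton: three top-level sections.
coco = {
    "images": [
        {"id": 0, "file_name": "rgb/0.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 0,
            "image_id": 0,       # links back to the entry in "images"
            "category_id": 1,    # links to the entry in "categories"
            "bbox": [100.0, 120.0, 200.0, 180.0],  # [x, y, w, h]
        },
    ],
    "categories": [{"id": 1, "name": "duck"}],
}

# Serialize exactly as a train/test/val annotation file would be written.
print(json.dumps(coco, indent=2)[:60])
```

The three-section layout is what makes the output directly consumable by standard COCO data loaders.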

Download (reference: PVNet)

  1. Linemod
  2. Linemod_orig
  3. Linemod_occlusion

Tip: we use only parts of these three datasets, not all of them, so you only need to focus on the parts we use.

Introduction to the Dataset

This section explains only the parts that are used. The three datasets do not contain the same number of objects; the statistics are as follows (Glue and Eggbox are symmetric):

| Object      | lm-orig | lm | lm-o |
|-------------|---------|----|------|
| Ape         | ✓       | ✓  | ✓    |
| Benchvise   | ✓       | ✓  | ×    |
| Bowl        | ✓       | ×  | ×    |
| Cam         | ✓       | ✓  | ×    |
| Can         | ✓       | ✓  | ✓    |
| Cat         | ✓       | ✓  | ✓    |
| Cup         | ✓       | ×  | ×    |
| Driller     | ✓       | ✓  | ✓    |
| Duck        | ✓       | ✓  | ✓    |
| Eggbox      | ✓       | ✓  | ✓    |
| Glue        | ✓       | ✓  | ✓    |
| Holepuncher | ✓       | ✓  | ✓    |
| Iron        | ✓       | ✓  | ×    |
| Lamp        | ✓       | ✓  | ×    |
| Phone       | ✓       | ✓  | ×    |
| Total       | 15      | 13 | 8    |

Environment

numpy
PyOpenGL==3.1.1a1
tqdm
json
pillow==6.2.2
argparse

Detailed Description

Linemod

The data we use from Linemod consists of the following parts: 3D models, RGB images, the standard object masks, masks with noise, file-name lists for the training and test sets (txt), and the bounding box coordinates (2D and 3D) and keypoint coordinates (2D and 3D) of the 3D model. If you need model keypoints generated by the FPS algorithm, see the PVNet source code.

The Linemod_orig and Linemod_occlusion datasets also contain 3D models, but their center coordinates differ from the Linemod dataset, so we only use the 3D models from Linemod.
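One of the per-model quantities mentioned above, the 3D bounding box, can be derived directly from the model's vertices. This is a sketch under the assumption that the vertices have already been loaded from the model file; the small vertex array here is a dummy stand-in.

```python
import numpy as np

# Dummy vertex array; in practice these come from the Linemod 3D model.
vertices = np.array([
    [-0.05, -0.04, -0.03],
    [ 0.05,  0.04,  0.03],
    [ 0.00,  0.01, -0.02],
])

# Axis-aligned extent of the model.
lo, hi = vertices.min(axis=0), vertices.max(axis=0)

# The 8 corners of the 3D bounding box: every min/max combination
# along x, y, and z.
corners = np.array([[x, y, z]
                    for x in (lo[0], hi[0])
                    for y in (lo[1], hi[1])
                    for z in (lo[2], hi[2])])
print(corners.shape)  # (8, 3)
```

Projecting these 8 corners with an object pose then yields the 2D box coordinates.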

Linemod_orig

In Linemod_orig we only use the depth information; everything else is unused.
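A typical use of that depth information is back-projecting the depth map into a camera-frame point cloud with the pinhole model. The intrinsics below are the values commonly quoted for the Linemod camera; verify them against your copy of the dataset, and note the depth map here is a dummy constant image.

```python
import numpy as np

# Intrinsics commonly quoted for the Linemod camera (assumption; check
# your dataset copy).
fx, fy = 572.4114, 573.5704
cx, cy = 325.2611, 242.0490

# Dummy depth map: 640x480, a flat 1.0 m everywhere.
depth = np.full((480, 640), 1.0)

# Pixel grid: v is the row index, u the column index.
v, u = np.indices(depth.shape)

# Pinhole back-projection: (u, v, z) -> (x, y, z) in the camera frame.
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
print(points.shape)  # (307200, 3)
```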

Linemod_occlusion

Linemod_occlusion is used for evaluating models. The data we use from it consists of the following parts: object pose information, RGB images, and depth images.

The images in the Linemod-O dataset are consistent with the benchvise class of the Linemod dataset, but the annotation information is different.
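The pose annotations are used by projecting model points into the image with the annotated rotation and translation plus the camera intrinsics. This is a sketch of that standard projection; the rotation, translation, and points are dummy values, and the intrinsics are again the commonly quoted Linemod values.

```python
import numpy as np

# Intrinsic matrix (assumed Linemod values).
K = np.array([[572.4114, 0.0,      325.2611],
              [0.0,      573.5704, 242.0490],
              [0.0,      0.0,      1.0]])

# Dummy pose: identity rotation, object 1 m in front of the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

# Two points in the object frame: the origin and a point 5 cm along x.
pts_3d = np.array([[0.0, 0.0, 0.0],
                   [0.05, 0.0, 0.0]])

# Object frame -> camera frame -> image plane.
cam = pts_3d @ R.T + t
uv = cam @ K.T
uv = uv[:, :2] / uv[:, 2:3]  # perspective divide
print(uv)
```

With this dummy pose, the object origin lands at the principal point (cx, cy), which is a quick sanity check for the convention used.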

We strongly encourage you to read the source code for a better understanding

Usage

File Structure

    ├── data
    │   ├── Linemod
    │   ├── Linemod_orig
    │   ├── Linemod_occlusion

Create Environment

conda create -n lm_process python=3.7
conda activate lm_process
pip install -r requirements.txt

Examples of Use

If you want to generate a training set for the duck class, run:

python prepare_data.py --data_root ./data --cls duck --split train

If you want to generate a testing set for the ape class, run:

python prepare_data.py --data_root ./data --cls ape --split test

If you want to generate an evaluation set for the cat class, run:

python prepare_data.py --data_root ./data --cls cat --split occ
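After a run, it is worth sanity-checking the generated annotation file. This sketch counts the entries in each COCO section; the exact output path of prepare_data.py is an assumption, so the demo below writes and reads a tiny dummy file instead.

```python
import json
import tempfile

def summarize(ann_path):
    """Count the entries in each top-level COCO section of a file."""
    with open(ann_path) as f:
        ann = json.load(f)
    return {k: len(ann.get(k, []))
            for k in ("images", "annotations", "categories")}

# Demo on a dummy file; in practice pass the path that prepare_data.py
# writes (its exact location is an assumption here, check the script).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"images": [{}],
               "annotations": [{}, {}],
               "categories": [{}]}, f)
    path = f.name

print(summarize(path))  # {'images': 1, 'annotations': 2, 'categories': 1}
```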

Citation

@inproceedings{peng2019pvnet,
  title={PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation},
  author={Peng, Sida and Liu, Yuan and Huang, Qixing and Zhou, Xiaowei and Bao, Hujun},
  booktitle={CVPR},
  year={2019}
}
