Train a Model for Classifying Vehicle and Non-vehicle Images

How it works

0. Dependencies

Our code has been tested on Ubuntu 14.04. You need PyTorch on your server; tensorboardX is optional and can be left out of train.py. We tested the code with PyTorch 0.3.1, so we are not sure whether lower versions work.
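
If tensorboardX is not installed, its import can simply be guarded. A minimal sketch of such an optional import (illustrative only; the actual guard in train.py may differ):

```python
# Sketch: keep tensorboardX optional so training still runs without it.
import torch

print("PyTorch version:", torch.__version__)   # the repository was tested on 0.3.1

try:
    from tensorboardX import SummaryWriter
    writer = SummaryWriter()                   # logs to ./runs by default
except ImportError:
    writer = None                              # no logging, training continues

def log_scalar(tag, value, step):
    """Write a scalar only when tensorboardX is available."""
    if writer is not None:
        writer.add_scalar(tag, value, step)
```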

1. Prepare data.

Put all your images (including vehicle and non_vehicle) in data/vehicle. The data used to train our VGGNet comes from the following sources:

  • ImageNet (categories: car, truck, sign, road, snow and traffic light)
  • UIUC Car Detection
  • GTI dataset
  • Cars Dataset
  • Images of vehicles and non-vehicles randomly captured from the extra video, which can be found here.

For vehicle images, we crop to the bounding boxes if they are provided and randomly crop 80% of the original size during training. For non-vehicle images, we make a crop of a random size (10%, 30%, 50%, or 100% of the original size) during training. All training images are resized to 64x64 and rotated by a random angle chosen from (-10, 10) degrees; see the sketch below.
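
A rough sketch of this augmentation written with PIL; the helper below is illustrative and not the repository's exact preprocessing code:

```python
# Illustrative augmentation: random crop, resize to 64x64, small random rotation.
import random
from PIL import Image

VEHICLE_SCALE = 0.8                          # vehicles: crop 80% of original size
NON_VEHICLE_SCALES = (0.1, 0.3, 0.5, 1.0)    # non-vehicles: 10%, 30%, 50%, 100%

def augment(img, is_vehicle):
    w, h = img.size
    scale = VEHICLE_SCALE if is_vehicle else random.choice(NON_VEHICLE_SCALES)
    cw, ch = max(1, int(w * scale)), max(1, int(h * scale))
    left = random.randint(0, w - cw)
    top = random.randint(0, h - ch)
    img = img.crop((left, top, left + cw, top + ch))
    img = img.resize((64, 64), Image.BILINEAR)
    img = img.rotate(random.uniform(-10, 10))    # random angle in (-10, 10) degrees
    return img
```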

2. Get train.txt and test.txt

Run python GetTxt.py. This produces train.txt and test.txt, which are used during training.
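
For reference, a simplified sketch of what such a split script does; the label convention (1 = vehicle, 0 = non-vehicle), the file-name check, and the 90/10 split below are assumptions, not necessarily what GetTxt.py implements:

```python
# Illustrative sketch: write "image_path label" lines to train.txt / test.txt.
import os
import random

def write_split(data_dir="data/vehicle", test_ratio=0.1):
    samples = []
    for name in os.listdir(data_dir):
        if not name.lower().endswith((".jpg", ".png")):
            continue
        label = 0 if "non_vehicle" in name else 1     # assumed naming scheme
        samples.append("{} {}\n".format(os.path.join(data_dir, name), label))
    random.shuffle(samples)
    split = int(len(samples) * (1 - test_ratio))
    with open("train.txt", "w") as f:
        f.writelines(samples[:split])
    with open("test.txt", "w") as f:
        f.writelines(samples[split:])

if __name__ == "__main__":
    write_split()
```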

3. Train Model

Our network structures are defined in models.py, and you can change which model is used in train.py. Run python train.py; the trained model will be saved in ./logs.
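
A minimal sketch of the kind of training step train.py runs, written against a recent PyTorch API; the model class name (VGGNet) and the checkpoint file name are assumptions, not the repository's exact code:

```python
# Illustrative training step and checkpointing; swap in any class from models.py.
import os
import torch
import torch.nn as nn
import torch.optim as optim

import models                                  # network definitions live here

model = models.VGGNet()                        # assumed class name; change as needed
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(images, labels):
    optimizer.zero_grad()
    outputs = model(images)                    # (batch, 2) vehicle / non-vehicle scores
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, save the checkpoint under ./logs as train.py does.
os.makedirs("./logs", exist_ok=True)
torch.save(model.state_dict(), "./logs/model.pth")
```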