# Anderson-Wu/Object-Detection-Inference-On-PYNQ-Z2
## Introduction

Object detection is widely applied in many areas of computer vision, including security, autonomous vehicle systems, traffic monitoring, and inventory management.

## Folder structure

```
README.md
build/                  # build scripts for bitstream generation
docs/                   # documentation files (ppt, pdf, md)
images/                 # images for README.md
src/                    # all source code, including kernel & host code (cpp, hpp, other include files)
    baseline/           # baseline code
    finn/               # FINN model training code
    host/               # host code
    nms/                # NMS software & HLS code
impl_result/            # bitstreams (.bit, .hwh) & Vivado IPs
    bitstream_finn/     # generated XwXa bitstream & driver.py for FINN only
    bitstream_finn_nms/ # generated & modified 4w4a bitstream & driver.py for stitched FINN and NMS (failed)
    finn_ip/            # for the Vivado IP flow
    nms_ip/             # for the Vivado IP flow
LICENSE
```
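The `nms/` folder holds the software and HLS implementations of non-maximum suppression. As a rough reference for what that stage computes, here is a minimal greedy-NMS sketch in plain Python; the `[x1, y1, x2, y2]` box format and the function names are illustrative assumptions, not the repo's actual code:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: visit boxes by descending score, drop any box that
    overlaps an already-kept box by more than the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

The HLS version in `src/nms/` follows the same idea but is structured for hardware pipelining.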

## How to Stitch NMS/FINN IPs in Vivado

1. Create Project
2. Choose Board
3. Add IP: finn
4. Add IP: nms
5. Create Block Design
6. Add ZYNQ
7. Add DMA
8. Add NMS
9. Add FINN
10. Add DWC
11. Add DWC
12. Run Block Automation
13. Configure ZYNQ PS
14. Configure DMA
15. Run Connection Automation
16. Manually Connect & Run Connection Automation
17. Configure first DWC
18. Configure second DWC
19. Create HDL Wrapper
20. Generate Bitstream

## Build Setup

**Step 1. Train the model**

Under `src/finn/Model_training`, execute:

```shell
python bnn_pynq_train.py --network CNV_XWXA
```

**Step 2. Export the model**

Under `src/finn/Model_training`, execute:

```shell
python bnn_pynq_train.py --evaluate --network CNV_XWXA --resume ./experiments/CNV_XWXA/checkpoints/best.tar
```

This generates `best.onnx`.

**Step 3. Generate the bitstream and driver**

Rent a U50 card on the BOLEDU server and unzip FINN; add `best.onnx` under the `build` folder in FINN, then execute:

```shell
./run-docker.sh build_custom ./build/
```

This generates intermediate ONNX files, the bitstream, a metadata file, and the driver under the `build` folder. (If bitstream generation fails, try adjusting the `SIMD` and `PE` values in `folding_config.json`.) Download the folder whose name starts with `pynq_driver_xxxxx`, the `bitfile` folder, and `best_streamlined.onnx` to your computer. If you want to attach FINN to another customized IP, you can also download the IP from the `stitched_ip` folder.
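For reference, a FINN folding configuration maps layer names to parallelism settings, and lowering `PE`/`SIMD` reduces resource usage at the cost of throughput. The layer names and values below are purely hypothetical; your generated `folding_config.json` will differ:

```json
{
  "Defaults": {},
  "StreamingFCLayer_Batch_0": {
    "PE": 16,
    "SIMD": 3
  },
  "StreamingFCLayer_Batch_1": {
    "PE": 8,
    "SIMD": 16
  }
}
```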

**Step 4. Get the scaling and shift parameters**

Open `best_streamlined.onnx` with any application that can view ONNX files (e.g., Netron). Go to the last two layers of the model (Mul and Add) and record their values.
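These two values act as a per-output affine dequantization, i.e. `real = raw * scale + shift`, applied to the raw accelerator outputs. A minimal sketch (the numeric values here are placeholders, not the actual model's parameters):

```python
# Placeholder values -- replace with the Mul and Add constants you
# recorded from best_streamlined.onnx in Step 4.
SCALE = 0.125   # value of the final Mul node (hypothetical)
SHIFT = -0.5    # value of the final Add node (hypothetical)

def dequantize(raw_outputs, scale=SCALE, shift=SHIFT):
    """Map raw integer accelerator outputs back to real-valued scores."""
    return [r * scale + shift for r in raw_outputs]
```

The inference notebook applies the same transformation before interpreting the network's output scores.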

**Step 5. Inference on PYNQ-Z2**

Rent a PYNQ board on the BOLEDU server and upload the `pynq_driver_xxxxx` folder, the `bitfile` folder, a testing image, and the `inference.ipynb` file from `src/host`. Input the parameters obtained in Step 4; after executing the whole notebook, the results are stored in the `output` directory.
