Unity-Object-Detection-and-Localization-with-VR

Detect objects in the front-facing camera image of a VR headset and localize them in a 3D Unity scene, using YOLO and Barracuda.

[Screenshot: detected objects localized in a Unity scene]

Update March 2023

This project has been extended to track moving objects. A constant-velocity Kalman filter has been implemented.

The two systems, clustering and the Kalman filter, currently work separately; only one can be active at a time.
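The project's own filter lives in its Unity scripts; purely as an illustration of the idea, here is a minimal C# sketch of a constant-velocity Kalman filter for a single axis (the class name, the noise parameters, and the simplified diagonal process noise are assumptions, not the project's code). Two such filters, one per ground-plane axis, can track a detected object's position and estimate its velocity.

```csharp
// Minimal sketch of a constant-velocity Kalman filter for one axis.
// State: [position, velocity]; only the position is measured.
public class ConstantVelocityKalman1D
{
    // State estimate.
    float pos, vel;

    // 2x2 state covariance (p00 p01; p10 p11).
    float p00 = 1f, p01 = 0f, p10 = 0f, p11 = 1f;

    readonly float processNoise;      // q: model uncertainty added per second (assumed tuning parameter)
    readonly float measurementNoise;  // r: variance of the measured positions (assumed tuning parameter)

    public ConstantVelocityKalman1D(float initialPos, float q, float r)
    {
        pos = initialPos;
        vel = 0f;
        processNoise = q;
        measurementNoise = r;
    }

    // Predict the state forward by dt seconds (constant-velocity model).
    public void Predict(float dt)
    {
        pos += vel * dt;

        // P = F P F^T + Q with F = [[1, dt], [0, 1]] and simplified diagonal Q.
        float n00 = p00 + dt * (p01 + p10) + dt * dt * p11 + processNoise * dt;
        float n01 = p01 + dt * p11;
        float n10 = p10 + dt * p11;
        float n11 = p11 + processNoise * dt;
        p00 = n00; p01 = n01; p10 = n10; p11 = n11;
    }

    // Correct the prediction with a measured position.
    public void Update(float measuredPos)
    {
        float innovation = measuredPos - pos;
        float s = p00 + measurementNoise;   // innovation variance (H = [1, 0])
        float k0 = p00 / s;                 // Kalman gain for position
        float k1 = p10 / s;                 // Kalman gain for velocity

        pos += k0 * innovation;
        vel += k1 * innovation;

        // P = (I - K H) P.
        float n00 = (1f - k0) * p00;
        float n01 = (1f - k0) * p01;
        float n10 = p10 - k1 * p00;
        float n11 = p11 - k1 * p01;
        p00 = n00; p01 = n01; p10 = n10; p11 = n11;
    }

    public float Position => pos;
    public float Velocity => vel;
}
```

In a tracking loop of this kind, Predict would be called every frame with the elapsed time, and Update would be called whenever the object detector delivers a new position for the tracked object.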

How to use this project

  • Open a preset scene: Clustering VR or Clustering Webcam for the clustering algorithm, or Kalman Simulator or Kalman Video for the Kalman filter algorithm

  • Select your (VR) webcam or video from the dropdown list in the Object Detection Manager of the GameManager GameObject

  • (Optional) Change other settings in the GameManager:

    • Change the Localisation Method to Clustering or Kalman
    • Set the filter for the objects to find ("Labels To Find")
    • Enable Stereo Image for stereo-image webcams (like the Valve Index)
    • Set the GameObjects used to display detected objects (in Clustering mode) in the Object Model Manager

The Kalman Simulator scene simulates people moving around; it is useful for testing object detection and Kalman filter performance.

System requirements

  • Unity 2020.3 LTS or later (the project currently uses Unity 2021.3.15f1)

YoloV4TinyBarracuda

YoloV4TinyBarracuda is an implementation of the YOLOv4-tiny object detection model on the Unity Barracuda neural network inference library. It is developed and provided by keijiro: YoloV4TinyBarracuda
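YoloV4TinyBarracuda wraps model loading, preprocessing, and detection decoding. Purely to show what Barracuda itself does underneath, the sketch below runs an imported ONNX model on a camera texture with the core Barracuda API (the component and field names are placeholders, and YOLO-specific post-processing such as anchor decoding and non-maximum suppression is omitted).

```csharp
using UnityEngine;
using Unity.Barracuda;

// Illustration of raw Barracuda inference on a camera texture.
// This is NOT YoloV4TinyBarracuda's API; that package handles model
// loading, preprocessing and detection decoding for you.
public class BarracudaInferenceSketch : MonoBehaviour
{
    [SerializeField] NNModel modelAsset;       // ONNX model imported as an NNModel asset
    [SerializeField] RenderTexture cameraFeed; // e.g. the headset's front-facing camera image

    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    void Update()
    {
        // Convert the texture to a tensor (3 channels: RGB) and run the model.
        using (var input = new Tensor(cameraFeed, 3))
        {
            worker.Execute(input);

            // Raw network output; YOLO post-processing (anchor decoding,
            // confidence thresholding, non-maximum suppression) would follow here.
            Tensor output = worker.PeekOutput();
            Debug.Log($"Output tensor shape: {output.shape}");
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
```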