
Streaming Object Detector with Coral and BalenaCloud

This guide will help you deploy a streaming camera feed with realtime people detection using the Coral Edge TPU for on-device ML inferencing. This example is designed to work with the Coral Dev Board, but should work with other balena-compatible devices that have an Edge TPU.

Demo Streaming Person Detector

Table of Contents:

  • Set up a fleet of devices
  • Deploy the demo code
  • Configure Input Feed
  • Update your model
  • Additional Configuration

Set up a fleet of devices

In balenaCloud, code is deployed to groups of devices called "applications". To deploy code to your device(s) remotely, you need to first sign up for an account on balenaCloud. Once you are signed in, create a new application and call it something like edge-ai.
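If you prefer the command line, the application can also be created with the balena CLI; a minimal sketch, assuming a CLI release from this project's era (newer releases rename the command to balena fleet create):

$ balena app create edge-ai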

Now you are ready to add devices to your fleet. To get your Coral Dev Board online and connected to balenaCloud, follow our walkthrough "Get started with Google Coral Dev Board and Python" (you can return here after you provision your device).

When your device is provisioned, you should see your device listed as shown here:

Device added to balenaCloud Dashboard
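You can also confirm the device from the CLI; a quick check, assuming the --application flag of the CLI from this era (newer releases use --fleet instead):

$ balena devices --application edge-ai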

Deploy the demo code

Now let's deploy some code. Make sure you have the balena CLI installed, and open your favorite terminal. In the terminal, make sure you are logged into your balenaCloud account:

$ balena login

Grab the code from this repository, either by cloning it with git or by downloading the zip, and navigate to the root of the newly downloaded repo.
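If you go the git route, the clone looks like this (the URL simply points at this repository; use your own fork if you have one):

$ git clone https://github.com/soungno/coral-streaming-object-detector.git
$ cd coral-streaming-object-detector

From within the root of the project we can now deploy our code using the following: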

$ balena push <MY_APP_NAME>

Replace <MY_APP_NAME> with the name you selected when you created your application in step 1. With this single command, the balena CLI will initiate a build in the cloud for the correct architecture, and every device in your fleet will start running this code. Once the code is deployed, you should see device log output like the following. If you then navigate to the device's IP address or its balenaCloud public device URL, you should see a web page with your video stream and blue rectangles indicating the detected objects:

[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] INFO:root:#############################################################
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] INFO:root:Authorization is disabled.
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] INFO:root:Anyone can access your balenaCam, using the device's URL!
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] INFO:root:Set the username and password environment variables to enable authorization.
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] INFO:root:For more info visit: https://github.com/balena-io-playground/balena-cam
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] INFO:root:#############################################################
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] 
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] ======== Running on http://0.0.0.0:80 ========
[Logs]    [6/22/2020, 5:23:24 PM] [edge-logic] (Press CTRL+C to quit)

Configure Input Feed

This demo can stream either from a video file or from a camera device. By default, the demo streams the MP4 video file at edge-logic/video/construction.mp4. To stream from a webcam instead, define an environment variable called CAMERA, either from the balenaCloud Dashboard or by adding an ENV CAMERA=0 instruction to the Dockerfile. By default the camera will use /dev/video0, but if you have multiple cameras you can set CAMERA to the integer index of the video device, e.g. CAMERA=3 would use /dev/video3.
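As a sketch, setting the variable fleet-wide from the balena CLI could look like this (assuming the --application flag of the CLI from this era; newer releases use --fleet):

$ balena env add CAMERA 0 --application <MY_APP_NAME>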

Update your model

Okay, so now we know how to deploy and update our code easily, but how do we update or change the model? In this project, you will notice that the code is split into two services, edge-logic and models. These containers can be updated independently without affecting one another. If you want to deploy a new model to your fleet, simply drop the new model and its labels into the models folder, making sure to name them model.tflite and labels.txt.

You can find some great pre-trained models that are compatible with the Coral Edge TPU at coral.ai/models. For compatibility with this demo, be sure to select one of the "detection" models. Or if you want to create your own model, see the Coral docs about model compatibility.
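For example, after downloading a detection model from coral.ai/models, copying it into place might look like this (the downloaded file names below are only illustrative; use whichever model and label file you grabbed):

$ cp ~/Downloads/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite models/model.tflite
$ cp ~/Downloads/coco_labels.txt models/labels.txt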

With your new model added, perform another balena push to your application and you should see the model service update and begin running. Super easy!

Additional Configuration

Log Verbosity

You can change the logging level of the Python code by setting an environment variable called LOGLEVEL. By default it is set to INFO, as described in the Python 3 Basic Logging Tutorial.
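For example, to turn on more verbose logging for a single device while debugging (a sketch; the --device flag takes the device UUID shown in the dashboard):

$ balena env add LOGLEVEL DEBUG --device <DEVICE_UUID>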

Authenticated Login for Video Stream

To protect your device's video stream with a username and a password, set the following environment variables.

Key        Value
username   yourUserNameGoesHere
password   yourPasswordGoesHere
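Sketched with the balena CLI (again assuming the --application flag; newer releases use --fleet), the two variables would be set like this:

$ balena env add username yourUserNameGoesHere --application <MY_APP_NAME>
$ balena env add password yourPasswordGoesHere --application <MY_APP_NAME>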

