TensorFlow Object Detection Model Zoo (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) contains many models pre-trained on the COCO dataset. ssd_inception_v2_coco and faster_rcnn_inception_v2_coco performed well compared to the rest of the models, but in this project faster_rcnn_inception_v2_coco is used because it detects people quickly and with fewer errors. Intel OpenVINO already contains extensions for the custom layers used in the TensorFlow Object Detection Model Zoo.
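Layer support can also be checked explicitly before committing to a model. A minimal sketch, assuming the pre-2020 OpenVINO Python API and the IR files produced by the conversion step below (the CPU extension path matches the one used in the run command at the end of this document):

```
# Sketch: verify every layer of the IR is supported on CPU, registering the
# toolkit's CPU extension library first (needed on pre-2020 releases).
from openvino.inference_engine import IECore, IENetwork

MODEL_XML = "frozen_inference_graph.xml"   # IR produced by the Model Optimizer below
MODEL_BIN = "frozen_inference_graph.bin"
CPU_EXT = ("/opt/intel/openvino/deployment_tools/inference_engine/"
           "lib/intel64/libcpu_extension_sse4.so")

ie = IECore()
ie.add_extension(CPU_EXT, "CPU")
net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)

supported = ie.query_network(network=net, device_name="CPU")
unsupported = [layer for layer in net.layers if layer not in supported]
print("Unsupported layers:", unsupported or "none")
```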
Downloading the model from the TensorFlow Object Detection Model Zoo with the following command:
```
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
```
Extracting the tar.gz file with the following command:
```
tar -xvf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
cd faster_rcnn_inception_v2_coco_2018_01_28
```
The model is not among the existing pre-trained models provided by Intel, so the TensorFlow model is converted to the Intermediate Representation (IR), i.e. the OpenVINO IR format. The command used is given below:
```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json
```
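To confirm the conversion succeeded, a small sketch (again assuming the pre-2020 Python API; newer releases replace `IENetwork` with `ie.read_network`) that loads the IR and prints its inputs and outputs:

```
# Sketch: load the converted IR into the Inference Engine and inspect it.
from openvino.inference_engine import IECore, IENetwork

MODEL_DIR = "faster_rcnn_inception_v2_coco_2018_01_28"
net = IENetwork(model=MODEL_DIR + "/frozen_inference_graph.xml",
                weights=MODEL_DIR + "/frozen_inference_graph.bin")

# Faster R-CNN IRs expose two inputs: the image tensor and an image-info blob.
for name, info in net.inputs.items():
    print("input :", name, info.shape)
for name, data in net.outputs.items():
    print("output:", name, data.shape)

# Loading the network onto the CPU confirms all layers are resolvable.
exec_net = IECore().load_network(network=net, device_name="CPU")
```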
# Model 1: ssd_inception_v2_coco_2018_01_28
Converted the model to Intermediate Representation using the following command. This model lacked accuracy, as it did not detect people in the video correctly. Altering the probability threshold to improve its accuracy did not yield fruitful results.
```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
```
# Model 2: faster_rcnn_inception_v2_coco_2018_01_28
Converted the model to Intermediate Representation using the following command. Model 2, i.e. faster_rcnn_inception_v2_coco, performed really well on the output video. With a threshold of 0.4, the model works better than all the previous approaches.
```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json
```
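For reference, a sketch of how the 0.4 threshold is applied when parsing the model output. Models converted from the TensorFlow Object Detection API emit a [1, 1, N, 7] DetectionOutput blob; the helper below is illustrative, not taken from the project code:

```
# Each detection row: [image_id, label, confidence, x_min, y_min, x_max, y_max],
# with box coordinates normalized to [0, 1].
PERSON_CLASS_ID = 1   # "person" in the COCO label map used by these models
THRESHOLD = 0.4

def count_people(detections, frame_width, frame_height, threshold=THRESHOLD):
    boxes = []
    for _, label, conf, x_min, y_min, x_max, y_max in detections[0][0]:
        if int(label) == PERSON_CLASS_ID and conf >= threshold:
            boxes.append((int(x_min * frame_width), int(y_min * frame_height),
                          int(x_max * frame_width), int(y_max * frame_height)))
    return len(boxes), boxes
```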
# Comparison

Comparing the two models, i.e. ssd_inception_v2_coco and faster_rcnn_inception_v2_coco, in terms of latency and memory yields several insights. Latency (microseconds) and memory (MB) both decrease when a model runs through OpenVINO compared to the plain TensorFlow model, which is what makes OpenVINO valuable for edge applications.
| Model/Framework | Latency (microseconds) | Memory (MB) |
|---|---|---|
| ssd_inception_v2_coco (plain TF) | 222 | 538 |
| ssd_inception_v2_coco (OpenVINO) | 155 | 329 |
| faster_rcnn_inception_v2_coco (plain TF) | 1281 | 562 |
| faster_rcnn_inception_v2_coco (OpenVINO) | 889 | 281 |
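A sketch of how such latency figures can be measured, assuming a synchronous `exec_net.infer` call and pre-built input blobs (the helper name is illustrative):

```
import time

def average_latency_us(exec_net, input_blobs, runs=100, warmup=10):
    # Discard cold-start iterations so caching effects do not skew the average.
    for _ in range(warmup):
        exec_net.infer(input_blobs)
    start = time.perf_counter()
    for _ in range(runs):
        exec_net.infer(input_blobs)
    return (time.perf_counter() - start) / runs * 1e6  # microseconds per inference
```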
Edge computing is regarded as ideal for operations with extreme latency concerns, so medium-scale companies with budget limitations can use edge computing to save financial resources. Cloud computing is more suitable for projects and organizations that deal with massive data storage.
This application can keep a check on the number of people in a particular area and can be helpful wherever the number of people allowed in an area is restricted. Further, with some updates, it could also prove helpful in the current COVID-19 scenario, i.e. to keep a check on the number of people in the frame.
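As a hypothetical extension in that direction, the per-frame count could be compared against a configurable occupancy limit (the limit and names below are illustrative assumptions, not part of the current code):

```
MAX_OCCUPANCY = 5  # assumed limit; tune per venue or regulation

def check_occupancy(people_count, limit=MAX_OCCUPANCY):
    # Returns a status string that could be overlaid on the frame or published.
    if people_count > limit:
        return "ALERT: {} people in frame (limit {})".format(people_count, limit)
    return "OK: {} people in frame".format(people_count)
```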
Various insights could be drawn about the model by testing it with different videos and analyzing its performance on low-light input videos. This would be an important factor in determining the best model for a given scenario.
After converting the downloaded model to the OpenVINO IR, all three servers can be started in separate terminals:

- MQTT Mosca server
- Node.js* Web server
- FFmpeg server

Setting up the environment

Configuring the environment to use the Intel® Distribution of OpenVINO™ toolkit, once per session, by running the following command:
```
source /opt/intel/openvino/bin/setupvars.sh -pyver 3.5
```
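One quick way to sanity-check the environment after sourcing the script (assuming the Python bindings are on the path) is to list the devices the toolkit can see:

```
python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"
```

A CPU-only machine should print something like `['CPU']`.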
Step 1 - Start the Mosca server
```
node ./server.js
```
The following message is displayed, if successful:
```
Mosca server started.
```
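For context, the application publishes its statistics to this broker over MQTT. A sketch using `paho-mqtt`; the port (3001) and the `person` topic are assumptions based on the usual people-counter setup, so match them to your `server.js` configuration:

```
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", port=3001, keepalive=60)  # assumed broker port
client.publish("person", json.dumps({"count": 3, "total": 10}))
client.disconnect()
```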
Step 2 - Start the GUI
Opening a new terminal and executing the commands below:
```
cd webservice/ui
npm run dev
```
The following message is displayed, if successful:
```
webpack: Compiled successfully
```
Step 3 - FFmpeg Server
Opening a new terminal and executing the command below:
```
sudo ffserver -f ./ffmpeg/server.conf
```
Step 4 - Run the code
Opening a new terminal and executing the command below:
```
python main.py -i resources/Pedestrian_Detect_2_1_1.mp4 -m faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.4 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```
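On the Python side of this pipe, frames reach ffmpeg as raw BGR bytes written to stdout. A sketch of that step (the resize assumes the 768x432 `-video_size` used above):

```
import sys
import cv2  # frames are assumed to come from OpenCV, as in main.py

def stream_frame(frame):
    frame = cv2.resize(frame, (768, 432))     # must match ffmpeg's -video_size
    sys.stdout.buffer.write(frame.tobytes())  # raw bgr24 bytes, one frame
    sys.stdout.flush()
```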