
Merge pull request #3 from intel-iot-devkit/document_revise
Document revise
maozhong1 authored Apr 8, 2020
2 parents 2e964dd + 49d27fb commit 68e4fd5
Showing 9 changed files with 175 additions and 4 deletions.
78 changes: 77 additions & 1 deletion README.md
@@ -1 +1,77 @@
# concurrent-video-anlalytic-pipeline-optimzation-sample
# Concurrent Video Analytic Pipeline Optimization Sample
This sample helps users quickly set up and tune the core concurrent video analysis workload through a configuration file, so they can obtain the best video codec, post-processing, and inference performance from Intel® integrated GPUs for their product requirements.
The sample application video_e2e_sample can be used for runtime performance evaluation or as a reference for debugging core video workload issues.

## Typical workloads
Sample par files can be found in the par_files directory; an example invocation is shown after this list. Verified on an i7-8559U; performance differs on other platforms.
* 16 1080p H264 decoding, scaling, face detection inference, rendering inference results, composition, saving composition results to local H264 file, and display
* 4 1080p H264 decoding, scaling, human pose estimation inference, rendering inference results, composition and display
* 4 1080p H264 decoding, scaling, vehicle and vehicle attributes detection inference, rendering inference results, composition and display
* 16 1080p RTSP H264 stream decoding, scaling, face detection inference, rendering inference results, composition and display.
* 16 1080p H264 decoding, scaling, face detection inference, rendering inference results, composition and display, plus 16 1080p H264 decoding, composition and display on a second screen.
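
For example, a 16-channel workload like those above can be launched by passing a par file to video_e2e_sample. This is a minimal sketch, assuming the par_file directory and file names used by the tests in doc/CONTRIBUTING.md; the exact mapping of par files to workloads is described in the user guide:
```sh
# Example: run one of the 16-channel 1080p test par files (assumed to live under par_file/).
# Switching to text mode and becoming root with "su -p" may be needed for display output (see doc/FAQ.md).
./bin/video_e2e_sample -par par_file/n16_1080p_1080p_dp.par
```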

# Dependencies
The sample application depends on [Intel® Media SDK](https://github.com/Intel-Media-SDK/), [Intel® OpenVINO™](https://software.intel.com/en-us/openvino-toolkit) and [FFmpeg](https://www.ffmpeg.org/)

# FAQ
See [FAQ](./doc/FAQ.md)

# Table of contents

* [License](#license)
* [How to contribute](#how-to-contribute)
* [Documentation](#documentation)
* [System requirements](#system-requirements)
* [How to build](#how-to-build)
* [Build steps](#build-steps)
* [Known limitations](#known-limitations)

# License
The sample application is licensed under MIT license. See [LICENSE](./LICENSE) for details.

# How to contribute
See [CONTRIBUTING](./doc/CONTRIBUTING.md) for details. Thank you!

# Documentation
See [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.1.0.pdf)

# System requirements

**Operating System:**
* Ubuntu 18.04.2

**Software:**
* [MediaSDK 19.4.0](https://github.com/Intel-Media-SDK/MediaSDK/releases/tag/intel-mediasdk-19.4.0)
* [OpenVINO™ 2019 R3](https://software.intel.com/en-us/openvino-toolkit)

**Hardware:**
* Intel® platforms supported by MediaSDK 19.4.0 and OpenVINO™ 2019 R3.
* For Media SDK, the major platform dependency comes from the back-end media driver: https://github.com/intel/media-driver
* For OpenVINO™, see the details here: https://software.intel.com/en-us/openvino-toolkit/documentation/system-requirements

# How to build

Run build_and_install.sh to install the dependent software packages and build the sample application video_e2e_sample.

Please refer to "Installation Guide" in the [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.1.0.pdf) for details.

## Build steps

Get sources with the following git command:
```sh
git clone https://github.com/intel-iot-devkit/concurrent-video-analytic-pipeline-optimization-sample-l.git cva_sample
```

```sh
cd cva_sample
./build_and_install.sh
```
This script installs the dependent software packages with "apt install", so it will ask for the sudo password. It then downloads the libva, libva-utils, media-driver and MediaSDK sources and installs these libraries. It might take 10 to 20 minutes depending on the network bandwidth.

After the script finishes, the sample application video_e2e_sample can be found under ./bin. Please refer to "Run sample application" in the [user guide](./doc/concurrent_video_analytic_sample_application_user_guide_2020.1.0.pdf) for details.
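
A quick sanity check after the build is to list the binary and print its built-in usage text; this is only a sketch, and the "-?" option is the one described in doc/FAQ.md:
```sh
# Confirm the binary exists and show the supported options.
ls -l ./bin/video_e2e_sample
./bin/video_e2e_sample "-?"
```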

# Known limitations

The sample application has been validated on the Intel® platforms Skylake (i7-6770HQ), Coffee Lake (i7-8559U, i7-8700) and Whiskey Lake (i7-8665UE).


2 changes: 1 addition & 1 deletion build_and_install.sh
@@ -63,7 +63,7 @@ then
echo 'export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/intel/mediasdk/lib"' >> ~/.bashrc
fi

echo "VAAS sample application building has completed!"
echo "Sample application building has completed!"
echo "Please use ./bin/video_e2e_sample for testing"
fi

58 changes: 58 additions & 0 deletions doc/CONTRIBUTING.md
@@ -0,0 +1,58 @@
We welcome community contributions to the SVET sample application. Thank you for your time!

Please note that review and merge might take some time at this point.

SVET sample application is licensed under MIT license. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Steps:
- In the commit message, explain in detail what bug is fixed or what new feature is added.
- Validate that your changes don't break the build. [build instruction](../README.md#how-to-build)
- Pass [testing](#testing)
- Wait while your patch set is reviewed and tested by our internal validation cycle

# Testing

## Requirements

* Hardware: Coffee Lake or Whiskey Lake
* Software: Ubuntu 18.04, MediaSDK 19.4.0 and OpenVINO 2019 R3

## How to test your changes

### 1. Build the SVET sample application

```sh
./build_and_install.sh
```
Apply the changes to MediaSDK/samples/video_e2e_sample/. Then build video_e2e_sample:
```sh
cp msdk_build.sh MediaSDK
cd MediaSDK
./msdk_build.sh
```

### 2. Run the tests below and make sure they run without error

Simple display test:
```sh
./bin/video_e2e_sample -par par_file/par_file_name.par
```
Basic video decoding and inference tests (a loop to run them all is sketched after this list):
* n16_1080p_1080p_dp_noinfer.par
* n16_1080p_1080p_dp.par
* n16_1080p_4k_dp.par
* n4_1080p_1080p_dp.par
* n4_vehical_detect_1080p.par
* n64_d1_1080p_dp.par
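
These par files can be run one by one with the same command pattern as the simple display test above. A minimal sketch, assuming the par files sit under the par_file directory:
```sh
# Run each basic decoding/inference test par file in turn.
for p in n16_1080p_1080p_dp_noinfer.par n16_1080p_1080p_dp.par n16_1080p_4k_dp.par \
         n4_1080p_1080p_dp.par n4_vehical_detect_1080p.par n64_d1_1080p_dp.par; do
    ./bin/video_e2e_sample -par "par_file/$p"
done
```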

RTSP play and saving tests:
* rtsp_dump_only.par
* n16_1080p_rtsp_simu.par
* n4_1080p_rtsp_simu_dp.par
* n16_1080p_rtsp_simu_dump.par

Multiple display tests:
```sh
./bin/video_e2e_sample -par par_file/n16_1080p_1080p_dp_noinfer.par -par par_file/n16_1080p_1080p_dp.par
```

35 changes: 35 additions & 0 deletions doc/FAQ.md
@@ -0,0 +1,35 @@
# Frequently asked questions (SVET sample application)

## Where can I find the description of the options used in par files?
See chapter 2.4 in doc/svet_sample_application_user_guide_2020.1.0.pdf
Running the SVET sample application with the "-?" option shows the usage of the options.

## Why does the system need to be switched to text mode before running the sample application?
The sample application uses libDRM to render video directly to the display, so it needs to act as DRM master, which isn't allowed while an X server is running.
If the par file doesn't include a display session, there is no need to switch to text mode.

## Why is "su -p" needed to switch to the root user before running the sample application?
Becoming DRM master requires root privileges. The "-p" option preserves environment variables such as LIBVA_DRIVERS_PATH, LIBVA_DRIVER_NAME and LD_LIBRARY_PATH. Without "-p", these environment variables are reset and the sample application will run into problems.
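
A typical session therefore looks like the sketch below; the par file name is just an example taken from the test list in doc/CONTRIBUTING.md:
```sh
su -p            # become root while keeping the current environment variables
# then, at the root prompt:
./bin/video_e2e_sample -par par_file/n4_1080p_1080p_dp.par
```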

## The loading time of the 16-channel face detection demo is too long
Please enable cl_cache by running "mkdir -p /tmp/cl_cache" and "export cl_cache_dir=/tmp/cl_cache". After the first run of the 16-channel face detection demo, the compiled OpenCL kernels are cached, so the model loading time of subsequent runs only takes about 10 seconds.
More details about cl_cache can be found at https://github.com/intel/compute-runtime/blob/master/opencl/doc/FAQ.md
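
For convenience, the two commands from the answer above can be run as follows; /tmp/cl_cache is just the cache location suggested in this FAQ:
```sh
# Create the OpenCL kernel cache directory and point the compute runtime at it.
mkdir -p /tmp/cl_cache
export cl_cache_dir=/tmp/cl_cache
```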

## Can the number of sources for "-vpp_comp_only" or "-vpp_comp" be different from the number of decoding sessions?
No. The number of sources for "-vpp_comp_only" or "-vpp_comp" must be equal to the number of decoding sessions; otherwise, the sample application will fail during pipeline initialization or while running.

## How do I limit the fps of the whole pipeline to 30?
Add "-fps 30" to every decoding session.

## How do I limit the number of input frames to 1000?
Add "-n 1000" to every decoding session. However, this option won't work if both "-vpp_comp_only" and "-vpp_comp" are set.

## Where can I find tutorials for inference engine?
Please refer to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html

## Where can I find information for the models?
Please refer to https://github.com/opencv/open_model_zoo/tree/master/models/intel. The models used in the sample application are
face-detection-retail-0004, human-pose-estimation-0001, vehicle-attributes-recognition-barrier-0039 and vehicle-license-plate-detection-barrier-0106.

## Can I use an OpenVINO version other than 2019 R3?
Yes, but you have to modify some code because the interfaces change between releases. You also need to download the IR files and copy them to ./model manually. Please refer to script/download_and_copy_models.sh for how to download the IR files.
Three binary files changed (contents not shown).
4 changes: 2 additions & 2 deletions script/install_binary.sh
@@ -1,6 +1,6 @@
#!/bin/bash

#use should use sudo to run this binaries
#This script is supposed to run under directory svet_e2e_sample_l which is generated by script pack_binary.sh
#Please use "sudo" to run this script for installing libva, media-driver and MediaSDK library binaries.

echo "If libva/media-driver/MediaSDK have been installed before, their libraies will be overwrote!"

2 changes: 2 additions & 0 deletions script/pack_binary.sh
@@ -1,4 +1,6 @@
#!/bin/bash
#This script is used to pack all libva, media-driver and MediaSDK binaries to directory svet_e2e_sample_l.
#Copy the whole directory svet_e2e_sample_l to another device and run install_binary.sh to install the binaries

root_path=$PWD
release_folder=$root_path/svet_e2e_sample_l
