TIM-VX is a software integration module provided by VeriSilicon to facilitate deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, Tensorflow-Lite, MLIR, TVM and more.
Main Features
- Over 150 operators with rich format support for both quantized and floating point
- Simplified C++ binding API calls to create tensors and operations (see the sketch after this list)
- Dynamic graph construction with support for shape inference and layout inference
- Built-in custom layer extensions
- A set of utility functions for debugging
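Below is a minimal sketch of the C++ binding API running a single Relu through a graph. It assumes the header paths and method names of a recent TIM-VX release (e.g. tim/vx/ops/activations.h, BindInput/BindOutput), which may differ slightly between versions:

```cpp
#include <iostream>
#include <vector>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/tensor.h"
#include "tim/vx/ops/activations.h"

int main() {
  // Create a context and a graph on the default device.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Describe float input and output tensors of shape {4, 1}.
  tim::vx::ShapeType shape({4, 1});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // Add a single Relu operation and wire it up.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInput(input).BindOutput(output);

  // Compile the graph, feed data, run, and read the result back.
  if (!graph->Compile()) return -1;
  std::vector<float> in = {-1.f, 0.f, 1.f, 2.f};
  input->CopyDataToTensor(in.data(), in.size() * sizeof(float));
  if (!graph->Run()) return -1;
  std::vector<float> out(in.size());
  output->CopyDataFromTensor(out.data());
  for (float v : out) std::cout << v << " ";
  std::cout << std::endl;
  return 0;
}
```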
TIM-VX is already integrated into the following frameworks:
- Tensorflow-Lite (External Delegate)
- Tengine (Official)
- TVM (Fork)
- Paddle-Lite (Official)
- OpenCV (Official)
- ONNXRuntime (Official)
Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.
TIM-VX supports both Bazel and CMake builds.
To build TIM-VX for x86 with the bundled prebuilt OpenVX SDK:
mkdir host_build
cd host_build
cmake ..
make -j8
make install
All installed files (headers and *.so) are located in: host_build/install
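As an illustration, an application can then be compiled against the installed files. The source file name, C++ standard flag, and paths below are placeholders:

```shell
g++ -std=c++14 my_app.cc \
    -I host_build/install/include \
    -L host_build/install/lib -ltim-vx \
    -o my_app
```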
CMake options:

Option name | Summary | Default |
---|---|---|
TIM_VX_ENABLE_TEST | Enable unit test cases for public APIs and ops | OFF |
TIM_VX_ENABLE_LAYOUT_INFER | Build with tensor data layout inference support | ON |
TIM_VX_USE_EXTERNAL_OVXLIB | Replace the internal ovxlib with a prebuilt libovxlib library | OFF |
OVXLIB_LIB | Full path to libovxlib.so (including the library name); required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set |
OVXLIB_INC | ovxlib's include path; required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set |
EXTERNAL_VIV_SDK | Path to external Vivante OpenVX driver libraries | Not set |
TIM_VX_BUILD_EXAMPLES | Build example applications | OFF |
TIM_VX_ENABLE_40BIT | Enable large memory (over 4 GB) support in the NPU driver | OFF |
TIM_VX_ENABLE_PLATFORM | Enable multi-device support | OFF |
TIM_VX_ENABLE_PLATFORM_LITE | Enable lite multi-device support; only works when TIM_VX_ENABLE_PLATFORM=ON | OFF |
VIP_LITE_SDK | Full path to the VIPLite SDK; required when TIM_VX_ENABLE_PLATFORM_LITE=ON | Not set |
TIM_VX_ENABLE_GRPC | Enable gRPC support; only works when TIM_VX_ENABLE_PLATFORM=ON | OFF |
TIM_VX_DBG_ENABLE_TENSOR_HNDL | Enable creating tensors from handles | ON |
TIM_VX_ENABLE_TENSOR_CACHE | Enable tensor cache for const tensors; check the OpenSSL build notes | OFF |
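For example, a build that enables the unit tests and swaps in an external ovxlib could be configured like this (the ovxlib paths are placeholders):

```shell
cmake .. \
    -DTIM_VX_ENABLE_TEST=ON \
    -DTIM_VX_USE_EXTERNAL_OVXLIB=ON \
    -DOVXLIB_LIB=/path/to/libovxlib.so \
    -DOVXLIB_INC=/path/to/ovxlib/include
```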
Run the unit tests:
cd host_build/src/tim
export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:<path to libgtest_main.so>:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/
export VSIMULATOR_CONFIG=<hardware name should get from chip vendor>
# if you want to debug with gdb, please set
export DISABLE_IDE_DEBUG=1
./unit_test
To build the unit tests with a local googletest checkout instead of downloading it at configure time:
cd <wksp_root>
git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git
cd <root_tim_vx>/build/
cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<wksp_root/googletest> <add other cmake defines here>
To cross-compile TIM-VX for an embedded target (a combined example follows this list):
- Prepare a toolchain file following the standard CMake cross-compiling conventions.
- Cross-build the low-level driver with the same toolchain separately; the SDK produced by the low-level driver build is required.
- Add
-DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk>
to the CMake definitions, and remember -DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config>
- Or, when using a buildroot toolchain with an external VIV SDK, add:
-DCONFIG=BUILDROOT -DCMAKE_SYSROOT=${CMAKE_SYSROOT} -DEXTERNAL_VIV_SDK=${BUILDROOT_SYSROOT}
- Then run make.
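Putting these steps together, a cross build might look like the following (the toolchain file and driver SDK paths are placeholders):

```shell
mkdir cross_build
cd cross_build
cmake .. \
    -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain.cmake \
    -DEXTERNAL_VIV_SDK=/path/to/low-level-driver/out/sdk
make -j8
```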
If you want to build TIM-VX as a static library and link it into your shared library or application, be careful with the linker: "-Wl,--whole-archive" is required (see the example below).
See samples/lenet/CMakeLists.txt for reference.
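For illustration, wrapping the static library between --whole-archive and --no-whole-archive on the final link line could look like this (the object and library paths are placeholders):

```shell
g++ -shared -o libmy_backend.so my_backend.o \
    -Wl,--whole-archive /path/to/libtim-vx.a -Wl,--no-whole-archive
```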
To get started with the Bazel build, install Bazel first.
TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the related header files and pre-compiled libraries. A default linux-x86_64 SDK containing the PC simulation environment is provided. Platform-specific SDKs can be obtained from the respective SoC vendors.
To build TIM-VX:
bazel build libtim-vx.so
To run sample LeNet:
# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux
bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc
To build and run Tensorflow-Lite with TIM-VX, please see the external delegate's README.
To build and run TVM with TIM-VX, please see the TVM fork's README.
Supported hardware platforms:
Chip | Vendor | References | Success Stories |
---|---|---|---|
i.MX 8M Plus | NXP | ML Guide, BSP | SageMaker with 8MP |
A311D | Khadas - VIM3 | A311D datasheet, BSP | Paddle-lite demo |
S905D3 | Khadas - VIM3L | S905D3 datasheet, BSP | |
For support, create an issue on GitHub or email ML_Support at verisilicon dot com.