From 2b715d93a099e6085918505134ab9c24617ef700 Mon Sep 17 00:00:00 2001
From: DefTruth
Date: Mon, 18 Mar 2024 22:59:13 +0800
Subject: [PATCH] Bump to 0.2.0+ort1.17.1+ocv4.9.0

---
 README.md | 83 ++++++------------------------------------------------
 1 file changed, 9 insertions(+), 74 deletions(-)

diff --git a/README.md b/README.md
index 21161da5..f5e897e8 100644
--- a/README.md
+++ b/README.md
@@ -111,7 +111,7 @@ add_executable(lite_yolov5 examples/test_lite_yolov5.cpp)
 target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
 ```
- πŸ”‘οΈ Supported Models Matrix + πŸ”‘οΈ Supported Models Matrix!Click here! ## Supported Models Matrix
@@ -228,6 +228,9 @@ target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
+<details>
+ πŸ”‘οΈ Model Zoo!Click here! + ## Model Zoo.
@@ -253,8 +256,7 @@ target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
 docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02 # (217M) + YOLO5Face
 ```
-<details>
- πŸ”‘οΈ How to download Model Zoo from Docker Hub? +### πŸ”‘οΈ How to download Model Zoo from Docker Hub? * Firstly, pull the image from docker hub. ```shell @@ -290,11 +292,12 @@ target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS}) cp -rf mnn/cv share/ ``` -
 
 ### Model Hubs
 
 The pretrained and converted ONNX files provide by lite.ai.toolkit are listed as follows. Also, see [Model Zoo](#lite.ai.toolkit-Model-Zoo) and [ONNX Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md), [MNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.mnn.md), [TNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.tnn.md), [NCNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.ncnn.md) for more details.
 
+</details>
+
 ## Examples.
 
@@ -975,81 +978,13 @@ auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50
 auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.
 ```
 
-## License.
+## License
 The code of [Lite.Ai.ToolKit](#lite.ai.toolkit-Introduction) is released under the GPL-3.0 License.
-
-## References.
-
-
-
-Many thanks to these following projects. All the Lite.AI.ToolKit's models are sourced from these repos.
-
-* [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting) (πŸ”₯πŸ”₯πŸ”₯new!!↑)
-* [nanodet](https://github.com/RangiLyu/nanodet) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) (πŸ”₯πŸ”₯πŸ”₯new!!↑)
-* [YOLOP](https://github.com/hustvl/YOLOP) (πŸ”₯πŸ”₯new!!↑)
-* [YOLOR](https://github.com/WongKinYiu/yolor) (πŸ”₯πŸ”₯new!!↑)
-* [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [insightface](https://github.com/deepinsight/insightface) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [yolov5](https://github.com/ultralytics/yolov5) (πŸ”₯πŸ”₯πŸ’₯↑)
-* [TFace](https://github.com/Tencent/TFace) (πŸ”₯πŸ”₯↑)
-* [YOLOv4-pytorch](https://github.com/argusswift/YOLOv4-pytorch) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [Ultra-Light-Fast-Generic-Face-Detector-1MB](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB) (πŸ”₯πŸ”₯πŸ”₯↑)
-
-<details>
-<summary> Expand for More References.</summary>
-
-* [headpose-fsanet-pytorch](https://github.com/omasaht/headpose-fsanet-pytorch) (πŸ”₯↑)
-* [pfld_106_face_landmarks](https://github.com/Hsintao/pfld_106_face_landmarks) (πŸ”₯πŸ”₯↑)
-* [onnx-models](https://github.com/onnx/models) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [SSR_Net_Pytorch](https://github.com/oukohou/SSR_Net_Pytorch) (πŸ”₯↑)
-* [colorization](https://github.com/richzhang/colorization) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [SUB_PIXEL_CNN](https://github.com/niazwazir/SUB_PIXEL_CNN) (πŸ”₯↑)
-* [torchvision](https://github.com/pytorch/vision) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [facenet-pytorch](https://github.com/timesler/facenet-pytorch) (πŸ”₯↑)
-* [face.evoLVe.PyTorch](https://github.com/ZhaoJ9014/face.evoLVe.PyTorch) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [center-loss.pytorch](https://github.com/louis-she/center-loss.pytorch) (πŸ”₯πŸ”₯↑)
-* [sphereface_pytorch](https://github.com/clcarwin/sphereface_pytorch) (πŸ”₯πŸ”₯↑)
-* [DREAM](https://github.com/penincillin/DREAM) (πŸ”₯πŸ”₯↑)
-* [MobileFaceNet_Pytorch](https://github.com/Xiaoccer/MobileFaceNet_Pytorch) (πŸ”₯πŸ”₯↑)
-* [cavaface.pytorch](https://github.com/cavalleria/cavaface.pytorch) (πŸ”₯πŸ”₯↑)
-* [CurricularFace](https://github.com/HuangYG123/CurricularFace) (πŸ”₯πŸ”₯↑)
-* [face-emotion-recognition](https://github.com/HSE-asavchenko/face-emotion-recognition) (πŸ”₯↑)
-* [face_recognition.pytorch](https://github.com/grib0ed0v/face_recognition.pytorch) (πŸ”₯πŸ”₯↑)
-* [PFLD-pytorch](https://github.com/polarisZhao/PFLD-pytorch) (πŸ”₯πŸ”₯↑)
-* [pytorch_face_landmark](https://github.com/cunjian/pytorch_face_landmark) (πŸ”₯πŸ”₯↑)
-* [FaceLandmark1000](https://github.com/Single430/FaceLandmark1000) (πŸ”₯πŸ”₯↑)
-* [Pytorch_Retinaface](https://github.com/biubug6/Pytorch_Retinaface) (πŸ”₯πŸ”₯πŸ”₯↑)
-* [FaceBoxes](https://github.com/zisianw/FaceBoxes.PyTorch) (πŸ”₯πŸ”₯↑)
-
-</details>
-
-
-## Compilation Options.
-
-In addition, [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) and [TNN](https://github.com/Tencent/TNN) support for some models will be added in the future, but due to operator compatibility and some other reasons, it is impossible to ensure that all models supported by [ONNXRuntime C++](https://github.com/microsoft/onnxruntime) can run through [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) and [TNN](https://github.com/Tencent/TNN). So, if you want to use all the models supported by this repo and don't care about the performance gap of *1~2ms*, just let [ONNXRuntime](https://github.com/microsoft/onnxruntime) as default inference engine for this repo. However, you can follow the steps below if you want to build with [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) or [TNN](https://github.com/Tencent/TNN) support.
-
-* change the `build.sh` with `DENABLE_MNN=ON`,`DENABLE_NCNN=ON` or `DENABLE_TNN=ON`, such as
-```shell
-cd build && cmake \
-  -DCMAKE_BUILD_TYPE=MinSizeRel \
-  -DINCLUDE_OPENCV=ON \ # Whether to package OpenCV into lite.ai.toolkit, default ON; otherwise, you need to setup OpenCV yourself.
-  -DENABLE_MNN=ON \ # Whether to build with MNN, default OFF, only some models are supported now.
-  -DENABLE_NCNN=OFF \ # Whether to build with NCNN, default OFF, only some models are supported now.
-  -DENABLE_TNN=OFF \ # Whether to build with TNN, default OFF, only some models are supported now.
-  .. && make -j8
-```
-* use the MNN, NCNN or TNN version interface, see [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_nanodet.cpp), such as
-```C++
-auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
-auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
-auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
-```
-
-## 10. Contribute
+## Contribute
How to add your own models and become a contributor? See [CONTRIBUTING.zh.md](https://github.com/DefTruth/lite.ai.toolkit/issues/191).
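The last hunk also drops the Compilation Options section, whose engine-specific interfaces are quoted verbatim in the deleted lines above. For reference, here is a self-contained sketch built around those three constructor calls; the file paths are placeholders, the per-engine format comments reflect the usual MNN/TNN/NCNN conventions rather than anything stated in this patch, and the removed section notes these interfaces exist only when the toolkit is built with `DENABLE_MNN=ON`, `DENABLE_TNN=ON` or `DENABLE_NCNN=ON`.

```C++
#include "lite/lite.h"

int main()
{
  // Placeholder paths -- each engine loads its own converted model format.
  std::string mnn_path = "nanodet.mnn";        // MNN: single .mnn file
  std::string proto_path = "nanodet.tnnproto"; // TNN: graph description
  std::string model_path = "nanodet.tnnmodel"; // TNN: weights
  std::string param_path = "nanodet.param";    // NCNN: graph description
  std::string bin_path = "nanodet.bin";        // NCNN: weights

  // Only the namespace changes between engines; the class name stays the same.
  auto *nanodet_mnn = new lite::mnn::cv::detection::NanoDet(mnn_path);
  auto *nanodet_tnn = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
  auto *nanodet_ncnn = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);

  delete nanodet_mnn;
  delete nanodet_tnn;
  delete nanodet_ncnn;
  return 0;
}
```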