
Serve Custom Detection Model Template

Overview • Preparation • How To Develop • How To Add Model As App • How To Run


Overview

The Template Serve NN Detection app is designed for developers and can be used as a starting point for creating an application that serves your own detection NN models on Supervisely.

Preparation

Step 1. Fork this repository

Step 2. Clone the repository to your computer

Step 3. Open the repository directory and create a Python virtual environment by running the following command from the application root directory in a terminal:

python -m venv venv

Step 4. Activate the virtual environment:

source venv/bin/activate

Step 5. Install the dependencies from requirements.txt:

pip install -r requirements.txt

Note: we provide a Docker image with the CUDA runtime and its dependencies. If you need something specific, add it to requirements.txt or use your own Docker image (contact Supervisely technical support for details).

Note 2: you can change the application name in config.json.

How To Develop

Note: recommended Python version >= 3.8

Details: By default, the template app generates demo predictions to demonstrate the functionality. To implement your custom model, you only need to edit the main.py file.

main.py - contains 4 functions with comments to help you implement your custom NN model (a minimal sketch follows this list):

  • get_classes_and_tags() - constructs a ProjectMeta object with the specified model classes and tags.
  • get_session_info() - generates a model info dict with arbitrary parameters (see the recommended parameters in the file). These parameters are shown when you connect to your model from other apps.
  • inference(image_path) - takes an input image path and returns the model predictions for that image. See the prediction format in the file. Inference results are automatically converted to the Supervisely annotation format.
  • deploy_model(model_weights_path) - initializes the model so that it is ready to receive input data for inference.
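
Below is a minimal sketch of how these four functions might look. The supervisely import, the demo class names, and the keys of the prediction dicts are illustrative assumptions; follow the prediction format documented in the comments of main.py.

import supervisely as sly  # older template revisions import supervisely_lib instead

model = None  # set by deploy_model()

def get_classes_and_tags():
    # Describe the classes (and, optionally, tags) your model predicts
    obj_classes = sly.ObjClassCollection([
        sly.ObjClass("person", sly.Rectangle),
        sly.ObjClass("car", sly.Rectangle),
    ])
    return sly.ProjectMeta(obj_classes=obj_classes)

def get_session_info():
    # Arbitrary fields; other apps display them after connecting to the session
    return {
        "app": "Serve Custom Detection Model",
        "device": "cuda:0",
        "classes_count": len(get_classes_and_tags().obj_classes),
    }

def deploy_model(model_weights_path):
    # Load the weights and keep the model in memory, ready for inference
    global model
    model = {"weights": model_weights_path}  # replace with your real model object

def inference(image_path):
    # Run the model on a single image and return raw predictions.
    # A hard-coded box is returned as a demo; replace it with your model's
    # output converted to the prediction format described in main.py.
    return [
        {
            "bbox": [10, 10, 300, 200],  # assumed order: top, left, bottom, right
            "class": "person",
            "confidence": 0.9,
        }
    ]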

Step 1. Make sure you have edited main.py; without edits, the app will generate demo predictions

Step 2. Run main.py from the terminal or using your IDE:

python main.py

Step 3. When your model is ready, add any additional modules and packages required to run your served model to requirements.txt

Step 4. Add your model as a private app to the Supervisely Ecosystem

How To Add Model As App [Enterprise Edition only]

Step 1. Go to the Ecosystem page and click on Private apps

Step 2. Click the + Add private app button

Step 3. Copy and paste the repository URL and a generated GitHub/GitLab personal access token into the modal window

Video SLY_EMBEDED_VIDEO_LINK

How To Run

Step 1. Upload your model weights (.pth file) to Team Files

Step 2. Add the app with your implemented custom NN model to your team from the Ecosystem

Step 3. Run the application from the context menu of the .pth file. If you run the application from a file with an extension other than .pth, the app will use the demo model

Step 4. Press the Run button in the modal window

Step 5. Add one of the related apps to your team from the Ecosystem and run it

Step 6. Open the running app and connect to the app session with the served model (a scripted alternative is sketched after these steps)

  • NN Image Labeling
  • Apply NN to Images Project

Step 7. Your served model is ready to use.
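
If you prefer to query the served session from your own script rather than a related app, the sketch below shows one way to do it. The session (task) id, the image id, and the endpoint names ("get_session_info", "inference_image_id") are assumptions based on common Supervisely serving-app conventions; check main.py for the endpoints this template actually registers.

import supervisely as sly

api = sly.Api.from_env()  # reads SERVER_ADDRESS and API_TOKEN from the environment
session_id = 12345        # hypothetical task id of the running serving app

# Ask the served model for its info dict (returned by get_session_info)
info = api.task.send_request(session_id, "get_session_info", data={})
print(info)

# Request predictions for an image stored on the Supervisely instance
prediction = api.task.send_request(
    session_id, "inference_image_id", data={"image_id": 98765}  # hypothetical image id
)
print(prediction)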

Once you have integrated the serving app for your model, you can use any of the inference interfaces available in the Ecosystem: