update readme
almazgimaev committed Oct 24, 2024
1 parent 889db16 commit 099027e
Showing 1 changed file with 23 additions and 4 deletions.
README.md: 27 changes (23 additions & 4 deletions)
@@ -22,6 +22,15 @@

The Evaluator for Model Benchmark is a versatile application designed to assess the performance of various machine learning models in a consistent and reliable manner. This app provides a streamlined process for evaluating models and generating comprehensive reports that help you understand different metrics and make informed decisions.

The Evaluator app offers a range of evaluation metrics, including precision, recall, F1 score, mAP, and more. The app also includes a **Model Comparison** feature that allows you to compare the performance of multiple models side by side.
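
For intuition, here is a minimal, illustrative sketch of how precision, recall, and F1 relate to each other, using scikit-learn on toy labels. This is not the app's internal code; detection metrics such as mAP additionally require IoU-based matching of predicted and ground-truth objects, which the app performs for you.

```python
# Toy illustration only (assumes scikit-learn is installed), not the app's internals.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]  # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1]  # model predictions (one false negative)

print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP) -> 1.0
print("recall:   ", recall_score(y_true, y_pred))      # TP / (TP + FN) -> 0.75
print("f1:       ", f1_score(y_true, y_pred))          # harmonic mean -> ~0.86
```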

**Changelog:**

- **v0.1.0** – Public release (for object detection task type)
- **v0.1.2** – Support for instance segmentation task type
- **v0.1.4** – Speedtest benchmark added
- **v0.1.15** – Model Comparison feature added

## Preparation

Before running the Evaluator for Model Benchmark, please ensure that you have the following:
@@ -31,10 +40,20 @@ Before running the Evaluator for Model Benchmark, please ensure that you have the following:

## How To Run

- **Step 1:** Open the app from the Supervisely Ecosystem.
+ **Step 1:** Open and launch the app from the Supervisely Ecosystem.

**Step 2:**

- _Model Evaluation_:

**Step 2.1:** Select the Ground Truth project and the model you want to evaluate.

**Step 2.2:** Press the “Evaluate” button to start the evaluation process. After the evaluation is complete, you can find a link to the report in the app’s interface.

- _Model Comparison_:

- **Step 2:** Select the project you wish to evaluate.
+ **Step 2.1:** Select the folder with the Ground Truth project name.

- **Step 3:** Choose the model you want to evaluate from the list of served models.
+ **Step 2.2:** Select one or more evaluation folders with the model name.

- **Step 4:** Start the evaluation process by clicking the “Run” button. The app will process the data and evaluate the model(s) based on the selected benchmarks. You can monitor the progress in the app’s interface.
+ **Step 2.3:** Press the “Compare” button to start the comparison process. After the comparison is complete, you can find a link to the report in the app’s interface.
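
To make the Model Comparison output concrete, below is a tiny, hypothetical sketch in plain Python (the metric values are invented for illustration) of what a side-by-side comparison conveys: the same metrics laid out per model so differences are easy to spot. The app itself generates a much richer interactive report.

```python
# Hypothetical metric values for two models; the real report is generated by the app.
results = {
    "model_a": {"mAP": 0.52, "precision": 0.71, "recall": 0.64, "f1": 0.67},
    "model_b": {"mAP": 0.57, "precision": 0.69, "recall": 0.73, "f1": 0.71},
}

metrics = ["mAP", "precision", "recall", "f1"]
print(f"{'metric':<10}" + "".join(f"{name:>10}" for name in results))
for m in metrics:
    print(f"{m:<10}" + "".join(f"{results[name][m]:>10.2f}" for name in results))
```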
