From 764b6f338c6d0a421658351e881dfef8424c1c19 Mon Sep 17 00:00:00 2001
From: ChristopherSpelt
Date: Wed, 20 Mar 2024 10:15:57 +0100
Subject: [PATCH] Add VerifyML to shortlist and reorder shortlist

---
 docs/Projects/TAD/tools.md | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/docs/Projects/TAD/tools.md b/docs/Projects/TAD/tools.md
index 79c06ea7..ee0dd360 100644
--- a/docs/Projects/TAD/tools.md
+++ b/docs/Projects/TAD/tools.md
@@ -19,19 +19,18 @@ Links:
 
 ## To investigate further
 
-### AI Assessment Tool Belgium
+### VerifyML
 
-**What is it?** The tool is based on the
-[ALTAI recommendations](https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)
-published by the European Commission.
+**What is it?** VerifyML is an opinionated, open-source toolkit and workflow to help companies implement
+human-centric AI practices. It appears to be largely equivalent to AI Verify.
 
-**Why interesting?** Although it only includes questionnaires it does give an interesting way
-of reporting the end results. Also this project can still be expanded with technical tests.
+**Why interesting?** The functionality of this toolkit closely matches that of AI Verify.
+It takes a "git and code first" approach and supports automatic generation of model cards.
 
-**Remarks** Does not include any technical tests at this point.
+**Remarks** The code appears to have last been updated 2 years ago.
 
-Links: [ALTAI ai4belgium Homepage](https://altai.ai4belgium.be/),
-[Altai Github](https://github.com/AI4Belgium/ai-assessment-tool).
+Links: [VerifyML](https://www.verifyml.com/),
+[VerifyML GitHub](https://github.com/cylynx/verifyml).
 
 ### IBM Research 360 Toolkit
 
@@ -67,6 +66,11 @@ Links:
 
 ## Interesting to mention
 
+* [AI Assessment Tool Belgium](https://altai.ai4belgium.be/). The tool is based on the
+[ALTAI recommendations](https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)
+published by the European Commission. Although it only includes questionnaires, it does give an interesting way
+of reporting the end results. It does not include any technical tests at this point.
+
 * [What-if](https://github.com/pair-code/what-if-tool). Provides interface for expanding understanding of a black-box classifaction or regression ML model. Can be accessed through TensorBoard or as an extension in a Jupyter or Colab notebook. Does not seem to be an active codebase.