Commit f0b30ca: Updated docs

ctuning-admin committed Jan 29, 2024 (1 parent: 8d0562d)
Showing 12 changed files with 374 additions and 9 deletions.
19 changes: 19 additions & 0 deletions cm-mlops/script/app-image-classification-onnx-py/README.md
@@ -12,6 +12,7 @@
* [ Run this script via Docker (beta)](#run-this-script-via-docker-(beta))
* [Customization](#customization)
* [ Variations](#variations)
* [ Input description](#input-description)
* [ Script flags mapped to environment](#script-flags-mapped-to-environment)
* [ Default environment](#default-environment)
* [Script workflow, dependencies and native scripts](#script-workflow-dependencies-and-native-scripts)
@@ -114,6 +115,18 @@ ___
</details>


#### Input description

* --**input** - Path to a JPEG image to classify
* --**output** - Output directory (optional)
* --**j** - Print JSON output

**The above CLI flags can be used in the Python CM API as follows:**

```python
r = cm.access({..., "input": ...})
```
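
A sketch of a complete call through the Python CM API is shown below; the `tags` value and the file paths are illustrative assumptions rather than values taken from this README, while the other keys mirror the CLI flags above:

```python
import cmind

# Illustrative sketch only: run this script via the CM Python API.
# The 'tags' value and the paths are assumptions; adjust them to your setup.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,image-classification,onnx,python',  # assumed tags for this script
                  'input': 'computer_mouse.jpg',                   # --input (placeholder image)
                  'output': '.',                                   # --output (optional)
                  'j': True,                                       # mirrors the --j flag (print JSON output)
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])
```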

#### Script flags mapped to environment
<details>
<summary>Click here to expand this section.</summary>
@@ -159,6 +172,10 @@ ___
* `if (USE_CUDA == True)`
* CM names: `--adr.['cuda']...`
- CM script: [get-cuda](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-cuda)
* get,cudnn
* `if (USE_CUDA == True)`
* CM names: `--adr.['cudnn']...`
- CM script: [get-cudnn](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-cudnn)
* get,dataset,imagenet,image-classification,original
- CM script: [get-dataset-imagenet-val](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-dataset-imagenet-val)
* get,dataset-aux,imagenet-aux,image-classification
@@ -174,9 +191,11 @@ ___
- CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib)
* get,generic-python-lib,_onnxruntime
* `if (USE_CUDA != True)`
* CM names: `--adr.['onnxruntime']...`
- CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib)
* get,generic-python-lib,_onnxruntime_gpu
* `if (USE_CUDA == True)`
* CM names: `--adr.['onnxruntime']...`
- CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib)
1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-image-classification-onnx-py/customize.py)***
1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-image-classification-onnx-py/_cm.yaml)
20 changes: 17 additions & 3 deletions cm-mlops/script/app-mlperf-inference-cpp/README.md
@@ -106,9 +106,11 @@ ___
<details>
<summary>Click here to expand this section.</summary>

* `_batch-size.#`
- Environment variables:
- *CM_MLPERF_LOADGEN_MAX_BATCHSIZE*: `#`
* `_multistream,resnet50`
- Workflow:
* `_multistream,retinanet`
- Workflow:
* `_offline,resnet50`
- Workflow:
* `_resnet50,multistream`
- Workflow:
@@ -120,6 +122,18 @@ ___
</details>


* Group "**batch-size**"
<details>
<summary>Click here to expand this section.</summary>

* `_batch-size.#`
- Environment variables:
- *CM_MLPERF_LOADGEN_MAX_BATCHSIZE*: `#`
- Workflow:

</details>
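
The `#` in `_batch-size.#` is a placeholder: the value appended after the dot is used as `CM_MLPERF_LOADGEN_MAX_BATCHSIZE`. A minimal sketch of selecting this variation through the Python CM API follows; the base script tags are an assumption used only for illustration:

```python
import cmind

# Sketch only: the '_batch-size.32' variation should set CM_MLPERF_LOADGEN_MAX_BATCHSIZE=32.
# The base tags 'app,mlperf,inference,cpp' are assumed, not taken from this README.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlperf,inference,cpp,_batch-size.32',
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])
```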


* Group "**device**"
<details>
<summary>Click here to expand this section.</summary>
1 change: 1 addition & 0 deletions cm-mlops/script/app-mlperf-inference-reference/README.md
@@ -493,6 +493,7 @@ ___
* `_sdxl`
- Environment variables:
- *CM_MODEL*: `stable-diffusion-xl`
- *CM_NUM_THREADS*: `1`
- Workflow:
1. ***Read "deps" on other CM scripts***
* get,generic-python-lib,_package.diffusers
2 changes: 2 additions & 0 deletions cm-mlops/script/app-mlperf-inference/README.md
@@ -132,6 +132,7 @@ ___
<summary>Click here to expand this section.</summary>

* `_cpp`
- Aliases: `_mil`
- Environment variables:
- *CM_MLPERF_CPP*: `yes`
- *CM_MLPERF_IMPLEMENTATION*: `cpp`
@@ -745,6 +746,7 @@ ___
#### New environment keys auto-detected from customize

* `CM_MLPERF_ACCURACY_RESULTS_DIR`
* `CM_MLPERF_LOADGEN_COMPLIANCE_TEST`
___
### Maintainers

@@ -12,6 +12,7 @@
* [ Run this script via Docker (beta)](#run-this-script-via-docker-(beta))
* [Customization](#customization)
* [ Variations](#variations)
* [ Script flags mapped to environment](#script-flags-mapped-to-environment)
* [ Default environment](#default-environment)
* [Script workflow, dependencies and native scripts](#script-workflow-dependencies-and-native-scripts)
* [Script output](#script-output)
@@ -48,12 +49,14 @@ ___

#### Run this script from command line

1. `cm run script --tags=benchmark,run,natively,all,any,mlperf,mlperf-implementation,implementation,mlperf-models[,variations] `
1. `cm run script --tags=benchmark,run,natively,all,any,mlperf,mlperf-implementation,implementation,mlperf-models[,variations] [--input_flags]`

2. `cmr "benchmark run natively all any mlperf mlperf-implementation implementation mlperf-models[ variations]" `
2. `cmr "benchmark run natively all any mlperf mlperf-implementation implementation mlperf-models[ variations]" [--input_flags]`

* `variations` can be seen [here](#variations)

* `input_flags` can be seen [here](#script-flags-mapped-to-environment)

#### Run this script from Python

<details>
@@ -88,7 +91,7 @@ Use this [online GUI](https://cKnowledge.org/cm-gui/?tags=benchmark,run,natively

#### Run this script via Docker (beta)

`cm docker script "benchmark run natively all any mlperf mlperf-implementation implementation mlperf-models[ variations]" `
`cm docker script "benchmark run natively all any mlperf mlperf-implementation implementation mlperf-models[ variations]" [--input_flags]`

___
### Customization
@@ -188,13 +191,36 @@ ___
#### Default variations

`_performance-only`

#### Script flags mapped to environment
<details>
<summary>Click here to expand this section.</summary>

* `--backends=value` &rarr; `BACKENDS=value`
* `--category=value` &rarr; `CATEGORY=value`
* `--devices=value` &rarr; `DEVICES=value`
* `--division=value` &rarr; `DIVISION=value`
* `--models=value` &rarr; `MODELS=value`
* `--power_server=value` &rarr; `POWER_SERVER=value`
* `--power_server_port=value` &rarr; `POWER_SERVER_PORT=value`

**The above CLI flags can be used in the Python CM API as follows:**

```python
r = cm.access({..., "backends": ...})
```

</details>

#### Default environment

<details>
<summary>Click here to expand this section.</summary>

These keys can be updated via `--env.KEY=VALUE`, via the `env` dictionary in `@input.json`, or via script flags.

* DIVISION: `open`
* CATEGORY: `edge`

</details>
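
As a hedged sketch, these defaults could be overridden from the Python CM API via the `env` dictionary mentioned above (on the command line the equivalent would be `--env.DIVISION=...` and `--env.CATEGORY=...`, or the `--division`/`--category` script flags listed earlier); `closed` and `datacenter` below are example values only:

```python
import cmind

# Sketch: override the default DIVISION/CATEGORY environment keys via the 'env' dictionary.
# 'closed' and 'datacenter' are example values, not defaults from this README.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'benchmark,run,natively,all,any,mlperf,mlperf-implementation,implementation,mlperf-models',
                  'env': {'DIVISION': 'closed', 'CATEGORY': 'datacenter'},
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])
```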

@@ -218,7 +244,7 @@ ___

___
### Script output
`cmr "benchmark run natively all any mlperf mlperf-implementation implementation mlperf-models[,variations]" -j`
`cmr "benchmark run natively all any mlperf mlperf-implementation implementation mlperf-models[,variations]" [--input_flags] -j`
#### New environment keys (filter)

#### New environment keys auto-detected from customize
146 changes: 146 additions & 0 deletions cm-mlops/script/create-patch/README.md
@@ -0,0 +1,146 @@
<details>
<summary>Click here to see the table of contents.</summary>

* [About](#about)
* [Summary](#summary)
* [Reuse this script in your project](#reuse-this-script-in-your-project)
* [ Install CM automation language](#install-cm-automation-language)
* [ Check CM script flags](#check-cm-script-flags)
* [ Run this script from command line](#run-this-script-from-command-line)
* [ Run this script from Python](#run-this-script-from-python)
* [ Run this script via GUI](#run-this-script-via-gui)
* [ Run this script via Docker (beta)](#run-this-script-via-docker-(beta))
* [Customization](#customization)
* [ Script flags mapped to environment](#script-flags-mapped-to-environment)
* [ Default environment](#default-environment)
* [Script workflow, dependencies and native scripts](#script-workflow-dependencies-and-native-scripts)
* [Script output](#script-output)
* [New environment keys (filter)](#new-environment-keys-(filter))
* [New environment keys auto-detected from customize](#new-environment-keys-auto-detected-from-customize)
* [Maintainers](#maintainers)

</details>

*Note that this README is automatically generated - don't edit!*

### About

#### Summary

* CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)*
* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch)*
* CM meta description for this script: *[_cm.yaml](_cm.yaml)*
* CM "database" tags to find this script: *create,patch*
* Output cached? *False*
___
### Reuse this script in your project

#### Install CM automation language

* [Installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md)
* [CM intro](https://doi.org/10.5281/zenodo.8105339)

#### Pull CM repository with this automation

```cm pull repo mlcommons@ck```


#### Run this script from command line

1. `cm run script --tags=create,patch [--input_flags]`

2. `cmr "create patch" [--input_flags]`

* `input_flags` can be seen [here](#script-flags-mapped-to-environment)

#### Run this script from Python

<details>
<summary>Click here to expand this section.</summary>

```python

import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'create,patch',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })

if r['return']>0:
    print(r['error'])

```

</details>


#### Run this script via GUI

```cmr "cm gui" --script="create,patch"```

Use this [online GUI](https://cKnowledge.org/cm-gui/?tags=create,patch) to generate CM CMD.

#### Run this script via Docker (beta)

`cm docker script "create patch" [--input_flags]`

___
### Customization


#### Script flags mapped to environment
<details>
<summary>Click here to expand this section.</summary>

* `--new=value` &rarr; `CM_CREATE_PATCH_NEW=value`
* `--old=value` &rarr; `CM_CREATE_PATCH_OLD=value`

**The above CLI flags can be used in the Python CM API as follows:**

```python
r = cm.access({..., "new": ...})
```
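
A minimal sketch of calling this script from the Python CM API with these two flags (the `tags` come from this README, while the directory paths are hypothetical placeholders):

```python
import cmind

# Sketch: '--new' and '--old' map to CM_CREATE_PATCH_NEW / CM_CREATE_PATCH_OLD.
# 'dir_new' and 'dir_old' are placeholder paths used only for illustration.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'create,patch',
                  'new': 'dir_new',
                  'old': 'dir_old',
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])
```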

</details>

#### Default environment

<details>
<summary>Click here to expand this section.</summary>

These keys can be updated via `--env.KEY=VALUE`, via the `env` dictionary in `@input.json`, or via script flags.


</details>

___
### Script workflow, dependencies and native scripts

<details>
<summary>Click here to expand this section.</summary>

1. Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch/_cm.yaml)
1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch/customize.py)***
1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch/_cm.yaml)
1. ***Run native script if it exists***
1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch/_cm.yaml)
1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch/customize.py)***
1. Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/create-patch/_cm.yaml)
</details>

___
### Script output
`cmr "create patch" [--input_flags] -j`
#### New environment keys (filter)

#### New environment keys auto-detected from customize

___
### Maintainers

* [Open MLCommons taskforce on automation and reproducibility](https://github.com/mlcommons/ck/blob/master/docs/taskforce.md)