Merge pull request #78 from Catking14/mlperf-inference-results-scc24
MLPerf inference results scc24 NTHU
arjunsuresh authored Nov 18, 2024
2 parents 84417a0 + f0deba5 commit 4936f06
Showing 66 changed files with 28,398 additions and 0 deletions.
1 change: 1 addition & 0 deletions open/NTHU/code/stable-diffusion-xl/README.md
@@ -0,0 +1 @@
TBD
@@ -0,0 +1,3 @@
| Model | Scenario | Accuracy | Throughput | Latency (in ms) |
|---------------------|------------|-----------------------|--------------|-------------------|
| stable-diffusion-xl | offline | (16.40356, 235.76925) | 0.529 | - |
@@ -0,0 +1,7 @@
{
"starting_weights_filename": "https://github.com/mlcommons/inference/tree/master/text_to_image#download-model",
"retraining": "no",
"input_data_types": "fp32",
"weight_data_types": "fp32",
"weight_transformations": "no"
}
@@ -0,0 +1,57 @@
This experiment was generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
* MLCommons CM version: 3.1.0

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo mlcommons@cm4mlops --checkout=9bdfff592f2d9607660fb20125ec620ebedc3758

cm run script \
--tags=run-mlperf,inference,_r4.1-dev,_short,_scc24-base \
--model=sdxl \
--implementation=reference \
--framework=pytorch \
--category=datacenter \
--scenario=Offline \
--execution_mode=test \
--device=cuda \
--quiet \
--precision=float16 \
--clean
```
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
you should simply re-pull mlcommons@cm4mlops without the checkout commit and clean the CM cache as follows:*

```bash
cm rm repo mlcommons@cm4mlops
cm pull repo mlcommons@cm4mlops
cm rm cache -f
```

## Results

Platform: Kuai-Kuai_702-reference-gpu-pytorch-v2.5.1-scc24-base_cu124

Model Precision: fp32

### Accuracy Results
`CLIP_SCORE`: `16.40356`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`
`FID_SCORE`: `235.76925`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`
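
For quick reference, the following minimal Python sketch (not part of the CM workflow) compares the scores reported above against the quoted closed-division bounds; all numbers are copied verbatim from this summary.

```python
# Illustrative check only: bounds and scores are copied from the summary above.
closed_division_bounds = {
    "CLIP_SCORE": (31.68632, 31.81332),
    "FID_SCORE": (23.01086, 23.95008),
}
reported_scores = {"CLIP_SCORE": 16.40356, "FID_SCORE": 235.76925}

for metric, value in reported_scores.items():
    low, high = closed_division_bounds[metric]
    status = "within" if low <= value <= high else "outside"
    print(f"{metric} = {value} is {status} the closed-division range [{low}, {high}]")
```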

### Performance Results
`Samples per second`: `0.528528`
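
As a rough sanity check, the reported Offline throughput can be inverted to estimate the average time per generated image; this is only an illustrative calculation based on the number above, not an additional measured metric.

```python
# Average seconds per generated sample, derived from the reported throughput.
samples_per_second = 0.528528  # value reported above
seconds_per_sample = 1.0 / samples_per_second
print(f"~{seconds_per_sample:.3f} s per sample")  # roughly 1.892 s
```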
