[mlperf][pkgci] Update punet-fp8 with reduction dim as last dim (iree-org#19316)

We have changes in sharktank that convert the reduction dim of the custom
attention to be the fastest-varying (last) dimension. This makes it more
uniform with the FP16 and canonical attention forms, and should make the
relevant optimizations kick in more easily down the line.

Additionally, this prefetches the change such that we do not break the
upcoming sharktank/mlperf bots and runs.
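As a hedged illustration (not from the commit itself): moving the reduction dim to be the last, fastest-varying dimension is just an axis reorder and does not change the reduced result. A minimal NumPy sketch, where the tensor shape and axis names are assumptions for the example:

```python
import numpy as np

# Toy tensor with an assumed layout (B, K, N), where K is the reduction dim.
x = np.arange(24).reshape(2, 3, 4)

# Reorder so the reduction dim K becomes the last (fastest-varying) dim: (B, N, K).
x_last = np.moveaxis(x, 1, -1)

# Reducing over K gives the same result in either layout.
assert np.allclose(x.sum(axis=1), x_last.sum(axis=-1))
```

In the diff below, the analogous change is that the `reduction` tile size of 64 moves from the second-to-last to the last entry of the lowering config, matching the new dimension order.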

Signed-off-by: Stanley Winata <[email protected]>
raikonenfnu authored Nov 27, 2024
1 parent 615e7ff commit 7adf8c1
Showing 2 changed files with 2 additions and 2 deletions.
@@ -76,7 +76,7 @@ transform.named_sequence @match_attention_f8(%attention: !transform.any_op {tran
transform.iree.match.cast_compatible_type %in0 = tensor<?x?x?x?xf8E4M3FNUZ> : !transform.any_value

%config = transform.param.constant #iree_codegen.compilation_info<
-    lowering_config = #iree_gpu.lowering_config<{workgroup = [1, 1, 64, 0, 0, 0], reduction=[0, 0, 0, 0, 64, 0], promote_operands = [1, 2]}>,
+    lowering_config = #iree_gpu.lowering_config<{workgroup = [1, 1, 64, 0, 0, 0], reduction=[0, 0, 0, 0, 0, 64], promote_operands = [1, 2]}>,
translation_info = #iree_codegen.translation_info<pipeline = LLVMGPUVectorDistribute
workgroup_size = [64, 4]
subgroup_size = 64 ,
@@ -121,7 +121,7 @@
)

sdxl_punet_int8_fp8_mlir = fetch_source_fixture(
-    "https://sharkpublic.blob.core.windows.net/sharkpublic/sai/sdxl-punet/11-8-2024/punet_fp8.mlir",
+    "https://sharkpublic.blob.core.windows.net/sharkpublic/stan/sdxl-punet/11-26-2024/punet_fp8.mlir",
group="sdxl_punet_int8_fp8",
)

