
ConvolutionBackwardModule2DStrided_basic fails on the current codebase #32

Open

chudur-budur opened this issue Mar 8, 2024 · 0 comments
ConvolutionBackwardModule2DStrided_basic fails on the current codebase.

Similar to issue #31.

(torch-mlir) $USER@$HOST:/localdisk/work/$USER/torch-mlir/projects/pt1|(cpu-proto)> python -m e2e_testing.main -c "cpuproto" -f "ConvolutionBackwardModule2DStrided_basic" -s -v --enable-timer
in shape:  torch.Size([1, 128, 128])
 w shape:  (8192, 16384)
 b shape:  (8192,)
golden_trace: elapsed 0.1507 ms
Compiling ConvolutionBackwardModule2DStrided_basic...
compile: elapsed 0.0023 ms
Running ConvolutionBackwardModule2DStrided_basic...
JIT: elapsed 21.2993 ms
TorchDynamoTestConfig.run(): elapsed 21.3016 ms
run: elapsed 21.3029 ms
FAIL - "ConvolutionBackwardModule2DStrided_basic"

Unexpected outcome summary: (cpuproto)

****** Failed tests - 1 tests
    FAIL - "ConvolutionBackwardModule2DStrided_basic"
        Runtime error: Traceback (most recent call last):
          File "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/framework.py", line 384, in compile_and_run_test
            trace = config.run(compiled, golden_trace)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
          File "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/torchdynamo.py", line 157, in run
            module = jit(artifact,
                     ^^^^^^^^^^^^^
          File "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/torchdynamo.py", line 137, in jit
            return _lower_mlir_module(verbose, output_type, mlir_module, ir_file)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
          File "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/__init__.py", line 291, in _lower_mlir_module
            run_pipeline_with_repro_report(
          File "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/compiler_utils.py", line 92, in run_pipeline_with_repro_report
            raise TorchMlirCompilerError(trimmed_message) from None
        torch_mlir.compiler_utils.TorchMlirCompilerError: Lowering Torch Backend IR -> Linalg-on-Tensors Backend IR failed with the following diagnostics:


        python exception: Failure while executing pass pipeline:
        error: "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/torchdynamo.py":121:0: 'linalg.transpose' op size of permutation 2 does not match the argument rank 4
        note: "/localdisk/work/$USER/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/torchdynamo.py":121:0: see current operation:
        %185 = "linalg.transpose"(%34, %184) <{permutation = array<i64: 1, 0>}> ({
        ^bb0(%arg3: f32, %arg4: f32):
        "linalg.yield"(%arg3) : (f32) -> ()
        }) : (tensor<1x2x8x8xf32>, tensor<2x1x8x8xf32>) -> tensor<2x1x8x8xf32>

        For Torch-MLIR developers, the error can be reproduced with:
        $ torch-mlir-opt -pass-pipeline='builtin.module(torch-backend-to-linalg-on-tensors-backend-pipeline)' /tmp/ConvolutionBackwardModule2DStrided.mlir
        Add '-mlir-print-ir-after-all -mlir-disable-threading' to get the IR dump for debugging purpose.



Summary:
    Failed: 1
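For reference, the diagnostic shows a `linalg.transpose` with a rank-2 permutation (`array<i64: 1, 0>`) being applied to a rank-4 operand (`tensor<1x2x8x8xf32>`), which is what the verifier rejects during the Torch Backend IR -> Linalg-on-Tensors lowering. Below is a minimal eager-mode sketch of the kind of workload this test exercises; the shapes and arguments are assumptions chosen to match the tensors in the diagnostic, not the test's actual definition.

```python
import torch

# Hypothetical sketch of the strided convolution_backward call behind
# ConvolutionBackwardModule2DStrided_basic. Shapes are assumptions picked to
# line up with the 1x2x8x8 tensor seen in the diagnostic.
input_vec = torch.randn(1, 2, 8, 8)   # N=1, C_in=2, H=W=8
weight = torch.randn(2, 2, 3, 3)      # C_out=2, C_in=2, 3x3 kernel
grad_out = torch.randn(1, 2, 4, 4)    # output shape of a stride-2, padding-1 conv

grad_input, grad_weight, grad_bias = torch.ops.aten.convolution_backward(
    grad_out, input_vec, weight,
    [2],                  # bias_sizes (C_out)
    [2, 2],               # stride -- the strided case that fails to lower
    [1, 1],               # padding
    [1, 1],               # dilation
    False,                # transposed
    [0, 0],               # output_padding
    1,                    # groups
    [True, True, True],   # output_mask: compute all three gradients
)
print(grad_input.shape, grad_weight.shape, grad_bias.shape)
```

This runs fine in eager mode; the failure only appears when the backward graph is lowered through the torch-backend-to-linalg-on-tensors-backend-pipeline, as shown in the repro command above.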