[AutoBump] Merge with 513d89c1 (May 17, needs LLVM bump) (38) #271

Merged: 25 commits into feature/backport_ea1_ops from bump_to_513d89c1 on Sep 6, 2024

Conversation

@mgehre-amd (Collaborator) commented on Aug 27, 2024:

Needs LLVM to be Xilinx/llvm-project#314 or later

aartbik and others added 21 commits May 13, 2024 15:34
Downstream MLIR sparsifier has some (rudimentary) support for ReLU now,
and this test can now be enabled with correct end-to-end behavior.

Also see discussion at:

https://discourse.llvm.org/t/min-max-abs-relu-recognition-starter-project/78918
This commit adds OnnxToTorch support for the ReduceLogSumExp op.
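For reference, ReduceLogSumExp computes log(Σ exp(x)) over the reduced axes, so the lowering can be composed from existing exp, sum, and log ops. A minimal numeric sketch of the operator's semantics (plain C++, not the actual conversion pattern; the max-shift for numerical stability is an extra assumption here):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// log(sum(exp(x))) over a non-empty vector, shifted by the max so that
// exp() cannot overflow for large inputs.
double reduceLogSumExp(const std::vector<double> &x) {
  double m = *std::max_element(x.begin(), x.end());
  double sum = 0.0;
  for (double v : x)
    sum += std::exp(v - m);
  return m + std::log(sum);
}
```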
As mentioned in issue llvm#3290, the difference between onnx.Transpose in versions 1 and 13 is minimal, so both versions can be supported with the same conversion pattern (sketched below).
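If it helps to see the mechanism: OnnxToTorch handlers are registered with a since-version, so covering both opsets amounts to registering the one handler from version 1 onward. A sketch with the handler body elided (the real registration lives in the OnnxToTorch conversion files):

```cpp
// Registering with since-version 1 makes the same conversion apply to
// onnx.Transpose from opset 1 through 13 and beyond.
patterns.onOp(
    "Transpose", 1,
    [](OpBinder binder, ConversionPatternRewriter &rewriter) -> LogicalResult {
      // ... unchanged lowering: read the `perm` attribute and emit the
      // corresponding torch permute/transpose ops ...
      return success();
    });
```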
During constant folding, a new constant op is created according to the original op's result type.

See the code in TorchDialect.cpp:
```cpp
Operation *TorchDialect::materializeConstant(OpBuilder &builder,
                                             Attribute value, Type type,
                                             Location loc) {
  if (auto integerType = dyn_cast<Torch::IntType>(type))
    return builder.create<Torch::ConstantIntOp>(loc, cast<IntegerAttr>(value));

  if (auto floatType = dyn_cast<Torch::FloatType>(type))
    return builder.create<Torch::ConstantFloatOp>(loc, cast<FloatAttr>(value));

  if (auto numberType = dyn_cast<Torch::NumberType>(type)) {
    if (auto floatValue = dyn_cast<mlir::FloatAttr>(value)) {
      return builder.create<Torch::ConstantNumberOp>(loc, floatValue);
    } else if (auto intValue = dyn_cast<mlir::IntegerAttr>(value)) {
      return builder.create<Torch::ConstantNumberOp>(loc, intValue);
    }
  }

  if (isa<Torch::BoolType>(type)) {
    return builder.create<Torch::ConstantBoolOp>(loc, cast<IntegerAttr>(value));
  }

  if (isa<Torch::NoneType>(type))
    return builder.create<ConstantNoneOp>(loc);

  if (auto stringAttr = dyn_cast<StringAttr>(value))
    return builder.create<ConstantStrOp>(loc, stringAttr);

  if (auto elementsAttr = dyn_cast<ElementsAttr>(value)) {
    // Only !torch.vtensor can be constant folded. !torch.tensor has
    // non-trivial aliasing semantics which prevent deduplicating it.
    assert(isa<ValueTensorType>(type) && "should be a vtensor type!");
    return builder.create<ValueTensorLiteralOp>(loc, elementsAttr);
  }

  return nullptr;
}
```
So when the op has a tensor result type, it must be a `ValueTensorType` because of the **assert** statement. However, many fold methods in TorchOps.cpp only check for `BaseTensorType`, which matches `!torch.tensor` as well as `!torch.vtensor`.
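A minimal sketch of the hazard (hypothetical op name `AtenExampleOp`; not code from this PR): a fold guarded only on `BaseTensorType` can hand back an `ElementsAttr` for a `!torch.tensor` result, which then trips the assert above when the constant is materialized.

```cpp
// Hypothetical folder, for illustration only.
OpFoldResult AtenExampleOp::fold(FoldAdaptor adaptor) {
  // Too weak: BaseTensorType matches !torch.tensor as well as
  // !torch.vtensor, so the folded attribute can reach
  // materializeConstant with a non-vtensor result type.
  //   auto resultTy = dyn_cast<BaseTensorType>(getType());

  // Safe: only fold when the result is a value tensor.
  auto resultTy = dyn_cast<ValueTensorType>(getType());
  if (!resultTy)
    return nullptr;
  return adaptor.getSelf(); // e.g. an identity fold
}
```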
…#3348)

* Don't decompose `aten.amax` on the `stablehlo` backend, because it can be lowered to `stablehlo.reduce` directly.
* Lower `aten.max.dim` to `stablehlo.reduce` applying `max` when `AtenMaxDimOp.getIndices()` has no users; that path is simpler (see the sketch after this list).
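A sketch of the guard being described (illustrative only; `op` is the `AtenMaxDimOp` inside the conversion pattern): MLIR's `Value::use_empty()` reports whether the indices result is actually consumed.

```cpp
// Take the simpler path only when nobody reads the indices result.
if (op.getIndices().use_empty()) {
  // Emit a single stablehlo.reduce whose region applies max over the
  // values; no argmax-style index computation is needed.
} else {
  // Fall back to the full lowering that also produces indices.
}
```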
Similar to llvm#2824, we were seeing some assertion failures after checks around folders were tightened up in LLVM (llvm/llvm-project#75887). This PR essentially moves the logic that used to be applied at the LLVM level into the folder, which seems to be the suggested fix (see the sketch below).
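As a sketch of that general pattern (hypothetical `SomeOp` with a single `operand`; the real folders live in TorchOps.cpp), the folder now refuses to fold on a type mismatch instead of relying on the tightened LLVM-side checks to catch it afterwards:

```cpp
// Hypothetical folder: bail out rather than return an attribute whose
// type differs from the op's result type.
OpFoldResult SomeOp::fold(FoldAdaptor adaptor) {
  auto operand = dyn_cast_or_null<TypedAttr>(adaptor.getOperand());
  if (!operand || operand.getType() != getType())
    return nullptr; // refuse to fold; leave the op in place
  return operand;
}
```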
- Refactors OnnxBackend to be generic and consume any Torch backend.

---------

Signed-off-by: Suraj Sudhir <[email protected]>
@mgehre-amd (Collaborator, Author) commented:

TODO: Should work with llvm branch bump_to_5901d400

@mgehre-amd changed the title from "[AutoBump] Merge with 513d89c1 (May 17) (38)" to "[AutoBump] Merge with 513d89c1 (May 17, needs LLVM bump) (38)" on Aug 28, 2024
Base automatically changed from bump_to_08355be5 to feature/backport_ea1_ops on September 4, 2024
@jorickert merged commit a51723a into feature/backport_ea1_ops on Sep 6, 2024 (4 checks passed)
@jorickert deleted the bump_to_513d89c1 branch on September 6, 2024