[FXML-3548] Bump torch mlir #141

Merged
Commits
219 commits
7c6961b
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda…
qingyunqu Jun 14, 2023
bba0f58
[Stablehlo] add conversion for AtenFlipOp (#2163)
qingyunqu Jun 15, 2023
ab8b23e
build: manually update PyTorch version
vivekkhandelwal1 Jun 15, 2023
45e2188
update PyTorch version to 2.1.0.dev20230615 (#2238)
silvasean Jun 15, 2023
6f42001
TorchToTosa: Cast float constants to correct type to support bfloat16…
mgehre-amd Jun 16, 2023
f6a6cfe
[MLIR][TORCH] Add support for negative index values for index.Tensor …
vivekkhandelwal1 Jun 16, 2023
145055b
update PyTorch version to 2.1.0.dev20230617 (#2241)
silvasean Jun 17, 2023
9b4e369
update PyTorch version to 2.1.0.dev20230618 (#2244)
silvasean Jun 18, 2023
860a2d4
update PyTorch version to 2.1.0.dev20230619 (#2245)
silvasean Jun 19, 2023
ebda611
[build] Update llvm tag to 3f8d8c1a
Jun 20, 2023
0244f54
Add typeids to CAPI. (#2253)
makslevental Jun 21, 2023
a0d2789
[MLIR][TORCH] Add e2e support for aten.alias
Abhishek-Varma Jun 20, 2023
c91c67e
update PyTorch version to 2.1.0.dev20230621 (#2247)
silvasean Jun 21, 2023
4fd4477
[Torch Dialect] require hasSizes when decompose aten.amax (#2248)
qingyunqu Jun 22, 2023
96b14e9
[Torch Dialect] Support aten.device.with_index (#2254)
qingyunqu Jun 22, 2023
6f2bf31
Fix single-element tuple construction in abstract interp library (#2258)
ramiro050 Jun 22, 2023
39201a4
[Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp'…
qingyunqu Jun 23, 2023
64afc08
[Torch Dialect] add missing one_hot dtype function (#2143)
qingyunqu Jun 23, 2023
fbb5ed5
update PyTorch version to 2.1.0.dev20230623 (#2260)
silvasean Jun 23, 2023
ec98ce2
update PyTorch version to 2.1.0.dev20230624 (#2261)
silvasean Jun 24, 2023
f4e7344
update PyTorch version to 2.1.0.dev20230625 (#2263)
silvasean Jun 25, 2023
0548e2e
[Stablehlo] fix promoteType() when input doesn't have DefiningOp (#2262)
qingyunqu Jun 25, 2023
38fb99d
update PyTorch version to 2.1.0.dev20230626 (#2266)
silvasean Jun 26, 2023
a52a2b5
Update LLVM (#2267)
silvasean Jun 26, 2023
1ea2b57
[Torch Dialect] add folder for aten.add (#2264)
qingyunqu Jun 27, 2023
859885c
[Torch Dialect] Support aten.native_dropout (#2259)
qingyunqu Jun 27, 2023
1eb63f3
update PyTorch version to 2.1.0.dev20230627 (#2269)
silvasean Jun 27, 2023
ddd0c06
[TORCH] Fix recompose off by -1 error (#2271)
Jun 27, 2023
8281935
update PyTorch version to 2.1.0.dev20230628 (#2272)
silvasean Jun 28, 2023
449cfb8
[Torch Dialect] add more scalar op folders (#2265)
qingyunqu Jun 29, 2023
facce24
[Bazel] Fix broken Bazel build (#2252)
sjain-stanford Jun 29, 2023
c7fa42b
[Torch Dialect] Add canonicalizer for aten.to.other op (#2273)
Vremold Jun 30, 2023
db1a42d
update PyTorch version to 2.1.0.dev20230630 (#2274)
silvasean Jun 30, 2023
112a2ce
update PyTorch version to 2.1.0.dev20230701 (#2276)
silvasean Jul 1, 2023
157e5e5
update PyTorch version to 2.1.0.dev20230701 (#2278)
silvasean Jul 2, 2023
8c87057
update PyTorch version to 2.1.0.dev20230704 (#2282)
silvasean Jul 4, 2023
7f4084b
update PyTorch version to 2.1.0.dev20230705 (#2284)
silvasean Jul 5, 2023
6c9ba4c
[Torch-to-Linalg] Add dynamic dimension support for BroadcastTo op (#…
Abhishek-Varma Jul 7, 2023
6ac85ee
update PyTorch version to 2.1.0.dev20230707 (#2290)
silvasean Jul 7, 2023
2fdfa04
update PyTorch version to 2.1.0.dev20230708 (#2292)
silvasean Jul 8, 2023
05920f9
update PyTorch version to 2.1.0.dev20230709 (#2293)
silvasean Jul 9, 2023
6a072d4
[Stablehlo] AtenEmptyMemoryFormat remove device cpu check (#2288)
zhekunz2 Jul 10, 2023
1766939
update PyTorch version to 2.1.0.dev20230710 (#2296)
silvasean Jul 10, 2023
bbd3094
update PyTorch version to 2.1.0.dev20230711 (#2299)
silvasean Jul 11, 2023
c23a61f
DecomposeComplexOps: Use static shape if available (#2289)
mgehre-amd Jul 12, 2023
91c6454
Filter out empty strings while generting function signature
nithinsubbiah Jul 12, 2023
f8e75f6
Add make_fx_tosa variant to end2end tests (#2240)
mgehre-amd Jul 13, 2023
50f5b65
update PyTorch version to 2.1.0.dev20230713 (#2303)
silvasean Jul 13, 2023
7f6b72a
[Torch Dialect] add runtime.assert to check constraint when recomposi…
qingyunqu Jul 14, 2023
4838355
TorchToTosa: Legalization for torch.aten.sqrt (#2234)
ttjost Jul 14, 2023
bcbfeec
update PyTorch version to 2.1.0.dev20230714 (#2308)
silvasean Jul 14, 2023
2745550
update PyTorch version to 2.1.0.dev20230715 (#2311)
silvasean Jul 15, 2023
d69b6bd
update PyTorch version to 2.1.0.dev20230716 (#2312)
silvasean Jul 16, 2023
06c9bd0
lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix legalization of compa…
mgehre-amd Jul 17, 2023
ba24a46
update PyTorch version to 2.1.0.dev20230717 (#2315)
silvasean Jul 17, 2023
5706697
[TOSA] Add aten._index_put_impl support (#2031)
Jul 17, 2023
718f53f
Fix handling of `!torch.number` in abstract interpretation library (#…
ramiro050 Jul 17, 2023
0c17997
Don't crash when the input to aten.copy is unranked (#2307)
mgehre-amd Jul 18, 2023
3b56f97
update PyTorch version to 2.1.0.dev20230718 (#2318)
silvasean Jul 18, 2023
a308a54
Fixes Windows DLL crash (#2321)
AyaanShah2204 Jul 19, 2023
c9add6b
update PyTorch version to 2.1.0.dev20230719 (#2323)
silvasean Jul 19, 2023
3f843c8
[torch-dialect] fix aten.type_as op's folder (#2283)
Vremold Jul 20, 2023
0650efe
Conform to Python custom exception api
makslevental Jul 19, 2023
64d7626
Fixes for split tensor and slice (#2314)
mgehre-amd Jul 20, 2023
9535be7
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.n…
Vremold Jul 20, 2023
91a9baa
update PyTorch version to 2.1.0.dev20230720 (#2326)
silvasean Jul 20, 2023
4847563
Clean up verification of calling conventions.
Jul 18, 2023
a20422c
Support `DerefineOp` in `RefinePublicReturn`.
Jul 18, 2023
1e468e8
Fix canonicalization of `torch.prim.TupleUnpack`.
Jul 18, 2023
3ca35b4
TorchToTosa: aten.embedding: Allow indices with any rank (#2327)
mgehre-amd Jul 21, 2023
fb4c54f
update PyTorch version to 2.1.0.dev20230721 (#2331)
silvasean Jul 21, 2023
f0d8b62
update PyTorch version to 2.1.0.dev20230722 (#2333)
silvasean Jul 22, 2023
dd0e91b
update PyTorch version to 2.1.0.dev20230723 (#2335)
silvasean Jul 23, 2023
026e8db
[Stablehlo] add converter for aten.scatter.src op (#2295)
Vremold Jul 24, 2023
238c050
fix cmake torch-mlir-capi linking and bazel build (#2336)
qingyunqu Jul 24, 2023
ef11a77
update PyTorch version to 2.1.0.dev20230724 (#2339)
silvasean Jul 24, 2023
4a96e71
Use `register_buffer` to make `Add_Module` test work on lazy tensor (…
ramiro050 Jul 24, 2023
d57f67e
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_s…
Vremold Jul 25, 2023
31ef08b
[Stablehlo]Add support for AvgPool1dOp (#2268)
JianzheXiao Jul 25, 2023
c56cb53
Ignore constants in the legality error (#2328)
mgehre-amd Jul 25, 2023
0a67411
test/CAPI/CMakeLists.txt: Depend on FileCheck (#2329)
mgehre-amd Jul 25, 2023
398fa0e
build: update llvm tag to 4592543a01609fe
Shukla-Gaurav Jul 21, 2023
c9f2e83
update PyTorch version to 2.1.0.dev20230725 (#2341)
silvasean Jul 25, 2023
991eba2
update PyTorch version to 2.1.0.dev20230726 (#2348)
silvasean Jul 26, 2023
c7c59b5
[Stablehlo] support dynamic shape when convert aten.fill.Scalar (#2349)
qingyunqu Jul 27, 2023
6aeb1f1
update PyTorch version to 2.1.0.dev20230727 (#2352)
silvasean Jul 27, 2023
c19fda4
update PyTorch version to 2.1.0.dev20230728 (#2353)
silvasean Jul 28, 2023
0109bf7
[MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)
vivekkhandelwal1 Jul 28, 2023
16923fd
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tens…
Vremold Jul 29, 2023
fbdcf1e
update PyTorch version to 2.1.0.dev20230729 (#2354)
silvasean Jul 29, 2023
5be26a7
update PyTorch version to 2.1.0.dev20230730 (#2356)
silvasean Jul 30, 2023
4c24472
update PyTorch version to 2.1.0.dev20230731 (#2359)
silvasean Jul 31, 2023
fb52a73
LTC->MLIR Debug Info support (#1922)
GlebKazantaev Aug 2, 2023
a374c39
build: update llvm tag to 41895843
vivekkhandelwal1 Jul 26, 2023
ca1f015
update PyTorch version to 2.1.0.dev20230802 (#2366)
silvasean Aug 2, 2023
68f5201
Change Python version from 3.10 to 3.11 in installation instructions …
ramiro050 Aug 2, 2023
22cb006
Add CITATION file (#2371)
ramiro050 Aug 2, 2023
48f4e8f
Add packaging as an install dependency (#2369)
JackWolfard Aug 3, 2023
6db92d1
[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op …
Vremold Aug 3, 2023
8d4f4cb
update PyTorch version to 2.1.0.dev20230803 (#2372)
silvasean Aug 3, 2023
c0d8248
Prevent failed stable CI job from cancelling nightly jobs (#2373)
ramiro050 Aug 3, 2023
20a2b68
[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (…
Vremold Aug 4, 2023
598d690
update PyTorch version to 2.1.0.dev20230804 (#2377)
silvasean Aug 4, 2023
2fbb4c7
Fix "unused variable 'outType'" warning (#2378)
ramiro050 Aug 4, 2023
38b049e
[Torch Dialect] add support for adaptive_avgpool_1d (#2342)
JianzheXiao Aug 4, 2023
9a1fae9
update PyTorch version to 2.1.0.dev20230805 (#2379)
silvasean Aug 5, 2023
d13b753
update PyTorch version to 2.1.0.dev20230806 (#2380)
silvasean Aug 6, 2023
d9e3553
update PyTorch version to 2.1.0.dev20230807 (#2382)
silvasean Aug 7, 2023
4c12ace
[Torch-Dialect] add canonicalizer for prim::ListConstruct op (#2306)
Vremold Aug 8, 2023
ee6c87e
[MLIR][TORCH] Add support for dtype arg for softmax.int op
vivekkhandelwal1 Aug 4, 2023
5744506
update PyTorch version to 2.1.0.dev20230808 (#2385)
silvasean Aug 8, 2023
3854e9e
[build] Update llvm tag to f580901d
ramiro050 Aug 9, 2023
f0a8f27
build: manually update PyTorch version
vivekkhandelwal1 Aug 10, 2023
e61ef1e
[MLIR][TORCH] Add support for aten._unsafe_index_put.hacked_twin op
vivekkhandelwal1 Jul 14, 2023
23d7821
update PyTorch version to 2.1.0.dev20230811 (#2392)
silvasean Aug 11, 2023
ff76210
Add handling of namespaces to library generator (#2391)
ramiro050 Aug 11, 2023
17f1300
update PyTorch version to 2.1.0.dev20230812 (#2393)
silvasean Aug 12, 2023
98e75af
update PyTorch version to 2.1.0.dev20230813 (#2394)
silvasean Aug 13, 2023
e0f0c5d
update PyTorch version to 2.1.0.dev20230814 (#2396)
silvasean Aug 14, 2023
60bad54
[Torch Dialect] replace none-index in aten.Index.Tensor's param by m…
Vremold Aug 15, 2023
94f7593
update PyTorch version to 2.1.0.dev20230815 (#2399)
silvasean Aug 15, 2023
41bafe1
[build] Update llvm tag to a3f2751f (#2397)
ramiro050 Aug 15, 2023
efe69eb
update PyTorch version to 2.1.0.dev20230816 (#2400)
silvasean Aug 16, 2023
594a1fa
update PyTorch version to 2.1.0.dev20230817 (#2401)
silvasean Aug 17, 2023
d77b9cf
[TOSA] Fix conversion for depthwise convolutions (#2398)
Aug 18, 2023
6648ad9
Per request, swap Sean Silva for Stella Laurenzo in code owners. (#2403)
stellaraccident Aug 18, 2023
822353f
update PyTorch version to 2.1.0.dev20230819 (#2405)
silvasean Aug 19, 2023
aa007da
update PyTorch version to 2.1.0.dev20230820 (#2406)
silvasean Aug 20, 2023
5743b6d
LTC multi-output operations support (#2362)
GlebKazantaev Aug 20, 2023
8ffe5d1
Add Sean Silva to code owners as emeritus.
stellaraccident Aug 21, 2023
b636e0c
[Stablehlo Dialect] fix lowering batch_norm with mixed types (#2383)
qingyunqu Aug 21, 2023
3dd29f9
Update Torch ODS list with new ops (#2361)
GlebKazantaev Aug 21, 2023
4c9d234
revert canonicalizer for PrimListConstructOp (#2408)
Vremold Aug 22, 2023
b552d4e
[Torch Dialect] Fix small bugs in decompose-complex-ops pass, e.g. mi…
Vremold Aug 22, 2023
8f28d93
CI: disable LTC e2e tests in stable PyTorch builds (#2414)
ashay Aug 23, 2023
4339c00
[Torch Dialect][stablehlo] emit aten.rand op and add converter to sta…
Vremold Aug 27, 2023
12eadcc
add e2e support for aten.elu
123epsilon Aug 22, 2023
610d836
impl aten.elu as decomposition
123epsilon Aug 23, 2023
5138148
update passing test sets
123epsilon Aug 23, 2023
a80bc42
dtype test case
123epsilon Aug 23, 2023
8855fa3
amend dtype function
123epsilon Aug 24, 2023
bc6bba9
add nondefault test case, add to illegal ops in backend contract
123epsilon Aug 25, 2023
ca34b9c
add max_pool3d (#2386)
davidgens-cerebras Aug 28, 2023
5282324
[Importer] fix has value semantic return type (#2404)
zhekunz2 Aug 29, 2023
c42d2be
[MLIR][TORCH] add E2E support for aten.min op (#2422)
123epsilon Aug 29, 2023
1682b54
Prototype passes for lowering quantized group matmul (#2402)
jinchen62 Aug 30, 2023
17d0281
[Torch Dialect] add folder for aten.any.bool (#2388)
JianzheXiao Aug 30, 2023
6b02e9a
[LTC] Tensor[]? support operands type support using partial codegen (…
GlebKazantaev Aug 30, 2023
aa15f0d
build: manually update PyTorch version
vivekkhandelwal1 Aug 25, 2023
5c43daa
[MLIR][TORCH] Add e2e support for aten.pow.Scalar op
vivekkhandelwal1 Aug 31, 2023
f83ee83
update PyTorch version to 2.1.0.dev20230831
Aug 31, 2023
729386c
build: manually update PyTorch version
vivekkhandelwal1 Sep 1, 2023
34a0897
[MLIR][TORCH] add E2E support for aten.rand (#2424)
123epsilon Sep 1, 2023
1fc4314
Add folder for aten.broadcast_to on unchanged static shapes (#2421)
qedawkins Sep 1, 2023
cd1c7df
[MLIR][TORCH] Add E2E support for view_as_real op (#2419)
brucekimrokcmu Sep 2, 2023
e9ab8ce
[Torch Dialect] support aten.split_with_sizes (#2431)
qingyunqu Sep 4, 2023
30510f8
[stablehlo] add AtenScalarImplicitOp's reverter to stablehlo backend …
Vremold Sep 4, 2023
a3ac451
build_tools/python_deploy/build_linux_packages.sh: Disable dynamo tes…
mgehre-amd Sep 4, 2023
d62045f
emit aten.max.other op (#2436)
Vremold Sep 5, 2023
3841fe3
[MLIR][TORCH] Add StableHLO lowering for embedding_bag.padding_idx op
vivekkhandelwal1 Sep 5, 2023
c93c697
[stablehlo] add dtype conversion when converting AtenScalarImplicitOp…
Vremold Sep 5, 2023
9cb5d38
[MLIR][TORCH] Add E2E `torch.aten.prod_dim_int` (#2423)
jerinphilip Sep 5, 2023
29fdc38
Fix GCC warning recommending parens.
stellaraccident Sep 6, 2023
fcb3b71
Properly guard clang-specific pragma.
stellaraccident Sep 6, 2023
b411a40
[Torch Dialect] emit aten.__or__Tensor Op (#2437)
Vremold Sep 6, 2023
67ae97f
build: manually update PyTorch version
vivekkhandelwal1 Sep 6, 2023
a8fd275
Fix build issue on MSVC by not having a conditional on disjoint types.
stellaraccident Sep 7, 2023
27b55b1
implemented complex tensor aten mul (#2444)
brucekimrokcmu Sep 7, 2023
cfd70df
Update merge-rollpytorch.yml with approved actors (#2446)
powderluv Sep 8, 2023
4dac1a5
update PyTorch version to 2.2.0.dev20230908 (#2433)
stellaraccident Sep 8, 2023
cc4a5d9
update PyTorch version to 2.2.0.dev20230909 (#2450)
stellaraccident Sep 9, 2023
1f20b72
[Torch Dialect] add canonicalize for aten.min.other (#2452)
qingyunqu Sep 11, 2023
5895b9f
fix compile warning (#2453)
qingyunqu Sep 12, 2023
c1f379f
build: manually update PyTorch version
vivekkhandelwal1 Sep 12, 2023
23b7224
[MLIR][TORCH] Add different dtype support for aten.bmm op
vivekkhandelwal1 Sep 11, 2023
82456ee
[MLIR][TORCH] add E2E support for aten.new_full (#2425)
123epsilon Sep 12, 2023
97bec86
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op…
brucekimrokcmu Sep 12, 2023
106b585
Revert "[MLIR][TORCH] Add E2E support for aten.empty_strided decompos…
ramiro050 Sep 12, 2023
a00a0d4
Integrate llvm-project and mlir-hlo. (#2454)
stellaraccident Sep 12, 2023
078d1e1
Remove mlir-hlo (replace with stablehlo). (#2460)
stellaraccident Sep 13, 2023
107ed0d
Fix two CMake issues that were causing Windows compilation failures. …
stellaraccident Sep 13, 2023
4b4c38d
build: manually update PyTorch version
vivekkhandelwal1 Sep 13, 2023
40913a3
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op…
brucekimrokcmu Sep 13, 2023
3d974ed
[Bazel] Replace mlir-hlo with stablehlo (#2463)
sjain-stanford Sep 14, 2023
7a7be60
Fix python package install instructions (#2464)
sogartar Sep 14, 2023
b03efdf
build: manually update PyTorch version
vivekkhandelwal1 Sep 19, 2023
278c41e
Bump llvm-project to f66cd9e9556a53142a26a5c21a72e21f1579217c. (#2466)
stellaraccident Sep 19, 2023
20ea1c9
Revert accidental change to submodule origin. (#2477)
stellaraccident Sep 20, 2023
023fc90
[Torch Dialect] add avg_pool 2d and 3d op variants (#2473)
davidgens-cerebras Sep 20, 2023
b9847b1
Fixing implicit double to float casts. (#2476)
benvanik Sep 20, 2023
059041e
[LTC] Support torch.ones/zeros/arange ops (#2440)
GlebKazantaev Sep 21, 2023
6699cbc
build: manually update PyTorch version (#2480)
vivekkhandelwal1 Sep 22, 2023
5f772e8
CI: reconcile differences between RollPyTorch and pre-merge checks (#…
ashay Sep 23, 2023
a520d39
[MLIR][TORCH] Add device "cpu" support for aten.to.dtype_layout op (…
brucekimrokcmu Sep 25, 2023
c9fd789
[NFC] Clean-up `ConvertAtenViewOp` in linalg backend (#2470)
ramiro050 Sep 26, 2023
ff7f8b2
update llvm-project to d13da154a7c7eff77df8686b2de1cfdfa7cc7029 (#2483)
dan-garvey Sep 26, 2023
15acd5d
[FXML-3548] Bump torch mlir
TinaAMD Nov 13, 2023
64baff4
Clean up merge artifacts, add no supported tests
TinaAMD Nov 13, 2023
7d07a39
Update auto-generated files
TinaAMD Nov 13, 2023
e3571de
Clean up merge of ltc xfail set
TinaAMD Nov 14, 2023
767d22d
Drop decomposition in torchdynamo
TinaAMD Nov 14, 2023
3042071
Align LTC_CRASHING_SET to upstream
TinaAMD Nov 15, 2023
48c8a98
Update xfail_sets for make_fx_tosa
TinaAMD Nov 15, 2023
96f6d3f
XFail test for aten pow (broken lowering to tosa)
TinaAMD Nov 17, 2023
2455779
Make submodule point to our bumped LLVM
TinaAMD Nov 17, 2023
242baa4
Add additional pass in make_fx_tosa
TinaAMD Nov 22, 2023
93f2864
Re-enable decompostion of aten.index_select
TinaAMD Nov 22, 2023
5d1979c
Remove some IndexSelect tests from pass set
TinaAMD Nov 22, 2023
89a4d36
Move test from fail to crashing
TinaAMD Nov 23, 2023
7e81c59
Remove 0-size dimension slice from pass set
TinaAMD Nov 23, 2023
ddf98a9
build: manually update PyTorch version
vivekkhandelwal1 Sep 27, 2023
ca2d294
Merge branch 'feature/backport_ea1_ops' into tina.FXML-3548-bump-torc…
TinaAMD Jan 10, 2024
51bbd2d
Merge branch 'feature/backport_ea1_ops' into tina.FXML-3548-bump-torc…
TinaAMD Jan 26, 2024
48f7d88
Update to newer version of LLVM bump
TinaAMD Jan 26, 2024
da585a4
Add additional passes to make_fx pass set
TinaAMD Jan 29, 2024
79b2fa8
Revert "Add additional passes to make_fx pass set"
TinaAMD Jan 29, 2024
81dae2a
Update new requirements file for stable version
TinaAMD Jan 29, 2024
e3a430e
Rewrite torch back to the llvm fused ops branch
TinaAMD Feb 1, 2024
2e32089
Merge commit 'ff7f8b21dcc842a4f70209a6d255d54c4ef6e39b' into tina.FXM…
mgehre-amd Feb 2, 2024
19 changes: 15 additions & 4 deletions .github/workflows/RollPyTorch.yml
@@ -24,9 +24,21 @@ jobs:
- name: Get torch-mlir
uses: actions/checkout@v3
with:
submodules: 'true'
submodules: 'false'
token: ${{ secrets.WORKFLOW_INVOCATION_TOKEN }}

- name: Get LLVM and StableHlo submodules
run: |
set -eo pipefail
cd ${GITHUB_WORKSPACE}

# Fetching the submodules concurrently may cause problems, so we fetch
# them one after another.
rm -f .git/modules/externals/llvm-project/index.lock
rm -f .git/modules/externals/stablehlo/index.lock
git submodule update --init --recursive externals/llvm-project
git submodule update --init --recursive externals/stablehlo

- name: Setup ccache
uses: ./.github/actions/setup-build
with:
@@ -71,15 +83,14 @@ jobs:
echo "PTVISION_RELEASE=${VISION_RELEASE}" >> ${GITHUB_ENV}
echo "PT_HASH_CHANGED=${PT_HASH_CHANGED}" >> ${GITHUB_ENV}

- name: Build and test (in-tree), also update ODS and abstract interpretation library
- name: Build and test (out-of-tree), also update ODS and abstract interpretation library
if: env.PT_HASH_CHANGED != '0'
run: |
cd ${GITHUB_WORKSPACE}
TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY="OFF" \
TM_PACKAGES="out-of-tree" TM_USE_PYTORCH_BINARY="OFF" \
TORCH_MLIR_SRC_PYTORCH_BRANCH="${{ env.PT_HASH }}" \
TORCH_MLIR_SRC_PYTORCH_RELEASE="${{ env.PT_RELEASE }}" \
TM_UPDATE_ODS_AND_ABSTRACT_INTERP_LIB="ON" \
TM_PYTHON_VERSIONS="cp311-cp311" \
./build_tools/python_deploy/build_linux_packages.sh

- name: Post issue comment on build failure
20 changes: 10 additions & 10 deletions .github/workflows/bazelBuildAndTest.yml
@@ -58,33 +58,33 @@ jobs:
-t torch-mlir:ci \
.

- name: Bazel build torch-mlir
- name: Verify buildifier was run (bazel lint)
run: |
docker run --rm \
-v "$(pwd)":"/opt/src/torch-mlir" \
-v "${HOME}/.cache/bazel":"/root/.cache/bazel" \
torch-mlir:ci \
bazel build @torch-mlir//:torch-mlir-opt
bazel run @torch-mlir//:buildifier
if [ -n "$(git status --porcelain)" ]; then
echo "Please 'bazel run @torch-mlir//:buildifier' and commit changes."
exit 1
fi

- name: Bazel test torch-mlir (lit tests)
- name: Bazel build torch-mlir
run: |
docker run --rm \
-v "$(pwd)":"/opt/src/torch-mlir" \
-v "${HOME}/.cache/bazel":"/root/.cache/bazel" \
torch-mlir:ci \
bazel test @torch-mlir//test/...
bazel build @torch-mlir//:torch-mlir-opt

- name: Verify buildifier was run (bazel lint)
- name: Bazel test torch-mlir (lit tests)
run: |
docker run --rm \
-v "$(pwd)":"/opt/src/torch-mlir" \
-v "${HOME}/.cache/bazel":"/root/.cache/bazel" \
torch-mlir:ci \
bazel run @torch-mlir//:buildifier
if [ -n "$(git status --porcelain)" ]; then
echo "Please 'bazel run @torch-mlir//:buildifier' and commit changes."
exit 1
fi
bazel test @torch-mlir//test/...

# Switch back bazel cache directory to user ownership
# to allow GHA post-cache step to save cache without
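
Not part of the diff, but useful context: the reordering above makes the buildifier lint run before the Bazel build and test steps. A sketch of reproducing the same check locally, assuming Bazel is installed and the commands are run from the repository's Bazel workspace directory (utils/bazel at the time of writing):

```shell
# Run buildifier the same way the CI lint step does.
bazel run @torch-mlir//:buildifier

# CI fails if buildifier changed any files; check for pending changes the same way.
if [ -n "$(git status --porcelain)" ]; then
  echo "buildifier reformatted files; review and commit the changes."
fi
```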
2 changes: 1 addition & 1 deletion .github/workflows/merge-rollpytorch.yml
@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
if: |
github.repository == 'llvm/torch-mlir' &&
github.event.workflow_run.actor.login == 'silvasean' &&
github.event.workflow_run.actor.login == 'stellaraccident' &&
github.event.workflow_run.conclusion == 'success'

steps:
10 changes: 5 additions & 5 deletions .gitmodules
@@ -1,7 +1,7 @@
[submodule "externals/llvm-project"]
path = externals/llvm-project
url = https://github.com/Xilinx/llvm-project.git
branch = misc_fixes
[submodule "externals/mlir-hlo"]
path = externals/mlir-hlo
url = https://github.com/tensorflow/mlir-hlo.git
url = https://github.com/Xilinx/llvm-project.git
branch = feature/fused-ops
[submodule "externals/stablehlo"]
path = externals/stablehlo
url = https://github.com/openxla/stablehlo.git
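
Beyond the diff itself: because the llvm-project submodule now points at a different fork and branch, existing checkouts need their submodule configuration re-synced. A minimal sketch using standard git commands:

```shell
# Propagate the new URL/branch from .gitmodules into .git/config,
# then check out the submodule commits recorded on this branch.
git submodule sync externals/llvm-project externals/stablehlo
git submodule update --init --recursive externals/llvm-project externals/stablehlo
```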
19 changes: 19 additions & 0 deletions CITATION.cff
@@ -0,0 +1,19 @@
cff-version: 1.2.0
title: Torch-MLIR
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- name: LLVM
repository-code: 'https://github.com/llvm/torch-mlir'
abstract: >-
The Torch-MLIR project aims to provide first class support
from the PyTorch ecosystem to the MLIR ecosystem.
keywords:
- Compiler
- PyTorch
- MLIR
license:
- Apache-2.0 with LLVM Exceptions
- BSD
24 changes: 16 additions & 8 deletions CMakeLists.txt
@@ -118,14 +118,7 @@ else()
endif()

if (TORCH_MLIR_ENABLE_STABLEHLO)
set(STABLEHLO_BUILD_EMBEDDED ON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/externals/mlir-hlo
${CMAKE_CURRENT_BINARY_DIR}/mlir-hlo
EXCLUDE_FROM_ALL)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/externals/mlir-hlo/include)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/externals/mlir-hlo)
include_directories(${CMAKE_CURRENT_BINARY_DIR}/mlir-hlo/include)
include_directories(${CMAKE_CURRENT_BINARY_DIR}/mlir-hlo)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo)
endif()

set(TORCH_MLIR_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}")
@@ -229,3 +222,18 @@ if (NOT LLVM_INSTALL_TOOLCHAIN_ONLY)
COMPONENT torch-mlir-headers)
endif()
endif()

# Important: If loading StableHLO in this fashion, it must come last,
# after all of our libraries and test targets have been defined.
# It seems that they both abuse upstream CMake macros that accumulate
# properties.
# Getting this wrong results in building large parts of the stablehlo
# project that we don't actually depend on. Further some of those parts
# do not even compile on all platforms.
if (TORCH_MLIR_ENABLE_STABLEHLO)
set(STABLEHLO_BUILD_EMBEDDED ON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo
${CMAKE_CURRENT_BINARY_DIR}/stablehlo
EXCLUDE_FROM_ALL)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo)
endif()
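
For context (not part of the change): the relocated block only runs when StableHLO support is enabled at configure time. A hedged sketch of an in-tree configure that exercises it, adapted from the project's development documentation of this era; the exact flags may differ for a given checkout:

```shell
cmake -GNinja -Bbuild \
  externals/llvm-project/llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DLLVM_EXTERNAL_PROJECTS="torch-mlir;torch-mlir-dialects" \
  -DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR="$PWD" \
  -DLLVM_EXTERNAL_TORCH_MLIR_DIALECTS_SOURCE_DIR="$PWD/externals/llvm-external-projects/torch-mlir-dialects" \
  -DMLIR_ENABLE_BINDINGS_PYTHON=ON \
  -DTORCH_MLIR_ENABLE_STABLEHLO=ON
```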
12 changes: 6 additions & 6 deletions README.md
@@ -43,25 +43,25 @@ We have few paths to lower down to the Torch MLIR Dialect.

## Install torch-mlir snapshot

At the time of writing, we release pre-built snapshot of torch-mlir for Python 3.10 on Linux and macOS.
At the time of writing, we release pre-built snapshot of torch-mlir for Python 3.11 on Linux and macOS.

If you have Python 3.10, the following commands initialize a virtual environment.
If you have Python 3.11, the following commands initialize a virtual environment.
```shell
python3.10 -m venv mlir_venv
python3.11 -m venv mlir_venv
source mlir_venv/bin/activate
```

Or, if you want to switch over multiple versions of Python using conda, you can create a conda environment with Python 3.10.
Or, if you want to switch over multiple versions of Python using conda, you can create a conda environment with Python 3.11.
```shell
conda create -n torch-mlir python=3.10
conda create -n torch-mlir python=3.11
conda activate torch-mlir
python -m pip install --upgrade pip
```

Then, we can install torch-mlir with the corresponding torch and torchvision nightlies.
```
pip install --pre torch-mlir torchvision \
-f https://llvm.github.io/torch-mlir/package-index/
-f https://llvm.github.io/torch-mlir/package-index/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cpu
```
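
As an aside (not taken from the README): after installing the snapshot per the updated instructions, a quick import check confirms that the wheels resolved against the intended Python and PyTorch versions:

```shell
python3.11 -c "import torch, torch_mlir; print(torch.__version__); print(torch_mlir.__file__)"
```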

3 changes: 2 additions & 1 deletion build_tools/autogen_ltc_backend.py
@@ -467,7 +467,8 @@ def gen_fallback_code(*args, **kwargs):
node_base="torch::lazy::TorchMlirNode",
node_base_hdr=str(self.backend_path.joinpath("mlir_node.h")),
tensor_class=self.tensor_class,
tensor_class_hdr="torch/csrc/lazy/core/tensor.h",
tensor_class_hdr="torch_mlir/csrc/base_lazy_backend/tensor.h",
create_aten_from_ltc_tensor="CreateFunctionalizedAtenFromLtcTensor",
shape_inference_hdr=str(self.generated_path.joinpath("shape_inference.h")),
lazy_ir_generator=GenMlirLazyIr,
)
56 changes: 17 additions & 39 deletions build_tools/autogen_ltc_backend.yaml
@@ -1,16 +1,7 @@
blacklist:
# List of unsupported ops in LTC autogen because of some error
- _index_put_impl_ # Error: TODO not sure if there are other valid types to handle here
- _index_put_impl # Error: TODO not sure if there are other valid types to handle here
- empty_like # Error: TODO add support for type BaseType(name=<BaseTy.MemoryFormat: 12>)
- index.Tensor # Error: TODO not sure if there are other valid types to handle here
- index_put # Error: TODO not sure if there are other valid types to handle here
- index_put_ # Error: TODO not sure if there are other valid types to handle here

# Ops with list of tensors output
- split.Tensor
- unbind.int
- chunk
# Disabled in favour of `aten::index_put` which supports optional indices via `hacked_twin` JIT hack.
# It also doesn't have confusing `unsafe` argument.
- _index_put_impl

# Additional ops which autogen is supported for but don't compile yet
- _convolution
@@ -21,48 +12,34 @@ blacklist:

# Disabled for consistency with TS backend
- lift_fresh_copy
- new_empty
- rsub
- slice.Tensor # Disabled in favour of slice_copy.Tensor
- zeros
- ones
- arange
- arange.start
- arange.start_step
- fill.Scalar
- scalar_tensor

# Disabled in favour of functionalized alternatives
- _reshape_alias
- expand
- permute
- select.int
- squeeze
- squeeze.dim
- t
- transpose.int
- expand
- squeeze
- unsqueeze
- view
- slice.Tensor
- split.Tensor
- split_with_sizes
- unbind.int

whitelist:
# Enabled for consistency with TS backend
- arange.start_out

# List of ops to autogen even if not supported by Torch-MLIR explicitly
#- split_copy.Tensor
#- split_with_sizes_copy
#- unbind_copy.int

# List of supported ops that we don't want to do the full codegen for
supported:
# - bernoulli
# - bernoulli_
- _to_copy
- clone
- empty.memory_format
- empty_strided
- fill_.Scalar
- _unsafe_view
- unbind_copy.int
- split_copy.Tensor
- split_with_sizes_copy
- index.Tensor
- index_put

# ops required for functionalization
- lift
@@ -83,20 +60,21 @@ supported:
- _trilinear
- linalg_pinv.atol_rtol_tensor
- logsumexp.out
- t

# List of ops that will take in symints for the size instead of ints
symint:
- empty.memory_format
- new_empty_strided
- expand_copy
- narrow_copy
- slice_backward
- slice_copy.Tensor
- split_copy.Tensor
- slice_scatter
- view
- view_copy
- as_strided_copy
- as_strided_scatter
- split_with_sizes_copy


additional_ops: