
Add fused GEMM + SiLU kernel #180

Open · wants to merge 12 commits into base: main

Commits on Sep 24, 2024

  1. Reorder iter args to match ordering of init args and outputs (iree-org#161)
    
    This PR modifies the insertion point for iter args to ensure that the
    iter args are in the same order as the init args and outputs. This
    simplifies the mapping between init args, iter args and outputs.
    
    Signed-off-by: Harsh Menon <[email protected]>
    harsh-nod authored Sep 24, 2024
    SHA: 0327398
  2. [ExportedProgram] Add mutable attribute to buffer (iree-org#123)

    Fixes iree-org#85
    
    PR based on the work of @maxbartel 
    
    Requires changes in torch-mlir:
    llvm/torch-mlir#3688
    
    Adds the mutable modifier to a global buffer and lifts said buffer to a
    global if there is a store-producer node associated with it.
    
    Signed-off-by: Christopher McGirr <[email protected]>
    Co-authored-by: Maximilian Bartel <[email protected]>
    chrsmcgrr and maxbartel authored Sep 24, 2024
    SHA: 4e95351
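    For context, a minimal sketch (mine, not from the PR) of the kind of
    program this affects: a module whose registered buffer is mutated in
    `forward`, which produces the store-producer node that triggers lifting
    the buffer to a mutable global.

    ```python
    import torch

    class Counter(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer("count", torch.zeros(1))

        def forward(self, x):
            self.count.add_(1)  # in-place store into the buffer
            return x + self.count

    ep = torch.export.export(Counter(), (torch.randn(2),))
    # The mutation is visible in the export signature:
    print(ep.graph_signature.buffers_to_mutate)
    ```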
  3. SHA: 65eb532
  4. [TKW] Fix indexing of Reduction and GetResult to enable post-tile op. (iree-org#162)
    
    This PR introduces changes to handle elementwise or general arithmetic
    operations that come after a tiled-loop-reduction ("Reduction")
    operation.
    
    The main problem with the current stack is that the indexing_dims
    information for Reduction relies on its user. This works if the
    user/consumer is tkw.write, but in other cases, such as BinaryPyOp or
    UnaryPyOp, that information is missing.
    
    To make matters worse, BinaryPyOp/UnaryPyOp depend on their src/producer
    for indexing dims, while the Reduction op depends on its dst/consumer,
    which would end up causing an infinite loop between
    UnaryPyOp/BinaryPyOp <-> Reduction.
    
    This PR fixes the indexing dimension logic so that Reduction and
    GetResult (required for expanded Reduction) derive their indexing dims
    from the reduction axis (for Reduction) and from their source/consumer
    information.
    
    ---------
    
    Signed-off-by: Stanley Winata <[email protected]>
    raikonenfnu authored Sep 24, 2024
    SHA: 909411a

Commits on Sep 26, 2024

  1. Get GEMMs working without minimize_global_loads (iree-org#167)

    This PR removes the need for propagating indices using
    post expansion. The new approach propagates the MMA
    indices to the MMA dimensions of all tensors (rather
    than just MMA nodes) and then specializes them depending
    on whether they lie within the backward slices of the
    LHS and RHS or forward slices of the ACC.
    
    ---------
    
    Signed-off-by: Harsh Menon <[email protected]>
    harsh-nod authored Sep 26, 2024
    SHA: d37c6a4
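    A toy sketch of the slice test described above (hypothetical graph and
    helper names, not the PR's code): producer edges yield the backward
    slice, consumer edges the forward slice, and an op's MMA index is
    specialized by which slice it falls in.

    ```python
    # Toy dependence graph: op -> its producers.
    producers = {
        "read_lhs": [], "read_rhs": [], "init_acc": [],
        "mma": ["read_lhs", "read_rhs", "init_acc"],
        "write": ["mma"],
    }
    consumers = {}
    for op, prods in producers.items():
        for p in prods:
            consumers.setdefault(p, []).append(op)

    def transitive(start, edges):
        # Walk edges transitively from `start`.
        seen, stack = set(), list(edges.get(start, []))
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(edges.get(n, []))
        return seen

    print(transitive("mma", producers))  # backward slice of the MMA
    print(transitive("mma", consumers))  # forward slice of the MMA
    ```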
  2. Add first draft of introduction (iree-org#168)

    This PR adds more documentation about tkw. Specifically, it provides a
    first draft of the introduction and adds a section on memory access
    patterns.
    
    Signed-off-by: Harsh Menon <[email protected]>
    harsh-nod authored Sep 26, 2024
    SHA: 04a4ba5
  3. [TKW] igemm shared mem tests (iree-org#171)

    Signed-off-by: Ivan Butygin <[email protected]>
    Hardcode84 authored Sep 26, 2024
    SHA: 7686157

Commits on Sep 27, 2024

  1. [TKW] Implement support for multiple iter args on Reduction (iree-org#166)
    
    The main motivation behind this PR is to enable multiple induction
    variables/iter args on the same tiled "Reduction" loop. To enable this,
    we did a couple of things:
    
    1. Enable lowering/expansion on `operator.getitem` (the op that extracts
    multiple results in Python, i.e. `res0, res1 = fn`) by templating it on
    `GetResult(CustomOp)`, since they have the same args and interface and
    can reuse most of the indexing/expansion helpers.
    
    2. Introduce `res_idx`, a variable to represent which result index of an
    op we are referring to during expansion and context mapping. This is
    useful for ops that have more than one result/variable as output.
    
    3. Fix a bug in expand_reduction, where we hoist the iteration and
    expansion of `reduction.init_args` out of the loop that iterates over
    and expands the `yield`/`return_val` of the reduction loop. The size of
    `init_args` is expected to match the size of `yield`/`return_val`, so
    with N iter_args/yields we ended up expanding the `init_args` N x N
    times instead of N times. We hadn't seen this before because we had
    only been using 1 init_arg/iter arg, and 1x1 == 1.
    
    4. Introduce a canonicalization pattern to fold chains of GetResult.
    Since GetResult is, by design, only expected to extract a single result,
    a chain of GetResult ops should fold to a single GetResult. This helps
    clean up the IR.
    
    Item 4 also helps circumvent an issue where Reduction and GetResult are
    expanded completely on their own, not following the DFS structure per
    dimension like the rest of the expansion code. This becomes especially
    problematic with multiple iter args, since Getitem does not expect its
    source value to be expanded without it.
    
    ---------
    
    Signed-off-by: Stanley Winata <[email protected]>
    raikonenfnu authored Sep 27, 2024
    SHA: 0e16d54
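    For background, a generic torch.fx sketch (not this PR's code) showing
    where `operator.getitem` nodes come from: indexing a traced node that
    produces multiple results.

    ```python
    import operator
    import torch
    import torch.fx as fx

    class TwoResults(torch.nn.Module):
        def forward(self, x):
            parts = torch.chunk(x, 2)   # one node producing a tuple
            return parts[0] + parts[1]  # each index becomes operator.getitem

    gm = fx.symbolic_trace(TwoResults())
    print([n for n in gm.graph.nodes if n.target is operator.getitem])
    ```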
  2. SHA: 192a786

Commits on Sep 30, 2024

  1. [TKW] Rework vector mask generation (iree-org#172)

    Instead of generating individual element comparisons and doing
    `vector.insertelement`, generate the whole mask using vector ops.
    
    Add support for vector codegen when generating MLIR IR from sympy
    expressions. Add a method `IndexingContext.iota` to generate special
    symbols which map to `(1, 2 ... n-1)` vec expressions. `gen_sympy_index`
    will start to generate vector ops when encountering such symbols,
    inserting proper `splat`s between scalar values when necessary.
    
    ---------
    
    Signed-off-by: Ivan Butygin <[email protected]>
    Hardcode84 authored Sep 30, 2024
    SHA: 92ad900
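    A rough NumPy sketch of the idea (illustration only, not the MLIR
    codegen itself): replace the per-lane compare-and-insert loop with a
    single comparison against an iota vector.

    ```python
    import numpy as np

    def mask_per_element(base, bound, n):
        # Old approach: one comparison and one insert per lane.
        mask = np.zeros(n, dtype=bool)
        for i in range(n):
            mask[i] = base + i < bound
        return mask

    def mask_vectorized(base, bound, n):
        # New approach: an iota vector (the role of IndexingContext.iota),
        # a splatted scalar, and one vector comparison.
        return base + np.arange(n) < bound

    assert np.array_equal(mask_per_element(13, 16, 8),
                          mask_vectorized(13, 16, 8))
    ```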
  2. Enable import_symbolic_shape_expressions in the FxImporter. (iree-org#179)
    
    * Adds an option to `aot.export(import_symbolic_shape_expressions=True)`
    to enable emission of torch-mlir symbolic shape constraints. This is
    currently set to False until IREE is ready to ingest these by default.
    
    Rough sequence of work in IREE proper:
    
    * Custom lowering of `torch.symbolic_int` and
    `torch.bind_symbolic_shape` ops to IREE util "assume" ops. Note that we
    are only planning to lower "terminal" bindings (basically function
    arguments and a couple of other such categories).
    * Canonicalizations to ensure that assume equalities are == 0 (versus
    the native form from torch, where they assume a non-zero equality).
    * Fusion will clone corresponding bindings on dependent dims into
    dispatch regions.
    * Existing linalg shape analysis extended and queryable by codegen.
    
    ---------
    
    Signed-off-by: Stella Laurenzo <[email protected]>
    stellaraccident authored Sep 30, 2024
    SHA: 621cbe1
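    A hedged usage sketch: the `import_symbolic_shape_expressions` kwarg is
    named by this commit, but the example module, `Dim` usage, and import
    path are my assumptions and may differ by version.

    ```python
    import torch
    from shark_turbine import aot  # package name at the time of this PR

    class Scale(torch.nn.Module):
        def forward(self, x):
            return x * 2.0

    batch = torch.export.Dim("batch")
    ep = torch.export.export(
        Scale(), (torch.randn(4, 16),), dynamic_shapes={"x": {0: batch}}
    )
    output = aot.export(ep, import_symbolic_shape_expressions=True)
    output.print_readable()  # MLIR should now carry symbolic shape bindings
    ```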

Commits on Oct 1, 2024

  1. Add fused GEMM + SiLU kernel

    Signed-off-by: Harsh Menon <[email protected]>
    harsh-nod committed Oct 1, 2024
    SHA: 7de2e0b
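    For reference, a minimal NumPy sketch of what the fused kernel computes
    (reference semantics only, not the tkw kernel itself):

    ```python
    import numpy as np

    def silu(x):
        # SiLU (swish): x * sigmoid(x).
        return x / (1.0 + np.exp(-x))

    def fused_gemm_silu(a, b):
        # Fusion computes silu(a @ b) in one kernel, avoiding a round trip
        # to memory between the GEMM and the activation.
        return silu(a @ b)

    a = np.random.randn(64, 128).astype(np.float32)
    b = np.random.randn(128, 32).astype(np.float32)
    print(fused_gemm_silu(a, b).shape)  # (64, 32)
    ```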