
dialects: (vector) Add for vector.transfer_read and vector.transfer_write operations #3650

Open · wants to merge 19 commits into main

Conversation

watermelonwolverine (Contributor):

Added stubs for "vector.transfer_read" and "vector.transfer_write".
Verification is still missing; that also looks like a fair amount of work on its own.

Also fixed what looks like a small bug in TensorOrMemrefOf.verify.
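
For context, vector.transfer_read reads a vector from a slice of a tensor or memref, substituting a padding value for out-of-bounds elements, while vector.transfer_write stores a vector back into such a slice. The following is a minimal plain-Python sketch of the 1-D read semantics only, assuming unit stride and an identity permutation map; it does not use the op classes added in this PR, and the function name is made up for illustration.

# Illustrative sketch of 1-D "vector.transfer_read"-like semantics on a 2-D
# buffer: read `vector_len` elements starting at (row, col), substituting
# `padding` for anything that falls outside the source.
def transfer_read_1d(
    source: list[list[float]],
    row: int,
    col: int,
    vector_len: int,
    padding: float,
) -> list[float]:
    n_rows, n_cols = len(source), len(source[0])
    result: list[float] = []
    for k in range(vector_len):
        j = col + k
        if 0 <= row < n_rows and 0 <= j < n_cols:
            result.append(source[row][j])
        else:
            result.append(padding)  # out-of-bounds positions yield the padding value
    return result

# Reading past the edge of a 2x3 buffer pads with 0.0.
buf = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
assert transfer_read_1d(buf, 1, 1, 4, 0.0) == [5.0, 6.0, 0.0, 0.0]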

@watermelonwolverine marked this pull request as draft on December 17, 2024, 23:23

codecov bot commented Dec 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.26%. Comparing base (668176f) to head (aee9476).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3650      +/-   ##
==========================================
- Coverage   91.27%   91.26%   -0.01%     
==========================================
  Files         462      462              
  Lines       57680    57864     +184     
  Branches     5565     5590      +25     
==========================================
+ Hits        52645    52810     +165     
- Misses       3613     3626      +13     
- Partials     1422     1428       +6     


@compor (Collaborator) left a comment:


LGTM modulo comments

if isinstance(attr, VectorType) or isinstance(attr, TensorType):
    attr = cast(VectorType[Attribute] | TensorType[Attribute], attr)
if isinstance(attr, MemRefType) or isinstance(attr, TensorType):
    attr = cast(MemRefType[Attribute] | TensorType[Attribute], attr)
Collaborator:

Would you mind moving this to a separate PR with a test case?
We seem to be lacking any verification testing on this type.

Collaborator:

Seeing the changes, I'd still recommend doing this.

Contributor (Author):

I will do that in the next few days

xdsl/dialects/vector.py: three resolved review threads (outdated)
@compor added the dialects label on Dec 18, 2024
@compor changed the title from "dialects: (vector) Added stubs for transfer_read and transfer_write" to "dialects: (vector) Add for vector.transfer_read and vector.transfer_write operations" on Dec 18, 2024
@watermelonwolverine (Contributor, Author) commented on Dec 18, 2024:

  • I started implementing the verification and have finished most of it.
    • Only verification for the case where the element_type of the source is itself a vector type is missing.
  • Problem: to fully implement verification as it is done in MLIR, VectorType needs to be changed first.
    • It needs a bitmap (ArrayAttr[BoolAttr]) marking the scalable dimensions instead of a single IntegerAttr counting them (see the sketch below).
    • I could skip this verification step and do everything else.
    • I made some of those changes so you can see for yourself what I mean.
  • I also had to add several functions to builtin classes such as AffineMap and AffineExpr.
    • Should I do a separate PR for those, too?
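
To make the scalable-dimensions point above concrete, here is a small, self-contained sketch using plain Python dataclasses, not xDSL's actual VectorType; every name in it is made up for illustration.

from dataclasses import dataclass, field

# Count-only representation: records how many dimensions are scalable, so a
# verifier cannot tell which ones they are.
@dataclass
class VectorTypeWithCount:
    shape: list[int]
    num_scalable_dims: int = 0

# Bitmap representation (one flag per dimension): per-dimension checks in
# MLIR-style verification need to know *which* dimensions are scalable,
# not just how many.
@dataclass
class VectorTypeWithBitmap:
    shape: list[int]
    scalable_dims: list[bool] = field(default_factory=list)

    def is_dim_scalable(self, i: int) -> bool:
        return bool(self.scalable_dims) and self.scalable_dims[i]

# vector<[4]x8xf32>: dim 0 is scalable, dim 1 is not. With only a count of 1,
# this would be indistinguishable from vector<4x[8]xf32>.
v = VectorTypeWithBitmap(shape=[4, 8], scalable_dims=[True, False])
assert v.is_dim_scalable(0) and not v.is_dim_scalable(1)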

@watermelonwolverine (Contributor, Author)

@compor (Collaborator) commented on Dec 20, 2024:

You can always verify what is currently here and document what deviates (as you have done); this is fine IMHO.

Thanks for the work so far!

Transferred more filecheck tests for transfer ops from MLIR
Added the (now mandatory) in_bounds property
@watermelonwolverine marked this pull request as ready for review on December 28, 2024, 17:48
Comment on lines 484 to 489
# WJOG9GVF: TODO fix: uncomment this when 7S4F0FZA has been fixed
# if mask_type:
#     if mask_type != inferred_mask_type:
#         raise VerifyException(
#             f'"{op.name}" inferred mask type ({inferred_mask_type}) and mask operand type ({mask_type}) don\'t match'
#         )
Member:

?

Contributor (Author):

Same as below. I will move it to a new pull request

xdsl/dialects/vector.py: three resolved review threads (outdated)
@superlopuh (Member) commented on Feb 7, 2025:

Hi @watermelonwolverine! I'd love to get this in; please let me know if you'd like some help with the PR. From my understanding, these bits of code that are currently in this PR would benefit from their own mini PRs:

  • is_function_of_dim + test (a rough sketch of its intended semantics follows this comment)
  • compose_with_values + test
  • used/unused_dims + test

Do you have time to have a go at these? I'm hoping it will be a fairly easy cherry-picking process, and I'm happy to review each one very quickly :) After that, I think this PR should be ready to merge with minor changes, but it would be good to reduce it first.
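
As a rough illustration of the first helper on that list, is_function_of_dim is generally expected to check whether any result expression of an affine map references a given dimension (mirroring MLIR's AffineMap::isFunctionOfDim). The sketch below uses a deliberately simplified expression representation and made-up helper names, not xDSL's actual AffineExpr/AffineMap classes.

from dataclasses import dataclass

# Simplified stand-ins for affine expressions: a dimension reference, a
# constant, or a binary combination of two sub-expressions.
@dataclass
class Dim:
    position: int

@dataclass
class Const:
    value: int

@dataclass
class BinOp:
    lhs: "Dim | Const | BinOp"
    rhs: "Dim | Const | BinOp"

def expr_uses_dim(expr: "Dim | Const | BinOp", position: int) -> bool:
    # An expression "uses" dimension `position` if it references it anywhere.
    match expr:
        case Dim(p):
            return p == position
        case Const():
            return False
        case BinOp(lhs, rhs):
            return expr_uses_dim(lhs, position) or expr_uses_dim(rhs, position)
    return False

def is_function_of_dim(results: "list[Dim | Const | BinOp]", position: int) -> bool:
    # A map is a function of dimension `position` if any of its results uses it.
    return any(expr_uses_dim(e, position) for e in results)

# (d0, d1) -> (d1, d0 + 1) is a function of both d0 and d1, but not of d2.
results = [Dim(1), BinOp(Dim(0), Const(1))]
assert is_function_of_dim(results, 0)
assert is_function_of_dim(results, 1)
assert not is_function_of_dim(results, 2)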

@watermelonwolverine (Contributor, Author):

This PR is actually pretty much done from my side.
There are a few verifications I had to remove because they depend on #3654 being fixed. I plan to create a new pull request afterwards to add the verifications back in.
There are a few TODOs left in verification and testing, but I think those deserve their own PRs.

@watermelonwolverine (Contributor, Author):

I can split this PR into smaller PRs over the next few days if it is simply too big

@superlopuh (Member):

Yeah that would be great, thank you

Labels: dialects (Changes on the dialects)
Projects: None yet
Participants: 3