[mlir] Do not set lastToken in AsmParser's resetToken function #290
Conversation
I plan to upstream this.
The member lastToken is the last token that was consumed by the parser. Resetting the lexer to a different position does not consume any token, so lastToken should not be updated. Setting it to curToken can cause the scopeLoc.end location of OperationDefinition to be off by one, pointing to the first token after the operation. An example of an operation whose scopeLoc.end location was previously wrong is %0 = torch.vtensor.literal(dense_resource<__elided__> : tensor<768xbf16>) : !torch.vtensor<[768],bf16>; here the scope end location always pointed to the next token.
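For context, a minimal sketch of what the change amounts to. The names lastToken, curToken, and resetToken come from the description above; the exact signature and the state.lex.resetPointer call are assumptions about the surrounding parser code, not a quote of the upstream source.

```cpp
// Sketch: resetToken re-seeds the lexer at a new position. No token is
// consumed here, so lastToken (the last *consumed* token) stays untouched.
void Parser::resetToken(const char *tokPos) {
  state.lex.resetPointer(tokPos);  // assumed lexer repositioning API
  // Removed by this patch: state.lastToken = state.curToken;
  // That assignment made OperationDefinition::scopeLoc.End point to the
  // first token after the operation, i.e. one token too far.
  state.curToken = state.lex.lexToken();
}
```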
Force-pushed from f7cc773 to f357780.
Can you please add a test case to show the improvement?
LGTM, per here this
@mgehre-amd Added a test.
Force-pushed from e363f41 to 5193e67.
LGTM!
This test checks the parsed ops' locations and scope locations. It uses the !llvm.array type to verify that locations are correct for ops that contain an extended type; the parser for extended types calls resetToken, which caused wrong locations in the past.
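Roughly, this kind of check can be expressed against the AsmParserState API. The snippet below is a hypothetical illustration of inspecting the recorded scope locations, not the test added in this PR; the dialects used in the input (or unregistered-op parsing) would have to be enabled on the context first.

```cpp
#include "mlir/AsmParser/AsmParser.h"
#include "mlir/AsmParser/AsmParserState.h"
#include "mlir/IR/AsmState.h"
#include "mlir/IR/Block.h"
#include "mlir/IR/MLIRContext.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/raw_ostream.h"

using namespace mlir;

// Parse `ir` and print each op's recorded scope location as byte offsets.
// With the fix, scopeLoc.End lands on the op's own last token instead of
// the first token after the op.
void dumpScopeLocs(llvm::StringRef ir, MLIRContext &ctx) {
  llvm::SourceMgr sourceMgr;
  sourceMgr.AddNewSourceBuffer(llvm::MemoryBuffer::getMemBufferCopy(ir),
                               llvm::SMLoc());
  const llvm::MemoryBuffer *buf =
      sourceMgr.getMemoryBuffer(sourceMgr.getMainFileID());

  Block block;
  AsmParserState asmState;
  ParserConfig config(&ctx);
  if (failed(parseAsmSourceFile(sourceMgr, &block, config, &asmState)))
    return;

  for (const AsmParserState::OperationDefinition &def : asmState.getOpDefs()) {
    size_t start = def.scopeLoc.Start.getPointer() - buf->getBufferStart();
    size_t end = def.scopeLoc.End.getPointer() - buf->getBufferStart();
    llvm::outs() << "scope: [" << start << ", " << end << ")\n";
  }
}
```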
This getter can easily be confused with the similarly named allowUnregisteredDialects setter.
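Presumably this refers to MLIRContext's pair of similarly named members (an assumption on my part, since the comment does not name the getter); for reference:

```cpp
mlir::MLIRContext ctx;
ctx.allowUnregisteredDialects(true);              // setter: enables unregistered ops
bool allowed = ctx.allowsUnregisteredDialects();  // getter: only queries the flag
```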
Force-pushed from 5193e67 to 5f083d4.
LGTM