Multihead attention #199
Conversation
Hello Michael, I saw your pull requests and I think what you are doing is very interesting. Could you take a look at mine? milancurcic#2 What to look at there is not the locally connected 1d layer but rather the reshape architecture I am trying to build. Your help would be much appreciated. Thanks
Force-pushed from 48d93b2 to 86cd7c0
@OneAdder I see you are talking about 2d handling. Would you like to work on this together? I have to do this as well since I'm implementing a conv 1d layer.
Hi guys, thanks for pushing this forward. Today I'm finishing a lot of busywork with some proposals, so next week I'll get back to neural-fortran work and be able to contribute to the reviews more actively.
@ricor07 Great idea! But we have a problem with generics again. The issue is that a …
Yes, I think we can make a generic `predict`. But I suggest you create a new branch.
I think it's fine to make …
@milancurcic Done, here: #198
You can make it. I'll work on maxpool.
Force-pushed from f9e7a7c to 0900990
@milancurcic I think it's ready for review! I'll add the more complicated example later. At this stage I added a simple example that converges nicely and doesn't require datasets or extra dependencies.
implicit none
type, extends(multihead_attention_layer) :: cross_attention_layer
It is intentional that there is no plumbing for this one yet. I suggest we add it at a later stage, when we have more components for seq2seq models. For now it can be added like this, without any public access.
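For context (not part of this PR's public API yet): cross attention is the same multihead mechanism, except that the query is projected from one sequence $x$ (e.g. the decoder state) while key and value are projected from another sequence $c$ (e.g. the encoder output). Per head, using the usual notation:

$$\mathrm{CrossAttention}(x, c) = \mathrm{softmax}\!\left(\frac{(x W^Q)\,(c W^K)^T}{\sqrt{d_k}}\right) c\, W^V$$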
logical :: res
res = all(abs(x - y) <= (1e-06 + 1e-05 * abs(y)))
end function allclose
Suggestion for future: create `nf_utils.f90` (or similar) and put this procedure there.
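For illustration, a minimal sketch of what such a utility module could look like. The module name, and the optional `rtol`/`atol` arguments (the PR hardcodes the tolerances), are assumptions, not part of this PR:

```fortran
! Hypothetical nf_utils module; names and optional arguments are
! assumptions for illustration, not code from this PR.
module nf_utils
  implicit none
  private
  public :: allclose

contains

  pure function allclose(x, y, rtol, atol) result(res)
    !! True if x and y are element-wise equal within the given
    !! relative and absolute tolerances (NumPy-style allclose).
    real, intent(in) :: x(:), y(:)
    real, intent(in), optional :: rtol, atol
    real :: rtol_, atol_
    logical :: res
    rtol_ = 1e-5
    atol_ = 1e-6
    if (present(rtol)) rtol_ = rtol
    if (present(atol)) atol_ = atol
    res = all(abs(x - y) <= (atol_ + rtol_ * abs(y)))
  end function allclose

end module nf_utils
```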
end do
end subroutine create_attention_matrix
pure module subroutine normalize_attention_matrix(self, attention_mask)
`attention_mask` is not accessible to users by design at this point. It will be used by the transformer decoder later, and I'll add the corresponding logic then.
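For context, a standalone sketch of the usual way such a mask enters the normalization step: masked (future) positions get a large negative score before the softmax, so their attention weight becomes effectively zero. The program and names below are illustrative only, not the PR's subroutine:

```fortran
! Standalone illustration of an additive causal attention mask.
program attention_mask_demo
  implicit none
  integer, parameter :: seq_len = 3
  real :: scores(seq_len, seq_len)
  integer :: i

  ! Raw attention scores (all equal here, for simplicity)
  scores = 1.

  ! Causal mask: query i must not attend to keys j > i
  do i = 1, seq_len
    scores(i, i+1:) = -1e9
  end do

  ! Row-wise softmax turns masked scores into ~zero weights
  do i = 1, seq_len
    scores(i, :) = softmax(scores(i, :))
  end do

  do i = 1, seq_len
    print '(3f8.4)', scores(i, :)
  end do

contains

  pure function softmax(x) result(p)
    real, intent(in) :: x(:)
    real :: p(size(x))
    p = exp(x - maxval(x))
    p = p / sum(p)
  end function softmax

end program attention_mask_demo
```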
@OneAdder Nice set of PRs. I added a few suggestions. Feel free to ignore them.
Furthermore, I usually try to avoid `reshape` and use pointers instead, mainly for performance reasons. I think this approach could be used in some places, but that could be for a next PR.
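A generic illustration of the idea (not code from this PR): pointer bounds remapping reinterprets a flat buffer as a 2-d view of the same storage, whereas `reshape` allocates new storage and copies the data.

```fortran
! Generic sketch: pointer remapping vs reshape on a flat buffer.
program pointer_vs_reshape
  implicit none
  integer, parameter :: n = 4, m = 3
  real, target :: flat(n * m)
  real, pointer :: view(:, :)
  real, allocatable :: copy(:, :)
  integer :: i

  flat = [(real(i), i = 1, n * m)]

  ! reshape allocates new storage and copies the data
  copy = reshape(flat, [n, m])

  ! pointer bounds remapping reinterprets the same storage, no copy
  view(1:n, 1:m) => flat

  print *, copy(2, 3), view(2, 3)   ! both are 10.0
end program pointer_vs_reshape
```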
The answers to these questions will also guide which layers to include in the layers table in the README.
Co-authored-by: Jeremie Vandenplas <[email protected]>
…ran into multihead_attention
@jvdp1 Great idea! I think we should extend it even further and make something like …
Thank you!!
Hello, Milan! I hope I'm not bothering you too much with my pull requests, but this is a good one. At this stage it is a draft of MultiHead Attention. It cannot be merged until work on `input2d_layer` and `linear2d_layer` is completed. Implementation of `dropout` would also help improve MHA, but it can be added later.

MultiHead Attention
MultiHead Attention is the main component of the Transformer architecture, which is the state-of-the-art approach in Natural Language Processing, as well as in some other areas.
Here I propose an implementation based on the original Transformer paper. It works, and its output matches the reference implementation in PyTorch.
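For reference, the attention defined in that paper, which each head computes on its projected inputs:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^T}{\sqrt{d_k}}\right) V, \qquad \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\, W^O$$

with $\mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V)$ and $d_k$ the per-head key dimension.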
Python Reference
https://github.com/OneAdder/neural-fortran-references/blob/main/self_attention.py