
Multihead Attention Fixes #209


Merged

Conversation

@OneAdder OneAdder commented Feb 25, 2025

Minor Changes to MHA

  1. Memory allocation optimization
  2. Making attention masks usable (with test)
  3. Minor cleanup (unused variables, etc.). BTW, perhaps add a static analyzer to CI?
  4. A bit of reshuffling inside the MHA forward and backward calls for better inheritance. The reason for these changes is that I'm making Llama Attention, which is a subclass of MHA (see the sketch after this list). Source here: https://github.com/OneAdder/llm.f/blob/master/src/llama_attention.f90
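A minimal sketch of the inheritance pattern mentioned in item 4, assuming a parent type multihead_attention_layer and the sdpa_forward method added in this PR; the module name, subclass, and subroutine interface below are illustrative guesses, not the actual llm.f code:

module llama_attention_sketch
  ! Sketch only: the parent module and type names are assumptions.
  use nf_multihead_attention_layer, only: multihead_attention_layer
  implicit none

  type, extends(multihead_attention_layer) :: llama_attention_layer
  contains
    ! Llama-specific forward pass layered on top of the parent's machinery
    procedure :: llama_forward
  end type llama_attention_layer

contains

  pure subroutine llama_forward(self, input, attention_mask)
    class(llama_attention_layer), intent(in out) :: self
    real, intent(in) :: input(:, :)
    real, intent(in) :: attention_mask(:, :)
    ! 1. Build the q/k/v head projections (reusing the parent's helpers).
    ! 2. Apply Llama-specific steps, e.g. rotary embeddings or a KV cache.
    ! 3. Reuse the shared scaled dot product attention introduced here:
    call self % sdpa_forward(attention_mask)
  end subroutine llama_forward

end module llama_attention_sketch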

@OneAdder OneAdder marked this pull request as draft February 25, 2025 20:09
@OneAdder
Collaborator Author

I figured out that a little refactoring would help a lot here. I'll add it, and then this will be ready for review.

end subroutine common_forward

pure module subroutine sdpa_forward(self, attention_mask)
Collaborator Author

Put Scaled Dot Product Attention into a separate method. This adds more flexibility: in some cases we need to manipulate the input projections, e.g. for KV caching in Llama and Qwen2.
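For illustration only, a KV-caching pass built on the separated method could look like the sketch below; every name here except sdpa_forward (the cache_len counter, the k_heads/v_heads layout, new_k/new_v) is an assumption:

pure subroutine sdpa_with_kv_cache(self, new_k, new_v, attention_mask)
  ! Sketch only: all names except sdpa_forward are hypothetical.
  class(multihead_attention_layer), intent(in out) :: self
  real, intent(in) :: new_k(:, :, :), new_v(:, :, :)  ! (step_len, head_size, heads)
  real, intent(in) :: attention_mask(:, :)
  integer :: n, m
  n = self % cache_len    ! tokens already cached from previous steps
  m = size(new_k, dim=1)  ! tokens projected in this step
  ! Append this step's key/value projections to the cache, then run the
  ! shared scaled dot product attention over the whole cached sequence.
  self % k_heads(n+1:n+m, :, :) = new_k
  self % v_heads(n+1:n+m, :, :) = new_v
  self % cache_len = n + m
  call self % sdpa_forward(attention_mask)
end subroutine sdpa_with_kv_cache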

@OneAdder
Collaborator Author

Ready for review

@OneAdder OneAdder marked this pull request as ready for review February 28, 2025 13:47
Collaborator

@jvdp1 jvdp1 left a comment

LGTM!

contains

procedure :: common_backward
procedure :: common_forward
procedure :: sdpa_forward
Collaborator

I was wondering what sdpa was until I found it in one of your comments below (that is, Scaled Dot Product Attention).
I suggest adding a comment to explain it.

Collaborator Author

will do!

@milancurcic milancurcic self-requested a review March 14, 2025 15:52
Member

@milancurcic milancurcic left a comment

Thanks @OneAdder, this looks good. See a few comments below for further optimizations of the kind we've done before in dense % forward. Worth trying if you don't mind, or let me know if I can help.

Comment on lines +90 to +93
self % v_or_dv(:, :, head) = matmul(&
transpose(self % attention_matrix(:, :, head)),&
self % d_output(:, :, head)&
)
Member

@milancurcic milancurcic Mar 14, 2025

Since this PR is doing some optimizations, there's an opportunity here as well. Intrinsic matmul can often be slower than writing out the matrix multiplication by hand using loops, +, and *. transpose creates a temporary copy, and can be avoided if we're looping and assigning values explicitly. When @jvdp1 made this optimization in dense % forward, I think it cut the compute time in half.
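For illustration, one loop-based form of this particular product, written with do concurrent; the shapes assumed here follow the slicing above, with the sequence dimension first in both attention_matrix and d_output:

integer :: i, j
! Explicit loops instead of matmul(transpose(...), ...): no transpose
! temporary, since result(i, j) = sum over k of
! attention_matrix(k, i) * d_output(k, j).
do concurrent (i = 1:size(self % v_or_dv, 1), j = 1:size(self % v_or_dv, 2))
  self % v_or_dv(i, j, head) = sum( &
    self % attention_matrix(:, i, head) * self % d_output(:, j, head))
end do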


! calculate delta for attention matrix
d_sdpa = matmul(d_output(:, :, head), transpose(v_heads(:, :, head)))
self % d_sdpa = matmul(self % d_output(:, :, head), transpose(self % v_heads(:, :, head)))
Member

As above, something similar could be done here.

Comment on lines +124 to +126
self % d_normalize(seq, :, head) = reshape(matmul(&
reshape(self % d_sdpa(seq, :), [1, self % sequence_length]),&
self % jacobian * self % scaling_factor&
Member

As above, in this case it may be even worse since we're doing reshape(matmul(reshape, ...))
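For illustration, this span could be written without the reshapes (and without the jacobian * scaling_factor temporary), assuming jacobian is a sequence_length by sequence_length matrix at this point:

integer :: j
! Row vector times matrix via dot_product instead of reshape(matmul(reshape, ...)).
do concurrent (j = 1:self % sequence_length)
  self % d_normalize(seq, j, head) = self % scaling_factor &
    * dot_product(self % d_sdpa(seq, :), self % jacobian(:, j))
end do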

Comment on lines +131 to +134
self % q_or_dq(:, :, head) = matmul(self % d_normalize(:, :, head), self % k_heads(:, :, head))

! calculate delta for key, attention matrix should be transposed unlike for query
dk(:, :, head) = matmul(transpose(d_normalize(:, :, head)), q_heads(:, :, head))
self % k_or_dk(:, :, head) = matmul(transpose(self % d_normalize(:, :, head)), self % q_heads(:, :, head))
Member

As above

@milancurcic
Member

Minor cleanup (unused variables, etc.). BTW, perhaps add a static analyzer to CI?

I don't know how to do this and I'm too lazy to learn since I have this in my editor already :)

@OneAdder
Collaborator Author

@milancurcic @jvdp1 Do you think replacing the matmuls with custom do concurrent loops is a good idea? The issue I see is that we'd lose the power of -fexternal-blas, which is super fast. Perhaps the pointers approach could be a better idea? What I was thinking: add a view function that returns a pointer and works like reshape but without the extra copy, similar to how PyTorch does it.
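For illustration, such a view could be built with Fortran 2008 pointer bounds remapping; the names and sizes below are arbitrary, not an actual neural-fortran API:

integer, parameter :: sequence_length = 8, head_size = 16, n_heads = 4
real, target :: storage(sequence_length * head_size * n_heads)
real, pointer :: heads_view(:, :, :)

! Reinterpret the same contiguous 1-D storage as (seq, head_size, heads)
! without copying: unlike reshape, no temporary is created, and the view
! can still be passed to matmul (and hence to an external BLAS).
heads_view(1:sequence_length, 1:head_size, 1:n_heads) => storage

The main constraints are that the remapped target must be contiguous (here it is rank-1), and that the view always follows Fortran's column-major storage order.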

@OneAdder
Collaborator Author

Regarding the PR, I unfortunately have to slow down for a week or two. The xylophone valley startup I worked for has folded 🙈
So, I have my hands full with job interviews at the moment 🙃

@milancurcic
Member

Thank you for all the work so far, @OneAdder, and good luck with the job search! Feel free to get back to this whenever you're ready and I'll resume with the reviews of other PRs.

@milancurcic
Member

@milancurcic @jvdp1 Do you think replacing the matmuls with custom do concurrent loops is a good idea? The issue I see is that we'd lose the power of -fexternal-blas, which is super fast. Perhaps the pointers approach could be a better idea? What I was thinking: add a view function that returns a pointer and works like reshape but without the extra copy.

Indeed, it's more complicated, as you write. Since the way you have it now is correct, any performance optimization can be left for a separate PR, where we can investigate in more detail what approach we should take. So I'll just go ahead and merge this PR and we can revisit the optimizations later.

@milancurcic milancurcic merged commit 2ed7b6a into modern-fortran:main Mar 26, 2025
4 checks passed