Issues: lucidrains/performer-pytorch
- Find FastAttention is slower, and also with more GPU memory usage (#97, opened Nov 19, 2024 by phyllispeng123)
- Separate Transformer Encoder & Decoder modules with linear attention? (#96, opened Aug 28, 2024 by harshakmohan)
- Performer Pytorch Slower than Expected and Please Help with Understanding Parameter Count (#92, opened Dec 12, 2022 by michaelweihaosong)
- Replicating nn.MultiHeadAttention with multiple performer SelfAttention modules (#91, opened Dec 6, 2022 by JGittles)
- Question: Is Performer order equivariant? (can it transform an unordered set of tensors) (#88, opened Jul 15, 2022 by nmakes)
- How to test the performer architecture for training new models? (#83, opened Feb 3, 2022 by ayan-iiitd)
- FastAttention doesn't give results in agreement with standard attention? (#69, opened May 28, 2021 by simonaxelrod)