Issues: facebookresearch/xformers
memory_efficient_attention: torch.compile compatibility (#920, opened Nov 9, 2023 by achalddave)
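Below is a minimal repro sketch for this report, assuming a CUDA build of xformers and PyTorch 2.x; the shapes are illustrative. `fullgraph=True` turns any graph break at the custom op into a hard error rather than a silent eager fallback, which makes the compatibility question easy to observe.

```python
import torch
import xformers.ops as xops

def attn(q, k, v):
    return xops.memory_efficient_attention(q, k, v)

q = torch.randn(2, 64, 8, 32, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# fullgraph=True makes torch.compile raise on a graph break at the custom op
# instead of silently falling back to eager for that region.
compiled_attn = torch.compile(attn, fullgraph=True)
out = compiled_attn(q, k, v)
print(out.shape)
```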
Visual Studio 2022, CUDA 12.4, but it still can't build the wheel and returns this (#1161, opened Nov 22, 2024 by zslefour)
Incompatibility between the xformers FA3 Torch custom op wrapper and recent flashattn_hopper_cuda (#1159, opened Nov 21, 2024 by ohwi)
Need to release xformers-0.0.28.post3.whl built for the manylinux2014_x86_64 platform (#1155, opened Nov 19, 2024 by controlRun)
Can't use PowerShell command to build for nightly PyTorch on Windows: filename too long (#1132, opened Oct 22, 2024 by Mescalamba)
xformers 0.0.20: memory-efficient attention CUTLASS backward is non-deterministic (#1128, opened Oct 13, 2024 by Bavesh-B; sketch below)
xformers.sparse.utils._coo_to_csr is incorrect when n > m (#1125, opened Oct 11, 2024 by francois-rozet; sketch below)
Add support for cuDNN attention via the CUDNN_FRONTEND Python API? (#1123, opened Oct 10, 2024 by Skylion007)
[Bug] Unexpected behavior of memory_efficient_attention with BlockDiagonalMask (#1122, opened Oct 10, 2024 by xiangxu-google; sketch below)
Does the memory-efficient attention CUTLASS kernel support variable sequence lengths for q/k/v plus a tensor bias? (#1120, opened Oct 4, 2024 by ShijunK)
Why doesn't xformers 0.0.28.post1 have a pre-compiled wheel for Windows? (#1118, opened Oct 1, 2024 by FurkanGozukara)
I tried upgrading Visual Studio and CUDA 12.4 and 12.6, but it still can't build the wheel and returns this (#1117, opened Sep 29, 2024 by neutronslime)
scaled_dot_product_attention output is different from memory_efficient_attention (#1114, opened Sep 25, 2024 by aenoca; sketch below)
No operator found for memory_efficient_attention_forward with inputs: (#1109, opened Sep 20, 2024 by brcisna; sketch below)
Is memory_efficient_attention slower than PyTorch's scaled_dot_product_attention? (#1107, opened Sep 19, 2024 by QinlongHuang; sketch below)
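Hedged sketches for several of the issues above follow; each is a minimal illustration under stated assumptions, not a confirmed repro from the respective reporters.

For #1128, a determinism check (assuming a CUDA device; shapes are illustrative, and the op pinning uses xformers' exported CUTLASS forward/backward pair so that dispatch cannot switch backends between runs):

```python
import torch
import xformers.ops as xops

q = torch.randn(2, 256, 8, 64, device="cuda", dtype=torch.float16,
                requires_grad=True)
k, v = torch.randn_like(q), torch.randn_like(q)

def q_grad():
    # Pin the CUTLASS forward/backward ops so each run uses the same kernel.
    out = xops.memory_efficient_attention(
        q, k, v, op=xops.MemoryEfficientAttentionCutlassOp)
    (g,) = torch.autograd.grad(out.sum(), q)
    return g

# Atomic adds in the backward can make repeated runs differ bitwise.
print(torch.equal(q_grad(), q_grad()))
```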
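For #1125, the function in question is a private helper, so rather than guessing its signature, here is a scipy.sparse reference for COO-to-CSR conversion on a non-square m x n matrix with n > m (the shape the issue flags); it can serve as ground truth when cross-checking any custom conversion:

```python
import numpy as np
from scipy.sparse import coo_matrix

m, n = 3, 5                          # n > m, the case the issue reports
rows = np.array([0, 1, 2, 2])
cols = np.array([4, 0, 1, 3])
vals = np.array([1.0, 2.0, 3.0, 4.0])

csr = coo_matrix((vals, (rows, cols)), shape=(m, n)).tocsr()
print(csr.indptr)   # row pointers, length m + 1
print(csr.indices)  # column indices of the stored values
```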
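For #1122, a minimal sketch of the BlockDiagonalMask API involved (assuming a CUDA device; the sequence lengths and shapes are illustrative). Variable-length sequences are packed along the token dimension of a single batch entry:

```python
import torch
import xformers.ops as xops
from xformers.ops import fmha

seqlens = [3, 5, 2]                  # three variable-length sequences
H, K = 4, 32
q = torch.randn(1, sum(seqlens), H, K, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Each sequence attends only within its own block of the packed tokens.
mask = fmha.BlockDiagonalMask.from_seqlens(seqlens)
out = xops.memory_efficient_attention(q, k, v, attn_bias=mask)
print(out.shape)  # (1, 10, 4, 32)
```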
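For #1114, a sketch comparing the two ops directly. Note the layout difference (xformers takes (B, M, H, K), SDPA takes (B, H, M, K)), and that fp16 kernels legitimately accumulate in different orders, so bitwise equality is not expected:

```python
import torch
import torch.nn.functional as F
import xformers.ops as xops

B, M, H, K = 2, 128, 8, 64
q = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

out_xf = xops.memory_efficient_attention(q, k, v)
out_sdpa = F.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
).transpose(1, 2)

# Different kernels, different reduction order: compare with a loose tolerance.
print((out_xf - out_sdpa).abs().max())
print(torch.allclose(out_xf, out_sdpa, atol=1e-3, rtol=1e-3))
```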
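For #1109, the dispatcher raises when no backend accepts the given device/dtype/shape combination; `python -m xformers.info` lists which operators a given build actually has. A sketch that provokes the error on a typical build (float64 on CPU is an assumption here; what exactly triggers it varies by install):

```python
import torch
import xformers.ops as xops

# Assumption: float64 on CPU is rejected by every fused backend in most
# builds; run `python -m xformers.info` to see what yours supports.
q = torch.randn(1, 16, 4, 32, dtype=torch.float64)
try:
    xops.memory_efficient_attention(q, q, q)
except NotImplementedError as err:
    print(err)  # "No operator found for memory_efficient_attention_forward ..."
```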
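For #1107, relative speed depends heavily on shape, dtype, and which backend each call dispatches to (SDPA itself may pick a flash kernel). A minimal timing sketch with torch.utils.benchmark, which handles CUDA synchronization and warmup:

```python
import torch
import torch.nn.functional as F
import xformers.ops as xops
from torch.utils import benchmark

B, M, H, K = 8, 1024, 16, 64
q = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
# SDPA layout: heads before the sequence dimension.
qt, kt, vt = (t.transpose(1, 2).contiguous() for t in (q, k, v))

for label, stmt, env in [
    ("xformers", "xops.memory_efficient_attention(q, k, v)",
     {"xops": xops, "q": q, "k": k, "v": v}),
    ("sdpa", "F.scaled_dot_product_attention(qt, kt, vt)",
     {"F": F, "qt": qt, "kt": kt, "vt": vt}),
]:
    timer = benchmark.Timer(stmt=stmt, globals=env)
    print(label, timer.timeit(100))
```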