Can it be downgraded to run on a V100? #18
Comments
https://developer.nvidia.com/cuda-gpus (I would also like to see whether it can run on consumer-grade GPUs.)
You can try turning off this option.
Where can we find this option?
Is it the flag
Update: I turned both off, and it doesn't help; the same error appears. If the dev hadn't just left while refusing to elaborate further, we would have an easier time.
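For context on the "option" discussed above: a common workaround on pre-Ampere GPUs (a sketch under assumptions, not this project's documented fix) is to bypass xformers entirely and use PyTorch's built-in scaled_dot_product_attention, which selects a kernel the current device actually supports:

```python
import torch
import torch.nn.functional as F

# Sketch, not the project's code: torch's built-in SDPA dispatches to
# whichever attention kernel the current device supports, so it avoids
# xformers' hard bf16/A100 requirement. Shapes follow the convention
# (batch, heads, seq_len, head_dim). On a V100 you would also cast the
# model to float16 instead of bfloat16; float32 is used here so the
# demo runs on CPU as well.
q = torch.randn(10, 1, 64, 64)
k = torch.randn(10, 1, 64, 64)
v = torch.randn(10, 1, 64, 64)

out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([10, 1, 64, 64])
```

Whether the project's webui.py exposes a switch for this is not confirmed in the thread; this only shows that the attention computation itself does not require an A100-only kernel.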
Issue #18: lower the compute capability requirement.
I ran webui.py, but errors occurred when generating 4 views.
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query     : shape=(10, 6144, 1, 64) (torch.bfloat16)
    key       : shape=(10, 6144, 1, 64) (torch.bfloat16)
    value     : shape=(10, 6144, 1, 64) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p         : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
[email protected] is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 0) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 64
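The key lines in the trace are "requires device with capability > (8, 0) but your GPU has capability (7, 0)" and "bf16 is only supported on A100+ GPUs": the V100 is compute capability 7.0, while the bf16 attention kernels need Ampere (8.0) or newer. A minimal sketch of the dtype fallback this implies (pick_attention_dtype is a hypothetical helper, not part of this repo):

```python
# Hypothetical helper, not part of this project: choose a dtype that
# xformers attention kernels can run with, given the (major, minor)
# compute capability tuple that torch.cuda.get_device_capability()
# returns for the active GPU.
def pick_attention_dtype(capability):
    # bf16 kernels require Ampere or newer, i.e. capability >= (8, 0).
    # A V100 reports (7, 0), so it must fall back to fp16 (or fp32).
    return "bfloat16" if capability >= (8, 0) else "float16"

print(pick_attention_dtype((7, 0)))  # V100 -> float16
print(pick_attention_dtype((8, 0)))  # A100 -> bfloat16
```

In practice this would mean loading the model with torch.float16 instead of torch.bfloat16 on a V100; whether the rest of this project's pipeline is numerically stable in fp16 is not confirmed by the thread.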