Commit

remove top level import from flash attention utils
lipovsek-aws committed Dec 10, 2024
1 parent fac3131 commit f495742
Showing 1 changed file with 0 additions and 1 deletion.
1 change: 0 additions & 1 deletion in axlearn/common/flash_attention/utils.py

@@ -11,7 +11,6 @@
 from axlearn.common.attention import NEG_INF, MaskFn, causal_mask, softmax_with_biases
 from axlearn.common.flash_attention.gpu_attention import cudnn_dot_product_attention
 from axlearn.common.flash_attention.gpu_attention import flash_attention as gpu_flash_attention
-from axlearn.common.flash_attention.neuron_attention import flash_attention as neuron_flash_attention
 from axlearn.common.flash_attention.tpu_attention import tpu_flash_attention
 from axlearn.common.utils import Tensor

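A likely motivation for removing the top-level import (an assumption; the commit message does not say) is that importing neuron_attention at module load time pulls in Neuron-specific dependencies on hosts that do not have them, breaking GPU- or TPU-only environments. A minimal sketch of the usual alternative is to defer the import to the call site; the function and argument names below are illustrative, not the actual axlearn API.

    # Hypothetical sketch: defer the Neuron-specific import to the call site so
    # that axlearn/common/flash_attention/utils.py can still be imported on hosts
    # without the Neuron toolchain. Names here are assumed for illustration.
    def flash_attention_for_backend(backend: str):
        """Returns a flash-attention callable for the requested backend."""
        if backend == "neuron":
            # Import inside the branch so the Neuron dependency is only resolved
            # when a Neuron backend is actually requested.
            from axlearn.common.flash_attention.neuron_attention import (
                flash_attention as neuron_flash_attention,
            )

            return neuron_flash_attention
        raise NotImplementedError(f"Unsupported backend: {backend}")

With this pattern, callers on GPU or TPU hosts never trigger the Neuron import, while Neuron users pay the import cost only on first use.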