Resolved ALIBI bias regression due to porting flat PA #1319
Annotations
10 errors and 1 warning
Analysing the code with ruff:

vllm/attention/backends/hpu_attn.py:162:81: E501 Line too long (85 > 80)
vllm/attention/backends/hpu_attn.py:169:81: E501 Line too long (91 > 80)
vllm/attention/backends/hpu_attn.py:278:81: E501 Line too long (91 > 80)
vllm/attention/backends/hpu_attn.py:279:81: E501 Line too long (119 > 80)
vllm/attention/backends/hpu_attn.py:281:81: E501 Line too long (95 > 80)
vllm/attention/backends/hpu_attn.py:282:81: E501 Line too long (88 > 80)
vllm/attention/backends/hpu_attn.py:285:81: E501 Line too long (84 > 80)
vllm/attention/backends/hpu_attn.py:290:81: E501 Line too long (96 > 80)
vllm/attention/backends/hpu_attn.py:291:81: E501 Line too long (133 > 80)
vllm/attention/backends/hpu_attn.py:293:81: E501 Line too long (96 > 80)
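All ten errors are the same ruff rule, E501 (line longer than 80 characters), so each is resolved by wrapping the flagged statement in hpu_attn.py. A minimal sketch of the usual wrapping patterns follows; the names and values below are hypothetical placeholders, not the actual code at the flagged lines:

```python
# Hypothetical sketch of fixing E501: none of these names come from
# hpu_attn.py; they only illustrate common wrapping patterns.

def make_alibi_bias(slopes, num_kv_heads, batch_size, seq_len, dtype):
    """Placeholder standing in for a real helper."""
    return (slopes, num_kv_heads, batch_size, seq_len, dtype)

# Before (a call written on one line can easily exceed 80 columns):
# bias = make_alibi_bias(alibi_slopes, num_kv_heads, batch_size, seq_len, dtype)

# After: break the argument list so every physical line stays under 80.
alibi_slopes, num_kv_heads = [0.5], 8
batch_size, seq_len, dtype = 2, 128, float
bias = make_alibi_bias(
    alibi_slopes,
    num_kv_heads,
    batch_size,
    seq_len,
    dtype,
)

# Long strings can use implicit concatenation inside parentheses instead.
message = ("ALIBI bias is only supported when the block size divides "
           "the maximum sequence length")
```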
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636