Actions: HabanaAI/vllm-fork

yapf

1,347 workflow runs


Adds LoRA tests to vLLM CI pipeline
yapf #1434: Pull request #680 synchronize by rsshaik1
January 13, 2025 05:11 1m 56s CI_tests_LoRA
Adds LoRA tests to vLLM CI pipeline
yapf #1433: Pull request #680 synchronize by rsshaik1
January 13, 2025 04:26 1m 55s CI_tests_LoRA
Adds LoRA tests to vLLM CI pipeline
yapf #1432: Pull request #680 opened by rsshaik1
January 13, 2025 03:37 1m 49s CI_tests_LoRA
Add inc fp8 quantization documentation
yapf #1431: Pull request #635 synchronize by nirda7
January 13, 2025 01:31 1m 45s fp8_quantization_documentation
Handle LoRA specific changes in MSS (#675)
yapf #1430: Commit c5975f8 pushed by vivekgoe
January 11, 2025 05:54 1m 45s habana_main
Jan 10 rebase
yapf #1428: Pull request #677 synchronize by kzawora-intel
January 10, 2025 15:00 1m 57s private/kzawora/jan_10_rebase
Jan 10 rebase
yapf #1427: Pull request #677 opened by kzawora-intel
January 10, 2025 14:42 2m 1s private/kzawora/jan_10_rebase
Handle LoRA specific changes in MSS
yapf #1423: Pull request #675 opened by SanjuCSudhakaran
January 10, 2025 07:48 2m 2s lora-mss
Add inc fp8 quantization documentation
yapf #1422: Pull request #635 synchronize by nirda7
January 9, 2025 13:17 1m 57s fp8_quantization_documentation
[SW-197036] - use torch._scaled_mm with hpu (#660)
yapf #1421: Commit 73aaf71 pushed by nirda7
January 9, 2025 13:15 1m 56s habana_main
Add inc fp8 quantization documentation
yapf #1420: Pull request #635 synchronize by nirda7
January 9, 2025 11:35 2m 4s fp8_quantization_documentation
Add inc fp8 quantization documentation
yapf #1418: Pull request #635 synchronize by nirda7
January 9, 2025 11:27 1m 45s fp8_quantization_documentation
[SW-197036] - use torch._scaled_mm with hpu
yapf #1415: Pull request #660 synchronize by nirda7
January 9, 2025 09:49 1m 48s remove_scaled_mm_wa
Limit number of dummy cross attention blocks (#667)
yapf #1412: Commit fa9dbf2 pushed by michalkuligowski
January 8, 2025 17:07 1m 56s habana_main
[SW-197036] - use torch._scaled_mm with hpu
yapf #1411: Pull request #660 synchronize by nirda7
January 8, 2025 16:29 1m 56s remove_scaled_mm_wa
Use FusedSDPA for MllamaVisionSdpaAttention (#620)
yapf #1410: Commit cccf363 pushed by michalkuligowski
January 8, 2025 16:28 1m 49s habana_main