[ChatQnA] Switch to vLLM as default llm backend on Xeon #3137

Triggered via pull request January 16, 2025 11:35
@wangkl2
synchronize #1403
Status Cancelled
Total duration 20s
Artifacts: none

pr-docker-compose-e2e.yml

on: pull_request_target
get-test-matrix / Get-test-matrix
Matrix: example-test
Waiting for pending jobs
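
The graph above shows a two-stage layout: get-test-matrix calls a reusable workflow to compute the test matrix, and the example-test matrix jobs fan out from its output (they were still queued when the run was cancelled). Below is a minimal sketch of how pr-docker-compose-e2e.yml might be wired; the event types, reusable-workflow path, output name, and matrix keys are assumptions for illustration, not the repository's actual file:

```yaml
# Hypothetical sketch only; job names mirror the run graph above.
name: E2E test with docker compose
on:
  pull_request_target:
    types: [opened, reopened, synchronize]   # this run came from a 'synchronize' event

jobs:
  get-test-matrix:
    # Assumed reusable workflow that emits a JSON test matrix as an output.
    uses: ./.github/workflows/_get-test-matrix.yml

  example-test:
    needs: get-test-matrix
    strategy:
      # Build the matrix dynamically from the first job's output.
      matrix: ${{ fromJSON(needs.get-test-matrix.outputs.run_matrix) }}
    runs-on: ${{ matrix.hardware }}   # assumed matrix key
    steps:
      - uses: actions/checkout@v4
      - run: echo "E2E test for ${{ matrix.example }} on ${{ matrix.hardware }}"  # placeholder step
```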

Annotations

1 error
E2E test with docker compose: Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
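
This is GitHub Actions' standard concurrency cancellation: a newer run for the same PR (#1403) entered the queue, so this older run was cancelled. The group name 'E2E test with docker compose-1403' suggests a concurrency key combining the workflow name with the PR number, roughly like the hedged sketch below (the exact expression is an assumption):

```yaml
# Assumed concurrency block; the group expression is inferred from the
# cancellation message above, not taken from the actual workflow file.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
```

With cancel-in-progress: true, each new push to the PR supersedes any run still queued or executing in the same group, which matches this run being cancelled after only 20 seconds.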