
[ChatQnA] Switch to vLLM as default llm backend on Xeon #3136

Triggered via pull request January 16, 2025 10:25
@wangkl2
synchronize #1403
Status Cancelled
Total duration 1h 11m 24s
Artifacts 3

pr-docker-compose-e2e.yml

on: pull_request_target
get-test-matrix / Get-test-matrix
7s
Matrix: example-test

Annotations

6 errors and 2 warnings
example-test (ChatQnA, xeon) / run-test (test_compose_qdrant_on_xeon.sh)
Process completed with exit code 1.
example-test (ChatQnA, xeon) / run-test (test_compose_pinecone_on_xeon.sh)
Process completed with exit code 1.
example-test (ChatQnA, xeon) / run-test (test_compose_on_xeon.sh)
Process completed with exit code 1.
example-test (ChatQnA, xeon) / run-test (test_compose_without_rerank_on_xeon.sh)
Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
example-test (ChatQnA, xeon) / run-test (test_compose_tgi_on_xeon.sh)
Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
example-test (ChatQnA, xeon) / run-test (test_compose_tgi_on_xeon.sh)
The operation was canceled.
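
The "Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists" annotations are the standard message GitHub Actions emits when a workflow's `concurrency` group already holds a newer queued run for the same pull request: the in-progress run for PR #1403 is cancelled so the latest push can be tested instead. The actual settings in `pr-docker-compose-e2e.yml` are not visible on this page, so the snippet below is only a sketch of the usual pattern, with the group name inferred from the cancellation message:

```yaml
# Hypothetical sketch; the real pr-docker-compose-e2e.yml may differ.
concurrency:
  # Produces a group like "E2E test with docker compose-1403",
  # matching the cancellation message in the annotations above.
  group: E2E test with docker compose-${{ github.event.pull_request.number }}
  cancel-in-progress: true
```

With `cancel-in-progress: true`, only the most recent push to a PR keeps running; earlier runs in the same group are cancelled, which is what produced the cancelled `test_compose_without_rerank_on_xeon.sh` and `test_compose_tgi_on_xeon.sh` jobs here.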
get-test-matrix / Get-test-matrix
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
example-test (ChatQnA, xeon) / get-test-case
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636

Artifacts

Produced during runtime
Name                                       Size
ChatQnA_test_compose_on_xeon.sh            169 KB
ChatQnA_test_compose_pinecone_on_xeon.sh   156 KB
ChatQnA_test_compose_qdrant_on_xeon.sh     154 KB