[ChatQnA] Switch to vLLM as default llm backend on Xeon #3135
Triggered via pull request (synchronize) by pre-commit-ci[bot], January 16, 2025 10:24, #1403
Status: Cancelled
Total duration: 54s
Artifacts: –
pr-docker-compose-e2e.yml
on: pull_request_target
get-test-matrix / Get-test-matrix (10s)
Matrix: example-test
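The "Matrix: example-test" line indicates the test jobs fan out from a matrix computed at runtime by the get-test-matrix job. A minimal sketch of that dynamic-matrix pattern, with hypothetical output and key names (the real pr-docker-compose-e2e.yml may compute the matrix differently, e.g. from the files changed in the PR):

```yaml
name: pr-docker-compose-e2e (sketch)
on: pull_request_target

jobs:
  get-test-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
      - id: set-matrix
        # Hypothetical step: hard-coded here for illustration; the real
        # workflow derives the example/hardware pairs dynamically.
        run: |
          echo 'matrix={"include":[{"example":"ChatQnA","hardware":"xeon"}]}' >> "$GITHUB_OUTPUT"

  example-test:
    needs: get-test-matrix
    strategy:
      # fromJson turns the upstream job's JSON output into a job matrix,
      # spawning one example-test job per entry.
      matrix: ${{ fromJson(needs.get-test-matrix.outputs.matrix) }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "Testing ${{ matrix.example }} on ${{ matrix.hardware }}"
```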
Annotations: 5 errors and 2 warnings
Errors:
example-test (ChatQnA, xeon) / run-test (test_compose_pinecone_on_xeon.sh)
  Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
example-test (ChatQnA, xeon) / run-test (test_compose_on_xeon.sh)
  Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
example-test (ChatQnA, xeon) / run-test (test_compose_qdrant_on_xeon.sh)
  Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
example-test (ChatQnA, xeon) / run-test (test_compose_without_rerank_on_xeon.sh)
  Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
example-test (ChatQnA, xeon) / run-test (test_compose_tgi_on_xeon.sh)
  Canceling since a higher priority waiting request for 'E2E test with docker compose-1403' exists
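All five cancellations carry the standard GitHub Actions concurrency-group message: a newer run of the same workflow for the same pull request was queued, so the in-flight matrix jobs were cancelled. A minimal sketch of the workflow-level concurrency block that produces this behavior, assuming the group name is built from a fixed prefix plus the PR number (the run above shows group 'E2E test with docker compose-1403'):

```yaml
# Sketch: group name construction is an assumption inferred from the
# cancellation message; github.event.number is the pull request number.
concurrency:
  group: E2E test with docker compose-${{ github.event.number }}
  cancel-in-progress: true
```

With cancel-in-progress enabled, each new push to the PR supersedes any still-running E2E matrix for that PR, which is why all five run-test shards were cancelled at once rather than finishing against a stale commit.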
Warnings:
get-test-matrix / Get-test-matrix
  ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
example-test (ChatQnA, xeon) / get-test-case
  ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
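Both warnings are the same advance notice from GitHub: the ubuntu-latest runner label will soon resolve to ubuntu-24.04 (see the linked runner-images issue). Workflows that need a stable image can pin the label explicitly; a minimal sketch using the get-test-matrix job name from this run:

```yaml
jobs:
  get-test-matrix:
    # Pinning the image avoids behavior changes when the ubuntu-latest
    # alias is remapped to ubuntu-24.04.
    runs-on: ubuntu-24.04
```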