[ChatQnA] Switch to vLLM as default llm backend on Xeon #3136
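For context, this run exercises the change named in the PR title: the ChatQnA docker compose deployment on Xeon uses vLLM as the default LLM serving backend. Below is a minimal sketch of what a vLLM service entry in such a compose file could look like; the service name, image tag, port mapping, and environment variables are hypothetical placeholders, not the actual values from the PR.

```yaml
# Minimal sketch of a vLLM serving container as the LLM backend on Xeon (CPU).
# Service name, image, ports, and variables are hypothetical placeholders.
services:
  vllm-service:
    image: opea/vllm:latest          # hypothetical image/tag; a CPU build in practice
    ports:
      - "9009:80"                    # hypothetical host:container mapping
    environment:
      HF_TOKEN: ${HF_TOKEN}
      LLM_MODEL_ID: ${LLM_MODEL_ID}
    # Arguments passed to vLLM's OpenAI-compatible server via the image entrypoint.
    command: --model ${LLM_MODEL_ID} --host 0.0.0.0 --port 80
```

The actual service definition lives in the ChatQnA compose files validated by the test scripts listed under Artifacts below.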
pr-docker-compose-e2e.yml (on: pull_request_target)
get-test-matrix / Get-test-matrix (7s)
Matrix: example-test
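The job graph above follows a common fan-out pattern: a short get-test-matrix job determines which examples and hardware targets the PR affects, and the example-test matrix then spawns one end-to-end compose test per entry. A minimal sketch of such a workflow follows; the step names, runner labels, matrix contents, and test entry point are assumptions, and the real pr-docker-compose-e2e.yml in the repository is the source of truth.

```yaml
# Minimal sketch (hypothetical names) of a pull_request_target workflow where one
# job computes a test matrix that fans out into per-example e2e test jobs.
name: pr-docker-compose-e2e
on:
  pull_request_target:
    branches: [main]

jobs:
  get-test-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
      - uses: actions/checkout@v4
      - id: set-matrix
        # Hypothetical: derive the examples/hardware targets to test from the PR diff.
        run: echo 'matrix={"include":[{"example":"ChatQnA","hardware":"xeon"}]}' >> "$GITHUB_OUTPUT"

  example-test:
    needs: get-test-matrix
    strategy:
      matrix: ${{ fromJson(needs.get-test-matrix.outputs.matrix) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical test entry point; the real workflow runs the compose test
      # scripts whose logs appear in the Artifacts list below.
      - run: bash ${{ matrix.example }}/tests/test_compose_on_${{ matrix.hardware }}.sh
```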
Annotations: 6 errors and 2 warnings
Artifacts (produced during runtime)
| Name | Size |
|---|---|
| ChatQnA_test_compose_on_xeon.sh | 169 KB |
| ChatQnA_test_compose_pinecone_on_xeon.sh | 156 KB |
| ChatQnA_test_compose_qdrant_on_xeon.sh | 154 KB |