
[ChatQnA] Switch to vLLM as default llm backend on Xeon #3138

Annotations

2 errors

example-test (ChatQnA, xeon) / run-test (test_compose_without_rerank_on_xeon.sh)
Cancelled Jan 16, 2025 in 11m 12s
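This PR makes vLLM the default LLM serving backend for ChatQnA on Intel Xeon. Since vLLM exposes an OpenAI-compatible HTTP API, downstream services can talk to it with a standard OpenAI client. Below is a minimal sketch of such a request; the host, port, and model name are illustrative assumptions, not values taken from this PR or the test script.

```python
# Minimal sketch: querying a vLLM server through its OpenAI-compatible API.
# The base_url, port, and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed vLLM default serving port
    api_key="EMPTY",  # vLLM does not require a real API key by default
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "What is OPEA ChatQnA?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```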