[ChatQnA] Switch to vLLM as default llm backend on Xeon #3138

example-test (ChatQnA, xeon) / run-test (test_compose_on_xeon.sh): succeeded Jan 16, 2025 in 13m 13s