[ChatQnA] Switch to vLLM as default LLM backend on Xeon #3138

Annotations

1 warning

example-test (ChatQnA, xeon) / get-test-case succeeded Jan 16, 2025 in 4s
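
For context on what this CI job exercises: vLLM serves an OpenAI-compatible HTTP API, so after this switch the ChatQnA LLM microservice on Xeon would query a vLLM server rather than the previous default backend. The snippet below is a minimal smoke-test sketch against such an endpoint; the host, port, and model name are illustrative assumptions, not values taken from this PR.

```python
# Minimal sketch: query a vLLM OpenAI-compatible chat completions endpoint.
# The endpoint URL and model name below are assumptions for illustration,
# not values confirmed by this PR or its test workflow.
import requests

VLLM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical

payload = {
    "model": "Intel/neural-chat-7b-v3-3",  # hypothetical model name
    "messages": [{"role": "user", "content": "What is OPEA?"}],
    "max_tokens": 64,
}

# Send the request and print the generated reply.
resp = requests.post(VLLM_ENDPOINT, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

A check along these lines is the kind of request the `example-test (ChatQnA, xeon)` job would need to pass once vLLM is the default backend: the service must come up on Xeon and answer OpenAI-style completion requests.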