
[ChatQnA] Switch to vLLM as default llm backend on Xeon #3135

Annotations: 1 warning

example-test (ChatQnA, xeon) / get-test-case: succeeded Jan 16, 2025 in 5s
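
For context, this PR makes vLLM the default LLM serving backend for ChatQnA on Xeon. vLLM exposes an OpenAI-compatible HTTP API, so downstream services talk to it via standard chat-completion requests. The sketch below shows what such a client call can look like; the endpoint address and model name are assumptions for illustration, not values taken from this PR.

```python
# Minimal sketch: querying a vLLM OpenAI-compatible endpoint.
# The host, port, and model ID below are hypothetical, not from the PR.
import requests

VLLM_ENDPOINT = "http://localhost:8000"  # assumed vLLM service address
MODEL_ID = "Intel/neural-chat-7b-v3-3"   # assumed model; use your deployment's model

def query_llm(prompt: str) -> str:
    """Send a chat completion request to the vLLM server and return the reply text."""
    resp = requests.post(
        f"{VLLM_ENDPOINT}/v1/chat/completions",
        json={
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 128,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(query_llm("What is vLLM?"))
```

Because the API surface is OpenAI-compatible, swapping the backend from another server (such as TGI) to vLLM should leave clients like this unchanged apart from the endpoint URL.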