[ChatQnA] Switch to vLLM as default llm backend on Xeon #1114

Workflow: check-online-doc-build.yml (on: pull_request)
Triggered via pull request: January 16, 2025 11:35
Status: Success
Total duration: 4m 46s
Artifacts

Annotations

4 errors, all reported by the build job:
- build: RPC failed; curl 56 GnuTLS recv error (-9): Error decoding the received TLS packet.
- build: 4682 bytes of body are still expected
- build: early EOF
- build: fetch-pack: invalid index-pack output
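
These four annotations point at one underlying failure: the git transfer performed during the build was cut off mid-pack (the TLS stream was truncated, part of the body never arrived, and index-pack rejected the incomplete data). Below is a minimal mitigation sketch, assuming the job can wrap its fetch in a shallow, retried call; the retry count, delay, and use of GITHUB_SHA are illustrative assumptions and are not taken from check-online-doc-build.yml:

    #!/usr/bin/env bash
    # Hypothetical retry wrapper (not from this workflow): a shallow fetch moves
    # far less pack data, and retrying lets a single truncated transfer
    # ("early EOF" / "invalid index-pack output") succeed on a later attempt.
    for attempt in 1 2 3; do
      if git fetch --depth 1 origin "$GITHUB_SHA"; then
        echo "fetch succeeded on attempt ${attempt}"
        exit 0
      fi
      echo "fetch attempt ${attempt} failed; retrying in 5s..." >&2
      sleep 5
    done
    echo "all fetch attempts failed" >&2
    exit 1

In this run the job ultimately reported Success, so the errors above were transient network failures rather than a problem introduced by the pull request itself.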