[ChatQnA] Update the default LLM to llama3-8B on cpu/gpu/hpu #3268
Annotations

2 errors

example-test (ChatQnA, xeon) / run-test (test_compose_pinecone_on_xeon.sh)

Cancelled Jan 20, 2025 after 3m 28s