
Add info about usage of our LLMs in docs #566

Open
cole-at-pieces opened this issue Sep 3, 2024 · 0 comments
Labels: internal (only for Pieces team)

Comments

@cole-at-pieces (Contributor)

We need to provide updated information on which models we recommend people use within Pieces, so they have realistic expectations of our local models. Brian outlined each local model and his thoughts here: https://docs.google.com/spreadsheets/d/1wOZ03a-Z_x95eVdGCLTN1cFfgWHKISuGkkCbkqr2wzU/edit?usp=sharing

As for cloud models, he said: "Honestly, it's pretty subjective on preference between cloud LLMs. They're all closed source, so it's tough to get much info on them. This leaderboard is generally respected: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard. I use GPT-4o, Gemini 1.5 Pro, and Claude 3.5 Sonnet with Pieces (general QA, RAG, and Live Context), and I honestly can't notice much of a difference. Outside of those, the performance will likely drop off."
