
feat: show available models list in settings #83

Merged — 11 commits into paradigmxyz:main, Jun 15, 2023

Conversation

adietrichs
Contributor

Motivation

Currently, the list of supported models is hard-coded, which has several downsides.

Solution

  • add a new utils/models.ts with a getAvailableChatModels(apiKey) function
  • fetch available models every time the API key changes (including on app load)
  • update model setting if current model no longer available
  • show list of models on settings modal
  • consider model list load for isAnythingLoading
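A minimal sketch of what `utils/models.ts` might contain, assuming the OpenAI `GET /v1/models` endpoint and treating ids that start with `gpt-` as chat models (the prefix rule and helper name `extractChatModelIds` are assumptions, not necessarily what the PR does):

```typescript
// Sketch of utils/models.ts (hypothetical implementation details).
// Pure helper so the filtering step is testable without network access.
function extractChatModelIds(data: { id: string }[]): string[] {
  return data
    .map((m) => m.id)
    .filter((id) => id.startsWith("gpt-"))
    .sort();
}

async function getAvailableChatModels(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`model list request failed: ${res.status}`);
  const { data } = (await res.json()) as { data: { id: string }[] };
  return extractChatModelIds(data);
}
```

The "fetch on API key change" bullet would then hang off an effect keyed on `[apiKey]`, as the diff excerpt later in the thread suggests.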

Screenshot of an exemplary model list: [screenshot]

Checklist

  • Tested in Chrome
  • Tested in Safari

@vercel
vercel bot commented Jun 14, 2023

The latest updates on your projects:

flux — ✅ Ready — Visit Preview — Updated (UTC): Jun 15, 2023 1:18am

@adietrichs
Contributor Author

Note that I have little JS/TS and basically no React experience, so while this looks like it works, it might not follow best practices.

@adietrichs adietrichs changed the title Fetch Available Models List Show Available Models List in Settings Jun 14, 2023
@adietrichs adietrichs changed the title Show Available Models List in Settings feat: show available models list in settings Jun 14, 2023
```ts
  }
}, [apiKey]);

const isAnythingLoading = isSavingReactFlow || isSavingSettings || (availableModels === null);
```
Member

not sure we should include availableModels === null here as this is more about things being saved

Contributor Author

I really like to have the spinner continue to spin until that request finishes. Added a distinction between isAnythingSaving and isAnythingLoading to address that.
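The saving/loading split described here could look roughly like this (the state names come from the diff excerpt above; the function wrapper is purely a sketch for illustration):

```typescript
// Hypothetical sketch of the saving/loading split: "saving" covers only
// persistence, while "loading" additionally waits for the model-list fetch
// (availableModels stays null until the request resolves).
function deriveFlags(
  isSavingReactFlow: boolean,
  isSavingSettings: boolean,
  availableModels: string[] | null
) {
  const isAnythingSaving = isSavingReactFlow || isSavingSettings;
  // The spinner keeps spinning until the model list request finishes.
  const isAnythingLoading = isAnythingSaving || availableModels === null;
  return { isAnythingSaving, isAnythingLoading };
}
```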

@transmissions11
Member

this looks fantastic, nice work. only notes:

  • should we maybe filter out the fixed version models (the ones that end with -[number])? kinda clogs up the UI and i don't know anyone who needs them outside of like big enterprise
[screenshot]
  • still not picking up GPT-4-32k on my account, idk where they surface that it's accessible
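The fixed-version filter floated in the first bullet might be sketched as follows, reading `-[number]` as a trailing four-digit date suffix like `-0613` (that reading, and the helper names, are assumptions):

```typescript
// Hypothetical filter for "fixed version" models: drop ids ending in a
// dash followed by four digits (e.g. gpt-4-0613), keep stable aliases
// like gpt-4 and size variants like gpt-4-32k.
const isFixedVersion = (id: string): boolean => /-\d{4}$/.test(id);

function withoutFixedVersions(ids: string[]): string[] {
  return ids.filter((id) => !isFixedVersion(id));
}
```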

@transmissions11
Member

should we maybe filter out the fixed version models (the ones that end with -[number])? kinda clogs up the UI and i don't know anyone who needs them outside of like big enterprise

I guess this is one case why. The latest capabilities aren't on the stable gpt-4 endpoint, only on the fixed version atm

[screenshot]

@adietrichs
Contributor Author

should we maybe filter out the fixed version models (the ones that end with -[number])?

I guess this is one case why. The latest capabilities aren't on the stable gpt-4 endpoint, only on the fixed version atm

I addressed that in the best way I could come up with, by having a hard-coded list of models to exclude. That way, usually only the stable models would be listed, but in moments like now, these additional new models show up automatically. I prefer the default to be for new versions to be shown, so app updates are only required to de-clutter, not to make new functionality accessible.
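The exclusion-list approach described here (and reverted later in the thread) could look roughly like this; the list contents and names are purely illustrative:

```typescript
// Hypothetical hard-coded exclusion list: hide known fixed-version models,
// but let anything not on the list (e.g. a brand-new snapshot) through
// automatically, so an app update is only needed to de-clutter.
const EXCLUDED_MODELS = ["gpt-4-0314", "gpt-3.5-turbo-0301"]; // illustrative only

function visibleModels(available: string[]): string[] {
  return available.filter((id) => !EXCLUDED_MODELS.includes(id));
}
```

With this, a newly released snapshot such as `gpt-4-0613` would appear immediately, while the older excluded snapshots stay hidden.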

still not picking up GPT-4-32k on my account, idk where they surface that it's accessible

This is odd. I am unfortunately still on the waitlist for that one, so can't test myself (of course feel free to dm me an API key "for testing" 🙂). But I did see inconsistent behavior around that yesterday myself, when it briefly listed gpt-4-32k-0613 as available for me. So maybe this is just a temporary problem on their end. Otherwise not sure what to do here.

@transmissions11
Member

I addressed that in the best way I could come up with, by having a hard-coded list of models to exclude

hmm i appreciate this but probably too much of a hassle to maintain, can just leave as is imo

This is odd. I am unfortunately still on the waitlist for that one, so can't test myself (of course feel free to dm me an API key "for testing" 🙂). But I did see inconsistent behavior around that yesterday myself, when it briefly listed gpt-4-32k-0613 as available for me. So maybe this is just a temporary problem on their end. Otherwise not sure what to do here.

dming you a 32k key via twitter so you can take a look — if it's an issue on their end can always just edit the stored model json in localstorage in the meantime i guess?

@adietrichs
Contributor Author

hmm i appreciate this but probably too much of a hassle to maintain, can just leave as is imo

Okay, reverted back to the version without hidden models.

dming you a 32k key via twitter so you can take a look — if its an issue on their end can always just edit the stored model json in localstorage in the meantime i guess?

When testing with that key now, the gpt-4-32k model shows up correctly (see screenshot below). Can you check whether it still misbehaves for you? If so, maybe re-generating your key would help... But regarding your suggestion, right now editing the settings in localstorage would specifically not help, as the PR has logic to fall back to the default model in case the model from settings is not in the list.

[screenshot]
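The fallback logic described above could be sketched as follows (the default model name and function shape are assumptions):

```typescript
// Hypothetical sketch of the fallback: if the model stored in settings is
// no longer in the fetched list, revert to a default instead of keeping a
// stale value — which is why hand-editing localStorage would not help here.
const DEFAULT_MODEL = "gpt-3.5-turbo"; // assumed default

function resolveModel(stored: string, available: string[]): string {
  return available.includes(stored) ? stored : DEFAULT_MODEL;
}
```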

@transmissions11
Member

Okay, reverted back to the version without hidden models.

🙏

When testing with that key now, the gpt-4-32k model shows up correctly (see screenshot below). Can you check whether it still misbehaves for you?

oh weird yeah works for me as well now... 🤷

But regarding your suggestion, right now editing the settings in localstorage would specifically not help, as the PR has logic to fall back to the default model in case the model from settings is not in the list.

mm right gotcha.

@transmissions11
Member

alright this lgtm! great work @adietrichs

@transmissions11 transmissions11 merged commit 1dfc26f into paradigmxyz:main Jun 15, 2023