`renderPrompt` uses an `IChatEndpointInfo`:

```ts
export interface IChatEndpointInfo {
	/**
	 * The maximum number of tokens allowed in the model prompt.
	 */
	readonly modelMaxPromptTokens: number;
}
```
But in the VS Code API, when we get a model, it looks like this:
```ts
/**
 * Represents a language model for making chat requests.
 *
 * @see {@link lm.selectChatModels}
 */
export interface LanguageModelChat {
	// ...
	/**
	 * The maximum number of tokens that can be sent to the model in a single request.
	 */
	readonly maxInputTokens: number;
	// ...
}
```
So naturally in several places I have to make this silly object:
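For illustration, a minimal sketch of that kind of adapter object, assuming only the two interface shapes shown above (the inline `model` value here is a stand-in, not a real VS Code model):

```ts
// Minimal local copies of the two interfaces shown above.
interface IChatEndpointInfo {
	readonly modelMaxPromptTokens: number;
}

interface LanguageModelChatLike {
	readonly maxInputTokens: number;
}

// Stand-in model; in real code this would come from the VS Code API.
const model: LanguageModelChatLike = { maxInputTokens: 128_000 };

// The "silly object": repackaging the same number under a different property name.
const endpoint: IChatEndpointInfo = {
	modelMaxPromptTokens: model.maxInputTokens,
};

console.log(endpoint.modelMaxPromptTokens);
```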
@joyceerhl @roblourens I think it makes sense to tweak this to match VS Code's API (such that the interfaces are just assignable in most cases) now that it's all finalized. Any reason not to?
`renderPrompt` takes an `IChatEndpointInfo` and also a `model`, so I pass in the same information twice... and I have to massage things to make it work, even though we own both APIs.
I think this should be cleaned up to take in only a `LanguageModelChat`. This also applies to `PromptRenderer`.
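For instance, a cleaned-up entry point could accept the model directly and derive the token budget itself. This is only a hypothetical sketch of the shape (`renderPromptSketch` and its return type are illustrative names, not the library's actual API):

```ts
// Hypothetical sketch: the relevant slice of the VS Code model interface.
interface LanguageModelChatLike {
	readonly maxInputTokens: number;
}

// A cleaned-up renderPrompt that takes only the model; the token budget is
// derived from the model instead of being passed separately.
function renderPromptSketch(model: LanguageModelChatLike): { budget: number } {
	return { budget: model.maxInputTokens };
}

const result = renderPromptSketch({ maxInputTokens: 128_000 });
console.log(result.budget);
```

With this shape, callers hand over one object and never build the intermediate `{ modelMaxPromptTokens: ... }` adapter by hand.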