So ChatGPT is still limited to the 4096-token context length? #649
-
I saw the code here https://github.com/acheong08/ChatGPT/blob/main/src/revChatGPT/Official.py#L282 and it seems to me that this is exactly the same as GPT-3, in the sense that you are still limited to the past 4096 tokens of chat history, since you have to prepend the chat history to the prompt. So what exactly has changed, besides perhaps using a different model fine-tuned to use chat history as context?

I also took a look at the API requests on the web page, and it is not obvious to me that they send the entire conversation with each request. So how did you arrive at this implementation? Did you actually find evidence of OpenAI doing this to their prompts, or did you just try it out and it happened to work?
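For reference, here is a rough sketch of what I understand "prepend the history, truncate to the context window" to mean in practice. This is not the repo's actual code: `ChatSession`, `MAX_TOKENS`, and `RESERVED_FOR_REPLY` are names I made up, and counting tokens with `tiktoken` is just an assumption about how the budget might be measured.

```python
# Hypothetical sketch of the "prepend history and truncate" approach --
# not the code from Official.py. All names below are illustrative.
import tiktoken

MAX_TOKENS = 4096          # total context window (prompt + completion)
RESERVED_FOR_REPLY = 1024  # leave headroom for the model's answer

enc = tiktoken.get_encoding("gpt2")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

class ChatSession:
    def __init__(self) -> None:
        self.history: list[str] = []  # alternating user/assistant turns

    def build_prompt(self, user_input: str) -> str:
        """Prepend as much recent history as fits in the token budget."""
        budget = MAX_TOKENS - RESERVED_FOR_REPLY - count_tokens(user_input)
        kept: list[str] = []
        # Walk backwards so the most recent turns survive truncation;
        # anything older than the budget allows is silently dropped.
        for turn in reversed(self.history):
            cost = count_tokens(turn)
            if cost > budget:
                break
            kept.append(turn)
            budget -= cost
        return "\n".join(list(reversed(kept)) + [user_input])

    def record(self, user_input: str, reply: str) -> None:
        self.history.append(f"User: {user_input}")
        self.history.append(f"Assistant: {reply}")
```

If that is roughly what is happening, then the consequence is exactly the limitation I am asking about: once the conversation exceeds the budget, the oldest turns simply fall out of the prompt.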
yes