In the web interfaces for ChatGPT and (especially) Claude, the streaming feels super smooth, i.e. responses are pieced together in components even finer than tokens. It feels more "chunky" in pal. This may just be a function of responses from pal tending to be shorter than the few paragraphs typical of an unprompted ChatGPT or Claude response, and thus comprising fewer total tokens, but I do think a really satisfying interface would make it feel like you're actually watching the model "type."
This might be because, right now, stream_async is implemented using a polling mechanism under the hood. We're actively working on changing this to a more efficient, less chunky mechanism based on select() or similar.
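A minimal sketch of why polling feels chunkier than an event-driven read. This is not pal's actual implementation (pal is an R package; stream_async's internals aren't shown here); it's a generic Python illustration under the assumption that tokens arrive faster than the poll interval, so each poll wakeup drains several tokens at once, while a blocking read wakes per token:

```python
import queue
import threading
import time

def producer(q, tokens, interval=0.01):
    # Simulate a model emitting one token every `interval` seconds.
    for t in tokens:
        time.sleep(interval)
        q.put(t)
    q.put(None)  # sentinel: stream finished

def poll_consumer(q, poll_interval=0.05):
    # Polling: wake on a fixed timer and drain whatever has accumulated.
    # When tokens arrive faster than the poll interval, several land in
    # each wakeup, so the UI repaints in multi-token "chunks".
    chunks = []
    done = False
    while not done:
        time.sleep(poll_interval)
        batch = []
        while True:
            try:
                item = q.get_nowait()
            except queue.Empty:
                break
            if item is None:
                done = True
                break
            batch.append(item)
        if batch:
            chunks.append(batch)
    return chunks

def blocking_consumer(q):
    # Event-driven (the select()-style alternative): block until each
    # token arrives, so every repaint carries exactly one token.
    chunks = []
    while True:
        item = q.get()
        if item is None:
            break
        chunks.append([item])
    return chunks

tokens = [f"tok{i}" for i in range(20)]

q1 = queue.Queue()
threading.Thread(target=producer, args=(q1, tokens)).start()
polled = poll_consumer(q1)

q2 = queue.Queue()
threading.Thread(target=producer, args=(q2, tokens)).start()
streamed = blocking_consumer(q2)
```

Both consumers see the same 20 tokens; the difference is purely in how they're grouped per repaint, which is what reads as "chunky" versus "typing."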