No clear v2 option for ConversationChain with ConversationSummaryBufferMemory, as RunnableWithMessageHistory has a critical gap #24562
-
To utilize ConversationSummaryBufferMemory alongside RunnableWithMessageHistory, you can manage the memory explicitly around each call. Here is an example code snippet demonstrating this integration:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# Initialize ConversationSummaryBufferMemory (it requires an LLM to produce summaries)
memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=2000)

# Store for session histories
store = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

# Example function to simulate a conversation within a RunnableWithMessageHistory-style framework
def run_conversation(runnable, inputs, session_id):
    # Load existing memory (running summary plus recent verbatim messages)
    context = memory.load_memory_variables(inputs)
    # Run the conversation with the current context
    outputs = runnable.run(inputs, context)
    # Save the new turn back into memory
    memory.save_context(inputs, outputs)
    return outputs

# Example usage
class ExampleRunnable:
    def run(self, inputs, context):
        # Simulate processing inputs with context
        return {"response": "This is a response based on the context."}

# Create an instance of the runnable
runnable = ExampleRunnable()

# Simulate conversation inputs
inputs = {"user_input": "Hello, how are you?"}

# Run the conversation
outputs = run_conversation(runnable, inputs, session_id="1")
print(outputs)
```
In this example, the conversation context is managed efficiently: earlier parts of the conversation are progressively summarized, while a verbatim context window is maintained for recent interactions.
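For completeness, here is a minimal sketch of how the pieces above could be wired into an actual RunnableWithMessageHistory pipeline (the prompt and model name are assumptions; note that this wiring alone replays the full verbatim history and does not, by itself, apply the summary-buffer trimming, which is the gap this thread is about):

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# Prompt with a slot where the stored history is injected on each call
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # assumed model

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,            # the session store defined above
    input_messages_key="input",
    history_messages_key="history",
)

result = chain_with_history.invoke(
    {"input": "Hello, how are you?"},
    config={"configurable": {"session_id": "1"}},
)
```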
-
FWIW,
-
@chrislrobert, I was able to do something like it using RunnableWithMessageHistory with DynamoDBChatMessageHistory and a small history_size, for example 10 or 20 messages. Wouldn't something like this be enough? I also noticed that DynamoDBChatMessageHistory inherits from BaseChatMessageHistory, while ConversationSummaryBufferMemory inherits from BaseChatMemory.
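For reference, a rough sketch of that setup (the table name, model, prompt, and history_size value are assumptions, not taken from the original comment):

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # assumed model

def get_session_history(session_id: str) -> DynamoDBChatMessageHistory:
    # history_size caps how many of the most recent messages are loaded
    # back into context on each call
    return DynamoDBChatMessageHistory(
        table_name="SessionTable",  # assumed table name
        session_id=session_id,
        history_size=20,
    )

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)
```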
-
No, there are two problems with that:
1. Even a small number of messages could be more than the token budget for conversation history.
2. Only remembering the last x messages in a conversation can lead to poor UX, as earlier details are lost.

The various strategies supported within the memory module resolve combinations of both. The history module, in contrast, seems over-simplistic for production applications that must perform well in long conversations.
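To make the gap concrete: one way to approximate the summary-buffer strategy on top of a plain chat history in v0.2 is to trim to a token budget and fold the overflow into a running summary. This is only a sketch under assumptions (langchain_core's trim_messages utility, an OpenAI model, and an illustrative budget and summarization prompt), not a sanctioned replacement:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import SystemMessage, trim_messages
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def prune_with_summary(history: InMemoryChatMessageHistory,
                       max_tokens: int = 2000) -> None:
    """Keep recent messages verbatim within a token budget;
    summarize everything older into a single system message."""
    recent = trim_messages(
        history.messages,
        max_tokens=max_tokens,
        strategy="last",     # keep the most recent messages
        token_counter=llm,   # count tokens with the model's tokenizer
        start_on="human",
    )
    overflow = history.messages[: len(history.messages) - len(recent)]
    if not overflow:
        return
    summary = llm.invoke(
        [SystemMessage(content="Briefly summarize this conversation.")]
        + overflow
    )
    history.clear()
    history.add_message(
        SystemMessage(content=f"Conversation so far: {summary.content}")
    )
    for message in recent:
        history.add_message(message)
```

Called before each turn (for instance, from a wrapper around get_session_history), this keeps the verbatim window bounded while preserving earlier details in compressed form, which is roughly what ConversationSummaryBufferMemory did.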
-
Thanks, @hwchase17, for helping to make the roadmap here a bit more clear. Personally, I expected LangChain to handle the LLM interface in a first-class way, and I was perfectly happy with how I addressed persistence in my app — just as I was happy with how I handled websockets and streaming, balancing privacy and compliance needs with UX and performance concerns in how conversation histories were stored, UI concerns for streaming markdown and auto-scrolling, etc., etc. Obviously, I understand the instinct to build increasing functionality, convenience, and elegance into the core library, and I certainly appreciate the more consistent support for async, streaming, and more. In terms of the experience as a LangChain user, I guess I'd just say that ideally, when you announce a deprecation, there'd be a bit more clarity around how to migrate while maintaining as-good-or-better functionality. This lack of a clear path for ConversationSummaryBufferMemory is a case in point.

A related issue is the web of interdependencies between the LangChain and LangGraph packages and vendor packages like OpenAI and Azure. Being forced to update one package for a vulnerability or fix, and that update then forcing a large number of cascading dependency updates — some of which introduce regressions, incompatibilities, or deprecations — is nothing unique to the LangChain ecosystem. But, as you expand the footprint of core functionality, introduce additional integrations, and split functionality across, e.g., the LangChain and LangGraph packages, the challenges might grow at a nonlinear rate. As an existing LangChain user, it's creating increasing drag on both my velocity and my quality, both of which cause regular "is it time to throw in the towel?" discussions.

At the end of the day, though, I get it: it's always a challenge to balance the needs of early adopters and existing users with those of new users and growth. You guys have worked extremely hard and provided extraordinary value at an extraordinary speed. So I appreciate that, and I am looking forward to what sounds like a fuller suite of very robust and flexible v0.2 conversation-history/context approaches. Thanks again.
-
Example Code
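A representative v1 setup of the kind described below (the model and token limit are assumptions):

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

# Keep recent turns verbatim up to max_token_limit; older turns are
# progressively summarized by the LLM.
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=2000)

conversation = ConversationChain(llm=llm, memory=memory)
conversation.predict(input="Hello, how are you?")
```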
Description
I have an excellent series of chatbots and automated workflows built with LangChain, all of which rely on ConversationChain and ConversationSummaryBufferMemory, as in the example code I shared. I have looked over the v2 migration guidance in https://python.langchain.com/v0.2/docs/how_to/migrate_chains/ many times, and I have spent hours searching for examples and guidance. However, I cannot find any clear way to use ConversationSummaryBufferMemory in a Runnable (e.g., via RunnableWithMessageHistory).

What I seem to be missing from all of the v2 guidance, examples, and docs is this: how can you control how much of the context window is dedicated to verbatim conversation vs. progressively-summarized earlier conversation? All of the examples I see seem to just include 100% of the verbatim conversation, as if there are no limits on context windows (or cost or performance considerations). The flexible memory approach taken in v1 was quite necessary for any application that can entail over-long conversation histories, and I'm failing to see how this is meant to be handled in v2.
Is there a way to utilize ConversationSummaryBufferMemory within a RunnableWithMessageHistory framework?

Chris