Checked other resources
I added a very descriptive title to this question.
I searched the LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
Commit to Help
I commit to help with one of those options 👆
Example Code
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate


def build_prompt(comment: str) -> ChatPromptTemplate:  # wrapper name is illustrative
    return ChatPromptTemplate.from_messages([
        SystemMessage(
            content=[
                {
                    "text": cached_prompt,  # the large, stable context (defined elsewhere)
                    "type": "text",
                    "cache_control": {"type": "ephemeral"},
                }
            ]
        ),
        HumanMessage(content=f"Here is the comment to analyze: {comment}"),
    ])
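For completeness, the template is consumed roughly like this (a sketch; ChatAnthropic and the model name are illustrative, not my exact setup):

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # example model choice
chain = build_prompt(comment) | llm  # build_prompt wraps the snippet above
response = chain.invoke({})  # template has no remaining variables to fill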
Description
I want to use prompt caching to reduce latency.
My earlier approach used PromptTemplate with context to extract a file name from a large description of the folder and file structure (the complete directory tree with all the metadata about contents, size, creation date, etc.), and it works well.
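For reference, that earlier approach looked roughly like this (a reconstruction for illustration; the exact template wording and variable names differed):

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Here is the directory tree with metadata:\n{context}\n\n"
    "Question: {question}\n"
    "Answer with the matching file name."
)
chain = prompt | llm  # llm is the chat model instance
result = chain.invoke({"context": directory_tree, "question": user_question})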
Now, I want to cache this large context as it does not change much over time.
I implemented the above code snippet, providing the detailed prompt as the system message, but the model hallucinates badly and fails to understand the natural-language question given as the user message.
Is it possible to use prompt caching with PromptTemplate?
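Conceptually, what I am after is something like the following (a sketch; I do not know whether ChatPromptTemplate preserves the cache_control key when formatting list-style content blocks, which is essentially what I am asking):

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        [
            {
                "type": "text",
                "text": "{context}",  # large, rarely changing block I want cached
                "cache_control": {"type": "ephemeral"},
            }
        ],
    ),
    ("human", "{question}"),
])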
Thank you
System Info
langchain 0.3.0
langchain-anthropic 0.2.0
langchain-community 0.3.0
langchain-core 0.3.0
langchain-experimental 0.3.0
langchain-openai 0.2.0
langchain-text-splitters 0.3.0