Add Support for Memcached as an LLM Model Cache #27035
prokopchukdim started this conversation in Ideas
Replies: 1 comment
-
If we were to add support for more than one client, would it be best to have one class per client? Or, since these would all be Memcached implementations, could we have one unified class that takes the client instance as part of its constructor? The second option would provide a cleaner interface for end users, but may be somewhat over-engineering the implementation.
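As an illustration of the second option, here is a minimal sketch (names are hypothetical, not part of the proposal) of a unified class that accepts an already-constructed client object and duck-types against the handful of methods both libraries expose:

```python
from typing import Any


class MemcachedCache:
    """Unified wrapper around any Memcached client (hypothetical sketch)."""

    # Methods shared by pymemcache and python-memcached clients.
    _REQUIRED_METHODS = ("get", "set", "flush_all")

    def __init__(self, client: Any) -> None:
        # Duck-type instead of importing either library directly, so one
        # class can wrap whichever client the caller has installed.
        missing = [m for m in self._REQUIRED_METHODS if not hasattr(client, m)]
        if missing:
            raise ValueError(f"Memcached client is missing methods: {missing}")
        self.client = client
```

The per-client alternative would be separate classes, each importing its own library, which keeps type hints precise at the cost of a wider public surface.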
-
Feature request
We would like to add support for Memcached as a usable LLM model cache. There are two main pure-Python Memcached client libraries: pymemcache and python-memcached.
We would primarily like to add support for pymemcache, since it is the more actively maintained of the two, but it may be possible to support both clients under a single new cache class, since both are in use.
Motivation
Many of the natively supported model caches are full-fledged databases. While Redis is supported as an option for distributed in-memory storage, many teams and companies rely on Memcached as a distributed in-memory cache. By adding Memcached support, we hope to make the model caching feature useful to more teams using LangChain.
Example Usage
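A minimal sketch of the intended usage, assuming the MemcachedCache class proposed below (it does not exist in langchain_community yet) and a memcached server running locally on the default port:

```python
from langchain.globals import set_llm_cache
from pymemcache.client.base import Client

# MemcachedCache is the class proposed below; this import is illustrative only.
from langchain_community.cache import MemcachedCache

# Cache LLM calls in a local memcached instance (default port 11211).
set_llm_cache(MemcachedCache(Client(("localhost", 11211))))

# From here on, repeated identical llm.invoke(...) calls on any cached
# LLM are served from memcached instead of hitting the model again.
```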
Proposal (If applicable)
We intend to add a new `MemcachedCache` implementation in `libs/community/langchain_community/cache.py` to support the `pymemcache` client. If there is interest in also supporting the `python-memcached` client, or others, we can explore creating a unified implementation class, since all clients should generally adhere to the memcached text protocol.

We intend to submit a pull request some time in October, and no later than mid-November.
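As a rough sketch only, the new class could subclass BaseCache from langchain_core.caches and implement its lookup/update/clear methods; the key hashing and serialization choices below are assumptions for illustration, not a settled design:

```python
import hashlib
from typing import Any, Optional

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache
from langchain_core.load import dumps, loads


class MemcachedCache(BaseCache):
    """LLM cache backed by a pymemcache client (proposed, not yet merged)."""

    def __init__(self, client: Any) -> None:
        self.client = client

    @staticmethod
    def _key(prompt: str, llm_string: str) -> str:
        # Memcached keys are capped at 250 bytes and may not contain
        # whitespace, so hash the (prompt, llm_string) pair into a safe key.
        return hashlib.sha256(f"{prompt}:{llm_string}".encode()).hexdigest()

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        value = self.client.get(self._key(prompt, llm_string))
        if value is None:
            return None
        if isinstance(value, bytes):
            # pymemcache returns raw bytes unless a serializer is configured.
            value = value.decode("utf-8")
        return loads(value)

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        # Serialize the generations with LangChain's JSON serializer.
        serialized = dumps(list(return_val)).encode("utf-8")
        self.client.set(self._key(prompt, llm_string), serialized)

    def clear(self, **kwargs: Any) -> None:
        # flush_all wipes the whole memcached instance, not just LLM entries.
        self.client.flush_all()
```

One point to settle in review: clear() as sketched calls flush_all(), which flushes the entire memcached instance rather than only LangChain-owned keys.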