
Add missing pylibmc client methods; adds graceful handling of memcached errors #27

Open
wants to merge 3 commits into master

Conversation

matteius

@matteius matteius commented Nov 1, 2017

The most straightforward part of this, which I think you'll agree with, is that some of the base pylibmc client methods weren't being wrapped with @invalidate_cache_after_error.
This also changes the error handling to log errors as warnings with exception info (useful with a logging config such as raven/sentry) and, instead of raising the error, to treat it as a cache miss.
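As a rough illustration of the behavior described above (a sketch only; the decorator name comes from this library, but the body below is illustrative rather than the actual diff):

```python
import logging
from functools import wraps

import pylibmc

logger = logging.getLogger(__name__)


def invalidate_cache_after_error(f):
    """Sketch: catch memcached errors, log them as warnings, and treat them as misses."""
    @wraps(f)
    def wrapper(self, *args, **kwargs):
        try:
            return f(self, *args, **kwargs)
        except pylibmc.Error:
            # exc_info=True keeps the traceback for handlers such as raven/sentry
            logger.warning('memcached call %s failed', f.__name__, exc_info=True)
            return None  # look like a cache miss instead of raising
    return wrapper
```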

Log the error as a warning but return False.  Also add the other
missing memcached client endpoints.
@kchristensen

Any chance of this getting merged into master sometime soon?

@gusdan
Owner

gusdan commented Sep 24, 2018

I definitely agree with wrapping the missed methods in that decorator, but I'm not sure about returning None in case of an error. For instance, if I try to set a value in the cache and the server is down (a network problem, etc.), I expect to get a real error with a traceback instead of a silent None.

@matteius
Author

@gusdan That is why it logs a warning, and I could easily be convinced to change that to an error. However, let me explain: for our use case of caching mostly static replies from web services that do not change, if a cache miss occurs we want our clients to make a call to the service to get that information instead of failing hard because of a caching error. Our caching layer exists to help with performance, but it should not cause requests to fail outright, and returning None is the way to treat an error as a cache miss. Since this library is already a wrapper of pylibmc, that leaves me with two options: keep a fork of this library for our projects to use, or write my own extra wrapper layer above this one to catch those errors and treat them as cache misses (yuck!).
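In application code, the fall-through pattern looks roughly like this (call_remote_service is a hypothetical helper, just to show the miss handling):

```python
from django.core.cache import cache


def get_service_reply(key):
    """Best-effort cache in front of a mostly-static web service."""
    reply = cache.get(key)  # with graceful handling, a memcached error also returns None here
    if reply is None:
        reply = call_remote_service(key)  # hypothetical call to the backing service
        cache.set(key, reply, timeout=3600)  # best effort; a set failure should not fail the request
    return reply
```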

By logging a warning or an error, we can hook into the Python logging configuration via the Django settings to specify a Sentry server and other ways of alerting us that the caching layer is having problems, without explicitly failing the request. That is our use case. I encountered this problem heavily before I got deep into memcached configuration and realized that increasing the AWS host size isn't going to increase the compiled key space for memcached; we were overrunning the key space and running into all kinds of caching errors because of it (whereas having more, smaller nodes in the cluster actually increases the key space in a properly configured cluster).
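For example, something like this in the Django settings would route those warnings to Sentry (a sketch assuming the raven client is installed; the 'django_elasticache' logger name is an assumption):

```python
# settings.py -- sketch assuming raven (the legacy Sentry client) is installed
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'sentry': {
            'level': 'WARNING',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        },
    },
    'loggers': {
        'django_elasticache': {  # assumed name of the logger emitting the cache warnings
            'handlers': ['sentry'],
            'level': 'WARNING',
            'propagate': True,
        },
    },
}
```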

I suppose it comes down to your use case: you may prefer that cache errors bubble up as real exceptions, but if your applications already handle cache misses gracefully, then alerting and moving on is probably the preferred strategy.

@gusdan
Owner

gusdan commented Sep 30, 2018

By default the Django memcache backend does raise errors. If memcached stops, we get a 500 error, which is bad. But if we stop raising errors and, for instance, don't connect our logger to Sentry, we won't see any errors and will think "everything is OK, the system is working as usual", when in fact memcached is not working and our system is running under high load, using more resources (CPU/machines) than needed.

So I still think it's better to explicitly raise errors, as Django does by default. Of course, I agree that in your case, where the cache is not a critical part of the system and there is a Sentry handler on the root logger, we can skip raising errors. But that looks like a specific design for a specific architecture, and I don't think it should be the default behavior.

@matteius
Author

matteius commented Oct 3, 2018

Well, there is some gray area in what you are saying about the Django default. I agree that the code checked into django.core.cache.backends.memcached will raise any error that the client library raises. But as the Django documentation suggests, "After installing Memcached itself, you'll need to install a Memcached binding. There are several Python Memcached bindings available; the two most common are python-memcached and pylibmc." In the case of python-memcached, which is the library we converted away from in order to move to Auto Discovery, we find that:

"It turns out that socket operations that fail in python-memcached don’t return an error or raise any exceptions. Calling .get() on a dead server returns None on failure, the same thing that is returned when there’s no value."

So immediately upon switching to a wrapper of pylibmc, we found that additional errors were being raised where before they were not. You can see that python-memcached, a common default choice for the Django memcached backend, is doing some exception handling here: https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L1045

What about having a subclass of ElastiCache that doesn't raise the errors? Then a user of this library could pick the desired behavior for their application by selecting which cache backend to use.
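Roughly along these lines (a sketch only; GracefulElastiCache and its module path are hypothetical, and real code would probably wrap all the client methods rather than just get/set):

```python
import logging

import pylibmc
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django_elasticache.memcached import ElastiCache

logger = logging.getLogger(__name__)


class GracefulElastiCache(ElastiCache):
    """Hypothetical backend that logs memcached errors and treats them as misses."""

    def get(self, key, default=None, version=None):
        try:
            return super(GracefulElastiCache, self).get(key, default, version)
        except pylibmc.Error:
            logger.warning('memcached get failed', exc_info=True)
            return default

    def set(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
        try:
            return super(GracefulElastiCache, self).set(key, value, timeout, version)
        except pylibmc.Error:
            logger.warning('memcached set failed', exc_info=True)
            return False
```

An application could then opt in by pointing its CACHES setting at whichever backend it wants, e.g. 'BACKEND': 'myapp.cache.GracefulElastiCache' (path hypothetical), while the existing backend keeps the raise-by-default behavior.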
