[feature] Per item TTL in TTLCache #157
Comments
Sure, I played with the idea myself a few years ago... ended up with a custom
Not something you'd be keen to merge into master, then? My use case revolves around rate limiting, specifically the combination of these two issues:

Part of the point of separating the rate limiter from the storage backend in the Rush package is to remove the need to duplicate the functionality into an object per rate limiter. You could imagine this even more so with external stores like Redis, where you might end up running multiple connections. Each of these rate limiters could potentially want different TTLs. I happened across the TTLCache implementation in the package, which, if I'm not mistaken, is very nearly a full TLRU caching algorithm; it just lacks the per-key TTL. I'm very keen to use it in Rush to offer a more robust alternative to the current 'DictionaryStore'.
Well, not quite ;-)
Not directly related to the question, but... such a TLRUCache could be integrated into the current TTLCache implementation with no need for an additional class, while keeping TTLCache backwards compatible. Just check at object creation whether "self.__ttl" is callable, and take it into account when setting each inserted item's expiration time.
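A minimal sketch of that callable-TTL idea, assuming nothing about cachetools internals; PerItemTTLDict and its attributes are hypothetical names used only for illustration, not the library's code:

```python
import time
from collections import OrderedDict


class PerItemTTLDict:
    """Toy mapping with per-item expiry; ttl may be a number or a callable."""

    def __init__(self, ttl, timer=time.monotonic):
        # Decide once, at construction time, whether ttl is fixed or per-item,
        # mirroring the "check if the ttl is callable" suggestion above.
        self._ttl_for = ttl if callable(ttl) else (lambda key, value: ttl)
        self._timer = timer
        self._data = OrderedDict()  # key -> (value, expires_at)

    def __setitem__(self, key, value):
        # Each item gets its own expiration time at insertion.
        self._data[key] = (value, self._timer() + self._ttl_for(key, value))

    def __getitem__(self, key):
        value, expires_at = self._data[key]
        if self._timer() >= expires_at:
            # Expired items behave as if they were never stored.
            del self._data[key]
            raise KeyError(key)
        return value
```

With a callable, a per-key policy is just a function, e.g. PerItemTTLDict(ttl=lambda key, value: 60 if key.startswith("hot:") else 5).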
Another attempt at supporting per-item TTL values (tkem#157). Split TTLCache into 3 classes:
* TTLCacheBase: doesn't have any fixed TTL values and doesn't implement the full MutableMapping ABC. Instead of __setitem__() it has add(), which requires a TTL value for each item. This can be used as a base class to implement any TTL logic.
* TTLCache: inherits from TTLCacheBase and implements the same API as before. For advanced uses, add() can be used directly.
* FlexTTLCache: inherits from TTLCacheBase and accepts a function to generate TTL values per element.
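A rough skeleton of the split described in that proposal, with simplified storage and eviction; the class and method names mirror the proposal, but the bodies here are an assumed illustration, not the actual patch:

```python
import time


class TTLCacheBase:
    """No fixed TTL; every item must be added with an explicit TTL."""

    def __init__(self, maxsize, timer=time.monotonic):
        self._maxsize = maxsize
        self._timer = timer
        self._items = {}  # key -> (value, expires_at)

    def add(self, key, value, ttl):
        # Evict one entry if a brand-new key would exceed maxsize.
        if key not in self._items and len(self._items) >= self._maxsize:
            self._evict_one()
        self._items[key] = (value, self._timer() + ttl)

    def __getitem__(self, key):
        value, expires_at = self._items[key]
        if self._timer() >= expires_at:
            del self._items[key]
            raise KeyError(key)
        return value

    def _evict_one(self):
        # Simplified policy: drop the entry closest to expiring.
        soonest = min(self._items, key=lambda k: self._items[k][1])
        del self._items[soonest]


class TTLCache(TTLCacheBase):
    """Backwards-compatible API: one fixed TTL for every item."""

    def __init__(self, maxsize, ttl, timer=time.monotonic):
        super().__init__(maxsize, timer)
        self._ttl = ttl

    def __setitem__(self, key, value):
        self.add(key, value, self._ttl)


class FlexTTLCache(TTLCacheBase):
    """Per-item TTL derived from a user-supplied function."""

    def __init__(self, maxsize, ttl_func, timer=time.monotonic):
        super().__init__(maxsize, timer)
        self._ttl_func = ttl_func  # callable(key, value) -> seconds

    def __setitem__(self, key, value):
        self.add(key, value, self._ttl_func(key, value))
```

The point of the split is that __setitem__() keeps its usual two-argument signature, while the TTL decision moves into add() or into a user-supplied function.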
Is it possible to get the remaining TTL for a single item in the cache? Something like cache.getRemainingTTL(key).
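One possible workaround, sketched here, is a thin subclass that records each key's expiration time on insert; TTLCacheWithRemaining and remaining_ttl() are hypothetical names for illustration, not cachetools API, and the sketch assumes the single cache-wide TTL that TTLCache already uses:

```python
import time
from cachetools import TTLCache


class TTLCacheWithRemaining(TTLCache):
    def __init__(self, maxsize, ttl, timer=time.monotonic):
        super().__init__(maxsize, ttl, timer=timer)
        self._timer_fn = timer
        self._item_ttl = ttl
        self._expires = {}  # key -> absolute expiration time

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._expires[key] = self._timer_fn() + self._item_ttl

    def remaining_ttl(self, key):
        """Seconds until `key` expires, or None if it is already gone."""
        if key not in self:  # TTLCache treats expired keys as absent
            # Stale bookkeeping for expired or evicted keys is dropped lazily.
            self._expires.pop(key, None)
            return None
        return max(0.0, self._expires[key] - self._timer_fn())
```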
Would it be possible to implement a per-item TTL? This could still default to the cache's global TTL if not set.
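What that request might look like in practice, sketched without touching cachetools at all; DefaultingTTLCache, the set() method with a ttl keyword, and the simplified eviction are all assumptions made for illustration:

```python
import time


class DefaultingTTLCache:
    """Per-item TTL that falls back to a cache-wide default when not given."""

    def __init__(self, maxsize, default_ttl, timer=time.monotonic):
        self._maxsize = maxsize
        self._default_ttl = default_ttl
        self._timer = timer
        self._items = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl=None):
        ttl = self._default_ttl if ttl is None else ttl
        if key not in self._items and len(self._items) >= self._maxsize:
            # Simplified eviction: drop the entry closest to expiring.
            del self._items[min(self._items, key=lambda k: self._items[k][1])]
        self._items[key] = (value, self._timer() + ttl)

    def get(self, key, default=None):
        try:
            value, expires_at = self._items[key]
        except KeyError:
            return default
        if self._timer() >= expires_at:
            del self._items[key]
            return default
        return value
```

Usage would then read, e.g., cache.set("session", data) for the global default and cache.set("otp", code, ttl=30) for a per-item override.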