HTTP Cache Middleware #508
base: master
Conversation
Extracted from original work done for Redis.
Pass TTL values as-is to `Store.put/3` calls. This allows implementations to be smart, e.g. a Redis-based store can use the datastore's native TTL.
Simple TTL logic, inspired by ConCache but far less robust. It's here just so items don't linger forever in the ETS table.
The implementation is a bit hacky, passing `{store, opts}` tuples around, but it's enough to work and to see what the API looks like. Not entirely happy with the `Store` behaviour (it's getting too wide), but there isn't much that can be done. One option is to work via some sort of Registry, but that may overcomplicate the middleware and impose avoidable overhead.
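To make the description above concrete, the `Store` contract might look roughly like this. This is a hypothetical sketch: the module name and callback shapes are assumed from the discussion, not taken from the PR; only `put/3` receiving the TTL as-is is stated in the text.

```elixir
defmodule Tesla.Middleware.Cache.Store do
  # Hypothetical sketch of the Store behaviour discussed above.
  # The TTL is passed through put/3 untouched, so a Redis-backed
  # implementation could map it onto the datastore's native expiry.
  @callback get(key :: term, opts :: keyword) :: {:ok, term} | :not_found
  @callback put(key :: term, value :: term, ttl :: non_neg_integer | nil) :: :ok
  @callback delete(key :: term) :: :ok
end
```

Keeping the TTL opaque at this layer is what lets each adapter decide between native expiry (Redis) and a manual sweep (ETS).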
To answer the last comment from the previous PR:

> I think it would be best to have a separate cache for each client as a default.
One option would be to not ship any storage adapter at all. Even the simplest one (ETS) gets tricky pretty quickly once you start to think about concurrent access, expiration, etc. (that's why we have con_cache). Maybe instead we should ship only the HTTP Cache logic and provide guidance on how to use X as a cache storage (con_cache, redix, ecto, ...).
FYI I'm working on an HTTP caching library in Erlang with some niceties such as handling of revalidation, auto-compression, and handling the whole of RFC 7234 (e.g. deleting cache when …). I'm also working on an Erlang LRU backend for this, which handles invalidation by URI or by alternate key, clustering, etc. In addition to an HTTP caching Plug, I'm planning to write a Tesla middleware for this :) Not writing here to brag about it, but to avoid duplicate work, depending on what your goals are with this PR.
@ttl_interval :timer.seconds(5)

def start_link(opts) do
  opts = Keyword.put_new(opts, :name, __MODULE__)
Using `__MODULE__` would make the cache shared by all clients by default, right?
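One way to get per-client caches instead of the shared `__MODULE__` default is for each client to supervise its own named store process. A minimal sketch (hypothetical: `Agent` stands in for the real ETS-backed store, and the `MyApp.*` names are made up):

```elixir
# Each client starts its own named store instead of sharing __MODULE__.
# Agent is a stand-in for the real store process; names are hypothetical.
{:ok, _} = Agent.start_link(fn -> %{} end, name: MyApp.GitHubClient.Cache)
{:ok, _} = Agent.start_link(fn -> %{} end, name: MyApp.BillingClient.Cache)

Agent.update(MyApp.GitHubClient.Cache, &Map.put(&1, "GET /user", :cached_response))

# The second client's cache is unaffected:
Agent.get(MyApp.BillingClient.Cache, &Map.get(&1, "GET /user"))
# => nil
```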
  {:reply, :ok, state}
end

def handle_call({:delete, key}, _from, state) do
Suggested change:

- def handle_call({:delete, key}, _from, state) do
+ @impl GenServer
+ def handle_call({:delete, key}, _from, state) do
end

defmodule Store.ETS do
  use GenServer
This is the tricky part - tesla is generally stateless.
How do you see someone managing the ETS storage lifecycle?
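To make the lifecycle question concrete, here is a minimal, self-contained sketch of an ETS table owned by a GenServer that the host application would have to start and supervise itself. The module name and API are hypothetical illustrations, not the PR's actual code:

```elixir
defmodule DemoStore.ETS do
  # Hypothetical sketch: an ETS-backed store owned by a GenServer.
  # Someone using a stateless library like Tesla must place this in
  # their own supervision tree, which is the lifecycle burden at issue.
  use GenServer

  def start_link(opts) do
    opts = Keyword.put_new(opts, :name, __MODULE__)
    GenServer.start_link(__MODULE__, opts, name: opts[:name])
  end

  def put(server, key, value, ttl \\ nil), do: GenServer.call(server, {:put, key, value, ttl})
  def get(server, key), do: GenServer.call(server, {:get, key})
  def delete(server, key), do: GenServer.call(server, {:delete, key})

  @impl GenServer
  def init(opts) do
    # The table dies with the GenServer, so supervision placement matters.
    table = :ets.new(opts[:name], [:set, :protected])
    {:ok, %{table: table}}
  end

  @impl GenServer
  def handle_call({:put, key, value, ttl}, _from, state) do
    # TTL is stored as an absolute deadline; nil means "never expires".
    expires_at = ttl && System.monotonic_time(:millisecond) + ttl
    :ets.insert(state.table, {key, value, expires_at})
    {:reply, :ok, state}
  end

  def handle_call({:get, key}, _from, state) do
    reply =
      case :ets.lookup(state.table, key) do
        [{^key, value, nil}] -> {:ok, value}
        [{^key, value, exp}] ->
          if System.monotonic_time(:millisecond) < exp, do: {:ok, value}, else: :not_found
        [] -> :not_found
      end

    {:reply, reply, state}
  end

  def handle_call({:delete, key}, _from, state) do
    :ets.delete(state.table, key)
    {:reply, :ok, state}
  end
end
```

A user would then add something like `{DemoStore.ETS, name: MyApp.HTTPCache}` to their application's supervision tree, so the middleware itself stays stateless.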
I'm almost done with …
FYI I've released …
Compare 2bca420 to fe7207c
Continuation of #264