New Query Cache Feature (part deux) #763
Conversation
Hashing this out while trying to add it into one of my apps: it looks like one thing Rails does is use a separate in-memory-only store for SQL, namespaced by a request id. Then, if you need something that's held over between requests, you'd manually implement "low-level" caching. The tricky part is that you could set your cache storage to be Memcached, and in Rails all low-level caching goes to Memcached, but SQL calls still go to in-memory. To mimic that flow, we first need luckyframework/lucky#1610 and luckyframework/lucky_cache#1 so we can namespace the queries by the request id. That way one request doesn't wipe out another's cache. The next thing we need is for this PR to integrate the cache configuration into Avram directly so we can set it to the memory store (or toggle it off entirely with NullStore). Then when you add LuckyCache to your app, you could configure it with, say, Redis, and it wouldn't affect Avram's MemoryStore... That's tricky, though, because the Habitat config is a singleton. 🤔 For now, I'll get those other PRs out of the way so we at least have this option. If anyone has a better idea, we don't have to mimic Rails... we just need something that works.
… This also circumvents the singleton setup from Habitat that would otherwise cause your query cache to be stored in Redis or the like.
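The request-id namespacing described above could look something like this. This is a Ruby sketch with entirely hypothetical names (`RequestScopedStore`, `clear_request`); the real implementation lives in LuckyCache and is written in Crystal, but the syntax is nearly identical:

```ruby
# Hypothetical sketch: an in-memory store that namespaces every key by
# request id, so one request's cached queries can't collide with another's.
class RequestScopedStore
  def initialize
    @data = {}
  end

  # Return the cached value for this request, or run the block and cache it.
  def fetch(request_id, key)
    namespaced = "#{request_id}:#{key}"
    @data.key?(namespaced) ? @data[namespaced] : (@data[namespaced] = yield)
  end

  # Escape hatch: drop everything cached for a single request.
  def clear_request(request_id)
    @data.delete_if { |k, _| k.start_with?("#{request_id}:") }
  end
end
```

With this shape, the same query in the same request is served from memory, while a different request id misses and hits the database again.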
@paulcsmith brought up a good point. Since the HTTP requests run on their own fiber, we can just hook into that the same way we do in Breeze. This means we don't have to worry about manually trying to clear the cache between requests. We will still need an escape hatch to clear the cache manually. Now we just have 2 main questions to answer:
```crystal
UserQuery.new.name("Amy").first
UserQuery.new.name("Amy").first
user = UserQuery.new.name("Amy").first
```
What happens if you update some other column on users and then fetch in the same way?
You'd get the cached version, not the updated version. Though inserts and updates aren't cached here, which means if you did update a record, you'd have access to that updated record through the save operation. As of right now, the other option would be to flush the whole cache. We could do a `cache.delete(the_key)` type deal, but that goes to my question about the key: you'd have to know how it was derived... We can't just put a `cache_key` method on the models, because you're not necessarily caching a single model. Maybe `cache_key` becomes a public method?
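To illustrate the problem, here's a Ruby sketch (a plain Hash stands in for the real cache store): the key is derived from the SQL statement and its bind args, the way this PR does it, so a targeted delete has to re-derive the exact same key.

```ruby
# Derive the key from the statement plus its bind args, mirroring the
# cache_key in this PR (Ruby's Array#join flattens nested arrays).
def cache_key(statement, args)
  [statement, args].join(":")
end

store = {}
key = cache_key("SELECT * FROM users WHERE name = $1", ["Amy"])

store[key] ||= "cached result"  # populate on first fetch
store.delete(key)               # targeted invalidation instead of a full flush
```

The catch is exactly what the comment above raises: unless `cache_key` is public (or otherwise documented), callers have no reliable way to reconstruct `key` for the delete.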
```crystal
def results : Array(T)
  cache_store.fetch(cache_key, as: Array(T)) do
    exec_query.tap do |records|
      preloads.each(&.call(records))
    end
  end
end
```
This doesn't log anything if it returns the cached value. We can't add logging to the block because that gets cached, so would it make sense to have `fetch` do its own logging?
```crystal
def cache_key : String
  [query.statement, query.args].join(':')
end
```
I guess if you wanted a different `cache_key`, you could just override this method in your query class 🤔
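That override would look something like this, sketched in Ruby (class names and the fixed statement/args are hypothetical; the Crystal version would be analogous):

```ruby
# BaseQuery mimics the default key derivation: statement + args joined by ":".
class BaseQuery
  def statement
    "SELECT * FROM users WHERE name = $1"
  end

  def args
    ["Amy"]
  end

  def cache_key
    [statement, args].join(":")
  end
end

class CustomKeyQuery < BaseQuery
  # Override to pin a readable, hand-chosen key instead of the derived one.
  def cache_key
    "users:amy:v1"
  end
end
```

A hand-chosen key also sidesteps the "how was the key derived?" problem for targeted invalidation, at the cost of having to bump it yourself when the query changes.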
```crystal
cache_store.fetch(cache_key, as: Bool) do
  queryable = clone
  new_query = queryable.query.limit(1).select("1 AS one")
  # ...
end
```
No clue how to test the cache on this method, since it would require re-opening a method within this block. But which one? Same for `select_count`, which uses different methods...
I have this in a live app and it's working, so I'm going to just merge it in as-is. Since it's opt-in, we can give people a chance to mess around with it and leave feedback for the following release.
Fixes #63
Supersedes #753
This is a new take on the query cache over the previous attempt. In this version, we use the new LuckyCache shard to handle the cache logic.
This initial version adds a cache layer only to "SELECT"-type queries on the models. I would like to get some feedback on this approach before going further.
How does it look?
The query cache will only be stored in memory, and only on a per-request (HTTP) basis. This means that on subsequent requests, you'll have a fresh cache, and other requests can't tamper with your cache.
Right now I'm rolling with the assumption that this is off by default, but to turn it on, you'll just set the config in your app:
If you wanted to cache a query for a much longer duration, you'd drop down to a lower-level cache using `LuckyCache` directly.

What's missing?
`ModelQuery.new` queries are being cached. None of the aggregates (`select_count`, `select_sum`, etc...) are.

Final thoughts
This is not intended to be a comprehensive caching solution. Maybe one day, but for now, it's as basic as we can make it and still call it "cache". If you need something heavy-duty, chances are you'll reach for something more like CloudFront, Varnish, etc... As you review this, don't think of it in terms of "I wish it could do this and that, and XYZ...". For those kinds of thoughts, feel free to open an issue on the LuckyCache shard directly. Think of this more in terms of how you could add it into your app for additional performance, and what sort of "gotchas" might show up if you did. As long as we have some escape hatches to bail you out, we should be good.