Cache Mongo-DB calls (in memory only) #998
base: master
Conversation
Duplicate of #926 but without the Redis cache.
Thanks for your contribution, Jason. Regarding the config parameters, the dontCache option unfortunately does not provide backwards compatibility. A good example could be something like the JEXL configuration approach:
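For illustration, a sketch of what such an approach could look like (the parameter names here are hypothetical, mirroring how the JEXL expression language is enabled through a global default plus a per-group override; they are not the parameters implemented in this PR):

```js
// Hypothetical names only: a global default in config.js plus a per-group
// override at provisioning time, so existing deployments keep their current
// behaviour unless they explicitly opt in.
const config = {
    defaultCache: false // global default: no caching, as today
};

const groupPayload = {
    apikey: '1234',
    resource: '/iot/d',
    cache: true // this group opts in, overriding the global default
};
```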
With these two parameters, an existing deployment that updates the IoT Agent can use the cache if needed, offering backwards compatibility with config groups already provisioned. The expected behaviour is described below. Regarding the cache distribution, it should allow segmentation (as mentioned here: #926 (comment)). As far as I saw in the code, the whole cache is shared across all the tenants. In multi-tenant environments there is a risk that one tenant could overuse all the resources. With this in mind, we can differentiate between two types of cache:
The architecture discussed above is illustrated in the following diagram. To summarise, we would need the following env vars, as well as config.js parameters:
The device group provisioning JSON should also include the following parameters. These parameters should override the env var or config.js configuration.
Example of provisioning JSON:
{
"groupCacheMode":"inMemory",
"groupCacheSize":100,
"groupCacheTTL":100000,
"deviceCacheDefaultMode":"inMemory",
"deviceCacheDefaultSize":100,
"deviceCacheDefaultTTL":100000,
"deviceCacheMaxSize":1000
}

Equivalence with env vars:
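The exact env var names are not listed in this excerpt; as a hypothetical sketch only, following the usual IOTA_* naming convention, the config.js side could look like this:

```js
// Hypothetical equivalence sketch -- the real env var names may differ.
const config = {};
config.groupCache = {
    mode: process.env.IOTA_GROUP_CACHE_MODE || 'inMemory',
    size: Number(process.env.IOTA_GROUP_CACHE_SIZE) || 100,
    ttl: Number(process.env.IOTA_GROUP_CACHE_TTL) || 100000
};
config.deviceCache = {
    defaultMode: process.env.IOTA_DEVICE_CACHE_DEFAULT_MODE || 'inMemory',
    defaultSize: Number(process.env.IOTA_DEVICE_CACHE_DEFAULT_SIZE) || 100,
    defaultTTL: Number(process.env.IOTA_DEVICE_CACHE_DEFAULT_TTL) || 100000,
    maxSize: Number(process.env.IOTA_DEVICE_CACHE_MAX_SIZE) || 1000
};
```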
Another point to consider is the cache replacement mode. Depending on the scenario, it may be more interesting to have an LRU, MRU or random replacement policy. Which mode is used right now? It would be interesting to be able to configure it.
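As a point of reference, a minimal LRU sketch in JavaScript (illustrative only, not the policy implemented in this PR): a Map preserves insertion order, so moving an entry to the end on every read keeps the least recently used entry at the front, ready for eviction.

```js
// Minimal LRU cache sketch: re-insert on every read so the oldest entry in
// insertion order is always the least recently used one.
class LruCache {
    constructor(maxSize) {
        this.maxSize = maxSize;
        this.map = new Map();
    }
    get(key) {
        if (!this.map.has(key)) {
            return undefined;
        }
        const value = this.map.get(key);
        this.map.delete(key);
        this.map.set(key, value); // move to "most recently used" position
        return value;
    }
    set(key, value) {
        if (this.map.has(key)) {
            this.map.delete(key);
        } else if (this.map.size >= this.maxSize) {
            // evict the least recently used entry (first key in insertion order)
            this.map.delete(this.map.keys().next().value);
        }
        this.map.set(key, value);
    }
}
```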
Thanks for this interesting PR! I really hope this can in fact improve read queries from the agents!
Thanks for your time, really appreciate it!
First, I want to clarify that my previous comment, with the description and diagrams, shows the desired behaviour expected from a cache system.
I know it is a bit confusing, but the reason for the "group cache" name is that it is a cache that stores groups (and it is related to each tenant). The device cache is named that way because devices are stored in that cache (and it is also linked to a group). Depending on what data is being stored, or what the cache belongs to, one naming or another may make more sense.
They are different types of caches. The group cache only stores config groups (also named provision groups) and does not store devices. You just have two different limits (…
Thanks for the explanation, I think I get your naming now. So the device and group caches are independent of each other. Allow me one last question:
I believe there are checks in the background so that the sum of the group and device caches still doesn't exceed the memory of MongoDB?
I think the architecture you described won't work with a pure in-memory cache (which is what this PR now is) but is something to be achieved in PR #926 - all that this PR does for now is substitute an in-memory record of the last n hits - it is very, very simple, but very fast to access. I would assume the per-tenant config would be on Redis as a series of RedisCaches. The same goes for replacement mode - not this PR but the other one.
Getting this right is something for the first PR, laying the groundwork so to speak. How is … Can you clarify what changes to the current behaviour you are looking for here? I assume something needs fixing, I'm just not sure what.
This is currently
The current architecture assumes each cache can be enabled separately, e.g.:
The local in-memory cache is the fastest and smallest. If …
If we ignore Redis for now, does the quick-and-dirty in-memory cache do enough or not?
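A minimal sketch of the in-memory read path being discussed (the helper and model names such as groupCache and Group are hypothetical, not this PR's API; a Redis layer would only come with #926):

```js
// Illustrative only: check the local in-memory cache first, fall back to
// MongoDB on a miss, and refresh the cache with the result.
// Assumes groupCache is an in-memory cache (e.g. the LRU sketch above) and
// Group is a mongoose-style model -- both hypothetical names.
async function getGroupCached(service, subservice, apikey) {
    const key = `${service}:${subservice}:${apikey}`;
    const hit = groupCache.get(key); // local in-memory lookup (fast, per instance)
    if (hit !== undefined) {
        return hit;
    }
    const group = await Group.findOne({ service, subservice, apikey }); // MongoDB fallback
    if (group && group.cache !== false) {
        groupCache.set(key, group); // refresh the cache so the next read is local
    }
    return group;
}
```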
@mapedraza - The in-provisioning flag has been switched from
Is there still active work on this topic? For a horizontally scalable deployment of IOTAs with one common MongoDB cluster, this would be a gigantic performance boost latency-wise.
Force-pushed from c266011 to a560d54.
Rebased as requested. This part is actually a much smaller change than it appears, since it also corrects the location of the mongoDB test tool and runs cache flushing when necessary:

const mongoUtils = require('../../tools/mongoDBUtils');
Will this feature introduce breaking changes for the IoT Agent implementations like IOTA-JSON and IOTA-UL, or are the changes transparent so that a simple version bump will enable the use of this functionality?
It is opt-in, so it is only enabled if you set the configuration to do so. Even if you are using an in-memory cache, you could provision individual devices not to use it - it just depends on whether you want lower latency or whether you are worried about the potential that IoT Agent A uses older cached in-memory info when a provisioning update has occurred through IoT Agent B.
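For instance, a hedged sketch of opting a single device out at provisioning time (standard device provisioning fields; only the cache flag relates to this PR, and the exact payload shape is illustrative):

```js
// Illustrative device provisioning body: this device bypasses the cache so it
// always reads fresh data from MongoDB, at the cost of higher latency.
const provisionDeviceBody = {
    devices: [
        {
            device_id: 'sensor001',
            entity_name: 'urn:ngsi-ld:Device:sensor001',
            entity_type: 'Device',
            cache: false // opt this device out of the in-memory cache
        }
    ]
};
```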
@mapedraza - is this PR still in the queue to be reviewed? It is opt-in, so without setting the parameters, the PR itself is harmless. It is use-case dependent whether you want full consistency across multiple IoT Agent instances or lower latency and fewer database look-ups. The text is quite clear about this:
MongoDB access is slow. This PR minimizes the need to make database calls by caching the last 1000 device calls and 100 group calls in a refreshable cache. This in turn reduces network traffic and increases maximum throughput.
All parameters are settable as config or Docker ENV variables.
Adding cache=true as part of any Device or Group configuration will ensure that the data can potentially be served from the cache and not necessarily retrieved from the MongoDB database.
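For example, a hedged sketch of a config group provisioned with the flag set (standard config-group fields; cache is the new flag described above, and exact support may differ in the final PR):

```js
// Illustrative config-group provisioning body with the opt-in flag.
const provisionGroupBody = {
    services: [
        {
            apikey: '1234abcd',
            cbroker: 'http://orion:1026',
            entity_type: 'Thing',
            resource: '/iot/d',
            cache: true // allow this group's data to be served from the in-memory cache
        }
    ]
};
```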