From b03db15f08b069bde31f380f667a32c5a81c49da Mon Sep 17 00:00:00 2001
From: Robert Swoboda <53216935+Gengar-i@users.noreply.github.com>
Date: Wed, 10 Jan 2024 14:37:14 +0100
Subject: [PATCH] Modified readme file

---
 src/lib/Persistence/Cache/Readme.md | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/src/lib/Persistence/Cache/Readme.md b/src/lib/Persistence/Cache/Readme.md
index 38e252ec8a..ec4cc98956 100644
--- a/src/lib/Persistence/Cache/Readme.md
+++ b/src/lib/Persistence/Cache/Readme.md
@@ -8,25 +8,24 @@ tries to balance this need up against the complexity and additional [system over
 
 Handlers using `AbstractHandler` internally consist of one cache layer:
 
-- "Shared cache": A Symfony Cache based cache pool, supporting a range of cache adapters like filesystem, Redis, Memcached.
-  Note: Due to being shared by design, clusters need remote cache, thus multi lookups are strongly advised to reduce round trip latency.
+- "Shared cache": A Symfony Cache based cache pool, supporting a range of cache adapters like filesystem, Redis, Memcached.
+  Note: Due to being shared by design, clusters need remote cache, thus multi lookups are strongly advised to reduce round trip latency.
 
 Handlers using `AbstractInMemoryHandler` / `AbstractInMemoryPersistenceHandler` in addition add a second cache layer in front of "Shared cache":
 
-- "InMemory cache": A burst cache in PHP aiming at covering the most heavily used parts of the Content model to reduce repeated lookups to remote cache system.
-  Note: It's not shared but per request/process. To keep risk of race condition negligible, it has own milliseconds ttl & item limits.
+- "InMemory cache": A burst cache in PHP aiming at covering the most heavily used parts of the Content model to reduce repeated lookups to remote cache system.
+  Note: It's not shared but per request/process. To keep risk of race condition negligible, it has own milliseconds ttl & item limits.
 
 There are abstract test classes for the respective abstract classes above, these are opinionated and enforce conventions to:
-- Avoid too much logic in cache logic _(e.g. warm-up logic)_, which can be a source of bugs.
-- Avoids having to write error-prone test cases for every method.
+- Avoid too much logic in cache logic _(e.g. warm-up logic)_, which can be a source of bugs.
+- Avoids having to write error-prone test cases for every method.
 
 _This ensures the cache layer is far less complex to maintain and evolve than what was the case in 1.x._
 
 ### Tags
 
 List of content tags that can be somewhat safely "semi-officially" used to clear related entities in cache:
-
 - `c-`: Cache tag which refers to Content/ContentInfo entity.
 - `l-`: Cache tag which refers to Locations and/or their assigned Content/ContentInfo entities.
 - `lp-`: Like the tag above but applied to all Content/Locations in the subtree of this ID, so it can be used by tree operations.
@@ -39,13 +38,11 @@ _For further tags used for other internal use cases, see the \*Handlers for how
 ### Shared Cache: When to use, when not to use
 
 It's worth noting that shared cache comes at a cost:
-
 - Increased complexity
 - Latency per round trip
 - Memory use
 
 Because of that, _typically_ avoid introducing cache if:
-
 - Lookup is per user => _it will consume a lot of memory and have very low hit ratio_
 - For drafts => _usually belongs to a single user and is short-lived_
 - Infrequently used lookups
@@ -54,16 +51,14 @@ Because of that, _typically_ avoid introducing cache if:
 
 ### Tags: When to use, when not to use
 
 Like cache, tags also comes at a cost:
-
 - Slower invalidation
 - Memory cost
-  - _E.g.: ATM on RedisTagAwareAdapter tag relation data is even non-expiring as it needs to guarantee surviving cache._
+  - _E.g.: ATM on RedisTagAwareAdapter tag relation data is even non-expiring as it needs to guarantee surviving cache._
 
 For those reasons, only introduce use a tag:
-
 - Mainly to represent an entity _(e.g. `c-`)_
 - Only if it's represented on many different cache keys or if a key can have a lot of different variants.
-  - _Tip: Otherwise prefer to delete by cache key(s) when cache clear is needed, it will be faster and consume less memory._
+  - _Tip: Otherwise prefer to delete by cache key(s) when cache clear is needed, it will be faster and consume less memory._
 
 ### Possible future considerations
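The patch itself only touches prose, but the two-layer lookup and tag-based invalidation the README describes can be sketched in a few lines. The following is an illustrative Python sketch only, not the project's code: the real handlers are PHP built on Symfony Cache (e.g. `RedisTagAwareAdapter`), and the class names, cache key `content-42`, and TTL/limit values below are invented for illustration.

```python
import time


class InMemoryCache:
    """Per request/process burst cache with a millisecond TTL and item limit,
    loosely mirroring the 'InMemory cache' layer described in the README."""

    def __init__(self, ttl_ms: int = 100, limit: int = 100):
        self.ttl = ttl_ms / 1000.0
        self.limit = limit
        self._items = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None or entry[0] < time.monotonic():
            return None  # miss or expired
        return entry[1]

    def set(self, key, value):
        if len(self._items) >= self.limit:
            self._items.clear()  # crude eviction to bound memory use
        self._items[key] = (time.monotonic() + self.ttl, value)


class SharedTagAwareCache:
    """Stand-in for the shared pool (e.g. Redis), keeping a tag -> keys index
    so that one tag invalidation can clear many related keys at once."""

    def __init__(self):
        self._items = {}      # key -> value
        self._tag_index = {}  # tag -> set of keys carrying that tag

    def set(self, key, value, tags=()):
        self._items[key] = value
        for tag in tags:
            self._tag_index.setdefault(tag, set()).add(key)

    def get(self, key):
        return self._items.get(key)

    def invalidate_tags(self, tags):
        for tag in tags:
            for key in self._tag_index.pop(tag, set()):
                self._items.pop(key, None)


# Layered lookup: burst cache is consulted first, shared cache second.
burst = InMemoryCache()
shared = SharedTagAwareCache()

# Cache a content item, tagged with the c-/l- scheme from the README.
shared.set("content-42", {"id": 42}, tags=["c-42", "l-7"])
burst.set("content-42", shared.get("content-42"))

# Deleting location 7 invalidates every shared key tagged l-7; the stale
# burst-cache copy is tolerated because it expires within milliseconds.
shared.invalidate_tags(["l-7"])
assert shared.get("content-42") is None
```

This also shows why the README says tags cost memory: the tag index must be kept alive (non-expiring on `RedisTagAwareAdapter`) for invalidation to stay reliable, which is why deleting by key is preferable when only one or two keys are affected.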