Add LRU in MasterService, complexity O(1) #287
base: main
Conversation
Thank you for your contribution. A few points that might warrant consideration: First, if eviction support is implemented, the garbage collection (GC) related functionality could potentially be removed. Second, as reflected in the codebase, our current implementation uses lock sharding: each object is protected by the lock of its shard. Implementing a Least Recently Used (LRU) eviction strategy, however, would likely require a global lock to manage the LRU queue, introducing a potential contention bottleneck. My proposal is to repurpose the existing GC thread as an eviction thread. When eviction becomes necessary (for example, triggered by monitoring watermarks or thresholds), this thread could select a target shard, acquire that shard's lock, and perform eviction only within that shard. This approach avoids the performance limitations of a global lock.
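To make the shard-local eviction idea concrete, here is a minimal sketch (the names `Shard`, `NUM_SHARDS`, and `EvictOneFrom` are hypothetical, not from the Mooncake codebase) in which the eviction thread only ever holds one shard's lock:

```cpp
#include <array>
#include <cstddef>
#include <list>
#include <mutex>
#include <string>
#include <unordered_map>

// Hypothetical sketch of shard-local eviction: each shard owns its own lock,
// LRU list, and index, so the eviction thread never takes a global lock.
constexpr std::size_t NUM_SHARDS = 4;

struct Shard {
    std::mutex mu;
    std::list<std::string> lru;  // front = most recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> idx;
};

std::array<Shard, NUM_SHARDS> shards;

// Evict the least recently used key from one shard, holding only that
// shard's lock for the duration of the operation.
bool EvictOneFrom(Shard& s) {
    std::lock_guard<std::mutex> guard(s.mu);
    if (s.lru.empty()) return false;
    s.idx.erase(s.lru.back());
    s.lru.pop_back();
    return true;
}
```

The eviction thread would pick a target shard (e.g., the fullest one, or round-robin) and call `EvictOneFrom` on it, so contention is confined to that single shard.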
I've thought of a potentially simpler approach we could discuss: if we change the garbage collection (GC) producer's operation from 'get' to 'put_end', would that effectively function as a First-In, First-Out (FIFO) eviction mechanism?
Thanks for your constructive suggestions!
Exactly, yes. We have tested that method, but in real cases FIFO is not always the best eviction policy, e.g., when clients frequently get a subset of a large prefilled dataset.
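A toy illustration of that drawback, assuming a plain FIFO queue (the `Put`/`Get`/`EvictOldest` names are illustrative, not the actual API): a key that clients read constantly is still the first eviction victim, because `Get()` never reorders the queue.

```cpp
#include <deque>
#include <string>

// Toy FIFO model, illustrative only: reads do not affect eviction order,
// so the oldest insertion is evicted regardless of how hot it is.
std::deque<std::string> fifo_queue;  // front = oldest insertion

void Put(const std::string& key) { fifo_queue.push_back(key); }

void Get(const std::string& /*key*/) {
    // FIFO: Get() does not reorder the queue.
}

std::string EvictOldest() {
    std::string victim = fifo_queue.front();
    fifo_queue.pop_front();
    return victim;
}
```

Under LRU, each `Get()` would move the key back to the "recent" end, so the frequently accessed subset of a prefilled dataset survives eviction.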
Can you ensure that the LRU implementation is thread-safe? |
Work in progress; will fix later.
Pull Request Overview
This PR introduces an LRU mechanism into the MasterService to prevent discarding put() requests when the store reaches maximum capacity. Key changes include:
- Initializing and clearing LRU data structures in the MasterService constructor and destructor.
- Updating the Get() and PutStart() methods to incorporate LRU update and eviction logic.
- Adding preprocessor definitions and corresponding LRU member variables to the header file.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| mooncake-store/src/master_service.cpp | Implements LRU update and eviction in various service methods. |
| mooncake-store/include/master_service.h | Adds preprocessor flags and LRU data structures. |
Comments suppressed due to low confidence (2)
mooncake-store/include/master_service.h:18
- [nitpick] Consider defining USE_LRU_MASTER without a string value (e.g. using '#define USE_LRU_MASTER') to better align with standard preprocessor flag practices.
#define USE_LRU_MASTER "ON"
mooncake-store/src/master_service.cpp:143
- [nitpick] Consider refactoring the duplicate LRU update logic found in Get() and PutStart() into a helper function to improve code maintainability and reduce redundancy.
all_key_list_.push_front(key);
}
all_key_list_.push_front(key);
all_key_idx_map_[key] = all_key_list_.begin();
if(all_key_list_.size() >= LRU_MAX_CAPACITY)
Verify that the eviction condition correctly reflects the intended capacity constraint; if LRU_MAX_CAPACITY is the max allowed entries, eviction should occur only when adding a new key would exceed that capacity.
- if(all_key_list_.size() >= LRU_MAX_CAPACITY)
+ if(all_key_list_.size() > LRU_MAX_CAPACITY)
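A minimal sketch of the boundary in question, with a stand-in constant `kCap` for `LRU_MAX_CAPACITY` (names and structure are illustrative, not the PR's code): if the capacity means the maximum number of resident entries, checking `>` after the insert leaves exactly `kCap` entries, whereas `>=` would cap the list at `kCap - 1`.

```cpp
#include <cstddef>
#include <list>
#include <string>

// kCap stands in for LRU_MAX_CAPACITY: the maximum number of resident keys.
constexpr std::size_t kCap = 3;
std::list<std::string> keys;  // front = most recently inserted

void Insert(const std::string& key) {
    keys.push_front(key);
    // Evict only on overflow, so exactly kCap entries remain resident.
    while (keys.size() > kCap) keys.pop_back();
}
```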
Hi @zhaoyongke, thank you for your contributions. master_service_test has failed: https://github.com/kvcache-ai/Mooncake/actions/runs/14609309219/job/40984273235?pr=287. Please check it.
@@ -15,6 +15,9 @@
#include "allocator.h"
#include "types.h"

#define USE_LRU_MASTER "ON"
Define it in the CMake file?
Fine, I'll fix it
@@ -168,6 +171,12 @@ class MasterService {
std::shared_ptr<BufferAllocatorManager> buffer_allocator_manager_;
std::shared_ptr<AllocationStrategy> allocation_strategy_;

#ifdef USE_LRU_MASTER
// LRU statistics
std::list<std::string> all_key_list_;
Use a class (e.g., class LRUList {....}) to wrap these two LRU lists.
yep, and we'd better add some basic tests for the new class.
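A sketch of what such a wrapper might look like (the `LruList` name and interface are hypothetical, to be adapted to the PR): pairing the `std::list` with an `std::unordered_map` of iterators keeps the two structures in sync and makes both operations O(1) amortized.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Hypothetical LruList wrapper: owns both the recency list and the
// iterator index so callers cannot let them drift out of sync.
class LruList {
   public:
    // Mark a key as most recently used, inserting it if absent. O(1).
    void Touch(const std::string& key) {
        auto it = idx_.find(key);
        if (it != idx_.end()) list_.erase(it->second);
        list_.push_front(key);
        idx_[key] = list_.begin();
    }

    // Remove and return the least recently used key. Precondition: non-empty.
    std::string EvictOldest() {
        std::string victim = list_.back();
        list_.pop_back();
        idx_.erase(victim);
        return victim;
    }

    std::size_t Size() const { return list_.size(); }

   private:
    std::list<std::string> list_;  // front = most recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> idx_;
};
```

A unit test for the class would then exercise exactly this contract: touching an existing key must save it from the next eviction.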
@@ -59,6 +60,11 @@ execute_process(
)
string(STRIP ${PYTHON_SYS_PATH} PYTHON_SYS_PATH)

if (USE_LRU_MASTER)
add_compile_definitions(USE_LRU_MASTER)
add_compile_definitions(LRU_MAX_CAPACITY=1000)
We'd better move it to a command-line argument, so that we can change it on demand.
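One way to do that, sketched here as a hypothetical CMake fragment (the option name and default are assumptions), is to make the capacity a cache variable so it can be overridden with `-DLRU_MAX_CAPACITY=...` at configure time:

```cmake
# Hypothetical sketch: expose the capacity as a cache variable so it can be
# overridden from the command line, e.g. cmake -DLRU_MAX_CAPACITY=5000 ..
option(USE_LRU_MASTER "Enable LRU eviction in MasterService" OFF)
set(LRU_MAX_CAPACITY 1000 CACHE STRING "Max entries in the master LRU list")

if (USE_LRU_MASTER)
    add_compile_definitions(USE_LRU_MASTER)
    add_compile_definitions(LRU_MAX_CAPACITY=${LRU_MAX_CAPACITY})
endif()
```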
We found that Mooncake Store discards all put() requests after its capacity reaches the maximum. This behavior is not acceptable for our online service. We solved it with an LRU (Least Recently Used) eviction algorithm.