Adaptive cache replacement strategies have shown superior performance compared to classical policies such as LRU and LFU. Strategies like Adaptive Replacement Cache (ARC) and Clock with Adaptive Replacement (CAR) are effective for day-to-day applications, but they do not encode access history or truly learn from cache misses. We propose RLCaR, a reinforcement learning framework that seeks to address these limitations. We use TD(0) model-free algorithms, namely Q-Learning, SARSA, and Expected SARSA, to train an agent to replace pages in the cache so as to maximize the cache hit ratio. We also developed a memory cache simulator to test our approach and compare it against LRU and LFU policies.
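The sketch below is a minimal illustration of the idea, not the repository's actual code: a tabular Q-Learning eviction agent with a TD(0) update, compared against an LRU baseline in a toy cache simulator. The state features (per-slot access counts), reward scheme, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of TD(0) Q-Learning for cache eviction (illustrative only).
import random
from collections import OrderedDict, defaultdict

CACHE_SIZE = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters

class QLearningCache:
    def __init__(self, size=CACHE_SIZE):
        self.size = size
        self.cache = OrderedDict()        # page -> access count
        self.q = defaultdict(float)       # (state, slot) -> Q-value

    def _state(self):
        # Illustrative state: per-slot access counts, capped to keep the table small.
        return tuple(min(c, 3) for c in self.cache.values())

    def _choose_victim(self, state):
        # Epsilon-greedy choice of which slot to evict.
        if random.random() < EPSILON:
            return random.randrange(self.size)
        return max(range(self.size), key=lambda a: self.q[(state, a)])

    def access(self, page):
        if page in self.cache:
            self.cache[page] += 1
            return True                   # cache hit
        if len(self.cache) >= self.size:
            state = self._state()
            action = self._choose_victim(state)
            victim = list(self.cache)[action]
            del self.cache[victim]
            self.cache[page] = 1
            next_state = self._state()
            # TD(0) Q-Learning update; reward 0 for the miss that forced eviction
            # (reward shaping in the actual framework may differ).
            best_next = max(self.q[(next_state, a)] for a in range(self.size))
            self.q[(state, action)] += ALPHA * (
                0 + GAMMA * best_next - self.q[(state, action)]
            )
        else:
            self.cache[page] = 1
        return False                      # cache miss

def lru_hits(trace, size=CACHE_SIZE):
    # Plain LRU baseline for comparison.
    cache, hits = OrderedDict(), 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)
            hits += 1
        else:
            if len(cache) >= size:
                cache.popitem(last=False)
            cache[page] = None
    return hits

# Toy comparison on a synthetic, skewed access trace.
trace = random.choices(range(10), weights=[1 / (i + 1) for i in range(10)], k=5000)
agent = QLearningCache()
rl_hits = sum(agent.access(p) for p in trace)
print(f"Q-Learning hit ratio: {rl_hits / len(trace):.2f}, "
      f"LRU hit ratio: {lru_hits(trace) / len(trace):.2f}")
```

Swapping the `max` over next-state actions for the on-policy action (or its expectation under the epsilon-greedy policy) would turn this same update into SARSA or Expected SARSA, the other two TD(0) variants mentioned above.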
About
B.Tech Thesis Code for RLCaR: Deep Reinforcement Learning Framework for Optimal and Adaptive Cache Replacement
Languages
- Jupyter Notebook 82.0%
- HTML 15.2%
- Python 2.8%