Create an experimental alternative to the brick_ets brick server module that does not use an in-memory ETS table (ctab) to store the metadata part of key-values, but instead uses the on-disk (HyperLevelDB) metadata DB introduced by GH17 (redesigning disk storage).
The current brick_ets reads from HyperLevelDB only during startup and stores copies of all keys in an in-memory ETS table; this alternative module will not use the ETS table and will read from HyperLevelDB on every read request from a client. It may cache frequently used metadata in a 2Q cache (a scan-resistant refinement of an LRU cache), available as an Erlang NIF: arekinath/e2qc.
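To make the idea concrete, here is a minimal sketch of what such a read-through path might look like. It assumes e2qc's documented setup/2, cache/3, and evict/2 API; the module name and the metadata_db_get/1 / metadata_db_put/2 wrappers are hypothetical stand-ins for whatever HyperLevelDB calls the new module ends up using:

```erlang
%% Sketch only -- not part of brick_ets today.
%% Assumes e2qc's documented API (e2qc:setup/2, e2qc:cache/3, e2qc:evict/2).
%% metadata_db_get/1 and metadata_db_put/2 are hypothetical stand-ins for
%% the HyperLevelDB calls the new module would use.
-module(brick_metadata_cache_sketch).
-export([init/0, get_meta/1, put_meta/2]).

-define(CACHE, brick_metadata).

%% Create a 2Q cache segment, capped at (for example) 64 MB.
init() ->
    e2qc:setup(?CACHE, [{size, 64 * 1024 * 1024}]),
    ok.

%% Read-through: return cached metadata for Key, or fall through to
%% HyperLevelDB on a miss and cache the result.
get_meta(Key) when is_binary(Key) ->
    e2qc:cache(?CACHE, Key, fun() -> metadata_db_get(Key) end).

%% Write path: update HyperLevelDB first, then invalidate the cache
%% entry so the next read sees the new value.
put_meta(Key, Meta) when is_binary(Key) ->
    ok = metadata_db_put(Key, Meta),
    e2qc:evict(?CACHE, Key),
    ok.

%% --- hypothetical HyperLevelDB wrappers --------------------------------
metadata_db_get(_Key) ->
    %% e.g. a get against the brick's metadata DB handle; omitted here.
    not_found.

metadata_db_put(_Key, _Meta) ->
    %% e.g. a put of term_to_binary(Meta) under Key; omitted here.
    ok.
```

The write path invalidates rather than updates the cache entry, which keeps the sketch simple and avoids caching values that may never be read again.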
I have a project that requires a low memory footprint and does not demand very low read latency or in-memory key scans. A couple of months ago, I tried to load 50 million key-values into a single-node Hibari, but the top command showed 9.149 GB of RES memory, which was not acceptable. One reason was that the keys were large (53 bytes), so I will need to move all or most of the metadata out of memory.
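(For reference, 9.149 GB across 50 million key-values works out to roughly 190 bytes of resident memory per key-value, several times the 53-byte key itself, which suggests most of the footprint comes from per-key ETS and metadata overhead rather than the keys alone.)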
I believe the ultimate goal of redesigning the disk storage will be tiered storage per Hibari table, because we can use micro-transactions within a brick. But for now, I want to quickly put together a brick server module based on on-disk metadata storage.