I understand that one can limit the amount of RAM on a process with the --maxmem option quite nicely, but that also caps the amount of memory the app sees as available.
In short, once the memory limit is hit, malloc/calloc/realloc start failing.
In my opinion, it would be nice if you could add an option that lets the process keep allocating from virtual address space (backed by disk) after that limit is hit.
I know full well the performance impact of that, but having the option could prevent crashes, and it would help server apps that allocate a massive amount of RAM but don't use it all at once (i.e. they keep it allocated "just in case").
I once ran a manual experiment on this myself. I've been using llama2.cpp for a while and wanted to reduce the amount of RAM it used for the loaded model, so I wrote custom malloc/calloc/realloc implementations that allocated from a file-backed buffer (mmap, basically) and then dropped the associated RAM pages after each token was generated.
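Roughly, the idea was something like the sketch below (a simplified Linux version, not my exact code; the names fb_malloc/fb_release_ram and the use of O_TMPFILE are just for illustration, and error handling is cut down):

```c
/* Minimal sketch of a file-backed allocation, assuming Linux/glibc.
 * Illustrative only; not the exact implementation from the experiment. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    void  *base;   /* start of the mapping        */
    size_t size;   /* mapped length               */
    int    fd;     /* backing file descriptor     */
} fb_block;

/* Allocate 'size' bytes backed by an unlinked temp file instead of
 * anonymous (RAM-resident) memory. */
static void *fb_malloc(size_t size, fb_block *out)
{
    int fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);   /* anonymous temp file */
    if (fd < 0 || ftruncate(fd, (off_t)size) != 0)
        return NULL;

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return NULL; }

    out->base = p; out->size = size; out->fd = fd;
    return p;
}

/* Drop the resident (RAM) pages of the block; the data stays in the
 * backing file and is paged back in on the next access. */
static void fb_release_ram(const fb_block *b)
{
    msync(b->base, b->size, MS_SYNC);          /* flush dirty pages */
    madvise(b->base, b->size, MADV_DONTNEED);  /* evict from RAM    */
}

static void fb_free(fb_block *b)
{
    munmap(b->base, b->size);
    close(b->fd);
}
```

Calling something like fb_release_ram() after every generated token is what kept the resident usage low; the data simply gets paged back in from the file on the next access.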
Actual RAM usage went from 20 GB to 2 GB.
Sure, disk usage spiked and performance dipped, but with today's NVMe SSDs the performance was still sane.
In short, it saved 18 GB of RAM at the cost of roughly 30% longer response generation.
I think the tradeoff is worth it.
Thanks for the idea. However, implementing it would almost certainly require injecting a DLL into the target process (something I would like to avoid), and I would also need to provide custom logic for malloc/calloc/... I will keep this ticket open, but it's not something I plan to work on in the near future.