Option to disable storing pages in temp folder #979
👋 The full extract/caching was designed to avoid performing the unarchiving job, which used to take a bunch of time. This is a pretty old part of the design. The server does support random access when you hit individual page endpoints nowadays, but it's true that it always copies to the filesystem first. I'm not sure how much work it'd take to make it all happen in-memory.
Thanks for the fast reply. As another thought: I'm running LANraragi on a very low-powered device. Is there an option to only generate page thumbnails when I open the archive overview, and not when I open the archive itself (or to disable them entirely)? It causes the server to lag because all cores sit at 100% when an archive has triple-digit page counts. I don't really use the overview much, so every time I want to read a big archive it takes 30 seconds before I can turn to the next page.
No specific option, but I believe the changes from #885 will help, as they restrict page thumbnail jobs to run sequentially on a single worker.
I couldn't find any information in the docs to disable it.
Setting the temp folder size to 0 doesn't work.
I don't have it on an SSD, so it just creates a copy on my hard drive for no reason at all.
Most manga/comic readers (Komga, Kavita, YACReader) just read/unzip single pages of a CBZ in memory via random access. That is much faster than unpacking the whole archive, writing it to disk, and then reading it back.
Could this be implemented, please? Full extraction just creates a lot of extra hardware strain, and the gains are negligible for a single user. Even with an SSD it wouldn't be worth it; a hard drive takes a fraction of a second to read a few MB for a page, especially on a NAS where the disks are spinning the whole time anyway.
tbh I'm confused why caching is the default. It would only make sense with multiple users, or on a desktop where the drive platters spin down when idle. But aren't most people running this on a NAS?
Sure, it would use a bit of memory to unzip pages in memory, but isn't that exactly the case where memory should be used?
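The random-access read described above is cheap because the zip central directory lets a reader locate and inflate a single member without touching the rest of the archive. A minimal sketch in Python's standard `zipfile` module (the function name and image-extension filter are my own illustration, not LANraragi code):

```python
import zipfile


def read_page(cbz_path: str, index: int) -> bytes:
    """Return the bytes of one page from a .cbz without extracting to disk.

    The zip central directory is consulted to seek straight to the wanted
    member, so only that page's compressed data is read and inflated.
    """
    with zipfile.ZipFile(cbz_path) as zf:
        # Keep only image entries, sorted by name to approximate reading order.
        pages = sorted(
            name for name in zf.namelist()
            if name.lower().endswith((".jpg", ".jpeg", ".png", ".webp"))
        )
        return zf.read(pages[index])
```

Serving a page then becomes an in-memory operation per request; nothing is written to a temp folder, at the cost of re-inflating a page if it is requested again.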