
Fix "No memory" error and freezing a browser tab #46


Open · wants to merge 6 commits into base: main

Conversation

@AnastasiaSliusar (Collaborator) commented Jan 30, 2025

This PR contains solutions for #43 and #45.
The browser tab froze when the decompressed data was large: for example, a 10 MB compressed file may expand to 98 MB of decompressed data.

Since the real decompressed size is not known in advance, the solution is to assume an average compression ratio (based on the 10 MB example) and reserve initial memory of (size of compressed file × compression ratio) MB. The drawback is that the larger the decompressed data is, the slower the archive_read_data call itself becomes.
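A minimal sketch of that heuristic, assuming a ratio of 10 (derived from the 10 MB → 98 MB example above; the actual constant used in the PR may differ), with a grow-on-demand fallback for archives that blow past the estimate:

```typescript
// Assumed average compression ratio; hypothetical constant for illustration,
// inferred from the 10 MB → 98 MB example, not a value taken from the PR.
const ASSUMED_COMPRESSION_RATIO = 10;

// Reserve (compressed size × ratio) bytes up front, since the real
// decompressed size is unknown before archive_read_data runs.
function estimateDecompressedSize(compressedBytes: number): number {
  return compressedBytes * ASSUMED_COMPRESSION_RATIO;
}

// Fallback when the estimate was too small: double the capacity until
// it fits, copying the old contents over (amortized-constant growth).
function ensureCapacity(buffer: Uint8Array, needed: number): Uint8Array {
  if (needed <= buffer.length) return buffer;
  let capacity = buffer.length || 1;
  while (capacity < needed) capacity *= 2;
  const grown = new Uint8Array(capacity);
  grown.set(buffer);
  return grown;
}
```

Doubling keeps reallocations logarithmic in the final size, which matters here because each overrun copy of a near-100 MB buffer is expensive.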

@AnastasiaSliusar AnastasiaSliusar marked this pull request as ready for review January 30, 2025 16:29
@martenrichter (Contributor)

Maybe some ideas for handling larger archives; I already had these two weeks ago, but they were not necessary then.
Instead of a single wasm function that decompresses the whole archive, you can divide the work into three parts:
(i) Initializing libarchive for the archive and setting up the other data structures (basically the part before the loop through the archive).
(ii) A function that processes a single file, or a batch of files (limited by a count or by a maximum amount of occupied memory). This function can then be called in a loop from JavaScript, copying the results over to the destination.
Maybe you can recycle some of the memory for the files in the next iteration.
(iii) A cleanup function that closes all objects allocated and opened in (i) and maybe in (ii).
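The three phases above could be sketched as follows. The names (`openArchive`, `extractNextEntry`, `closeArchive`) are hypothetical, and an in-memory entry list stands in for the libarchive state the real wasm exports would hold:

```typescript
// Stand-in for the wasm-side archive state set up in phase (i);
// in the real code this would wrap libarchive's read handle.
interface ArchiveHandle {
  entries: Uint8Array[];
  cursor: number;
}

// (i) Initialize once: set up iteration state for the archive.
function openArchive(entries: Uint8Array[]): ArchiveHandle {
  return { entries, cursor: 0 };
}

// (ii) Process one entry per call; returns null when exhausted, so
// JavaScript can drive a loop and copy each result out incrementally
// instead of holding the whole decompressed archive in wasm memory.
function extractNextEntry(handle: ArchiveHandle): Uint8Array | null {
  if (handle.cursor >= handle.entries.length) return null;
  return handle.entries[handle.cursor++];
}

// (iii) Cleanup: release everything allocated in (i) and (ii).
function closeArchive(handle: ArchiveHandle): void {
  handle.entries = [];
  handle.cursor = 0;
}
```

The JavaScript caller would loop `extractNextEntry` until it returns null, copying each buffer to its destination before requesting the next one, then call `closeArchive`.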

Of course, you could put it into a ReadableStream, since that is how web streams work.
The benefit would be that you can potentially decrease the memory of the Wasm build, allocate smaller intermediate memory buffers, and, overall, run the whole setup on less powerful devices.
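A sketch of the ReadableStream wrapping, assuming a hypothetical per-entry source (an array stands in for the wasm-side extraction): because `pull` is only called on demand, at most one entry is materialized at a time.

```typescript
// Wrap incremental extraction in a pull-based ReadableStream.
// `entries` is a stand-in for a per-entry wasm extraction call.
function entryStream(entries: Uint8Array[]): ReadableStream<Uint8Array> {
  let cursor = 0;
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (cursor < entries.length) {
        // Enqueue one entry per pull; backpressure keeps memory bounded.
        controller.enqueue(entries[cursor++]);
      } else {
        // Maps to the cleanup phase once the archive is exhausted.
        controller.close();
      }
    },
  });
}
```

A consumer would read it with `stream.getReader()` (or `for await` where supported), so the decompressed entries flow through a small, reusable buffer instead of accumulating in wasm memory.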

@AnastasiaSliusar (Collaborator, Author)


Dear @martenrichter, thank you for your ideas!
