
ENOMEM In very specific cases #12

Open
svidovich opened this issue Apr 17, 2019 · 5 comments

@svidovich
Contributor

In certain use cases, I will get fewer files than expected from a particular firmware. According to the container logs, the unpacker process (whichever it might be -- binwalk, for example) has run the machine out of memory and been killed before it completed. Because the test for a successful unpack is whether or not there are resulting files, if binwalk runs and doesn't finish but does manage to produce some files, the unpack will have been deemed 'successful' by the plugin, when in reality it was not.

Now, it is noted that the plugins are calling execute_shell_command from common_helper_process, and execute_shell_command seems to be dropping exit codes on the floor. Would it be possible to use execute_shell_command_get_return_code instead, and handle the case in which a process returns something non-zero?

Given the way the plugins return data -- it appears to be a dictionary -- adding the return code into the dictionary along with the output shouldn't break anything. Perhaps something can be done in unpack.py to handle this? I am willing to help in this effort.
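For illustration, here is a rough sketch of what I mean (assuming execute_shell_command_get_return_code returns an (output, return_code) tuple as in common_helper_process; the binwalk command and dictionary keys are placeholders, not the actual plugin code):

```python
from common_helper_process import execute_shell_command_get_return_code


def unpack_function(file_path, tmp_dir):
    # placeholder binwalk invocation -- the real plugin's command may differ
    output, return_code = execute_shell_command_get_return_code(
        'binwalk --extract --directory {} {}'.format(tmp_dir, file_path)
    )
    # hand the exit code back alongside the output so unpack.py can tell
    # that the child was killed (e.g. OOM) even if some files were produced
    return {'output': output, 'return_code': return_code}
```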

@dorpvom
Collaborator

dorpvom commented Apr 18, 2019

Hi, this sounds useful to me. I have not thought about memory issues with docker yet, so an indicator that something out of the ordinary happened would definitely help.
If you would like to contribute, I would be happy to review a pull request.

@dorpvom
Collaborator

dorpvom commented Apr 18, 2019

In the meantime, can you make out a trend for when this happens? My guess would be large / very large files, but there could be a different problem altogether.

@svidovich
Contributor Author

Yes, it is with large files, especially those with high compression ratios. Binwalk is the child process I have witnessed issues with so far. I imagine it doesn't help that the container has been constrained pretty tightly -- it has only been given 512MB of memory to work with!

@svidovich
Contributor Author

I made a pull request that should mitigate this problem somewhat...

@dorpvom
Collaborator

dorpvom commented Apr 26, 2019

So, first of all, thanks for the contribution. If you can incorporate the review requests in the PR, the changes will be merged.
Regarding the generally low memory for containers: the limit can be chosen when running the container, i.e. with the docker run command. So from my viewpoint this is a FACT problem, not a FACT_extractor problem. I will add an option to the interface in #10 and consider a config value for the FACT PR.
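For reference, a minimal sketch of setting the limit from Python via docker-py (the image name and limit value are illustrative; the command-line equivalent is the --memory flag of docker run):

```python
import docker

client = docker.from_env()
# start the extractor container with a larger memory limit; the image name
# is illustrative, and mem_limit accepts strings like '512m' or '2g'
container = client.containers.run(
    'fkiecad/fact_extractor',
    mem_limit='2g',
    detach=True,
)
```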

m-1-k-3 pushed a commit to m-1-k-3/fact_extractor that referenced this issue Aug 31, 2021
maringuu pushed a commit that referenced this issue Sep 19, 2024