Thanks so much for this amazing lib! A couple of questions:
When I run a model, it doesn't seem to free the memory it used. Is that supposed to happen automatically, or do I have to free it manually?
After the run, the output arrays don't match the expected order that I see in Netron or when I run the model in Python on my computer. I have to find the right index by looping through the model.outputs[i].name values, finding the one I want, and using that index into the model output (thankfully they share the same order). Is that normal?
Really appreciate the help and happy to send over any info you might need.
Thanks for your questions! About memory management: you typically don't need to free memory manually, as the library should handle this for you. However, if you're seeing issues, it might depend on the model or platform, so more information would be helpful.
As for the output array order, it's common for the output order to differ from other frameworks. You're on the right track using the output names to find the correct indices.
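The name-based lookup described above could be sketched like this. This is a minimal illustration only: `TensorInfo` and `findOutputIndex` are hypothetical names, and the shape of `model.outputs` (an array of objects with a `name` field) is assumed from the question, not taken from the library's documented API.

```typescript
// Hypothetical helper: map an output tensor's name to its index,
// assuming `outputs` is an array of objects carrying a `name` field
// (as described in the question above).
interface TensorInfo {
  name: string;
}

function findOutputIndex(outputs: TensorInfo[], wanted: string): number {
  const index = outputs.findIndex((o) => o.name === wanted);
  if (index === -1) {
    throw new Error(`Output tensor "${wanted}" not found`);
  }
  return index;
}

// Example with mock output metadata (illustrative names only):
const outputs: TensorInfo[] = [
  { name: "detection_scores" },
  { name: "detection_boxes" },
];

// After a run, the same index is then used on the result array,
// relying on `model.outputs` and the run result sharing one order:
//   const result = model.run(input);
//   const boxes = result[findOutputIndex(outputs, "detection_boxes")];
```

Resolving the index once at startup (rather than on every inference) keeps the per-frame cost negligible.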
If you could provide ze logs from your application (Xcode for iOS or adb logcat for Android), it would help mrousavy investigate any potential issues. This is important for troubleshooting!
Enjoy using the library! 🍻
Note: If you think I made a mistake, please ping @mrousavy to take a look.
This is what I'm seeing: the model loads fine and is in the loaded state. Overall memory usage in the app is about 200MB. When I run the model, memory usage goes to ~1.2GB and never drops back down to the original 200MB. I can run the model again and again, and memory usage stays pretty consistent from there, which suggests that all the intermediate data used during a run (arrays, etc.) gets freed after inference. However, it seems a bunch of RAM is still held by the model itself.
Is this normal? I didn't see any API to unload a model or force memory cleanup (like OpenCV has). I'm also extremely new to ML and don't know what kind of log output would be helpful here. If you let me know, I can get that for you! Thanks so much @mrousavy