So AWS Lambda now supports up to 10 GB of memory and scales computational capacity (vCPUs) in proportion to the memory allocation. I ran inference with a 3 GB allocation and compared it with 10 GB, but did not see any major improvement. Why could this be? Maybe the statically compiled torch cannot use all the vCPUs?
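To check whether the extra vCPUs are even visible to the statically linked libtorch, a quick diagnostic like the one below could be dropped into the handler (a rough sketch; the function name and logging are mine, not part of the torchlambda-generated code):

```cpp
// Rough diagnostic, assuming the torchlambda C++ handler context:
// log how many hardware threads the runtime exposes and how many threads
// the statically linked libtorch is configured to use.
#include <ATen/Parallel.h>

#include <iostream>
#include <thread>

void log_thread_info() {
  std::cout << "hardware_concurrency: " << std::thread::hardware_concurrency()
            << "\n";
  std::cout << "torch intra-op threads: " << at::get_num_threads() << "\n";
  std::cout << "torch inter-op threads: " << at::get_num_interop_threads()
            << "\n";
}
```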
I think all the cores should be used out of the box even with the static build. You could also try changing some PyTorch flags as described in the documentation and in torchlambda build.
You can see the available flags here and specify them like this (for example: torchlambda build --pytorch USE_OPENMP=ON).
You may need to profile your application somehow, and that might require manually changing the C++ code; if you find something, please let me know.
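One crude way to check this without a full profiler is to time the forward pass at different intra-op thread counts (a minimal sketch under my own assumptions; "model.ptc", the input shape, and the thread counts are placeholders, and this is not torchlambda's actual generated handler):

```cpp
// Minimal timing probe: compare forward-pass latency at different intra-op
// thread counts to see whether extra vCPUs at higher memory settings help.
#include <torch/script.h>
#include <ATen/Parallel.h>

#include <chrono>
#include <iostream>
#include <vector>

int main() {
  // Placeholder model path and input shape.
  auto module = torch::jit::load("model.ptc");
  auto input = torch::randn({1, 3, 224, 224});
  std::vector<torch::jit::IValue> inputs{input};

  for (int threads : {1, 2, 4, 6}) {
    // Note: with the native thread pool the count may only take effect once,
    // before the first parallel op; with OpenMP it can be changed freely.
    at::set_num_threads(threads);
    auto start = std::chrono::steady_clock::now();
    module.forward(inputs);
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << threads << " threads: " << elapsed.count() << " ms\n";
  }
}
```

If the latency barely changes past one or two threads, the model is probably not parallelizing across cores (or is memory-bound), which would also explain why the 10 GB allocation shows no major improvement.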