Chalice generates package that is too large #567
Hmm, that's odd. It should just be pulling down and unzipping the whl files. Can you take a look at what was actually deployed by looking in the
How do I deploy code bigger than 50 MB?
@malikasinger1 You cannot; that is the current size limit for AWS Lambda. http://docs.aws.amazon.com/lambda/latest/dg/limits.html

@thunderfish24 If you can get me some info about what you removed from the package to trim it down, I would be interested. I'm going to close this issue in the meantime. In general there probably isn't anything we can add to do this safely: if you know what you want to use, you can remove unrelated parts to trim the package down to size, or do something like strip the binaries, but there is nothing we can do at a general level in a tool like Chalice, since we don't know how people intend to use the package. The only other thing I could think of to get the package size down is to remove the tests, but those don't add up to much. And again, that assumes the tests are not being used or referenced in any way. So I think unfortunately the only thing to do from our end is to vendor the package with just what you need.
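The manual trimming described above (removing test and cache directories from an unpacked dependency tree) can be sketched as follows. This is a hypothetical helper, not part of Chalice; the directory names in `junk_dirs` are assumptions about what is safe to delete, so verify them against your own dependencies first.

```python
import os
import shutil

def trim_package_dir(root, junk_dirs=("tests", "test", "__pycache__")):
    """Delete test/cache directories under root; return bytes freed.

    A minimal sketch of manual package trimming. Only removes
    directories whose *name* matches junk_dirs exactly.
    """
    doomed = []
    for dirpath, dirnames, _ in os.walk(root):
        for d in list(dirnames):
            if d in junk_dirs:
                doomed.append(os.path.join(dirpath, d))
                dirnames.remove(d)  # don't descend into a doomed dir
    freed = 0
    for path in doomed:
        # Sum the file sizes before deleting, so we can report savings.
        for dp, _, fns in os.walk(path):
            freed += sum(os.path.getsize(os.path.join(dp, f)) for f in fns)
        shutil.rmtree(path)
    return freed
```

Running this over an unpacked `site-packages`-style tree before zipping is the kind of per-project optimization the comment argues cannot be done safely at the tool level.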
@stealthycoin Sorry, I cannot really find time to dig into this deeper. Here is the non-chalice Python 3, ~38.5 MB solution that I'm using: ryansb/sklearn-build-lambda#16. You can see there which size optimizations are being employed, such as pulling out test and documentation files, but I found that most optimizations had no more than a 2-3 MB effect. However, since posting this issue, I realize there is also the option of using an architecture-specific ATLAS (or other) optimized build of the numpy+scipy numerical libraries. I'm not sure how that compares to simply pulling the "plain vanilla" wheels. Without these optimizations, I probably still would not use Chalice for numpy+scipy lambda functions, even if the 50 MB size limit could be met.
I'm having the same issue; my zip size is 80 MB. I am using Boto3, where botocore unzipped is 25 MB, plus matplotlib (29 MB) and numpy (68 MB).
@skghosh-invn I think you might find this useful: https://aws.amazon.com/blogs/machine-learning/how-to-deploy-deep-learning-models-with-aws-lambda-and-tensorflow/
I managed to package sklearn and its dependencies for Chalice with this script:
I have a non-chalice lambda function that imports numpy and scipy. I am able to zip up my custom lambda function package to be under 30 MB, and thus I can deploy this to AWS lambda without exceeding the 50 MB limit. However, when I try to use chalice and include both numpy (1.13.3) and scipy (0.19.1) in the requirements.txt, it tells me that the package size is 62.2 MB and that I cannot deploy the package because it is too large.
I might be able to hack around this using the vendor directory and copying over a small subset of manylinux1_x86_64-compiled scipy files that I actually need, but the simple solution using requirements.txt would be much easier to maintain and execute.
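The vendor-directory workaround mentioned above can be sketched like this. A wheel is just a zip archive, so a manylinux wheel downloaded separately (e.g. with `pip download`) can be unpacked straight into the project's `vendor/` directory; the function and wheel filename here are hypothetical illustrations, not a Chalice API.

```python
import os
import zipfile

def unpack_wheel(wheel_path, vendor_dir="vendor"):
    """Extract a downloaded .whl into the Chalice vendor/ directory.

    Sketch of the manual vendoring workaround: wheels are zip files,
    so extraction alone places the package where Chalice will bundle it.
    """
    os.makedirs(vendor_dir, exist_ok=True)
    with zipfile.ZipFile(wheel_path) as zf:
        zf.extractall(vendor_dir)
    # Return the top-level entries so the caller can prune unused
    # subpackages afterwards to stay under the size limit.
    return sorted(os.listdir(vendor_dir))
```

After extraction, you could delete the scipy submodules you do not import, which is the "small subset" pruning the comment describes.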
Does anyone have thoughts on Chalice's 62.2 MB vs. my <30 MB package size?
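One way to investigate a size gap like 62.2 MB vs. <30 MB is to list the largest files inside each built zip and compare. This is a hypothetical diagnostic, assuming you can point it at whatever deployment archive was produced; it is not part of Chalice.

```python
import zipfile

def largest_members(zip_path, top=10):
    """Return the (filename, uncompressed size) of the biggest entries.

    Diagnostic sketch: sorting a deployment zip's contents by size
    quickly shows which dependency (e.g. a non-manylinux scipy build)
    is inflating the package.
    """
    with zipfile.ZipFile(zip_path) as zf:
        infos = sorted(zf.infolist(), key=lambda i: i.file_size,
                       reverse=True)
    return [(i.filename, i.file_size) for i in infos[:top]]
```

Comparing the output for the hand-built 30 MB zip against Chalice's 62 MB one would show whether the difference comes from duplicated binaries, bundled tests, or a different wheel variant.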