AWS Lambda invoker's performance depends on the Python interpreter #1219
Comments
Hi @gfinol, just to make sure this is not an issue with Lithops rather than its dependencies, could you check the following?
Thanks
Also, notice that "Boto3 and Botocore ended support for Python 3.7 on December 13, 2023". So, the best performance is achieved with a Python version that is no longer supported.
Just to make sure, maybe you could create a 3.11 venv and run the same test there.
@aitorarjona I tried to do what you suggested with a 3.11 env, but it failed due to some incompatibilities between the library versions and the Python version. But I managed to get it working with 3.10. The results look like the previous ones. (Note that the certifi requirement in conda_py37.txt points to a local file; that line was removed to install the requirements in Python 3.10.) I agree with you that this, at first glance, looks like a problem with the thread pool used. Not sure how that could be confirmed...
I remember that some years ago I changed the invocation code there. In aws_lambda.py, can you try commenting lines 630-653 and uncommenting lines 655-670? This way we will see how the boto3 lib performs when invoking functions, and whether this is the cause of the issue you are experiencing.
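For reference, the boto3-based code path being re-enabled boils down to an invocation like the following (a minimal sketch, not the exact lines from aws_lambda.py; the function name, region, and payload are placeholders):

```python
import json
import boto3

# Sketch of an asynchronous Lambda invocation through the boto3 client,
# roughly what the alternative code path in aws_lambda.py exercises.
lambda_client = boto3.client('lambda', region_name='us-east-1')

response = lambda_client.invoke(
    FunctionName='lithops-runtime-example',  # placeholder function name
    InvocationType='Event',                  # asynchronous (fire-and-forget) invocation
    Payload=json.dumps({'hello': 'world'}),
)
# An 'Event' invocation is accepted with HTTP 202.
assert response['StatusCode'] == 202
```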
@JosepSampe, I've been running the tests that you suggested. I've executed them twice, because the results were worse. Here are the resulting plots:

With Python 3.10 from the OS in Ubuntu 22.04, from the official AMI in AWS EC2:

Using the interpreter from conda, Python 3.10:

And using Python 3.7 with conda:

In general, the performance is worse. For example, we can have a look at the invocations using Python 3.7. In this last plot, the invocations in the second and third maps show the added delay. I leave here the plots for the other Python versions with conda:
So, in summary, is this something related to Lithops? Or is it more related to Python? Or to AWS Lambda?
I think that this is something related to Lithops. I guess that it might be related to how Lithops uses the invoker thread pool or the connection pool, but I reviewed the code of the AWS Lambda backend and I didn't see anything obvious...
Python Interpreter
I'm currently using the Python 3.11 interpreter of a VM in AWS EC2 with Ubuntu 22.04.
And this is how I use invoke:
With this code, which is different from the way Lithops originally works, I get the same problem described in this issue. This is why I think it is not related to Lithops. I have a containerized runtime with many dependencies. For this experiment, every Lambda just returns the string "Hello World".
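As a rough illustration of this kind of direct invocation outside Lithops (a minimal sketch assuming a plain boto3 client driven from a thread pool; the function name, payload, and pool size are placeholders, not the actual experiment code):

```python
import json
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

client = boto3.client('lambda', region_name='us-east-1')

def timed_invoke(i):
    # Synchronous invocation of a trivial "Hello World" function,
    # measuring how long the call takes from the client side.
    t0 = time.time()
    client.invoke(
        FunctionName='hello-world-example',  # placeholder function name
        InvocationType='RequestResponse',
        Payload=json.dumps({'call': i}),
    )
    return time.time() - t0

# Invoke 100 functions concurrently from a thread pool and record latencies.
with ThreadPoolExecutor(max_workers=128) as pool:
    latencies = list(pool.map(timed_invoke, range(100)))

print(f'max client-side latency: {max(latencies):.2f} s')
```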
As you can see, there is barely any difference between cold and warm invocations. This is because of the added delay described in this thread.

Conda Python Interpreter
If I install Miniconda and create a Python 3.11 env in my AWS EC2 VM with Ubuntu 22.04, then execute the same code, I get the following: the behavior using the conda environment looks more like what Lithops would normally do. Warm functions take less than 1 second, and cold ones take half of the time they used to take. I don't know why conda solved the problem...
I've noticed an issue with the performance of AWS Lambda function invocations. Depending on the Python interpreter used, the invocation performance of the cloud functions changes.
For example, when using the Python 3.10 interpreter of a VM in AWS EC2 with Ubuntu 22.04, the start of some AWS Lambda functions is delayed between 5 and 10 seconds, as can be seen in this plot:
But using the same Python version (3.10.12) from conda, in the same VM, same OS, and same AWS account, I obtained much better performance:
Despite the performance improvement when using conda, there are still almost 50% of the functions that take 1 second longer to start, even when in a warmed-up state (see the last two map stages in the previous plot). This behavior is the same for Python 3.8, 3.9, 3.10, and 3.11.
Python 3.8 plot (using conda)
Python 3.9 plot (using conda)
Python 3.10 plot (using conda)
Python 3.11 plot (using conda)
But with Python 3.7, the performance is what one would expect it to be (almost perfect):
All the previous plots have been generated doing 3 maps of 100 functions that sleep for 5 seconds. This has been executed from a t2.large VM with Ubuntu 22.04 in us-east-1, with all the Lithops default configurations except for invoke_pool_threads, which was set to 128. I have also used the same VM with the Amazon Linux 2023 OS, and the results are similar to the previous ones using the conda interpreter (I could upload the plots if requested). I've used the current master branch of Lithops to do this test, but the issue can be reproduced with versions 3.0.0, 3.0.1, 2.9, and also 2.7.1.

Here is the code used:
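A minimal sketch of the experiment as described above, assuming the standard Lithops FunctionExecutor API (the inline config, credentials handling, and function names are illustrative, not the original script):

```python
import time

import lithops

# Illustrative config: default Lithops settings for the aws_lambda backend,
# overriding only invoke_pool_threads as described above.
# Credentials, execution role, and storage bucket settings are omitted here.
config = {
    'lithops': {'backend': 'aws_lambda', 'storage': 'aws_s3'},
    'aws': {'region': 'us-east-1'},
    'aws_lambda': {'invoke_pool_threads': 128},
}

def sleep_task(i):
    # Each function simply sleeps for 5 seconds.
    time.sleep(5)
    return i

fexec = lithops.FunctionExecutor(config=config)
for _ in range(3):  # 3 consecutive map stages of 100 functions each
    futures = fexec.map(sleep_task, range(100))
    fexec.get_result(futures)
fexec.plot()  # generates the execution timeline plots
```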