Performance Evaluation of Different Programming Languages in AWS Lambda

Serverless is, in theory, an architectural principle independent of any programming language. Nevertheless, languages behave and perform differently in such an environment. Since, disregarding any reuse, each serverless function needs its own execution environment, building that execution environment is a crucial factor for the overall execution time of the function. This first initialization of the execution environment is called a cold start. Regardless of the programming language, cold start time is a concern for all serverless functions, since they generally run in containerized execution environments. However, the language can have a big impact on its severity. Languages designed to run in their own virtual machine to ensure platform independence may perform worse overall than a language with a more lightweight approach such as JavaScript. But cold start time is not the only factor when measuring performance: if an application exceeds a certain invocation frequency, the called serverless function might still have an active container that can be reused for another execution. Considering these points, it is imperative to build a comprehensive understanding of the differences in performance. To further investigate these performance differences between programming languages for serverless functions, an experiment was conducted.

Table: Comparison of native programming language support (Node.js, Python, Go, Java, Ruby, .Net, PHP, Swift) of the top four FaaS cloud platforms (AWS Lambda, Azure Functions, IBM Cloud Functions, Google Cloud Functions)

The four biggest function-as-a-service providers are Google with Google Cloud Functions, Microsoft with Azure Functions, Amazon with AWS Lambda, and IBM with IBM Cloud Functions, which is based on Apache OpenWhisk. The table above compares the four competitors regarding the spectrum of runtime environments they natively support for different programming languages. Beyond this overall representation, they vary in the supported versions and sub-languages that run in the containers. Most of them also support building your own runtime environment via technologies like Docker. For a current and more detailed list of supported languages, consult their respective documentation.

Experiment Setup

When considering the different versions and the maturity of the documentation, AWS Lambda provides the broadest set of supported programming languages. Consequently, AWS was chosen as the platform for the experiment.

Underlying the choice of programming languages, there are different container environments available. AWS provides two options: the container boots either Amazon Linux or Amazon Linux 2. To simplify the experiment, the newest version, Amazon Linux 2, was chosen as the container runtime. The only exception is Go, for which only Amazon Linux is available as the runtime container. Furthermore, to provide the most long-term relevant comparison, the newest version of each language runtime environment at the time was used.
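
As an illustration of how the runtime container is selected, the following is a minimal sketch of deploying one of the functions with boto3 while pinning its runtime identifier. The function name, role ARN, and concrete runtime versions are illustrative assumptions, not the exact values used in the experiment.

```python
# Sketch: creating a Lambda function with an explicitly pinned runtime.
# FunctionName, the role ARN and the runtime identifiers are illustrative
# assumptions; the experiment used the newest runtime per language at the time.
import boto3

lambda_client = boto3.client("lambda")

with open("hello.zip", "rb") as f:  # zipped deployment package with handler.py
    package = f.read()

lambda_client.create_function(
    FunctionName="hello-python",                          # hypothetical name
    Runtime="python3.9",                                  # e.g. "nodejs14.x", "go1.x", "java11", ...
    Role="arn:aws:iam::123456789012:role/lambda-exec",    # placeholder execution role
    Handler="handler.handle",
    Code={"ZipFile": package},
)
```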

To set up the experiment, a simple client service was created that executes HTTP GET requests on demand. These calls can be parameterized with a target URL, a number of calls to execute, and a delay to wait between the calls. The targets for this service were newly created Lambda functions: six hello-world example functions, one written in each of the six programming languages under scope. When invoked, the function code only returns a fixed string response. Each function has a concurrency restriction attached, which limits the number of instances of the function to one. This ensures that no additional instances, and thus no additional cold starts, are created just because another execution has not yet finished. To make the functions accessible over the internet, an AWS HTTP API Gateway was created for the experiment, containing a separate resource for every function.
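
To make the setup more concrete, here is a minimal sketch of the Python variant of such a hello-world handler, a simple client loop issuing parameterized GET requests, and the reserved-concurrency limit of one. Names such as `hello-python` and the example URL are placeholders, not the exact values from the experiment.

```python
import time
import urllib.request

import boto3


def handle(event, context):
    # Hello-world handler: only returns a fixed string, so the measured
    # latency is dominated by the platform and runtime, not by the code.
    return "Hello World"


def call_repeatedly(url, calls, delay_seconds):
    # Client-service sketch: fires `calls` GET requests against `url`,
    # waiting `delay_seconds` between them, and records round-trip times in ms.
    timings = []
    for _ in range(calls):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        timings.append((time.perf_counter() - start) * 1000)
        time.sleep(delay_seconds)
    return timings


if __name__ == "__main__":
    # Reserved concurrency of 1 keeps at most one instance of the function
    # alive, so overlapping requests cannot spawn additional cold starts.
    boto3.client("lambda").put_function_concurrency(
        FunctionName="hello-python",
        ReservedConcurrentExecutions=1,
    )
    # Example run: 20 calls against the API Gateway resource, 1 s apart.
    print(call_repeatedly(
        "https://example.execute-api.eu-central-1.amazonaws.com/hello-python",
        calls=20,
        delay_seconds=1.0,
    ))
```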

The latency to the client is not an optimal metric, since the client was using an unstable private connection. Hence, the latency measurements inside of AWS were taken as the basis for the experiment's results. To capture these latencies, AWS X-Ray was activated on each function. X-Ray logs every step of the execution in detailed metrics. Relevant metrics for the experiment were the response time of the request and the response time of the function. Further relevant for identifying the cold starts was the internal information about the function execution, containing an initialization metric and the invocation time of the code. Between the function's invocation and the function's response there is an overhead, which represents data formatting and the internal communication of Lambda. Note that for Go not all metrics were available through X-Ray; this might be caused by the ambiguity of the environment version, as the only specification available is 1.x.
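
A minimal sketch of how active tracing can be switched on and trace summaries retrieved with boto3 is shown below. The function name and the time window are placeholders, and the detailed per-segment timings (initialization, invocation, overhead) live in the full trace documents rather than in the summaries.

```python
# Sketch: enable X-Ray active tracing on a function and fetch trace summaries
# for the last hour. FunctionName is a hypothetical placeholder.
import datetime

import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="hello-python",
    TracingConfig={"Mode": "Active"},
)

xray = boto3.client("xray")
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)
summaries = xray.get_trace_summaries(StartTime=start, EndTime=end)

for summary in summaries["TraceSummaries"]:
    # Duration: end-to-end time of the trace; ResponseTime: time until the
    # response was returned to the caller. Both values are in seconds.
    print(summary["Id"], summary.get("Duration"), summary.get("ResponseTime"))
```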

|                | Node.js     | Python      | Go          | Java        | Ruby        | .Net        |
|----------------|-------------|-------------|-------------|-------------|-------------|-------------|
| API            | 262.6       | 220.7       | 343.3       | 490.8       | 246.5       | 685.6       |
| initialization | 120.3 (46%) | 115.7 (52%) | 202.6 (59%) | 363.6 (74%) | 137.4 (56%) | 167.3 (24%) |
| function       | 36.7 (14%)  | 7.8 (4%)    | –           | 31.2 (6%)   | 21.1 (9%)   | 385.4 (56%) |
| invocation     | 24.5 (9%)   | 0.9 (0.3%)  | –           | 28.7 (5.8%) | 12.8 (5.2%) | 380.9 (56%) |

Performance average of different stages for 10 cold start executions in ms

The table above shows the mean measured times for the API response, function response, invocation, and initialization of 10 cold starts in six different container runtimes. The percentage values state each metric's share of the API response time. The API response times range from 220ms in Python, making it the fastest function to react overall on a cold start, to 685ms in .Net Core, making it the slowest. The overall mean of a function returning in Lambda is 375ms. The second-fastest environment is Ruby with 247ms, and the third is Node.js with 263ms. The initialization times, which represent the time it took to initialize a new container with the runtime, reflect these rankings. Noticeably slowest is Java, with an initialization time of 364ms. Also relatively slow are Go with 203ms and .Net Core with 167ms. Taking function return times and invocation times into consideration, .Net seems to be the slowest by far. On the opposite end is Python, which seems to be the only one capable of performing at warm-execution level even on a cold start.
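
For clarity, the percentage column can be recomputed from the measured means; a small sketch using the Java column of the table above:

```python
# Each percentage is the metric's share of the total API response time.
api_response_ms = 490.8       # Java, cold start, API response
initialization_ms = 363.6     # Java, cold start, initialization

init_share = initialization_ms / api_response_ms
print(f"{init_share:.0%}")    # -> 74%, matching the table
```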

|            | Node.js | Python | Go   | Java | Ruby | .Net |
|------------|---------|--------|------|------|------|------|
| API        | 28.1    | 20.25  | 24.7 | 33.9 | 33.4 | 42.8 |
| function   | 10.6    | 2.7    | –    | 3.9  | 15.2 | 20.4 |
| invocation | 5.1     | 2      | –    | 3.4  | 4.1  | 19.7 |

Performance average of different stages for 20 executions (including cold start) in ms

The table above shows the mean measured times for the API response, function response, and invocation of 20 invocations in six different container runtimes. This table includes the first, cold function invocation at the start of the 20 invocations. As a first impression, it can be deduced that the API response times overall decrease drastically when running on a warm container. Since the sample size is rather small with only 20 invocations, only a tendency can be inferred that certain languages will perform better on short bursts of requests than others. The best-performing languages by API response time are Python, followed by Go, followed by Node.js. In this metric, Java recovers from its bad performance in the cold start comparison, moving into second place in function return time and invocation time with 3.9 and 3.4ms, surpassed only by Python with 2.7 and 2ms. Compared to the previous results of only cold start executions, .Net improves on its function return and invocation times and almost catches up with Ruby, which is the second slowest. The table further shows that the cold start issue is not restricted to the initialization only: the other metrics, like function return times and invocation times, have also improved significantly for most languages, except Python. In Python's case they declined slightly, which might be caused by the small sample size or the internal overhead of Lambda.

|            | Node.js | Python | Go  | Java | Ruby | .Net |
|------------|---------|--------|-----|------|------|------|
| API        | 15.7    | 9.6    | 7.9 | 9.8  | 22.2 | 9    |
| function   | 9.2     | 2.4    | –   | 2.5  | 14.9 | 1.2  |
| invocation | 4       | 2      | –   | 2.1  | 3.6  | 0.7  |

Performance average of different stages for 20 executions (excluding cold start) in ms

The table above shows the mean measured times for the API response, function response, and invocation of 20 executions in six different container runtimes. This table does not include the first, cold function invocation at the start of the 20 invocations. By excluding the cold start from the computations, this table depicts a more steadily called function that always has a warm container running. Hence, the results are more applicable to scenarios with higher load. The most notable difference to the previous performance tests is that .Net is now the fastest in function return and invocation with 1.2 and 0.7ms. Python is second with 2.4 and 2ms, very closely followed by Java with 2.5 and 2.1ms. This shows that the environments that do have their own VM to start, namely Java and .Net, are capable of catching up to the more lightweight environments once there is a certain frequency of invocations that keeps the containers warm. Ruby and Node.js seem to have a relatively high amount of overhead: Node.js loses over 50% of its function return time to overhead, and Ruby has the most overhead before the function returns, with over 75%.
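
One way to quantify that overhead, assuming it is the gap between the function return time and the invocation time from the table above, is sketched below:

```python
# Overhead = part of the function return time not spent in the code invocation
# (data formatting and Lambda-internal communication), warm executions only.
warm_means_ms = {  # language: (function return time, invocation time)
    "Node.js": (9.2, 4.0),
    "Ruby": (14.9, 3.6),
}

for language, (function_ms, invocation_ms) in warm_means_ms.items():
    overhead_share = (function_ms - invocation_ms) / function_ms
    print(f"{language}: {overhead_share:.0%} overhead before the function returns")
# -> Node.js: ~57%, Ruby: ~76%
```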

Discussion of Results

Although the experiment provides a comprehensive overview of the different programming languages used in Lambda functions, it does not directly compare Lambda's performance to that of the other cloud providers. Hence, the results are only valid for this specific platform.

Furthermore, the actual function code used in the experiment contains only a trivial set of instructions and is therefore most likely not directly transferable to more complex code. The more complex the executed code, the more dominant the performance of the language itself, outside of the initialization, will become.

Lambda also provides the ability to deploy custom runtimes, which enables the use of additional programming languages. The experiment only covers the languages natively supported in Lambda. Also note that the experiment only included the newest version available for every environment. Real use cases might either be restricted to an older version due to legacy dependencies or might find that an older version performs better for their specific use case.

Another point to mention is that bringing any form of dependency into the code will affect the initialization times.

In conclusion, in any real-world application of these results, different factors, such as the frequency of execution of a function, the complexity of the code, and the dependency requirements, have to be considered.


3 Responses to Performance Evaluation of Different Programming Languages in AWS Lambda

  1. Mohammed Ramadan says:

    You don't know how much I wanted this information to decide whether to go 100% serverless or not. Thank you very much for that awesome benchmark.
    But how much RAM did you reserve for each?

    • Dennis Aulenbacher says:

      Hi Mohammed,

      although I left the default RAM values for the different runtimes untouched, a tendency can be identified regarding the memory footprint of different languages. The ones with a higher cold start impact also tend to have a higher memory footprint, because they build up more complex runtime environments.
      The default for Node.js, Python, and Ruby is 128MB, while for C#, Java, and Go it's 512MB.

      Keep in mind that the tests were conducted on older versions of runtimes/containers than are available now, so there might have been significant improvements.

      Best Wishes
      Dennis
