Container Image and Provisioned Concurrency AWS Lambda Questions and Answers 2022

Container Image Support for AWS Lambda

Q: What is Container Image Support for AWS Lambda?

AWS Lambda now enables you to package and deploy functions as container images. Customers can leverage the flexibility and familiarity of container tooling, and the agility and operational simplicity of AWS Lambda to build applications.


Q: How can I use Container Image Support for AWS Lambda?

You can start with either an AWS-provided base image for Lambda or one of your preferred community or private enterprise images. Then, use the Docker CLI to build the image, upload it to Amazon ECR, and create the function using familiar Lambda interfaces and tools, such as the AWS Management Console, the AWS CLI, the AWS SDK, AWS SAM, and AWS CloudFormation.
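For example, once the image is pushed to Amazon ECR, the function can be created with the AWS SDK for Python (boto3). This is a minimal sketch; the function name, image URI, and execution role ARN are placeholders you would replace with your own values:

    import boto3

    lambda_client = boto3.client("lambda")

    # Create a Lambda function from a container image already pushed to Amazon ECR.
    # The function name, image URI, and role ARN are placeholders.
    response = lambda_client.create_function(
        FunctionName="my-container-function",
        PackageType="Image",
        Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest"},
        Role="arn:aws:iam::123456789012:role/my-lambda-execution-role",
        Timeout=30,
        MemorySize=512,
    )
    print(response["FunctionArn"])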

Q: Which container image types are supported?

You can deploy third-party Linux base images (e.g. Alpine or Debian) to Lambda in addition to the Lambda provided images. AWS Lambda will support all images based on the following image manifest formats: Docker Image Manifest V2 Schema 2 (used with Docker version 1.10 and newer) or Open Container Initiative (OCI) Spec (v1.0 and up). Lambda supports images with a size of up to 10GB.

Q: What base images can I use?

AWS Lambda provides a variety of base images customers can extend, and customers can also use their preferred Linux-based images with a size of up to 10GB.

Q: What container tools can I use to package and deploy functions as container images?

You can use any container tooling as long as it supports one of the following container image manifest formats: Docker Image Manifest V2 Schema 2 (used with Docker version 1.10 and newer) or Open Container Initiative (OCI) Specifications (v1.0 and up). For example, you can use native container tools (e.g. docker run, docker compose, Buildah and Packer) to define your functions as a container image and deploy to Lambda.

Q: What AWS Lambda features are available to functions deployed as container images?

All existing AWS Lambda features, with the exception of Lambda layers and Code Signing, can be used with functions deployed as container images. Once deployed, AWS Lambda will treat an image as immutable. Customers can use container layers during their build process to include dependencies.

Q: Will AWS Lambda patch and update my deployed container image?

Not at this time. Your image, once deployed to AWS Lambda, will be immutable. The service will not patch or update the image. However, AWS Lambda will publish curated base images for all supported runtimes that are based on the Lambda managed environment. These published images will be patched and updated along with updates to the AWS Lambda managed runtimes. You can pull and use the latest base image from DockerHub or Amazon ECR Public, re-build your container image and deploy to AWS Lambda via Amazon ECR. This allows you to build and test the updated images and runtimes, prior to deploying the image to production.

Q: What are the differences between functions created using ZIP archives vs. container images?

There are three main differences between functions created using ZIP archives vs. container images:

  1. Functions created using ZIP archives have a maximum code package size of 250 MB unzipped, and those created using container images have a maximum image size of 10 GB. 
  2. Lambda uses Amazon ECR as the underlying code storage for functions defined as container images, so a function may not be invocable when the underlying image is deleted from ECR. 
  3. ZIP functions are automatically patched for the latest runtime security and bug fixes. Functions defined as container images are immutable, and customers are responsible for the components packaged in their function. Customers can leverage the AWS provided base images which are regularly updated by AWS for security and bug fixes, using the most recent patches available.

Q: Is there a performance difference between functions defined as zip and container images?

No – AWS Lambda ensures that the performance profiles for functions packaged as container images are the same as for those packaged as ZIP archives, including typically sub-second start up times.

Q: How will I be charged for deploying Lambda functions as container images?

There is no additional charge for packaging and deploying functions as container images to AWS Lambda. When you invoke your function deployed as a container image, you pay the regular price for requests and execution duration. To learn more, visit AWS Lambda pricing. You will be charged for storing your container images in Amazon ECR at the standard ECR prices. To learn more, visit Amazon ECR pricing.

Q: What is the Lambda Runtime Interface Emulator (RIE)?

The Lambda Runtime Interface Emulator is a proxy for the Lambda Runtime API, which allows customers to locally test their Lambda function packaged as a container image. It is a lightweight web server that converts HTTP requests to JSON events and emulates the Lambda Runtime API. It allows you to locally test your functions using familiar tools such as cURL and the Docker CLI (when testing functions packaged as container images). It also simplifies running your application on additional compute services. You can include the Lambda Runtime Interface Emulator in your container image to have it accept HTTP requests natively instead of the JSON events required for deployment to Lambda. This component does not emulate the Lambda orchestrator, or security and authentication configurations. The Runtime Interface Emulator is open sourced on GitHub. You can get started by downloading and installing it on your local machine.

Q: Why do I need the Lambda Runtime Interface Emulator (RIE) during local testing?

The Lambda Runtime API in the running Lambda service accepts JSON events and returns responses. The Lambda Runtime Interface Emulator allows the function packaged as a container image to accept HTTP requests during local testing with tools like cURL, and surface them via the same interface locally to the function. It allows you to use the docker run or docker-compose up commands to locally test your Lambda application.
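As a rough sketch, assuming the container image is running locally with the emulator listening on port 9000 (for example after docker run -p 9000:8080 my-image:latest), you can post a mock JSON event to it from Python using the requests library:

    import json
    import requests

    # Assumes the image is running locally with the RIE mapped to port 9000.
    url = "http://localhost:9000/2015-03-31/functions/function/invocations"

    # A mock event; replace with a payload shaped like your real event source.
    event = {"key": "value"}

    response = requests.post(url, data=json.dumps(event))
    print(response.status_code, response.text)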

Q: What function behaviors can I test locally with the emulator?

You can use the emulator to test if your function code is compatible with the Lambda environment, runs successfully, and provides the expected output. For example, you can mock test events from different event sources. You can also use it to test extensions and agents built into the container image against the Lambda Extensions API.

Q: How does the Runtime Interface Emulator (RIE) help me run my Lambda compatible image on additional compute services?

Customers can add the Runtime Interface Emulator as the entry point to the container image or package it as a sidecar to ensure the container image now accepts HTTP requests instead of JSON events. This simplifies the changes required to run their container image on additional compute services. Customers will be responsible for ensuring they follow all security, performance, and concurrency best practices for their chosen environment. RIE is pre-packaged into the AWS Lambda provided images, and is available by default in AWS SAM CLI. Base image providers can use the documentation to provide the same experience for their base images.

Q: How can I deploy my existing containerized application to AWS Lambda?

You can deploy a containerized application to AWS Lambda if it meets the below requirements:

  1. The container image must implement the Lambda Runtime API. We have open-sourced a set of software packages, Runtime Interface Clients (RIC), that implement the Lambda Runtime API, allowing you to seamlessly extend your preferred base images to be Lambda compatible.
  2. The container image must be able to run on a read-only filesystem. Your function code can access a writable /tmp directory storage of 512 MB (see the sketch after this list). If you are using an image that requires a writable root directory, configure it to write to the /tmp directory.
  3. The files required for the execution of function code must be readable by the default Lambda user. Lambda defines a default Linux user with least-privileged permissions that follows security best practices. You need to verify that your application code does not rely on files that are restricted to other Linux users for execution.
  4. It is a Linux based container image.
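To illustrate the read-only filesystem requirement (item 2 above), here is a minimal, hypothetical Python handler that confines all writes to the /tmp directory:

    import os

    # Minimal handler sketch: the root filesystem is read-only in Lambda,
    # so all writes go to the writable /tmp directory (up to 512 MB).
    def handler(event, context):
        scratch_path = os.path.join("/tmp", "scratch.txt")  # only /tmp is writable
        with open(scratch_path, "w") as f:
            f.write("intermediate results go here")
        with open(scratch_path) as f:
            return {"contents": f.read()}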

Provisioned Concurrency

Q: What is AWS Lambda Provisioned Concurrency?

Provisioned Concurrency gives you greater control over the performance of your serverless applications. When enabled, Provisioned Concurrency keeps functions initialized and hyper-ready to respond in double-digit milliseconds.

Q: How do I set up and manage Provisioned Concurrency?

You can configure concurrency on your function through the AWS Management Console, the Lambda API, the AWS CLI, and AWS CloudFormation. The simplest way to benefit from Provisioned Concurrency is by using AWS Auto Scaling. You can use Application Auto Scaling to configure schedules, or have Auto Scaling automatically adjust the level of Provisioned Concurrency in real time as demand changes. To learn more about Provisioned Concurrency, see the documentation.
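As a sketch of the API-based approach with boto3, the snippet below configures Provisioned Concurrency on a function alias and registers it with Application Auto Scaling; the function name, alias, and capacity values are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")
    autoscaling = boto3.client("application-autoscaling")

    # Configure 100 units of Provisioned Concurrency on an alias or version.
    lambda_client.put_provisioned_concurrency_config(
        FunctionName="my-function",
        Qualifier="prod",  # alias or version (not $LATEST)
        ProvisionedConcurrentExecutions=100,
    )

    # Optionally let Application Auto Scaling adjust the level as demand changes.
    autoscaling.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId="function:my-function:prod",
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        MinCapacity=50,
        MaxCapacity=200,
    )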

Q: Do I need to change my code if I want to use Provisioned Concurrency?

You don’t need to make any changes to your code to use Provisioned Concurrency. It works seamlessly with all existing functions and runtimes. There is no change to the invocation and execution model of Lambda when using Provisioned Concurrency.

Q: How will I be charged for Provisioned Concurrency?

Provisioned Concurrency adds a pricing dimension, ‘Provisioned Concurrency’, for keeping functions initialized. When enabled, you pay for the amount of concurrency that you configure and for the period of time that you configure it. When your function executes while Provisioned Concurrency is configured on it, you also pay for Requests and execution Duration. To learn more about the pricing of Provisioned Concurrency, see AWS Lambda Pricing.

Q: When should I use Provisioned Concurrency?

Provisioned Concurrency is ideal for building latency-sensitive applications, such as web or mobile backends, synchronously invoked APIs, and interactive microservices. You can easily configure the appropriate amount of concurrency based on your application’s unique demand. You can increase the amount of concurrency during times of high demand and lower it, or turn it off completely, when demand decreases.

Q: What happens if a function receives invocations above the configured level of Provisioned Concurrency?

If the concurrency of a function reaches the configured level, subsequent invocations of the function have the latency and scale characteristics of regular Lambda functions. You can restrict your function to only scale up to the configured level. Doing so prevents the function from exceeding the configured level of Provisioned Concurrency. This is a mechanism to prevent undesired variability in your application when demand exceeds the anticipated amount.
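One way to enforce this cap, sketched below with boto3, is to set the function's reserved concurrency equal to its Provisioned Concurrency level, so requests above that level are throttled rather than served by on-demand scaling; the values are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    # Cap total concurrency at the same level as Provisioned Concurrency so the
    # function never spills over into on-demand scaling.
    lambda_client.put_function_concurrency(
        FunctionName="my-function",
        ReservedConcurrentExecutions=100,
    )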

AWS Lambda functions powered by Graviton2 processors

Q: What are AWS Lambda functions powered by Graviton2 processors?

AWS Lambda allows you to run your functions on either x86-based or Arm-based processors. AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores to deliver increased price performance for your cloud workloads. Customers get the same advantages of AWS Lambda: running code without provisioning or managing servers, automatic scaling, high availability, and paying only for the resources they consume.

Q: Why should I use AWS Lambda functions powered by Graviton2 processors?

AWS Lambda functions powered by Graviton2, using an Arm-based processor architecture designed by AWS, are designed to deliver up to 34% better price performance compared to functions running on x86 processors, for a variety of serverless workloads, such as web and mobile backends, data, and stream processing. With lower latency, up to 19% better performance, a 20% lower cost, and the highest power-efficiency currently available at AWS, Graviton2 functions can power mission critical serverless applications. Customers can configure both existing and new functions to target the Graviton2 processor. They can deploy functions running on Graviton2 as either zip files or container images.

Q: How do I configure my functions to run on Graviton2 processors?

You can configure functions to run on Graviton2 through the AWS Management Console, the AWS Lambda API, the AWS CLI, and AWS CloudFormation by setting the architecture flag to ‘arm64’ for your function.
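For example, using boto3, a new function can target Graviton2 by passing the ‘arm64’ architecture flag at creation time; the names, runtime, role ARN, and file path below are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    # Create a Graviton2 (arm64) function from a ZIP package.
    with open("function.zip", "rb") as f:
        lambda_client.create_function(
            FunctionName="my-arm-function",
            Runtime="python3.9",
            Handler="app.handler",
            Role="arn:aws:iam::123456789012:role/my-lambda-execution-role",
            Architectures=["arm64"],
            Code={"ZipFile": f.read()},
        )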

Q: How do I deploy my application built using functions powered by Graviton2 processors?

There is no change between x86-based and Arm-based functions. Simply upload your code via the AWS Management Console as a ZIP file or container image, and AWS Lambda automatically runs your code when triggered, without requiring you to provision or manage infrastructure.

Q: Can an application use both functions powered by Graviton2 processors and x86 processors?

An application can contain functions running on both architectures. AWS Lambda allows you to change the architecture (‘x86_64’ or ‘arm64’) of your function’s current version. Once you create a specific version of your function, the architecture cannot be changed.

Q: Does AWS Lambda support multi-architecture container images?

No. Each function version can only use a single container image.

Q: Can I create AWS Lambda Layers that target functions powered by AWS Graviton2 processors?

Yes. Layers and extensions can be targeted to ‘x86_64’ or ‘arm64’ compatible architectures. The default architecture for functions and layers is ‘x86_64’.
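As a minimal boto3 sketch, a layer version can declare ‘arm64’ compatibility when it is published; the layer name, runtime, and file path are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    # Publish a layer version that targets arm64 functions.
    with open("layer.zip", "rb") as f:
        lambda_client.publish_layer_version(
            LayerName="my-arm64-layer",
            Content={"ZipFile": f.read()},
            CompatibleRuntimes=["python3.9"],
            CompatibleArchitectures=["arm64"],
        )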

Q: What languages and runtimes are supported by Lambda functions running on Graviton2 processors?

At launch, customers can use Python, Node.js, Java, Ruby, .NET Core, Custom Runtime (provided.al2), and OCI base images.

Q: What is the pricing of AWS Lambda functions powered by AWS Graviton2 processors? Does the AWS Lambda free tier apply to functions powered by Graviton2?

AWS Lambda functions powered by AWS Graviton2 processors are 20% cheaper compared to x86-based Lambda functions. The Lambda free tier applies to AWS Lambda functions powered by x86 and Arm-based architectures.

Q: How do I choose between running my functions on Graviton2 processors or x86 processors?

Each workload is unique and we recommend customers test their functions to determine the price performance improvement they might see. To do that, we recommend using the AWS Lambda Power Tuning tool. We recommend starting with web and mobile backends, data, and stream processing when testing your workloads for potential price performance improvements.

Q: Do I need an Arm-based development machine to create, build, and test functions powered by Graviton2 processors locally?

Interpreted languages like Python, Java, and Node generally do not require recompilation unless your code references libraries that use architecture-specific components. In those cases, you would need to provide the libraries targeted to arm64. For more details, please see the Getting started with AWS Graviton page. Non-interpreted languages will require compiling your code to target arm64. While more modern compilers will produce compiled code for arm64, you will need to deploy it into an Arm-based environment to test. To learn more about using Lambda functions with Graviton2, please see the documentation.

Amazon EFS for AWS Lambda

Q: What is Amazon EFS for AWS Lambda?

With Amazon Elastic File System (Amazon EFS) for AWS Lambda, customers can securely read, write and persist large volumes of data at virtually any scale using a fully managed elastic NFS file system that can scale on demand without the need for provisioning or capacity management. Previously, developers added code to their functions to download data from S3 or databases to local temporary storage, limited to 512MB. With EFS for Lambda, developers don’t need to write code to download data to temporary storage in order to process it.

Q: How do I set up Amazon EFS for Lambda?

Developers can easily connect an existing EFS file system to a Lambda function via an EFS Access Point by using the console, CLI, or SDK. When the function is first invoked, the file system is automatically mounted and made available to function code. You can learn more in the documentation.
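As a minimal boto3 sketch, the EFS Access Point is attached through the function's file system configuration, alongside the VPC settings discussed in the next question; all ARNs and IDs below are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    # Attach an existing EFS Access Point to a function and place the function in the
    # VPC that hosts the file system's mount targets.
    lambda_client.update_function_configuration(
        FunctionName="my-function",
        VpcConfig={
            "SubnetIds": ["subnet-0123456789abcdef0"],
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
        },
        FileSystemConfigs=[
            {
                "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0",
                "LocalMountPath": "/mnt/data",  # must be under /mnt/
            }
        ],
    )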

Q: Do I need to configure my function with VPC settings before I can use my Amazon EFS file system?

Yes. Mount targets for Amazon EFS are associated with a subnet in a VPC. The AWS Lambda function needs to be configured to access that VPC.

Q: Who should use Amazon EFS for Lambda?

Using EFS for Lambda is ideal for building machine learning applications or loading large reference files or models, processing or backing up large amounts of data, hosting web content, or developing internal build systems. Customers can also use EFS for Lambda to keep state between invocations within a stateful microservice architecture or a Step Functions workflow, or to share files between serverless applications and instance- or container-based applications.

Q: Will my data be encrypted in transit?

Yes. Data encryption in transit uses industry-standard Transport Layer Security (TLS) 1.2 to encrypt data sent between AWS Lambda functions and the Amazon EFS file systems.

Q: Is my data encrypted at rest?

Customers can provision Amazon EFS to encrypt data at rest. Data encrypted at rest is transparently encrypted while being written, and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by the AWS Key Management Service (KMS), eliminating the need to build and maintain a secure key management infrastructure.
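If you are creating a new file system for use with Lambda, encryption at rest is enabled at creation time. A minimal boto3 sketch, with a placeholder KMS key ARN (omit KmsKeyId to use the AWS managed key for EFS):

    import boto3

    efs = boto3.client("efs")

    # Create a file system with encryption at rest enabled.
    efs.create_file_system(
        Encrypted=True,
        KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
        PerformanceMode="generalPurpose",
    )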


Q: How will I be charged for Amazon EFS for AWS Lambda?

There is no additional charge for using Amazon EFS for AWS Lambda. Customers pay the standard price for AWS Lambda and for Amazon EFS. When using Lambda and EFS in the same availability zone, customers are not charged for data transfer. However, if they use VPC peering for Cross-Account access, they will incur data transfer charges. To learn more, please see Pricing.

Q: Can I associate more than one Amazon EFS file system with my AWS Lambda function?

No. Each Lambda function will be able to access one EFS file system.

Q: Can I use the same Amazon EFS file system across multiple functions, containers, and instances?

Yes. Amazon EFS supports Lambda functions, ECS and Fargate containers, and EC2 instances. You can share the same file system and use IAM policy and Access Points to control what each function, container, or instance has access to.

