AWS Lambda is an AWS service that lets you run applications without setting up any infrastructure. As the developer, you only have to upload the code, and AWS takes care of setting up the runtime environment. Unlike a traditional server application, a Lambda program does not run all the time; instead, it lies dormant until an event arrives. The AWS Lambda service then starts the Lambda and passes the event to it.
Common use cases are HTTP events from the Amazon API Gateway, message queue events from Amazon MQ and SQS, and events from S3. Lambdas can also be started manually by sending an event with the AWS CLI or programmatically with the AWS SDK from another program.
This architecture has several benefits. You only pay when the Lambda is running, and the AWS Lambda service can quickly start multiple instances of a Lambda when hundreds of messages suddenly arrive.
A downside of this architecture is that processing an initial event takes slightly longer, because the AWS Lambda service first has to start the Lambda program (cold start). To reduce this cold start delay, AWS retains a Lambda program (keeps it warm) for some time after an invocation has ended. Subsequent events can be handled much faster by the retained Lambda instance. If cold start delays are a problem, it is possible to provision Lambdas (Provisioned Concurrency). The AWS Lambda service then keeps the configured number of Lambdas always loaded (warm), so they can be invoked without a cold start when events arrive. Be aware that this increases the cost because you pay both for the time the Lambda runs and for the whole time it is provisioned.
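If you manage your Lambdas with an infrastructure-as-code tool such as Pulumi (used later in this article), provisioned concurrency can be configured as a resource as well. The following is only a minimal sketch, assuming the pulumi-aws Go SDK's lambda.ProvisionedConcurrencyConfig resource and a function that was created elsewhere in the Pulumi program with Publish enabled (provisioned concurrency always targets a published version or alias):
// Sketch: keep five warm instances of an existing, published Lambda.
// "function" is assumed to be a *lambda.Function created earlier with Publish: pulumi.Bool(true).
_, err = lambda.NewProvisionedConcurrencyConfig(ctx, "helloworld-pc", &lambda.ProvisionedConcurrencyConfigArgs{
	FunctionName:                    function.Name,
	Qualifier:                       function.Version,
	ProvisionedConcurrentExecutions: pulumi.Int(5),
})
if err != nil {
	return err
}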
AWS charges you for the runtime duration, depending on the configured memory and the architecture (x86_64, Arm), and for the number of requests. There are additional charges for data transfer in/out and for provisioned Lambdas. AWS Lambda has a free tier that includes one million requests and 400,000 GB-seconds of compute time each month.
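As a rough illustration of the GB-second metric: a Lambda configured with 512 MB that runs for 200 ms consumes 0.5 GB × 0.2 s = 0.1 GB-seconds, so the free tier's 400,000 GB-seconds would cover about 4 million such invocations (the one-million-request limit of the free tier applies separately).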
Check out the pricing page for more information:
https://aws.amazon.com/lambda/pricing/
Before considering AWS Lambda for your architecture, you should know one limitation. AWS Lambda programs can currently (December 2021) only run for 15 minutes (900 seconds). The AWS Lambda service terminates programs that run longer than that.
Also, local storage is limited to 512 MB (/tmp directory), but this can be worked around by attaching an EFS drive.
Other attributes of AWS Lambda are:
- Memory: 128 MB to 10,240 MB, in 1-MB increments.
- CPU power is allocated proportionally to the amount of memory provisioned: 10,240 MB = 6 vCPUs, 1,769 MB = 1 vCPU.
- Support for the x86_64 and Arm architectures. Arm is about 20% cheaper in duration cost.
- Default limit of 1,000 concurrent executions (depends on the AWS region). This value can be increased to tens of thousands by contacting AWS Support.
- Environment variables support (up to 4 KB).
- Invocation payload (request and response): 6 MB (synchronous), 256 KB (asynchronous).
- Zip deployment up to a file size of 50 MB.
- Docker deployment up to an image size of 10 GB.
There are two ways to deploy a Lambda with infrastructure-as-code tools: as a Docker image or as a zip file. Deploying with a Docker image follows the standard Docker workflow: build the Docker image, push it to a repository, and configure the Lambda with a reference to the image. AWS Lambda currently only supports Amazon ECR (Elastic Container Registry) as the Docker repository.
The second way is by uploading a zip file containing the application to AWS Lambda. The service takes care of setting up an environment and installing the application from the zip file.
In the following article, I will show you three examples. First, I demonstrate how to create a Lambda from a Docker image. In the second example, I show a zip deployment and how to run the Lambda on the Arm architecture. The final example is an application that cleans up Cloudwatch log groups and log streams. This Lambda is triggered periodically by an event from Amazon EventBridge.
Hello World ¶
A Go Lambda always depends on the aws-lambda-go library. This is the only dependency that you must install. Apart from that, a Lambda is like a regular Go application where you can use any library and connect to any service inside and outside of AWS.
go mod init helloworld
go get github.com/aws/aws-lambda-go
touch main.go
A Go Lambda always follows the same template: start the Lambda with lambda.Start and pass a reference to a handler function. In the handler function, implement the logic.
package main

import (
	"github.com/aws/aws-lambda-go/lambda"
)

func main() {
	lambda.Start(handle)
}

func handle() (string, error) {
	return "Hello World", nil
}
The name of the handler function does not matter, but the signature must satisfy the following requirements:
- The handler may take between 0 and 2 arguments. If there are two arguments, the first argument must implement context.Context.
- The handler may return between 0 and 2 values. If there is a single return value, it must be error. If there are two return values, the second value must be error.
These are all valid signatures:
func ()
func () error
func (TIn) error
func () (TOut, error)
func (context.Context) error
func (context.Context, TIn) error
func (context.Context) (TOut, error)
func (context.Context, TIn) (TOut, error)
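To make the TIn/TOut placeholders concrete, here is a minimal sketch of the last signature with hypothetical request and response structs (the names GreetRequest and GreetResponse are made up for this example). The aws-lambda-go library unmarshals the incoming JSON event into the input struct and marshals the returned struct back to JSON:
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

type GreetRequest struct {
	Name string `json:"name"`
}

type GreetResponse struct {
	Message string `json:"message"`
}

// handleGreet implements the func(context.Context, TIn) (TOut, error) signature.
func handleGreet(ctx context.Context, req GreetRequest) (GreetResponse, error) {
	return GreetResponse{Message: fmt.Sprintf("Hello %s", req.Name)}, nil
}

func main() {
	lambda.Start(handleGreet)
}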
One feature I don't use in the examples in this blog post is initializing global variables in an init() function. The code in this function runs once when the Lambda is loaded. Expensive initialization should always be done in the init() function, for example opening database connections or creating clients for other AWS services.
var s3Client *s3.Client

func init() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		....
	}
	// Assign to the package-level variable so the handler can reuse the client.
	s3Client = s3.NewFromConfig(cfg)
}
Docker ¶
This example will be deployed as a Docker image. Here is the Dockerfile I use for this example.
FROM public.ecr.aws/lambda/provided:al2 as build
RUN yum install -y golang
RUN go env -w GOPROXY=direct
ADD go.mod go.sum ./
RUN go mod download
ADD main.go ./
RUN go build -tags lambda.norpc -ldflags='-s' -o /helloworld main.go
# runtime
FROM public.ecr.aws/lambda/provided:al2
COPY --from=build /helloworld /helloworld
ENTRYPOINT [ "/helloworld" ]
This is a regular multi-stage Docker build. There is nothing special you have to do for Lambda support; just make sure that the final stage starts the Lambda binary (here via ENTRYPOINT). Docker deployment is useful when your Go program needs to call external programs. With Docker, you can pack everything into one image and completely control the environment.
Deployment ¶
Next, we write the Pulumi program that creates an ECR repository, builds the Docker image, pushes it to ECR, and finally, provisions the Lambda.
const name = "helloworld"
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
repo, err := createPrivateEcrRepository(ctx)
if err != nil {
return err
}
err = createEcrRepositoryLifecycle(ctx, err, repo)
if err != nil {
return err
}
registryInfo := getRegistryInfo(ctx, repo)
// Build and publish the container image.
image, err := docker.NewImage(ctx, name+"-image", &docker.ImageArgs{
Build: &docker.DockerBuildArgs{Context: pulumi.String("../lambda")},
ImageName: repo.RepositoryUrl,
Registry: registryInfo,
})
if err != nil {
return err
}
role, err := createIamRoleForLambda(ctx)
if err != nil {
return err
}
function, err := createLambda(ctx, image, role)
if err != nil {
return err
}
// Export the lambda ARN.
ctx.Export("lambda", function.Arn)
return nil
})
}
The createPrivateEcrRepository function creates the ECR repository, and createEcrRepositoryLifecycle creates a lifecycle policy in our ECR repository to clean up untagged images. This is optional, but it ensures that ECR deletes old and unused images. AWS charges you based on the number of GB stored in ECR (see the ECR pricing page).
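The lifecycle policy itself is not shown in this article; a policy along the following lines would achieve the described cleanup. This is only a sketch that would sit in the Pulumi program after the repository has been created, assuming the ecr.LifecyclePolicy resource of the pulumi-aws Go SDK:
// Expire untagged images one day after they were pushed.
_, err = ecr.NewLifecyclePolicy(ctx, name+"-lifecycle", &ecr.LifecyclePolicyArgs{
	Repository: repo.Name,
	Policy: pulumi.String(`{
		"rules": [{
			"rulePriority": 1,
			"description": "Expire untagged images",
			"selection": {
				"tagStatus": "untagged",
				"countType": "sinceImagePushed",
				"countUnit": "days",
				"countNumber": 1
			},
			"action": { "type": "expire" }
		}]
	}`),
})
if err != nil {
	return err
}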
The getRegistryInfo function creates temporary credentials for ECR. Pulumi delegates the Docker build to your local Docker process, and this process needs the credentials to upload images to the private ECR repository. Alternatively, install the Amazon ECR Docker Credential Helper. If you go this route, you need to change the Pulumi code. Visit the Pulumi documentation for more information.
The next function is createIamRoleForLambda, which creates the mandatory IAM execution role for the Lambda. This is an important concept on AWS: by default, a Lambda has no access to other AWS services; only a role grants access to certain services.
Another concept is that you never give permissions directly to an application. Instead, you assign a role to the surrounding container, and the program inside inherits the permissions from the role. This works the same way for other services on AWS, like EC2 and ECS.
A Lambda needs at least the permissions to create AWS Cloudwatch log groups and log streams and to write log statements. AWS provides the managed IAM policy AWSLambdaBasicExecutionRole, which contains the required Cloudwatch permissions (logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents).
func createIamRoleForLambda(ctx *pulumi.Context) (*iam.Role, error) {
	role, err := iam.NewRole(ctx, name+"-lambda-exec-role", &iam.RoleArgs{
		AssumeRolePolicy: pulumi.String(`{
			"Version": "2012-10-17",
			"Statement": [{
				"Sid": "",
				"Effect": "Allow",
				"Principal": {
					"Service": "lambda.amazonaws.com"
				},
				"Action": "sts:AssumeRole"
			}]
		}`),
	})
	if err != nil {
		return nil, err
	}

	_, err = iam.NewRolePolicyAttachment(ctx, name+"-lambda-exec", &iam.RolePolicyAttachmentArgs{
		Role:      role.Name,
		PolicyArn: pulumi.String("arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"),
	})
	if err != nil {
		return nil, err
	}

	return role, nil
}
The last step in the Pulumi program is createLambda, which provisions the Lambda function.
In this example, we create the Cloudwatch log group with Pulumi. This step is optional, but I recommend it. If you don't create the log group, AWS Lambda creates it automatically, but it sets the retention period to never expire, which means that the logs stay on AWS forever. I always want a retention period on the log groups, so AWS automatically deletes old log files. The name of the log group for Lambdas is always /aws/lambda/<your_lambda_name>.
Another benefit of managing the log group with Pulumi is that pulumi destroy also deletes the log group.
func createLambda(ctx *pulumi.Context, image *docker.Image, role *iam.Role) (*lambda.Function, error) {
	logGroup, err := cloudwatch.NewLogGroup(ctx, name, &cloudwatch.LogGroupArgs{
		Name:            pulumi.String("/aws/lambda/" + name),
		RetentionInDays: pulumi.Int(30),
	})
	if err != nil {
		return nil, err
	}
For provisioning the Lambda, we need to set the name, the execution role, the memory size, and the timeout. Memory size also determines the number of vCPUs (10,240 MB = 6 vCPUs). The timeout parameter specifies how long your Lambda can run before AWS stops it. In this example, we set the timeout to 3 seconds. Valid values are between 1 and 900 seconds (15 minutes).
For a Docker Lambda, we need to set the package type to Image and specify the location of the Docker image with ImageUri.
args := &lambda.FunctionArgs{
ImageUri: image.ImageName,
MemorySize: pulumi.Int(128),
Name: pulumi.String(name),
PackageType: pulumi.String("Image"),
Publish: pulumi.Bool(false),
Role: role.Arn,
Timeout: pulumi.Int(3),
}
function, err := lambda.NewFunction(
ctx,
name+"-lambda",
args,
pulumi.DependsOn([]pulumi.Resource{role, logGroup}),
)
if err != nil {
return nil, err
}
return function, nil
}
With this last piece of the puzzle in place, we can now provision everything with
pulumi up
And after a few seconds, our Lambda is installed and ready to receive events.
We can call the Lambda with the AWS CLI.
aws lambda invoke --function-name $(pulumi stack output lambda) --region $(pulumi config get aws:region) out.json
cat out.json
"Hello World"
Make sure that you delete everything with pulumi destroy if you no longer need the Lambda.
Zip and Arm ¶
This example will be deployed as a zip file and runs on the Arm architecture. This Lambda also takes an input and sends back a response: in this case, the SHA256 hash of the input parameter.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

func main() {
	lambda.Start(handle)
}

func handle(input string) (string, error) {
	fmt.Println("input: ", input)
	hasher := sha256.New()
	hasher.Write([]byte(input))
	sha := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
	return sha, nil
}
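Because the handler is a plain Go function, it can also be verified locally before deploying. This optional sketch (a hypothetical main_test.go in the same package) checks the handler against the hash we expect for "hello world":
package main

import "testing"

func TestHandle(t *testing.T) {
	got, err := handle("hello world")
	if err != nil {
		t.Fatal(err)
	}
	// Base64 URL encoding of the SHA256 of "hello world".
	want := "uU0nuZNNPgilLlLX2n2r-sSE7-N6U4DukIj3rOLvzek="
	if got != want {
		t.Errorf("got %s, want %s", got, want)
	}
}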
The Pulumi program is much shorter because we don't have to set up ECR and Docker. We only have to create the IAM execution role and the Lambda.
const name = "hash"
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
role, err := createIamRoleForLambda(ctx)
if err != nil {
return err
}
function, err := createLambda(ctx, role)
if err != nil {
return err
}
// Export the lambda ARN.
ctx.Export("lambda", function.Arn)
return nil
})
}
We need to create a zip file for a zip deployment. Pulumi has built-in support for creating zip files. The Lambda will be deployed to the custom Amazon Linux 2 (AL2) runtime (provided.al2). This runtime mandates that the executable inside the zip file is named bootstrap.
Like in the first example, we also have to specify the name, the memory size, the execution role, and the timeout in seconds. And to save money, the Lambda runs on the Arm architecture.
codeArchive := pulumi.NewAssetArchive(map[string]interface{}{
	"bootstrap": pulumi.NewFileAsset("../lambda/main"),
})

args := &lambda.FunctionArgs{
	Runtime:       pulumi.String("provided.al2"),
	Handler:       pulumi.String("bootstrap"),
	Code:          codeArchive,
	MemorySize:    pulumi.Int(128),
	Name:          pulumi.String(name),
	Publish:       pulumi.Bool(false),
	Role:          role.Arn,
	Timeout:       pulumi.Int(3),
	Architectures: pulumi.StringArray{pulumi.String("arm64")},
}

function, err := lambda.NewFunction(
	ctx,
	name,
	args,
	pulumi.DependsOn([]pulumi.Resource{role, logGroup}),
)
if err != nil {
	return nil, err
}

return function, nil
Because this Lambda runs on the Arm architecture, we have to compile the Go program for this architecture. We can do this on any Go-supported platform thanks to the excellent Go cross-compile capability.
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -ldflags='-s' -o main .
Now we can provision the Lambda with
pulumi up
And after a successful deployment, trigger the Lambda with the AWS CLI:
aws-vault exec home -- aws lambda invoke \
--function-name $(pulumi stack output lambda) \
--region $(pulumi config get aws:region) \
--cli-binary-format raw-in-base64-out \
--payload '"hello world"' \
out.json
cat out.json
"uU0nuZNNPgilLlLX2n2r-sSE7-N6U4DukIj3rOLvzek="
Cloudwatch Cleanup ¶
In this last example, I show you how to trigger a Lambda from another AWS service.
This Lambda cleans up Cloudwatch log groups. Most AWS services automatically create log groups in AWS Cloudwatch if you don't create them beforehand. As mentioned before, the retention period of these automatically created log groups is set to never expire. Therefore, if possible, I always create the log groups with Pulumi or Terraform. Sometimes this is not possible. In these cases, the following Lambda helps clean up the logs. It periodically scans all log groups in an AWS account. If it finds a group without a retention period, it changes the period to 12 months. It also deletes empty log groups that are older than the retention period; it is a known problem that Cloudwatch does not delete empty log groups on its own.
You find the source code for this Lambda here:
https://github.com/ralscha/blog2020/blob/master/golambda/cloudwatch_cleanup/lambda/main.go
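To give you an idea of what the cleanup does without reading the whole source, here is a simplified sketch (not the original code) of the retention part, based on the AWS SDK for Go v2. It walks all log groups of the current region with a paginator and sets a retention of 365 days on groups that are configured to never expire; the multi-region scanning and the deletion of empty, expired log groups are omitted here:
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs"
)

var client *cloudwatchlogs.Client

func init() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client = cloudwatchlogs.NewFromConfig(cfg)
}

func main() {
	lambda.Start(handle)
}

// handle walks all log groups and sets a retention period on groups
// that are set to "never expire" (RetentionInDays == nil).
func handle(ctx context.Context) error {
	paginator := cloudwatchlogs.NewDescribeLogGroupsPaginator(client, &cloudwatchlogs.DescribeLogGroupsInput{})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, group := range page.LogGroups {
			if group.RetentionInDays != nil {
				continue
			}
			_, err := client.PutRetentionPolicy(ctx, &cloudwatchlogs.PutRetentionPolicyInput{
				LogGroupName:    group.LogGroupName,
				RetentionInDays: aws.Int32(365), // roughly 12 months
			})
			if err != nil {
				return err
			}
			log.Printf("set retention on %s", aws.ToString(group.LogGroupName))
		}
	}
	return nil
}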
This Lambda will be deployed as a zip file and runs on the Arm architecture.
One difference from the previous examples is that we have to give this Lambda more permissions so that it can scan all log groups, set retention periods, and delete log groups.
func createIamRoleForLambda(ctx *pulumi.Context) (*iam.Role, error) {
	role, err := iam.NewRole(ctx, name+"-lambda-exec-role", &iam.RoleArgs{
		AssumeRolePolicy: pulumi.String(`{
			"Version": "2012-10-17",
			"Statement": [{
				"Sid": "",
				"Effect": "Allow",
				"Principal": {
					"Service": "lambda.amazonaws.com"
				},
				"Action": "sts:AssumeRole"
			}]
		}`),
		InlinePolicies: iam.RoleInlinePolicyArray{iam.RoleInlinePolicyArgs{
			Name: pulumi.String("logwatch"),
			Policy: pulumi.String(`{
				"Version": "2012-10-17",
				"Statement": [
					{
						"Effect": "Allow",
						"Action": [
							"logs:DescribeLogGroups",
							"ec2:DescribeRegions",
							"logs:PutRetentionPolicy",
							"logs:DescribeLogStreams",
							"logs:DeleteLogGroup"
						],
						"Resource": "*"
					}
				]
			}`),
		}},
	})
	if err != nil {
		return nil, err
	}

	_, err = iam.NewRolePolicyAttachment(ctx, name+"-lambda-exec", &iam.RolePolicyAttachmentArgs{
		Role:      role.Name,
		PolicyArn: pulumi.String("arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"),
	})
	if err != nil {
		return nil, err
	}

	return role, nil
}
This Lambda is triggered by another AWS service: Amazon EventBridge. One feature of EventBridge is to emit events periodically. This is controlled by a rule with either a cron expression or a rate expression. Here we configure a cron expression that triggers the Lambda once a month.
The EventBridge to Lambda connection requires three parts. The first part is the event rule, which specifies when to emit an event.
onceAMonthRule, err := cloudwatch.NewEventRule(ctx, name+"-onceAMonth", &cloudwatch.EventRuleArgs{
	Description:        pulumi.String("Triggers Cloudwatch Cleanup Lambda once a month"),
	ScheduleExpression: pulumi.String("cron(0 6 1 * ? *)"),
})
if err != nil {
	return err
}
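If a fixed interval is easier to reason about than a cron schedule, the rule could use a rate expression instead; a hypothetical alternative for the ScheduleExpression above would be:
ScheduleExpression: pulumi.String("rate(30 days)"),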
The second part is the event target, which specifies the target that should be called.
_, err = cloudwatch.NewEventTarget(ctx, name+"-onceAMonthTarget", &cloudwatch.EventTargetArgs{
	Rule: onceAMonthRule.Name,
	Arn:  function.Arn,
})
if err != nil {
	return err
}
The third part is a permission that allows EventBridge to invoke the Lambda.
_, err = lambda.NewPermission(ctx, name+"-allow-cloudwatch-to-call-lambda", &lambda.PermissionArgs{
	Action:      pulumi.String("lambda:InvokeFunction"),
	Function:    function.Name,
	Principal:   pulumi.String("events.amazonaws.com"),
	SourceArn:   onceAMonthRule.Arn,
	StatementId: pulumi.String("AllowExecutionFromCloudWatch"),
})
if err != nil {
	return err
}
Like the other examples, you provision the infrastructure with pulumi up.
To test if the Lambda works, trigger it manually with the AWS CLI.
aws lambda invoke --function-name $(pulumi stack output lambda) --region $(pulumi config get aws:region) out.json
Check out the Cloudwatch logs to see if everything worked correctly.
This concludes my tutorial about creating AWS Lambdas with Go and provisioning them with Pulumi (Go).
For more information about AWS Lambda, check out the official developer guide:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
Lambdas with Go:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-golang.html
For Pulumi, check out this how-to guide:
https://www.pulumi.com/registry/packages/aws/how-to-guides/aws-go-lambda/
For estimating Lambda costs check out the AWS Pricing Calculator:
https://calculator.aws/#/createCalculator/Lambda