AWS SAM CLI inside Docker Compose

Like most developers, I like to be able to run the application I’m working on locally. This desire ran into a few problems when I worked on an application that used AWS Lambda functions in production for various processing tasks.

AWS have fortunately anticipated the need to run Lambdas locally, and have developed the Serverless Application Model Command Line Interface (SAM CLI). SAM CLI can do a lot more than I’ll be using it for here, but there are two specific commands worth briefly discussing. sam local start-api uses a template file to start a faux API Gateway, which allows your Lambda functions to be triggered by simply hitting the defined HTTP endpoints. sam local start-lambda, on the other hand, creates a listener service that allows your Lambdas to be invoked directly using (for example) one of the AWS SDKs. The latter was the approach being used by the application I was working on.
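
To give a taste of the start-lambda flow before we wire everything up: once the listener is running, you point an AWS SDK at it instead of at real AWS. Here’s a minimal sketch using boto3 (in a file you might call invoke_local.py), assuming the listener is on localhost:3001 and the template defines a function named MyLambdaFunction, both of which we’ll set up below. The region and credentials are dummy values; the local listener doesn’t validate them.

# invoke_local.py (illustrative)

import json

import boto3

# Point the SDK at the local SAM CLI listener rather than real AWS.
client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:3001",
    region_name="us-east-1",            # dummy; not validated locally
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

response = client.invoke(
    FunctionName="MyLambdaFunction",
    Payload=json.dumps({"hello": "world"}).encode("utf-8"),
)
print(response["Payload"].read().decode("utf-8"))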

Docker Compose is a common solution to running all of the required server-side components needed for a web application, and was the solution being used in this case. Unfortunately it wasn’t immediately clear how to get Compose to play nice with SAM CLI.

It appears that when using sam local, SAM CLI spins up a Python application to serve as the listener; then, to actually execute a function, it spins up a Docker container for the relevant Lambda runtime. Hence the challenge: we want to run SAM CLI in a Docker container (so that we can spin up the whole application with Compose), but that means we’ll have a container trying to spin up a container…which is a bit tricky.

FYI: The image used by SAM CLI is lambci/lambda, which gets pulled down the first time a function is triggered.

The short version of the solution: instead of expecting the SAM CLI container to run its own Docker engine, we give the container access to the engine on the host (our local machine) and allow it to spin up new containers there. Since SAM CLI makes certain assumptions about the location of files relative to its execution context, we also need to make sure those assumptions hold when the child container runs on the host. Let’s step through a simplified example.

WARNING: There are significant security risks associated with this approach. Don’t use this for anything other than local development.

For the sake of brevity, I’ll skip the details of developing a Lambda function, and just keep to the details specific to SAM CLI.

The first thing we need to do is create a SAM Template that gives SAM CLI a definition for the configuration of our function:

# sam-cli-template.yml

# The format version and Serverless transform mark this as a SAM template.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      CodeUri: ./my-first-lambda
      Runtime: python3.6
      Timeout: 30

This example creates a function named MyLambdaFunction, which is an important detail when invoking the function. The function code lives in a directory called my-first-lambda, which is a child of the current directory. The function uses the Python 3.6 runtime, will time out after 30 seconds, and will call the function named handler inside the index module (probably defined in an index.py file).
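
For reference, a minimal placeholder handler that satisfies that definition might look like this; the body is purely illustrative:

# my-first-lambda/index.py

# Minimal placeholder matching Handler: index.handler in the template.
def handler(event, context):
    return {"message": "Hello from my first Lambda!", "input": event}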

Now that we have the function definition, we can define the Dockerfile that will run SAM CLI. We’ll use Alpine to keep the image as lightweight as possible, install Python and the AWS CLI (which SAM CLI depends on), and of course install SAM CLI itself:

# Dockerfile.lambdas

FROM alpine:3.9

# Python 3 (which includes pip3 on Alpine 3.9) and build tools,
# needed to install the AWS CLI and SAM CLI.
RUN apk add --no-cache build-base python3
RUN pip3 install --upgrade pip
RUN pip3 install awscli

RUN pip3 install aws-sam-cli

WORKDIR /var/task

# Bake the template into the image; SAM CLI will look for it here.
COPY ./sam-cli-template.yml template.yml

# The host's Docker socket gets mounted here at runtime (see the Compose file).
VOLUME /var/run/docker.sock

EXPOSE 3001

ENTRYPOINT ["/bin/sh"]

We’ve also copied in the template we created in the previous step, and exposed port 3001 for talking to SAM CLI.

The last step brings it all together. Normally you’d have an existing Compose file that spins up your server and any other dependencies, but we’ll just do the one service here. We’ll build it with the Dockerfile we just created, and use our current directory as the context. I’ve named the resultant image myapp-lambda:local.

# docker-compose.yml

version: '3'
services:
  lambdas:
    build:
      context: ./
      dockerfile: Dockerfile.lambdas
    image: myapp-lambda:local
    volumes:
      # Pass the host's Docker socket through to the container.
      - /var/run/docker.sock:/var/run/docker.sock
    entrypoint: sam local start-lambda --host 0.0.0.0 --port 3001 --docker-volume-basedir /c/Users/Carl/DevProjects/sam-cli-example
    networks:
      - lambda

# The network referenced above has to be defined, even with no options;
# your other services would join it to reach the listener.
networks:
  lambda:

There are two critical pieces to this Compose file:

Socket passthrough: We mount the host’s Docker socket into the SAM CLI container at the same path. This ensures that any containers started by SAM CLI will actually be started as new containers on the host.

Base directory: We pass the --docker-volume-basedir flag when starting SAM CLI. This has to be the absolute path on the host machine (that’s your local machine, not the container) to the directory that corresponds to SAM CLI’s working directory in the container, so that the Lambda runtime container can mount the function code from the host.

In my case I’m saying I have:

  • /c/Users/Carl/DevProjects/sam-cli-example as the root directory.
    • All of the files we created would be stored here
  • /c/Users/Carl/DevProjects/sam-cli-example/my-first-lambda as the directory for my Lambda code.

I’ve specified Windows paths in the example, since it might not be obvious how they should be structured. If you’re on a *nix or *nix-like system, you can just specify your paths as you normally would.
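
A final note on actually invoking the function once docker-compose up is running: port 3001 is exposed but not published to the host in the Compose file above, so the listener is reachable from other services on the lambda network (via the lambdas service name) rather than from your local machine. From your app server container, the invocation sketch from earlier would just swap the endpoint; again, a sketch under those assumptions:

import json

import boto3

# Inside the Compose network, the listener is reachable via the service name.
# Add a ports mapping (e.g. "3001:3001") to the lambdas service if you'd
# rather hit it from the host at localhost:3001.
client = boto3.client(
    "lambda",
    endpoint_url="http://lambdas:3001",
    region_name="us-east-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

result = client.invoke(FunctionName="MyLambdaFunction")
print(result["Payload"].read().decode("utf-8"))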

Thanks for reading. Toodles!