AWS CLI in a Docker Container


Most of the systems I run now have Docker on them. I try to install as little as possible on the base system. This blog post will walk you through how to use the container and two of the many useful commands available in the AWS CLI tool.

You can find an automated build of this container on Docker Hub here: https://hub.docker.com/r/garland/aws-cli-docker/ This Docker image is small (only 30MB) because it was built with the Alpine Linux base image.
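
The image is pulled automatically the first time you run it, but you can also pull it ahead of time and check its size with standard Docker commands:

docker pull garland/aws-cli-docker
docker images garland/aws-cli-docker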

Starting the Docker container

It is pretty simple to start the Docker container and get a shell:


docker run \
-it \
--env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
--env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
--env AWS_DEFAULT_REGION=us-west-2 \
garland/aws-cli-docker /bin/sh

Once you are in the shell, you can use any of the supported commands. For example, you can copy/upload items to S3, list EC2 instances, and start EC2 instances. The full command list and usage guide can be found in the AWS CLI documentation.
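
For instance, here is a quick sketch of listing instances from that shell. The --query filter, which trims the output down to instance IDs and states, is my addition; it assumes your keys have EC2 read access:

aws ec2 describe-instances \
--query 'Reservations[].Instances[].[InstanceId,State.Name]' \
--output table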

Copy files to S3

If you have a set of files on your local server that you want to copy over to S3, you can use this tool to do that. Below, I've written instructions for copying the files in /opt/database to s3://garland.public.bucket/database.

First, you need to restart the container and map the directory you want to copy over.


docker run \
-it \
--env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
--env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
--env AWS_DEFAULT_REGION=us-west-2 \
-v /opt/database:/opt/database \
garland/aws-cli-docker /bin/sh

This adds the Docker -v option, which maps a path on your local server to a path inside the container. The format is /local/path:/inside/container/path
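
Once inside, you can sanity-check the mapping; the host's /opt/database files should show up at the same path in the container:

ls -la /opt/database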

Now that you are inside the container with a shell, you can execute this to copy the folder over.

aws s3 sync /opt/database s3://garland.public.bucket/database

The entire database folder has been uploaded to s3://garland.public.bucket/database.
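
To confirm the upload from the same shell, you can list the destination prefix:

aws s3 ls s3://garland.public.bucket/database/ --recursive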

You can get help pages for any level of the CLI. For example, typing aws help opens the top-level help pages, which list all of the AWS resources the tool can control. You can then drill down into help for each resource; typing aws s3 help, for instance, opens a help menu specific to S3 tasks and usage.
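
The pattern is simply to append help at any level of the command tree:

aws help           # top level: lists every service the CLI can control
aws s3 help        # help for the s3 commands
aws s3 sync help   # help for one specific subcommand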

Copy files to S3 – Automated

If you don’t want to do the copy from an interactive shell, you can run the container and the command in one line!

docker run \
--env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
--env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
--env AWS_DEFAULT_REGION=us-west-2 \
-v /opt/database:/opt/database \
garland/aws-cli-docker \
aws s3 sync /opt/database s3://garland.public.bucket/database

You’ll notice that the -it switch, Docker’s switch for an interactive terminal, was removed, and the /bin/sh command was replaced with the S3 command from above. This makes automation easy: work out the command in interactive mode, then run it non-interactively like this or wrap it in a script.
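
As a sketch of the script approach (the script name and the idea of reading the keys from environment variables are my additions, not part of the image):

#!/bin/sh
# s3-backup.sh - hypothetical wrapper around the one-line sync above.
# Expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to already be set
# in the calling environment instead of hard-coding them here.
docker run \
--env AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env AWS_DEFAULT_REGION=us-west-2 \
-v /opt/database:/opt/database \
garland/aws-cli-docker \
aws s3 sync /opt/database s3://garland.public.bucket/database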

Copy files to S3 – Automated and Background

Docker can do so much more than that though!  What if the copy (or any operation) takes a long time and you don’t want to hold up your current shell? You can easily background this task with Docker.

docker run \
-d \
--env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
--env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
--env AWS_DEFAULT_REGION=us-west-2 \
-v /opt/database:/opt/database \
garland/aws-cli-docker \
aws s3 sync /opt/database s3://garland.public.bucket/database

The only change I’ve made to the previous example is adding the -d switch, which tells Docker to run the container in the background (detached). Now, how will you get the output from that command?

When you run the command above, Docker prints a container ID. Copy that ID and run:

docker logs [ID HERE]

This will return all of the stdout from the AWS CLI command that ran.
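
To tie it together, you can capture the ID in a shell variable instead of copying it by hand; this is standard Docker behavior, nothing specific to this image:

CONTAINER_ID=$(docker run -d \
--env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
--env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
--env AWS_DEFAULT_REGION=us-west-2 \
-v /opt/database:/opt/database \
garland/aws-cli-docker \
aws s3 sync /opt/database s3://garland.public.bucket/database)

docker logs -f "$CONTAINER_ID"   # -f follows the output until the sync finishes
docker wait "$CONTAINER_ID"      # blocks until the container exits, then prints its exit code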


2 thoughts on “AWS CLI in a Docker Container”

  1. I can’t believe what I’m reading. You’re telling me you spent all that time running a CLI tool in a container? Wrapping complexity like this around a simple-to-use CLI tool is a sign of the times, I fear. You’re a web developer, I take it?


    1. It depends on what I am doing. Sometimes I do run the CLI tool in a container. I regularly connect to multiple accounts at the same time, and if my local environment is not set up for an account, it is far easier to start a container locally and run the command with that account’s keys. Other times I’m on Docker clusters like Kubernetes or Swarm, where I am not going to install the CLI tool; Docker is already installed, so I just start a container to use the CLI. Another use case is a Jenkins workflow that needs to upload something to S3 or update an AWS Lambda function. Most of the Jenkins slaves are only running Docker and have nothing else installed.

      I guess what I am trying to say is that I don’t use the AWS CLI in a container locally that much. However, it is very useful in a highly containerized environment.

