Most of the systems I run now have Docker on them, and I try to install as little as possible on the base system itself. This blog post will walk you through how to use the container and two of the many useful commands available in the AWS CLI tool.
You can find an automated build of this container on Docker Hub here: https://hub.docker.com/r/garland/aws-cli-docker/ This Docker image is small (only 30MB) because it was built with the Alpine Linux base image.
Starting the Docker container
It is pretty simple to start the Docker container and get a shell:
docker run \
    -it \
    --env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
    --env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
    --env AWS_DEFAULT_REGION=us-west-2 \
    garland/aws-cli-docker /bin/sh
Once you are in the shell, you can use any of the supported commands. For example, you can copy or upload files to S3, list EC2 instances, and start EC2 instances. A command list and the usage guide can be found here.
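As a sketch, here are a few commands you might try from that shell. The instance ID below is a hypothetical placeholder, so substitute your own; the snippet is guarded so it is a harmless no-op on a machine without the AWS CLI.

```shell
#!/bin/sh
# Commands to try inside the container shell.
# The instance ID is a hypothetical placeholder -- substitute your own.
if command -v aws >/dev/null 2>&1; then
    aws s3 ls                           # list your S3 buckets
    aws ec2 describe-instances          # list EC2 instances in the default region
    aws ec2 start-instances --instance-ids i-0123456789abcdef0
fi
```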
Copy files to S3
If you have a set of files on your local server that you want to copy over to S3, you can use this tool to do that. The steps below walk through copying over the files.
First, exit the shell and start a new container, this time mapping in the directory you want to copy over.
docker run \
    -it \
    --env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
    --env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
    --env AWS_DEFAULT_REGION=us-west-2 \
    -v /opt/database:/opt/database \
    garland/aws-cli-docker /bin/sh
This adds the Docker -v option, which maps a path on your local server to a path inside the container. The format is host_path:container_path.
Now that you are inside the container with a shell, you can execute this to copy the folder over.
aws s3 sync /opt/database s3://garland.public.bucket/database
The /opt/database folder has now been uploaded to the database path in the garland.public.bucket bucket.
You can get help pages for any level of the CLI. For example, type in aws help to open the top-level help, which shows all of the AWS resources the CLI can control. You can then delve deeper and find help for each resource. For example, type in aws s3 help to open a help page specific to S3 tasks and usage.
Copy files to S3 – Automated
If you don't want to do the copy from an interactive shell, you can execute the container with the command in one line!
docker run \
    --env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
    --env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
    --env AWS_DEFAULT_REGION=us-west-2 \
    -v /opt/database:/opt/database \
    garland/aws-cli-docker \
    aws s3 sync /opt/database s3://garland.public.bucket/database
You'll notice that the -it switch, the Docker switch for an interactive terminal, has been removed. I also replaced the /bin/sh command with the S3 command from above. This makes automation easy: figure out the command you need in interactive mode, then put it in a script and run it this way, outside the interactive command line.
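As a sketch of that automation, the one-liner can live in a small script. Here the credentials come from the caller's environment rather than being typed inline, the bucket name is the example one used throughout this post, and the guard makes the script a no-op on a machine without Docker.

```shell
#!/bin/sh
# A sketch of wrapping the one-line run in a reusable script.
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumed to be
# set in the caller's environment.
sync_to_s3() {
    docker run \
        --env AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
        --env AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
        --env AWS_DEFAULT_REGION=us-west-2 \
        -v /opt/database:/opt/database \
        garland/aws-cli-docker \
        aws s3 sync /opt/database s3://garland.public.bucket/database
}

# Only run when Docker is available on this machine.
if command -v docker >/dev/null 2>&1; then
    sync_to_s3
fi
```

Run it from cron or any other scheduler and the sync happens with no interactive session at all.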
Copy files to S3 – Automated and Background
Docker can do so much more than that though! What if the copy (or any operation) takes a long time and you don’t want to hold up your current shell? You can easily background this task with Docker.
docker run \
    -d \
    --env AWS_ACCESS_KEY_ID=YOUR_AWS_KEY \
    --env AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET \
    --env AWS_DEFAULT_REGION=us-west-2 \
    -v /opt/database:/opt/database \
    garland/aws-cli-docker \
    aws s3 sync /opt/database s3://garland.public.bucket/database
The only change from the previous example is the added -d switch, which tells Docker to run the task in the background. Now, how do you get the output from that command?
When you ran the previous command, it returned an ID to you. Copy that ID and run:
docker logs [ID HERE]
This will return all of the stdout from the AWS CLI command that ran.
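Putting the last two steps together, here is a sketch that captures the ID printed by docker run -d and feeds it straight to docker logs. Credentials are assumed to be in the caller's environment, the bucket name is the example one from above, and the guard skips the run on a machine without Docker.

```shell
#!/bin/sh
# Sketch: run the sync detached, capture the container ID that
# `docker run -d` prints, then pull the command's output with `docker logs`.
run_sync_detached() {
    CID=$(docker run -d \
        --env AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
        --env AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
        --env AWS_DEFAULT_REGION=us-west-2 \
        -v /opt/database:/opt/database \
        garland/aws-cli-docker \
        aws s3 sync /opt/database s3://garland.public.bucket/database)
    docker logs "$CID"    # stdout from the aws s3 sync command
}

# Only run when Docker is available on this machine.
if command -v docker >/dev/null 2>&1; then
    run_sync_detached
fi
```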