S3cmd in a Docker Container

Most of the systems I run now have Docker on them, and I try to install as little as possible on the base system. There are still times when you need to ship items off to S3 or pull items back down. This is where the https://github.com/sekka1/docker-s3cmd tool becomes useful: without having to install anything onto the system, I can run a Docker command to get and put items from S3.

S3cmd is a powerful command line tool for performing various tasks against the Amazon S3 service. It is widely used to copy and upload objects to S3.

This Docker image is small (31MB) and is built on the Alpine Linux base image. You can find it on Docker Hub: https://hub.docker.com/r/garland/docker-s3cmd/
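If you want to fetch the image ahead of time, a plain docker pull works (docker run will also pull it automatically the first time you use it):


# grab the image from Docker Hub
docker pull garland/docker-s3cmd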

The documentation in the repository (https://github.com/sekka1/docker-s3cmd) describes all of this, but I will give a bit more detail here.

Here is how to get items from S3:

If you have some files in S3 at s3://my.bucket/database and you want to copy them down to the local server at /opt/database, you can run this Docker command to do the copy.


AWS_KEY='YOUR AWS KEY'
AWS_SECRET='YOUR AWS SECRET'
BUCKET=s3://garland.public.bucket/database
LOCAL_FILE=/opt

docker run \
--env aws_key=${AWS_KEY} \
--env aws_secret=${AWS_SECRET} \
--env cmd=sync-s3-to-local \
--env SRC_S3=${BUCKET} \
-v ${LOCAL_FILE}:/opt/dest \
garland/docker-s3cmd
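
Once the container exits, the synced files should show up under the directory you mounted on the host (here /opt). The exact layout depends on how the image invokes s3cmd sync, so a quick listing confirms what arrived:


# check what the sync dropped into the mounted host directory
ls -la /opt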

Here is how to put items to S3:

Now, let’s do the reverse: I have a local folder (in the example below, /tmp/database) and I want to copy it over to S3 at s3://my.bucket/database2.


AWS_KEY='YOUR AWS KEY'
AWS_SECRET='YOUR AWS SECRET'
BUCKET=s3://garland.public.bucket/database2/
LOCAL_FILE=/tmp/database

docker run \
--env aws_key=${AWS_KEY} \
--env aws_secret=${AWS_SECRET} \
--env cmd=sync-local-to-s3 \
--env DEST_S3=${BUCKET} \
-v ${LOCAL_FILE}:/opt/src \
garland/docker-s3cmd

Run in interactive mode and browse around S3:

You can also run the container in interactive mode and use the s3cmd command just like you would on a normal command line. This lets you browse around your S3 buckets and download and upload files.


AWS_KEY='YOUR AWS KEY'
AWS_SECRET='YOUR AWS SECRET'

docker run -it \
--env aws_key=${AWS_KEY} \
--env aws_secret=${AWS_SECRET} \
--env cmd=interactive \
-v /:/opt/dest \
garland/docker-s3cmd /bin/sh

Inside the container, you first have to execute a script that sets up the s3cmd config file with your keys. You just have to run:


/opt/main.sh

Now you can use s3cmd as you normally would. For example, to get a listing of all of the buckets in S3:


s3cmd ls

Because of the -v /:/opt/dest volume mount, the local machine's filesystem is available at /opt/dest inside the container, so you can drop anything you download from S3 there and reach any local files you want to push up to S3.
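
As a rough sketch of what that looks like from the interactive shell (the bucket and file names below are just placeholders), downloading and then uploading a single file would be:


# download an object from S3 into the mounted host filesystem
s3cmd get s3://my.bucket/database/backup.sql /opt/dest/tmp/backup.sql

# upload a local file (visible under /opt/dest) to S3
s3cmd put /opt/dest/tmp/backup.sql s3://my.bucket/database2/backup.sql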

 
