Running my blog with Docker on AWS

I have decided to move my blog to AWS, running Nginx and Ghost in Docker containers on a t2 EC2 instance.
Basically, Nginx will face the web traffic while the Ghost Node.js runtime handles the content management.

Create an EC2 instance using the AWS Console. I suggest you add an IAM role that allows access to S3 from your EC2 instance, so you can easily use it for copying and backing up config files.
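
If you prefer the AWS CLI over the console, the launch looks roughly like the below (a sketch: the AMI ID, key pair, security group, and instance profile name are all placeholders for your own values):

aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name <your-key-pair> --security-group-ids sg-xxxxxxxx --iam-instance-profile Name=<your-s3-access-profile>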

Install Docker on the EC2 instance (using the standard Amazon Linux AMI):

sudo yum install docker -y

Start the Docker engine:

sudo service docker start
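
Optionally, have Docker start on boot as well; the standard Amazon Linux AMI uses sysvinit, so chkconfig should do it:

sudo chkconfig docker on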

Add ec2-user to the docker group so you can run Docker commands without sudo:

sudo usermod -a -G docker ec2-user

Log out and log back into your EC2 instance so the group change takes effect.
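
A quick way to verify the group change took effect is to run a Docker command without sudo:

docker info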

Pull the Docker images (this is optional; docker run pulls an image automatically if it doesn't exist locally):

docker pull ghost
docker pull nginx

Create your local Nginx folders:

mkdir -p /home/ec2-user/dev/ghost/Nginx/{logs,certs,sites-enabled}

Create the config file for Nginx under /home/ec2-user/dev/ghost/Nginx/sites-enabled. I use a localhost.conf file:

server {
    listen 0.0.0.0:80;
    server_name localhost;
    access_log /var/log/nginx/web.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://blog:2368;
        proxy_redirect off;
        if ($http_x_forwarded_proto != "https") {
            rewrite ^(.*)$ https://<domain>$1 permanent;
        }
    }

    location /static/ {
        alias /home/ec2-user/static/;
    }

    location /assets/ {
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public";
        proxy_pass http://blog:2368/assets/;
        proxy_set_header Host $host;
        proxy_buffering off;
    }

    # jQuery (and other vendor files)
    # Cache these assets for 365 days!!
    location /public/ {
        expires 365d;
        add_header Pragma public;
        add_header Cache-Control "public";
        proxy_pass http://blog:2368/public/;
        proxy_set_header Host $host;
        proxy_buffering off;
    }
}

I have linked the Docker containers using the parameters below:

local directory path for Ghost: /home/ec2-user/dev/ghost
local Nginx path: /home/ec2-user/dev/ghost/Nginx/*

Before you start Ghost in production mode, make sure to add the below under the production section of config.js, located at /home/ec2-user/dev/ghost/:

paths: {
    contentPath: path.join(process.env.GHOST_CONTENT, '/')
}
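
For context, here is a sketch of how the whole production block might look, assuming the default SQLite database; the url and other values are placeholders, and only the paths entry above is the required change:

production: {
    // placeholder, use your real domain here
    url: 'https://<domain>',
    database: {
        client: 'sqlite3',
        connection: {
            // data lives under the mounted content directory
            filename: path.join(process.env.GHOST_CONTENT, '/data/ghost.db')
        },
        debug: false
    },
    server: {
        // bind to all interfaces so the linked nginx container can reach it
        host: '0.0.0.0',
        port: '2368'
    },
    paths: {
        contentPath: path.join(process.env.GHOST_CONTENT, '/')
    }
}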

Run the Ghost blog in production mode. We are using Ghost's default port (2368) and will map the same port through Nginx to serve the public internet:

docker run --name blog -p 2368:2368 -e NODE_ENV=production -v /home/ec2-user/dev/ghost/:/var/lib/ghost ghost
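
Since the container publishes port 2368 on the host, you can sanity-check that Ghost is up before wiring Nginx in:

curl -I http://localhost:2368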

Link the running container named blog to the Nginx container:

docker run -v /home/ec2-user/dev/ghost/Nginx/sites-enabled/:/etc/nginx/conf.d/ -v /home/ec2-user/dev/ghost/Nginx/certs/:/etc/nginx/certs -v /home/ec2-user/dev/ghost/Nginx/logs/:/var/log/nginx --name nginxsrv -p 80:80 -d --link blog:blog nginx
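
At this point docker ps should show both containers up, and Nginx should answer on port 80 from the instance itself (you may see the HTTPS redirect from the config above):

docker ps
curl -I http://localhost/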

You can stop the containers anytime with:

docker stop blog

docker stop nginxsrv

or restart them with:

docker restart blog

docker restart nginxsrv
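
If your Docker engine is recent enough (1.11 or later, I believe), you can also set a restart policy so both containers come back after a crash or a host reboot:

docker update --restart=always blog nginxsrv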

If you want to take a dive into the blog or nginx containers, you can use the below command to get a bash shell inside the container:

docker exec -it blog /bin/bash
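
You can also follow the Ghost logs without opening a shell in the container:

docker logs -f blog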

Please note that AWS also has ECS, the EC2 Container Service, which is specifically designed for running Docker containers. I am taking an IaaS approach instead, using plain EC2 and installing the Docker engine on it myself.

I will be distributing static content, stored in S3 using Reduced Redundancy Storage (RRS), via AWS CloudFront.
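
As a sketch (assuming the instance role from earlier grants S3 write access; the bucket name is a placeholder), uploading the static content with the RRS storage class looks like:

aws s3 sync /home/ec2-user/static/ s3://<your-bucket>/static/ --storage-class REDUCED_REDUNDANCY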

Below is the architecture:

A word of warning: if you take a snapshot and create an AMI of the instance while the Docker containers are running, you will not be able to start the containers on the cloned EC2 instance; they will simply hang.

I found a workaround for this kind of issue and posted it on the Docker forums: https://forums.docker.com/t/what-to-do-when-all-docker-commands-hang/28103/5?u=korayhk