In my last post, I covered my development environment for working on multiple Docker projects. In this post, I'll talk about how I transitioned my development workflow to my personal server and how my solution allows for rapid deployments. Many of the concepts are the same, so if you missed Part 1, you can catch up here.

Our server setup is fairly simple. All we need is a suitable host OS with Docker and Docker Compose installed. In my case, I'm using Ubuntu Server 18.04 on DigitalOcean. Once you've configured Docker, choose a location on the server for your applications to live. In my case, I chose my home directory, since I only have a handful of projects and the security is sufficient for the projects I host.
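
If Docker and Docker Compose aren't installed yet, a minimal sketch of the setup on a fresh Ubuntu 18.04 box looks something like this (the Compose version number below is just an example; grab whatever the current release is):

# Install Docker via the official convenience script
curl -fsSL https://get.docker.com | sudo sh

# Let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Install Docker Compose; replace 1.24.0 with the current release
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose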

In my project workspace, I first created a new global-docker directory. This is very similar to my development global-docker configuration, but with a few more tools in the mix. Specifically, there's extra configuration for Traefik to enable automatic Let's Encrypt certs for all projects, and a new service: Watchtower.

docker-compose.yml

version: "3"
services:
  watchtower:
    image: v2tec/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/<user>/.docker/config.json:/config.json
    command: --interval 60
    networks:
      - web
      - internal
  traefik:
    container_name: traefik
    image: traefik:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
    networks:
      - web
      - internal

networks:
  web:
    external: true
  internal:
    external: false 

traefik.toml

debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https","http"]
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
  [entryPoints.api]
    address=":8080"
    [entryPoints.api.auth]
      [entryPoints.api.auth.basic]
        users = [
          "<USER>:<HTPASSWD>"
        ]
        
[api]
  entryPoint = "api"
  
[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "superlinkx.com"
watch = true
exposedByDefault = false

[acme]
email = "youremail@example.com"
storage = "/acme.json"
onHostRule = true
onDemand = false
caServer = "https://acme-v02.api.letsencrypt.org/directory"
entryPoint = "https"

[acme.httpChallenge]
entryPoint = "http" 

The important differences in the server global-docker are that the config for Traefik now lives mainly in the traefik.toml file, and that a couple of extra config files are mounted from the host (I'm mounting them just so it's easier to access them from the host when needed; they could just as easily live in proper volumes). We will need to create both of them: write out the traefik.toml shown above, and touch an empty acme.json file in the same directory. acme.json should have user-only read/write access (mode 600) and no other permissions.
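
Spelled out as commands, assuming you're sitting in the global-docker directory:

# Create the empty ACME storage file
touch acme.json

# User-only read/write; Traefik will complain if acme.json has looser permissions
chmod 600 acme.json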

The traefik.toml configuration file is already pretty much set up; just replace the domain and email bits with your actual server config. These are used by Let's Encrypt when creating the certs. The api bits aren't necessary, but this example includes a basic auth option, which can be useful while debugging Traefik. I'd recommend either disabling the API in production or using a better security scheme, though.
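
The <USER>:<HTPASSWD> placeholder expects an htpasswd-style entry. One way to generate one, assuming the apache2-utils package (which provides htpasswd) is available, is:

# Generate a basic auth entry; the user and password here are just placeholders
htpasswd -nb someuser somepassword

# Paste the resulting "someuser:$apr1$..." string into the users array in traefik.toml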

This full configuration will generate certificates whenever you spin up a new domain in your project configs. As on dev, we define each project's domains with Traefik labels. Whenever a project starts, Traefik handles its certificates and renews them as they near expiration.

The acme.json file is used to store the certificates, so it shouldn't really be edited and should be kept safe on the server. Secure it as necessary.

Watchtower is how we make sure our project containers stay up to date. It needs credentials for our various Docker registries, which is why we mount ~/.docker/config.json into its container; that file is updated anytime you use docker login. We've set the polling interval to 60 seconds, so every minute Watchtower checks all running containers and updates them in place when it finds a new version.
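
That also means the server needs to be logged in to any private registry Watchtower will pull from, so the credentials end up in ~/.docker/config.json. For my GitLab registry that's just:

# Writes credentials to ~/.docker/config.json, which is mounted into Watchtower
docker login registry.gitlab.com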

All we have to do to deploy a new container is push it to the registry we pull it from on our server. That's it: no extra configuration, no special deployment steps. Push to a Docker registry and let the server find it when it's ready. The container will go down briefly during the update; how long depends on how quickly its services come back online. Ideally, you'll want a way to handle redundancy if you plan on running mission-critical applications, but that's out of scope for this guide.
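
As an example, a manual deploy of my portfolio from a local machine is nothing more than a build and a push (Part 3 will hand this off to CI):

# Build and push; Watchtower on the server sees the new image within a minute
docker build -t registry.gitlab.com/superlinkx/udacity-portfolio:master .
docker push registry.gitlab.com/superlinkx/udacity-portfolio:master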

Our project definitions are super simple now. We just create a new directory to hold the project's configuration and other files, write a docker-compose file that pulls one or more application service containers from our registry, and bring it all up. Traefik will handle everything else for us. Here's my portfolio container as an example:

docker-compose.yml (superlinkx.com)

version: "3"
services:
  portfolio:
      hostname: superlinkx.com
      container_name: portfolio
      restart: always
      image: registry.gitlab.com/superlinkx/udacity-portfolio:master
      
      networks:
        - web
        - internal
      labels:
        - "traefik.enable=true"
        - "traefik.backend=portfolio"
        - "traefik.frontend.rule=Host:superlinkx.com,derpy.dev,superlinkx.dev"
        - "traefik.docker.network=web"
        - "traefik.entryPoints=https"
        - "traefik.port=80"
        - "traefik.frontend.headers.SSLRedirect=true"
      external_links:
        - traefik
  networks:
    web:
      external: true
    internal:
      external: false 

As you can see, compared to the dev setup we just need to add the SSLRedirect header label and switch our entryPoints label to https. Traefik handles multiple domains just fine and will create certs for all of them. We also want our project services to always restart with Docker, so if the server reboots or we need to manually restart the Docker daemon, everything comes back up automatically.
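
For completeness, the first bring-up of everything looks roughly like this. The web network has to exist before either stack starts because both compose files declare it as external, and the project directory name here is just an example:

# One-time: create the shared network Traefik and the projects attach to
docker network create web

# Start the shared Traefik + Watchtower stack
cd ~/global-docker && docker-compose up -d

# Start an individual project (directory name is whatever you chose)
cd ~/superlinkx.com && docker-compose up -d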

In this part, we looked at how to use what we learned in Part 1 about configuring multiple Docker services with docker-compose and Traefik, and added HTTPS configuration, automatic certs, and Watchtower to keep our services up to date. In Part 3, we will look at using GitLab to give our projects Continuous Integration and automatically push our valid containers to our GitLab private registry, where Watchtower can pick them up.