Recently, I've been looking into a solution for deploying smaller projects using Docker containers without leaning on heavy tools like Kubernetes. Kubernetes is a great tool, but it's very focused on high availability and massive scalability, so it's often too heavy for your typical PHP sites and applications. Thus, my search for a simpler method began.
The first step in my journey was to create a sane dev environment that would make it easy to spin up various projects and connect to them effortlessly. For this, I explored the excellent tool Traefik. Using Traefik as a reverse proxy for your Docker projects is surprisingly straightforward. My development setup has two parts: a global `docker-compose.yml` that I keep running constantly, and project-specific `docker-compose.yml` files that define each project and its routing.
My global config starts a simple Traefik reverse proxy service that routes my local hostnames to the correct containers, all on port 80. It also creates a `dev` bridge network, which allows Traefik to communicate with my various project containers. Below is my open-source variant, which is available at https://github.com/superlinkx/global-docker:
```yaml
version: "3"

services:
  traefik:
    image: traefik:latest
    restart: always
    command: --api --docker
    ports:
      - "80:80"
      - "8234:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - dev

networks:
  dev:
    driver: bridge
```
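Bringing the global stack up is a one-time operation. As a sketch (assuming the repository above is cloned as-is and the config is its `docker-compose.yml`):

```sh
# Clone the global config and start Traefik in the background
git clone https://github.com/superlinkx/global-docker.git
cd global-docker
docker-compose up -d

# Traefik now listens on port 80; its API dashboard is on port 8234
```

Because the service is declared with `restart: always`, the Docker daemon will bring Traefik back up automatically after a reboot.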
This configuration is one of the simplest ways to get started with Traefik. We create a service that automatically restarts with the Docker daemon and configure it to connect directly to the Docker socket. That's pretty much all there is to it. Traefik will be able to see any services on our `dev` network, and all the actual routing configuration is done in each project's own `docker-compose.yml`.
With Traefik up and running, let's look at one of my simple project configurations that takes advantage of this system. Below is the `docker-compose.yml` from my portfolio site. (Source code is at https://gitlab.com/superlinkx/udacity-portfolio)
```yaml
version: "3"

services:
  udacity-portfolio:
    hostname: udacity-portfolio.localhost
    container_name: udacity-portfolio
    build:
      context: docker
    networks:
      - default
    volumes:
      - ./:/usr/share/nginx/html
    labels:
      - "traefik.backend=udacity-portfolio"
      - "traefik.frontend.rule=Host:udacity-portfolio.localhost"
      - "traefik.docker.network=global-docker_dev"
      - "traefik.port=80"

networks:
  default:
    external:
      name: global-docker_dev
```
The parts of this config that matter for Traefik routing are the `labels` field and the external network definition. We map the project's `default` network onto our external network. Note that we can't just use `dev` here; we need the fully qualified network name. Since the network was defined in a Compose project named `global-docker`, the fully qualified name is `global-docker_dev`.
The labels tell Traefik what to do with our service. We set a backend name to make the service easier to identify if we ever need to check the Traefik API dashboard. We set a frontend rule listing the hosts that Traefik should route to this service (you can add additional hostnames, comma-delimited). The `traefik.docker.network` label tells Traefik which network to use when connecting to the container. And finally, we tell Traefik which port our application listens on inside the container.
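For instance, a hypothetical project that answers on two hostnames (the names below are made up for illustration; the label syntax matches the Traefik v1 style used throughout this article) would use labels like these:

```yaml
labels:
  # Friendly backend name shown in the Traefik API dashboard
  - "traefik.backend=myapp"
  # Route both hostnames to this container, comma-delimited
  - "traefik.frontend.rule=Host:myapp.localhost,www.myapp.localhost"
  # Network Traefik should use to reach the container
  - "traefik.docker.network=global-docker_dev"
  # Port the app listens on inside the container
  - "traefik.port=80"
```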
When we start this service, Traefik will see it almost immediately. Assuming we are running everything on a Linux machine, we can simply fire up our browser and navigate to `udacity-portfolio.localhost`. On other hosts, where Docker isn't running natively, you'll need to set up either a local DNS server for wildcard domains like `*.test` or modify your hosts file accordingly. That's outside our scope right now, but I may eventually put out another article on how my Windows dev environment is set up.
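As a quick sketch of the hosts-file route (the hostname comes from the project config above; the file locations are the standard ones per OS):

```
# Linux/macOS: /etc/hosts
# Windows: C:\Windows\System32\drivers\etc\hosts
127.0.0.1   udacity-portfolio.localhost
```

One line per hostname you want to route; Traefik then matches the `Host` header against the frontend rules as usual.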
In this first part, we looked at my global Docker stack, which provides a reverse proxy and a shared network for all of our other services. We then looked at one of my example projects and saw how easily we can add Traefik labels to create reverse-proxy routes and connect to any apps we're running, without managing port mappings.
In part 2, I'll talk about how I've set up my production service to handle routing real requests and keep all my containers up to date automatically, and do it all with automatic Let's Encrypt certificates. Until next time!