This blog isn’t the only website or webapp that I operate from my living room. I run several small sites and tools on rented virtual private servers. Until recently, I ran those applications directly on the machines and configured everything manually. On the one hand, this can be a lot of fun; on the other hand, it sometimes causes great pain.

Triggered by the need to host some new, slightly more complex applications, I started to tinker with my setup. First, I gave Coolify a shot; it’s a cool project, but it did too much magic for my taste. Then I went down another rabbit hole and looked into lightweight Kubernetes distributions like k3s; fortunately, that adventure didn’t last long. My goal was to run containerized versions of my applications while keeping everything as simple as possible.

I found the solution in Docker Compose combined with Traefik Proxy. Docker Compose is a neat way of defining and running applications that can consist of one or more containers, volumes, and networks. Traefik Proxy is a reverse proxy that can be hosted via Docker, dynamically load its configuration from Docker containers, and automatically manage Let’s Encrypt certificates for applications. In the following, I want to show you how I use this combination to host my stuff.
To start everything off, I created a new Git repository that holds all my infrastructure configuration files. Never again would I edit a file somewhere on a remote machine and then lose all changes because I didn’t open it as root. Having all configurations in Git makes it easy to roll back changes and investigate what I did to bring down my server.
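To give an idea of the shape of such a repository, here is a hypothetical layout with one directory per application (the directory names are illustrative, not my actual setup):

```
infrastructure/
├── traefik/
│   └── docker-compose.yaml
├── linkding/
│   └── docker-compose.yaml
└── README.md
```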
Now that I had a repository, I added some files, namely compose files. Each application has a `docker-compose.yaml` file that instructs Docker Compose on how to run the application. My initial compose file for linkding, a self-hosted bookmark service, is shown in the following example.
version: "3" services: linkding: image: "sissbruecker/linkding:1.17.2" container_name: "links" ports: - "9090:9090" volumes: - "links-data:/etc/linkding/data" restart: unless-stopped volumes: links-data:
Here you can see that the file starts with information about the Docker Compose version, followed by a section that defines all services that should be run; at the end, I define a single volume that the service uses to persist data. This compose file can be started with the Docker Compose CLI by running `docker-compose up`. After doing this, the service can be accessed on port 9090. That’s a great start, but I needed the services to be accessible from the internet, and of course, they have to be secured with a TLS certificate.
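In practice, a handful of Compose commands cover most day-to-day operations; a quick sketch, assuming they are run from the directory containing the `docker-compose.yaml`:

```shell
# Start the application in the background
docker-compose up -d

# Follow the logs to check that the service came up cleanly
docker-compose logs -f linkding

# Stop and remove the containers again (named volumes are kept)
docker-compose down
```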
That sounds like a job for a reverse proxy, and my reverse proxy of choice happens to be Traefik Proxy. It can be hosted via Docker and dynamically loads its configuration from Docker. This means that I can spin up containers, and when they carry certain labels, the proxy will automatically route traffic to them and even manage their certificates. The next example shows the `docker-compose.yaml` for Traefik Proxy.
version: "3" services: traefik: image: "traefik:v2.9.10" container_name: "traefik" command: - "--providers.docker=true" - "--providers.docker.exposedbydefault=false" - "--providers.docker.network=traefik_traefik" - "--entrypoints.web.address=:80" - "--entrypoints.web.http.redirections.entrypoint.to=websecure" - "--entrypoints.web.http.redirections.entrypoint.scheme=https" - "--entrypoints.web.http.redirections.entrypoint.permanent=true" - "--entrypoints.websecure.address=:443" - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true" - "--certificatesresolvers.letsencrypt.acme.email=REDACTED" - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json" ports: - "443:443" - "80:80" volumes: - "traefik-letsencrypt:/letsencrypt" - "/var/run/docker.sock:/var/run/docker.sock:ro" networks: - "traefik" restart: unless-stopped volumes: traefik-letsencrypt: networks: traefik:
The basic structure of this compose file is similar to the one I use for linkding. As you can see, I expose the default ports for HTTP (80) and HTTPS (443) traffic. To automatically detect configuration from other containers, Traefik needs access to the `/var/run/docker.sock` socket, which is solved by mounting it as a read-only volume. Another thing I did was to put all applications that should be routed by Traefik into the same “traefik” network (called “traefik_traefik” when referenced by other applications, since Docker Compose prefixes network names with the project name). In the command section of the `traefik` service configuration, you can see the various command line arguments that make up Traefik’s static configuration. These arguments instruct Traefik to load dynamic configuration from all Docker containers in the “traefik_traefik” network, redirect all traffic from the “web” entry point on port 80 to the “websecure” entry point on port 443, and finally define a certificate resolver named “letsencrypt” that can be used to fetch Let’s Encrypt certificates for applications.
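As an aside, Traefik can also read its static configuration from a file instead of command line arguments. A sketch of an equivalent `traefik.yml`, mirroring the flags above (I use the flags myself, so treat this variant as untested):

```yaml
# traefik.yml – static configuration, equivalent to the CLI flags above
providers:
  docker:
    exposedByDefault: false
    network: "traefik_traefik"

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: "websecure"
          scheme: "https"
          permanent: true
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      tlsChallenge: {}
      email: "REDACTED"  # kept redacted, as in the original
      storage: "/letsencrypt/acme.json"
```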
Next, I wanted to route traffic to my linkding container. This can be achieved by simply placing some labels on the container. These labels enable Traefik for the container, define the rule used to route traffic, set the entry point the container listens on, and select the certificate resolver that should be used. Additionally, the container has to join the “traefik_traefik” network, and the ports section is no longer needed. The changes I made to the compose file are shown in the following example.
version: "3" services: linkding: image: "sissbruecker/linkding:1.17.2" container_name: "links" - ports: - - "9090:9090" + labels: + - "traefik.enable=true" + - "traefik.http.routers.links.rule=Host(`example.org`)" + - "traefik.http.routers.links.entrypoints=websecure" + - "traefik.http.routers.links.tls.certresolver=letsencrypt" volumes: - "links-data:/etc/linkding/data" + networks: + - "traefik_traefik" restart: unless-stopped volumes: links-data: + networks: + traefik_traefik: + external: true
After configuring some applications in the same way and running them on my small server, I ran into a problem with my server’s resources. Every now and then, one of the applications consumed all of the available memory, and the host system had to take it down. Fortunately, there is a simple way to avoid this by setting limits for the services defined in the compose files. These limits cap the memory and CPU time a service is able to use, and I highly recommend setting them. In this last example, you can see how I configured the limits for the linkding service.
version: "3" services: linkding: image: "sissbruecker/linkding:1.17.2" container_name: "links" # ... restart: unless-stopped + deploy: + resources: + limits: + cpus: "0.25" + memory: "200M" # ...
Hosting my applications as containers works really well, and with all their configuration in a Git repository, setting everything up is fast and easy. As with all servers, it’s important to consider security every step of the way, so please research best practices and stay safe while self-hosting.