I moved my services from a virtual-machine environment to docker. Here’s how and why.
Let’s start with the “why”. The answer is that it crept up on me. I thought I’d experiment with docker containers, to learn what they did. The proper way to experiment is to do something non-trivial. So I chose one of my virtual machines and decided to dockerise it.
I achieved my objective of learning docker, because I was forced to read and re-read the documentation and become familiar with docker commands and the docker-compose file. Having started, I just, er, kept going, until two months later my entire infrastructure was dockerised.
I ended up with the following docker-compose stacks:
| Stack | Description |
| --- | --- |
| apache | Serves a few static pages; currently only used by my Let's Encrypt configuration. |
| wordpress | Three WordPress websites for personal and business use. You are looking at one of them. |
| nextcloud | Nextcloud installation, using SMB to access my user files. |
| postgresql | Database for my GnuCash financial records. |
| emby | DLNA and media server; replaces Plex. Used to share music and photos with the TV in the lounge. |
| freepbx | A FreePBX installation. This container appears on my dhcp-net (see below) and has its own IP addresses. This is, in part, because it reserves a large number of high-numbered ports for RTP, and I didn't want to map them. |
| ftp | FTP server used by my Let's Encrypt processes. |
| iot | Node-RED installation, used for my very limited home automation. Rather than starting with an existing Node-RED image, I rolled this one from a basic OS container, basing the Dockerfile on instructions for installing Node-RED on Ubuntu (see the sketch after this table). This is another container on my dhcp-net, because it has to respond to discovery protocols from Alexa, including on port 80. |
| mail | iRedMail installation. It is highly questionable whether this should have been done, because I ended up with a single container running a lot of processes: Dovecot, Postfix, Amavis, Apache. I should really split these out into separate containers, but that would take a lot of work to discover the dependencies between the various processes. Anyhow, it works. |
| nfs | NFS exporter. |
| piwigo | Gallery at piwigo.chezstephens.org.uk. |
| portainer | Manage/control/debug containers. |
| proxy | Nginx proxy directing SNI (hostname-based) HTTP queries to the appropriate container. |
| samba | Samba server. |
| svn | SVN server. |
| tgt | iSCSI target server. |
| zabbix | Monitoring server. Does a good job of checking docker container status and emailing me if something breaks. |
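The iot image mentioned above was rolled from a basic OS container. The post doesn't include the Dockerfile, but a minimal sketch of that approach might look like this; the base image, Node.js packaging, and version choices are illustrative assumptions, not the original file:

```dockerfile
# Sketch: Node-RED built from a plain Ubuntu base rather than the
# official node-red image. Versions here are illustrative.
FROM ubuntu:22.04

# Node-RED runs on Node.js; install it (plus npm) from the distro repos.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates curl nodejs npm && \
    rm -rf /var/lib/apt/lists/*

# The standard npm-based Node-RED install.
RUN npm install -g --unsafe-perm node-red

# 1880 is Node-RED's editor/flow port; 80 is needed here so the
# container can answer Alexa discovery traffic, as noted above.
EXPOSE 1880 80

CMD ["node-red"]
```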
One thing missing from docker is the ability to express dependencies between services in different stacks. For example, my nextcloud container depends on the samba server because it uses SMB external directories. I wrote a Makefile that lets me start and stop (as well as up/down/build) all the services in a logical order.
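The Makefile itself isn't reproduced here, but a minimal sketch of the idea, assuming each stack lives in a directory named after it, could be as simple as looping over the stacks in dependency order (the stack names and ordering below are illustrative):

```makefile
# Start stacks dependencies-first, stop them in reverse order.
UP_ORDER   = samba nfs postgresql proxy nextcloud wordpress
DOWN_ORDER = wordpress nextcloud proxy postgresql nfs samba

.PHONY: up down

up:
	@for s in $(UP_ORDER); do \
		(cd $$s && docker-compose up -d); \
	done

down:
	@for s in $(DOWN_ORDER); do \
		(cd $$s && docker-compose down); \
	done
```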
My docker installation (/var/lib/docker) has its own zfs dataset. This causes docker to use zfs as the copy-on-write filesystem in which containers run, with a probable performance benefit. It also has the side effect of polluting my zfs dataset listing with hundreds (about 800) of meaningless datasets.
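For the record, the setup amounts to something like the following; the pool name `tank` is an illustrative assumption:

```sh
# Mount a dedicated dataset at /var/lib/docker before installing docker;
# docker detects the zfs filesystem and selects its zfs storage driver.
zfs create -o mountpoint=/var/lib/docker tank/docker

# The driver can also be pinned explicitly in /etc/docker/daemon.json:
#   { "storage-driver": "zfs" }

# The side effect described above: docker creates one dataset per image
# layer and container, so this listing fills with opaque hash-named entries.
zfs list -r tank/docker
```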
Many of my servers need to persist data. For example, the mail container has thousands of emails and a MySQL database. I needed to persist that data across container re-builds, which assume that you are rebuilding the container from scratch and want to initialise everything.
Each docker-compose stack has its own zfs dataset (to allow independent rollback), and each stack depends only on data within that dataset. The trick is to build the container, run it (to perform initialisation), then use docker cp to copy the data you want to keep (such as specific directories in /etc and /var) to the dataset, and finally modify docker-compose.yaml to mount that copy at the appropriate original location. The only fly in the ointment is that docker cp doesn't properly preserve file ownership, so you may need to manually copy file ownership from the initial installation using chown commands.
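Concretely, the workflow looks roughly like this; the container name, paths, and dataset layout are illustrative, using the mail stack as the example:

```sh
# 1. Build and run once so the image's init scripts populate /etc, /var etc.
docker-compose up -d

# 2. Copy the directories to be persisted out to the stack's zfs dataset.
docker cp mail_mail_1:/var/vmail   /tank/mail/vmail
docker cp mail_mail_1:/etc/postfix /tank/mail/etc-postfix

# 3. docker cp doesn't preserve ownership, so restore it by hand, e.g.:
chown -R vmail:vmail /tank/mail/vmail

# 4. Bind-mount the copies back in docker-compose.yaml:
#      volumes:
#        - /tank/mail/vmail:/var/vmail
#        - /tank/mail/etc-postfix:/etc/postfix
#    Subsequent rebuilds then see the persisted data instead of a
#    freshly initialised filesystem.
```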
Several of the stacks run using the devplayer0/net-dhcp plugin, which allows them to appear as independent IP addresses. A macvlan network would have achieved the same effect, except that I would have had to hard-code the IP addresses into the docker-compose files. The net-dhcp plugin allows an existing dhcp server to provide the IP addresses, which fits better into my existing infrastructure.
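A sketch of how a stack attaches to such a network, assuming the network is created once outside compose and that the host bridge is called br0 (both assumptions, as is the image name):

```yaml
# The network is created once on the host, along the lines of:
#   docker plugin install devplayer0/net-dhcp
#   docker network create -d devplayer0/net-dhcp --opt bridge=br0 dhcp-net
# and then referenced from each stack's docker-compose.yaml:
networks:
  dhcp-net:
    external: true

services:
  freepbx:
    image: example/freepbx   # illustrative image name
    networks:
      - dhcp-net             # container gets its address from the LAN's dhcp server
```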
At the end of all this, was it worth it? Well, I certainly enjoyed the learning experience, and proving that I was up to the challenge. I also ended up with a system that is arguably easier to manage. Next time I update or reinstall my host OS, I think I will find it easier to bring up docker than the virtual machines, which require each virtual machine domain to be exported and imported with virsh commands.