I’ve finally found some time to play with Docker some more. I’ve been very busy, and then I got sidetracked for a couple of weeks with CoreOS while deciding which Docker host OS I want. CoreOS is a pretty nifty idea, and I was going to use it at home as my Docker host(s). With the cloud, everybody has a lab a few clicks away, but out of habit I still use a collection of older PCs at home for my lab work, and I thought I might use CoreOS and let its cloud features keep my services up when I took various boxes down and back up. I got a little practice in with etcd and fleetd and got it working on temporary PXE-boot machines, but every time I thought I had it figured out I would take down all but one of the CoreOS hosts so I could permanently install CoreOS on the other nodes, and then etcd never came back up. It took me a while to figure out that you simply can’t do that with CoreOS: etcd needs a quorum (a majority of the cluster’s members) alive to agree on which cluster configuration is the correct one, so a cluster needs at least 3 nodes before it can stand to lose even one. If you start with 3 nodes and reformat two of them, the etcd cluster is dead and can’t be recovered just by adding new nodes.
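The quorum arithmetic that bit me is simple enough to sketch in a few lines of shell (my own illustration, not anything CoreOS ships): quorum for an n-member cluster is floor(n/2) + 1, and the cluster only survives n minus quorum failures.

```shell
# Quorum for an n-member etcd cluster is floor(n/2) + 1;
# the cluster keeps working only while fewer than (n - quorum + 1) members are down.
for n in 1 2 3 4 5; do
    quorum=$(( n / 2 + 1 ))
    echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
# A 3-node cluster can lose exactly one member; taking down two kills it,
# which is exactly what I kept doing.
```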
I’ve been using Ubuntu for years, and Debian before that. (And Red Hat and Slackware back in the ’90s….) But I’ve been getting grumpy with Ubuntu over the past 3-4 LTS releases, and I’m now supporting RHEL-alike servers at work, so after I found CoreOS’s cloud features to be unworkable for a small home setup, I decided to use minimal CentOS 7 as my Docker host with docker-io installed from EPEL. (I could have used CoreOS without etcd and fleetd, but I decided that would be problematic going forward.) My plan is to keep the host OS minimal (though I have been adding utilities like netstat, lsof, bind-utils and such) and to keep any services and heavy userland Dockerized. Ideally I want to be able to back up the Docker containers, blow the host OS away, and restore the containers to a new host OS with no loss of utility.
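For the backup half of that plan, the pattern I have in mind is the data-volume backup trick from Docker’s documentation: mount the container’s volumes into a throwaway container and tar them up. A sketch, where the container name splunk and the /opt/splunk volume are just placeholders for whatever you actually run:

```shell
# Archive a container's data volume through a throwaway busybox container.
# ("splunk" and /opt/splunk are placeholder names for this sketch.)
sudo docker run --rm --volumes-from splunk -v "$PWD":/backup \
    busybox tar czf /backup/splunk-data.tar.gz /opt/splunk

# Snapshot the container's own filesystem too; note that docker export
# does NOT include data volumes, which is why the tar step above exists.
sudo docker export splunk > splunk-fs.tar
```

These commands need a running Docker host, so treat them as a starting point rather than a tested recipe.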
The first few services—Minecraft servers for my nephews—went easily
into containers. I recently found and fell in love with Splunk,
so I tried to Dockerize it to aggregate my home logs. That was both
more and less tricky than I thought, and the more I work with Docker
the more I realize there are many ways to do something and different
opinions on how to do it right. I’ll try to write a blog or two on
Dockerizing my apps, but for now I just wanted to mention something
that’s tripped me up for a couple of days. Splunk worked great at
first, but then all my searches started failing and referred me to
the logs. But the only errors I could find were “Unable to create FIFO:
path=<path>/alive.token error='Permission denied'” and another error about
the file not existing when trying to remove it. I had Splunk running
as root inside the container, so it wasn’t a traditional permission
issue. It was SELinux, which CentOS ships enabled and Ubuntu doesn’t. Based on
this StackOverflow question
I was able to run
sudo docker inspect \<container\>
on my Splunk container to find the data volume’s backing path on the host (I made /opt/splunk
a data volume), and then ran
sudo chcon -Rt svirt_sandbox_file_t \<path-to-volume-file\>. After that, Splunk works again!
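For anyone hitting the same wall: you can see what’s going on with ls -Z, and it’s worth knowing that chcon is not persistent across an SELinux filesystem relabel. A sketch of checking the label and making the fix stick (the path is the same placeholder as above; run this on the CentOS host):

```shell
# Inspect the current SELinux label on the volume's host directory.
ls -dZ <path-to-volume-file>

# chcon changes the label now, but a relabel (restorecon/autorelabel)
# would undo it. Recording the type in the local policy makes it permanent:
sudo semanage fcontext -a -t svirt_sandbox_file_t "<path-to-volume-file>(/.*)?"
sudo restorecon -Rv <path-to-volume-file>
```

These need an SELinux-enabled host (and semanage comes from the policycoreutils-python package on CentOS 7), so again, a sketch rather than a tested recipe.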