TL;DR: Mount an easy-to-find directory as the _/data_ directory on your Docker container unless you want to go down the rabbit hole of “Why isn’t it easier to find my container’s files?”

Hey all, with my recent entry into the whole homelabbing scene, one of the key technologies that I wanted to learn more about and try out was containerization. Of all the tech buzzwords out in the world, containerization has to be one of the most often used. Sure, the word itself hints at how the technology operates at a high level, but how does it work at a lower level? Will it replace VMs? How easy is it really to make changes to a “container” and then send it to someone? Who is this aimed at? Why are there socks at the end of my bed?

Searching Google for containerization software returns a plethora of results for Docker, so I decided to begin my journey there; surely I couldn’t go wrong. I installed the Docker CLI on one of my servers, spawned a “Hello World” container, and sure enough, it worked without issue. The next thing I wanted to try was a simple Ubuntu container. Using the CLI, it was quite easy and sleek to spin up an Ubuntu container: you could immediately interact with it, use the container’s terminal, etc. But what if you didn’t want to use the CLI, and just wanted that same container to run in the background, constantly running some process or waiting for input?
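For reference, that first interactive spin-up looked roughly like this (the container names here are just examples, not anything special):

```shell
# Pull and start an interactive Ubuntu container (drops you into its shell)
sudo docker run -it --name myubuntu ubuntu /bin/bash

# Alternatively, -d starts a container detached, i.e. in the background;
# "sleep infinity" is just a placeholder process that keeps it alive
sudo docker run -d --name myubuntu-bg ubuntu sleep infinity

# You can then open a shell inside the running background container
sudo docker exec -it myubuntu-bg /bin/bash
```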

Bumpy Road

I quickly found out that it wasn’t so easy with the docker start containername command syntax: each time, you would be presented with the tty of the container, and if you exited the process, the entire container stopped. These things had to be stated when you ran the docker run containername command that created the container in the first place. “Well crap,” I thought, can I just run a new Ubuntu container and have it use the same storage as the old container? Wait, where the heck does Docker even save the storage for these containers? I dug deep into the Docker documentation, and learned about the union file system it uses for its containers, and how each image layer is stored in a folder with a unique ID. Well, how do I find the IDs of the layers attached to the old Ubuntu container? Using the docker inspect containername command. Holy cannoli, that command spit out a plethora of information that I didn’t even know was associated with the container. Lo and behold, there were multiple unique layers created for my container, and I couldn’t figure out which one held the data for my Ubuntu user’s home directory.
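If you’re on the overlay2 storage driver (the default on most modern installs), docker inspect can at least point you at where those layers live on the host – the container name below is whatever you named yours:

```shell
# Show the container's mounts (volumes and bind mounts) as JSON
sudo docker inspect -f '{{ json .Mounts }}' containername

# With the overlay2 driver, this prints the host directory where all of
# the container's layers are merged into a single filesystem view
sudo docker inspect -f '{{ .GraphDriver.Data.MergedDir }}' containername
```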

Moving Forward

My next idea was to run a Minecraft server as a Docker container. Surely there had to be a good solution for this, as it was a great use case for a Docker container on my home network. Sure enough, there is a great container which can be found here. I ran the server for a couple of days; however, I had only given the container about 2 gigabytes of RAM, and wanted to allocate more. This meant that I had to create a new container for the Minecraft server, even though all the world data for the server was stored locally within the container. How do I get the data out of the container? Surely, I thought, docker export containername was the solution to my problem. I exported the contents of the container, opened up the resulting archive, and looked everywhere for my world folder. Where the heck was it? Well, this is how I found out that the docker export containername command does not export the contents of volumes – and the /data directory, where, sure enough dangit, the Minecraft world data was stored, is declared as a volume by the image.
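For the record, the export itself is simple enough – the catch is just that anything living in a volume won’t be in the resulting archive:

```shell
# Dump the container's filesystem to a tarball on the host
sudo docker export mcserver -o mcserver.tar

# Listing the archive shows the container's filesystem, but the contents
# of /data (a volume) won't be in there
tar -tf mcserver.tar | head
```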

Well then, how the heck do I get my world data out of the container and onto my desktop? My absolutely disgusting solution? The following script, which I set up to run as a cron job once a day, at 3 A.M.:

```shell
#!/bin/bash
# This will create a backup of the data folder within the docker container
sudo docker exec mcserver /bin/bash -c '
  cp -r /data /backups/tmpbackup/
  tar -zcvf /backups/$(date +"%m-%d-%Y")-backup.tar.gz /backups/tmpbackup/
  rm -r /backups/tmpbackup/
'
```

The perk of creating this script was that the backup of the Minecraft world was automated – I didn’t even have to think about it. And since the daily backup was stored in the /backups directory as a .tar.gz file, and NOT in the /data directory, the backups would come out as well when using the docker export containername command. (You could also use the docker cp command to copy files between the host OS and the container if you wanted to.)
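As a sketch, pulling one of those daily archives out to the host with docker cp looks something like this – the filename is just a hypothetical example of what the script above would produce:

```shell
# Copy a backup archive (example date) from the container to the host's
# current directory
sudo docker cp mcserver:/backups/01-15-2021-backup.tar.gz .
```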

From here on out, when creating Docker containers, I knew that it was important to mount some easy-to-find, well-labelled host directory as the **/data** directory of the Docker container. Using the following flags when creating the Minecraft server container will accomplish exactly that:

```shell
sudo docker run \
  -p 25565:25565 \
  --name mcserver \
  -v /home/perf3ct/files/minecraftserver:/data \
  itzg/minecraft-server:openj9
```
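Side note on the original RAM problem: the itzg/minecraft-server image documents a MEMORY environment variable for sizing the Java heap, so (assuming that variable – check the image’s docs) bumping the allocation is just a matter of recreating the container with an extra flag, and since /data now lives on the host, no world data is lost:

```shell
sudo docker run \
  -p 25565:25565 \
  --name mcserver \
  -e MEMORY=4G \
  -v /home/perf3ct/files/minecraftserver:/data \
  itzg/minecraft-server:openj9
```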

Leaping Forward

After that forbidden experience, I spawned a few more containers, each doing different tasks, and each time I looked through the container’s documentation for the flag that mounts a local folder as the /data directory of the container.

From here, I created Docker containers for the following services, so they run locally:

- Docker Registry
- Minecraft Forge Server
- Minecraft Vanilla Server

Although a small crutch, using the Portainer UI has allowed me to monitor and interact with my Docker containers much more quickly, and for that I am forever thankful. Now on to figuring out how to commit containers the way one would commit code to GitHub, so that others can “pull” them…
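If it helps future me, the rough shape of that workflow seems to be docker commit plus docker push – sketched here assuming the local Docker Registry from the list above is listening on localhost:5000:

```shell
# Snapshot the running container's filesystem as a new image
sudo docker commit mcserver localhost:5000/mcserver:v1

# Push it to the local registry so others can pull it
sudo docker push localhost:5000/mcserver:v1

# Elsewhere: pull the committed image back down
sudo docker pull localhost:5000/mcserver:v1
```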