Self-hosting administration: The self-hosting handbook
Welcome to the fourth page of a handbook on self-hosting. If you’re just joining us, you may want to begin with the first page, or catch up on the previous one. On this page, we’ll cover how to handle self-hosting administration, from system updates to making sure your containers are in tip-top shape.
Table of contents
- Self-hosting quickstart: Docker, domains, and DNS (look below!)
- A docker-compose tutorial
- Using docker-compose to add web apps
- Self-hosting administration
- Self-hosting Nextcloud with Docker
- What’s self-hosting administration all about?
- Updating your Docker containers
- Pruning your Docker system
- Using Docker Swarm to strengthen your infrastructure
Hi there! This is a blog post from the people behind SSD Nodes.
We’re the first honest-value VPS provider. Instead of inflating costs, our engineers developed a lean infrastructure that lets us offer up to 10x more RAM per dollar than the competition.
Ready to learn more about what you get from an honest VPS provider? Here’s a hint: Our 24GB RAM + KVM VPS is only $9.99/mo. Learn more about us ⚡
What’s self-hosting administration all about?

Generally speaking, a self-hosting infrastructure requires very little in the way of maintenance and upkeep. Of course, the stakes depend entirely on your unique application.
I’m using my self-hosting stack only for myself, and I’m not running any services I couldn’t go without or duplicate with another app/service I already have on my machines, so I don’t worry about things like nine nines of availability. For example, my self-hosted Nextcloud folder also syncs up with Dropbox via a symbolic link on my desktop, so all my critical files are within reach if (and probably when) I bring my VPS down.
Here are some of my recommendations:
Update your primary system regularly. I think once a week is fair enough—you’ll get the latest security updates, which will help keep your VPS secure. You can also enable automatic updates on Ubuntu servers with two simple commands:
$ sudo apt-get install unattended-upgrades
$ sudo dpkg-reconfigure unattended-upgrades
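Once enabled, the configuration typically ends up in /etc/apt/apt.conf.d/20auto-upgrades. A minimal version looks like this (the values shown are the common defaults—verify against your own file):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The first line refreshes the package lists daily; the second runs the actual unattended upgrade.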
Check running containers. If you’re already hopping onto your VPS to perform an update, why not also run a quick docker ps? I’m embarrassed to admit this, but I’ve created test servers with… lax security practices, only to find my VPS running some cryptocurrency miner via a Docker container. A quick docker ps ensures you know what’s going on, whether that’s a breach or merely a container gone awry.
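If you want to make that habit slightly more systematic, here’s a minimal sketch. The check_containers helper (and the image names in the usage line) are hypothetical, not a standard tool—adapt the allowlist to whatever you actually run:

```shell
# Hypothetical helper for the routine `docker ps` check: pipe in
#   docker ps --format '{{.Names}} {{.Image}}'
# and it flags any container whose image isn't on your allowlist.
check_containers() {
  allowlist=" $1 "            # space-separated list of images you expect
  while read -r name image; do
    case "$allowlist" in
      *" $image "*) ;;                        # known image: nothing to do
      *) echo "UNEXPECTED: $name ($image)" ;; # possible breach or stray container
    esac
  done
}
```

Usage might look like docker ps --format '{{.Names}} {{.Image}}' | check_containers "nextcloud redis jwilder/nginx-proxy" — anything not on the list gets called out.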
Create backups! While you can destroy and recreate containers at will without losing your data—thanks Docker volumes!—you can’t predict a catastrophic event. I use a second server as a backup server, and use Borg to synchronize files from one to the next. There’s no automatic restore process, but at least the data is duplicated. We’ll soon be posting a guide on backing up your VPS to your local machine, and I’ll be sure to link it here.
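As a sketch of what that might look like, a nightly Borg run can be driven from cron. Everything here—the backup host, repo path, and source directory—is a placeholder, not my actual setup:

```shell
# Hypothetical crontab entry: every night at 03:00, archive the Docker
# volume directory to a Borg repo on a second server, then prune old
# archives. Host and paths are placeholders -- adjust for your setup.
0 3 * * * borg create ssh://backup@backup-host/./backups::'{hostname}-{now}' /srv/docker/volumes && borg prune --keep-daily 7 --keep-weekly 4 ssh://backup@backup-host/./backups
```

The prune step keeps a week of dailies and a month of weeklies, so the backup repo doesn’t grow without bound.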
Of course, you’ll also want to follow some security best practices, such as running a firewall and something like fail2ban to block malicious access attempts.
And a few things to avoid:
Don’t update the packages inside your containers. While it’s possible to “log into” your running Docker containers using docker exec -it ..., and in theory you could then run apt-get update && apt-get upgrade inside them, I strongly discourage this. Many Docker images are crafted with specific package versions and configuration files, which an upgrade could conflict with or overwrite. We’ll cover smarter updates in a moment.
Stay away from the docker-compose down command. Running docker-compose down will stop your running containers and then delete them, along with their associated networks (and, if you add the -v flag, their volumes too). Your data should remain within the volume folders you specify in your docker-compose.yml file, but it’s better to be safe than sorry. If you need to stop containers, use docker-compose stop instead.
Updating your Docker containers

In theory, updating your Docker containers is easy. In practice, it may be more complex than the following explanation makes it seem.
Let me explain.
Each container is based on an image. These images are kept on Docker Hub and pulled to your machine the first time you ask Docker to run a container. The developers who create these images might update them, for example, to use a newer version of PHP. If you want the latest and greatest, you’ll want to update your images and then use them to recreate your containers.
This update process is incredibly simple:
$ docker-compose pull
$ docker-compose up -d
The first command pulls the newest versions of all your Docker images from Docker Hub, and the second recreates any containers whose images have changed.
Here’s what happened when I followed this process just now:
$ docker-compose pull
Pulling db ... done
Pulling portainer ... done
Pulling redis ... done
Pulling nextcloud ... done
Pulling freshrss ... done
Pulling gitea ... done
Pulling proxy ... done
Pulling proxy-letsencrypt ... done
$ docker-compose up -d
Recreating portainer ... done
Recreating db ... done
Recreating redis ... done
Recreating proxy ... done
Recreating gitea ... done
Recreating freshrss ... done
Recreating nextcloud ... done
Recreating letsencrypt ... done
As you can see, Docker pulled a handful of updated images from Docker Hub and recreated them, all without any hassle. The reverse proxy took about 15 seconds to re-register all the services, and everything was back up and running!
If you’re still not convinced, here’s what the Docker developers say about this process:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.
There are a few caveats to this method of updating:
Many apps have their own update mechanism. Nextcloud, for example, will automatically update its codebase via another system entirely, which makes updating the Docker image useless.
Without volumes, you’ll lose data. If you don’t set up your docker-compose.yml file to use Docker volumes for your data, and instead store data inside the container itself, you’ll ax it during the recreation. It’s better to use volumes at all times.
Beware of complex migrations. Let’s say that a service you’re self-hosting has moved from one major version to another, which includes a complete refactoring of how it accesses its database. The developers behind this service need everyone upgrading from one version to the next to perform a specific migration process. If you pull the new image without performing that migration, you’ll run into issues. Fear not—such migrations and special upgrade processes are rare, but they prove the point that performing updates without any research and planning can lead to big headaches!
Pruning your Docker system
When you update images via the above process, Docker doesn’t just delete the old ones. Instead, it holds onto them, just in case you might need them again in the future—call it a bandwidth-saving feature. Over time, you’ll end up holding onto a lot of obsolete Docker images. Check out how mine have built up in just a few short weeks (visualized via the wonderful Portainer):
All in all, these Docker images are taking up 5.7GB of disk space, whereas, by my back-of-the-envelope addition skills, the used images should only need about 1.5GB of disk space. That’s quite a bit of unnecessary overhead! As with all good and growing things, there comes a time for some pruning.
docker system prune—this single command will remove all stopped containers, all extraneous networks, all dangling images (those not referenced by other images), and all the build cache. Here’s what happened when I ran it on my self-hosting VPS just now:
docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y
Deleted Networks:
nginx-proxy
Deleted Images:
untagged: linuxserver/freshrss@sha256:7e3057b927364c162afcb63f03446d7980ac28df8ee3c5f4eb1f62d6719fa3b7
deleted: sha256:ea4c925e70bf5479c4b0133e8097d7fd4d5d183263bb3a33f9cbe812d88797c8
deleted: sha256:23841f8245895fdafd2134843ce45f8e525c6ba3176723d45cf9004e06ba091d
deleted: sha256:0548b5a67fb72ea29271749d25f6b3fce679579caff1663a7d41b8b3aed98e23
deleted: sha256:3970590e481b7ba871e1d6fafecb024c05f971203d22c480f4c7766d576102f2
deleted: sha256:8d8af4bf2329279740da31cafa1c7ab3ec752b38753b38c864a7ac9d8be67f02
deleted: sha256:bb67a57cfd03375e783af7d4d2d563dbb43e4f3378c904680d4d1e79d8d438da
deleted: sha256:7938a8782c7ba47eb02a3d4540765dbd2946f249908d440d0a06e1b3e32b1310
deleted: sha256:f42d553d06361a1af7bdfe04f2b2c74483749e39d80ae5ce9692a3e68a48f275
untagged: jwilder/nginx-proxy@sha256:5145492f8a974d777e7ca6ee01032c476033388495f56eea06ab770e1d1e5c62
... Let's just skip a few lines here...
untagged: jrcs/letsencrypt-nginx-proxy-companion@sha256:ca226e5009194dd1758501babc466a3a405466e2aaceee987e59443595fef0e1
deleted: sha256:7f88517e1c5db545b02957ae284f0dc8a5e6e0b55f7035fc293145713964f425
deleted: sha256:c1376b634217ba163c23c981b20e34a2f391d75d118271228088eb3063d9fce0
deleted: sha256:ae8480aa399f4f21b4fe4f8d5e4ead2704e64df7e36b1063cd6ebb9820136ec1
deleted: sha256:5e965517b587c864668908ba9d322e5afd8e40eb913f579a768d7b771c38d0d5
deleted: sha256:2b40e85460d965507756b02bef95f983223239efe9f4bbf9b4498e86bb63a59c
deleted: sha256:8512c5515613ce9c33d89a16f3112edc22dc6b6069193cb09ee3b8b27c63cfd5
untagged: gitea/gitea@sha256:7a0d95015a90fbf7cca5a8aaacc56eff4570e853244587a9f33ff05dadbfb76f
deleted: sha256:d653039e35fe10c49397d6c83ae01ca7e1478c8ba4e2fa3556bc780c4618bfde
deleted: sha256:bdafa6b12f4289a9f47b4a521778ab7818f7316d891ca01a72b63b208fa4ffe0
deleted: sha256:2da687c847019e9df0b312ae46ea70e7c91f79d3b28a227283c23d74b5d11e4b
deleted: sha256:e4cf66de579aabaadadf32a3b1d6c61470c49511786a42f15b62dc31bd1ab496
deleted: sha256:043ef2276f85b86315488d2c1f3a6494d075993ea5c640e9b793d79ab942e079
Total reclaimed space: 2.031GB
Not bad, eh? Here’s what Portainer reports after this pruning:
You’re safe to run docker system prune whenever you feel your disk usage creeping up—how often that happens depends on how often you’re creating and destroying containers, updating images, or trying (and perhaps abandoning) new images.
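If you’d like something more systematic than a gut feeling, docker system df reports how much space images, containers, and volumes occupy, including a RECLAIMABLE column. As a sketch (the helper name and the threshold are made up, and it assumes the reclaimable figure is reported in GB), you could wrap it in a quick check:

```shell
# Hypothetical helper: warn when Docker's reclaimable image space
# (as reported by `docker system df`) crosses a threshold in GB.
# Usage: docker system df | warn_if_reclaimable 2
warn_if_reclaimable() {
  threshold_gb="$1"
  awk -v t="$threshold_gb" '$1 == "Images" {
    gsub(/GB.*/, "", $5)   # strip "GB (73%)" from the RECLAIMABLE column
    if ($5 + 0 > t) print "Consider running: docker system prune"
  }'
}
```

Fed a line like Images 14 8 5.7GB 4.2GB (73%), it nudges you toward a prune once the reclaimable space passes your threshold.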
You might have noticed that, in the image just above, I removed many unused images, but not all of them. That’s because docker system prune deletes only dangling images, not unused ones. If an image is dangling, it’s also unused, but an image can be unused without being dangling. Got it?
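You can see the distinction on your own machine: docker images lists dangling images with &lt;none&gt; for both repository and tag, while unused-but-tagged images keep their names. As a sketch (the classify_image helper is hypothetical), here’s how you might separate the two cases from the output of docker images --format '{{.Repository}}:{{.Tag}}':

```shell
# Hypothetical classifier: dangling images show up as <none>:<none>
# in `docker images`; tagged-but-unused images keep their name.
classify_image() {
  while read -r ref; do
    case "$ref" in
      '<none>:<none>') echo "dangling: $ref" ;;  # removed by `docker system prune`
      *)               echo "tagged: $ref" ;;    # needs `docker image prune -a` if unused
    esac
  done
}
```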
Since I don’t feel like keeping an OpenVPN image around any more, it’s time to dig a little deeper.
Docker has created a number of prune commands, and this time, we’ll use prune in combination with docker image. We’ll also add the -a option to specify that we want to remove all unused images, not just dangling ones.
$ docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: httpd:alpine
untagged: httpd@sha256:d41352ad39b3c5595b40fdc8a0f4ffda068108dfbe5f9c326a20bd3cb02b10b6
deleted: sha256:8b0a96451769f2c32d971a9daf4a2e3819628e7b0247485641ff9216a2e9229e
deleted: sha256:3e420044ba2af313bdeafc95783deaff1635f1e616887013a5736a66ae50337f
deleted: sha256:0b135db9ea8cfb977c5c59a30204ebaf05dc261c430c35d0bbf6a8393d500b0d
deleted: sha256:6919d90346e4f9d37b4da2dcf89e373897f7d43817ef8d196bef1958491bdcda
deleted: sha256:b0b6a845f58324956d57a93efe31829908827f372ad6137ea51859a2e1ad9340
untagged: mysql:5.7
... Let's just skip a few lines here...
untagged: kylemanna/openvpn:latest
untagged: kylemanna/openvpn@sha256:6ccd8a3c02f98b256adabfc511de1bd9043504084bde4ede8b5458689fd94b8b
deleted: sha256:ced52f3b0c544d2d4d07ae8615a1612a5539165cedddc9daf7cde6cbdaf6217b
deleted: sha256:8fd7b40da319c09160320293cbb1ecf8e0e94cef208f7faa18819a089bd80d29
deleted: sha256:f57f13e5f45c8d3e3e976e3ca33e033fd94f3e785f7726a23c30f0ce69392517
deleted: sha256:646199403f1d4bc801a0115ba820db4a6eb85b8a90dc1d26bcbad88961071b3a
deleted: sha256:99bc3452a6180b649e471d2e2c62d6fadde7fa435199d8a1205e53abafeebcf3
deleted: sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215
Total reclaimed space: 1.767GB
And now, the results via Portainer:
With these cleanups, Portainer now reports the expected 1.5GB disk space used by Docker images—no more bloat!
You can also remove all unused images via the docker system prune command by adding the same -a option: docker system prune -a. I do them separately, but the results should be the same.
The prune command also works with other Docker object types, such as docker network prune and docker container prune, if you’d like more specificity in how you clean up your self-hosting stack.
A note on volumes: You might have noticed that, by default, docker system prune doesn’t touch volumes. That’s because volumes are where your personal data is kept, and one accidental delete would mean losing it. You can prune volumes via docker system prune --volumes or docker volume prune, but I can only recommend you proceed with caution. I’m not about to run either of those on my own system, thank you very much.
Using Docker Swarm to strengthen your infrastructure

If you want to take your self-hosting experience to the next level, and you have a secondary (or even tertiary!) VPS available, I highly recommend looking into Docker Swarm. You can combine your docker-compose deployment scheme with Docker Swarm to create a load-balanced, self-healing infrastructure, which is pretty darn neat.
For now, this process is going to remain outside of the handbook’s scope, as I think most people won’t need a cluster for self-hosting.
But, for those who are curious, we have an excellent tutorial on getting started with Docker Swarm.
Next: Into the unknown!
The fifth page—Self-hosting Nextcloud with Docker—is now available! The sixth, seventh, and nth pages of this handbook are coming soon. Next, I’ll cover how to add specific apps, such as Nextcloud or just a plain ol’ Nginx web server, to this stack.