Docker backup: Saving and restoring your volumes
You don’t have a backup unless you’ve restored your data from it.
That quote holds true even in the age of Docker. You need a backup of your applications and, more importantly, of your Docker volumes. Volumes are the persistent storage mechanism for Docker containers, and you can learn more about them here.
We’ll pick up where that piece left off and work with the volume we created for our blog based on the Ghost CMS.
Docker volumes are meant to be managed by the Docker daemon, and we don't want to fiddle with its internals. The strategy here is to get a copy of a volume as an archive file in one of our regular directories, like /home/$USER/backup. This archived copy of the volume then acts as our backup.
First, we spin up a temporary container, and we mount the backup folder and the target Docker volume into this container. When an ordinary directory like ~/backup is mounted inside a Docker container, we call it a bind mount. Bind mounts, unlike Docker volumes, are not exclusively managed by the Docker daemon, and hence we can use one as our backup folder.
The official Docker documentation recommends this approach, so you know it's safe to try on your containers and volumes. But before you take a backup, ask yourself this question:
Is the data in this volume changing right now?
If you are running a small blog where you add the content, not your customers, then the answer is most certainly no. On the other hand, an e-commerce site can receive an order at any given moment, even when you are running the backup! If that’s the case, then you need to stop the main container before running a backup.
In our example, the main container is ghost-site, which uses the Docker volume my-volume, mounted at /var/lib/ghost/content, to store all of its data. We first stop the container.
$ docker stop ghost-site
Next, we spin up a temporary container with the volume and the backup folder mounted into it.
$ mkdir ~/backup
$ docker run --rm --volumes-from ghost-site -v ~/backup:/backup ubuntu bash -c "cd /var/lib/ghost/content && tar cvf /backup/ghost-site.tar ."
Let’s dissect the second command.
docker run creates a new container, that much is obvious. After that:
- --rm: tells Docker to remove the container once it exits.
- --volumes-from ghost-site: mounts all the volumes from the ghost-site container into this temporary container, at the same mount points as in the original container.
- -v ~/backup:/backup: bind-mounts the ~/backup directory from your host to the /backup directory inside the temporary container.
- ubuntu: runs the container from the Ubuntu image.
- bash -c "...": archives the contents of your website as a tarball inside /backup in the container. This is the same ~/backup directory on your host system, where a new ghost-site.tar file will appear.
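Setting Docker aside for a moment, the backup step inside the temporary container is just an ordinary tar round-trip. A minimal local sketch of that step (the /tmp paths below are made up purely for illustration, no Docker involved):

```shell
# Stand in for the volume contents (illustrative paths)
mkdir -p /tmp/demo-volume /tmp/demo-backup
echo "hello ghost" > /tmp/demo-volume/post.txt

# The same tar invocation the temporary container runs:
# archive everything under the "volume" into the "backup" directory
(cd /tmp/demo-volume && tar cvf /tmp/demo-backup/ghost-site.tar .)

# List the archive to confirm what was captured
tar tvf /tmp/demo-backup/ghost-site.tar
```

The trailing `.` matters: it archives the directory's contents with relative paths, so the tarball can later be extracted into any target directory.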
You don't have a backup until you have at least once recovered your original data from it. Let's not wait for a disaster to strike and only then figure out how to restore. Let's do a trial run while things are running fine.
To begin with, I have the following dummy content on my website:
Logging into the VPS, let’s delete the container and volume, mimicking a disaster.
$ docker rm -f ghost-site
$ docker volume rm my-volume
Now, the steps for recovery would involve:
Creating a new volume
Spinning up a temporary container to recover from the tarball into this volume
Mounting this volume to the new container
$ docker volume create my-volume-2
$ docker run --rm -v my-volume-2:/recover -v ~/backup:/backup ubuntu bash -c "cd /recover && tar xvf /backup/ghost-site.tar"
$ docker run -d -v my-volume-2:/var/lib/ghost/content -p 80:2368 ghost:latest
If everything checks out, you will be able to see the same dummy content and log in with the same email and password. In other words, your actions preserved the state of the application.
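One extra habit worth adopting: record a checksum next to the tarball when you create it, and verify it before you restore. A mismatch means the archive was corrupted somewhere along the way. A small sketch, assuming the ghost-site.tar from the steps above sits in ~/backup:

```shell
# Record a checksum alongside the backup
cd ~/backup
sha256sum ghost-site.tar > ghost-site.tar.sha256

# Later, before restoring, verify the archive is intact
sha256sum -c ghost-site.tar.sha256   # prints "ghost-site.tar: OK" on success
```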
Tarballs are not backups!
We showed you how to create a tarball out of the contents of your volume, but that tarball still lives on the host. If you make a critical error while configuring your host, lock yourself out via iptables, or otherwise force yourself to reinstall the operating system via the dashboard, your backup is useless!
Setting up a remote backup solution is the best way to ensure that, in the face of disaster, your data is still with you. For small websites, a simple scp command will transfer all the content securely to your local system. Larger websites with a lot of content require a bit more sophistication. The options range from rsync to dedicated NFS servers running periodic backups. Pick the one that serves your needs best.
But in the meantime, enjoy the slight sense of readiness and preparedness that comes with knowing how to back up your Docker volumes in a pinch!