Tutorial: Getting started with Docker on your VPS
Docker is a platform for designing, building, and launching “containers,” which are isolated environments containing all the software and configuration files necessary to run a service or application. Getting started with Docker on your VPS is pretty straightforward, and once you’re set up via this tutorial, it’s like you’ve “leveled up” in DevOps. More stability, more flexibility, and more “save your behind” if and when you mess up.
Containers are really valuable because they allow you to run different applications on a single VPS without them interacting—and potentially conflicting—with one another. Let’s say you want to run two different WordPress-based websites from a single VPS. Without Docker, these two WordPress installations would share a single Apache installation and a single MySQL server. That means that if one crashes, it could bring down the other. And if the database gets corrupted, both your sites could be affected.
With Docker, each instance of WordPress is kept entirely isolated, with separate Apache servers and configurations, and separate MySQL databases. If one WordPress container crashes, it won’t affect the other in the slightest. This improves stability, first and foremost, but also security. Development can also be eased by doing all of your work in a Docker container, and then creating an identical container on your VPS.
Let’s get started on installing Docker and taking the first steps into a container-powered VPS.
Updated on June 13, 2018!
Prerequisites:
- A KVM-based VPS running any of our available OS options.
- A non-root user account (commands requiring escalated privileges will use sudo).
Step 1. Installing Docker
Installing Docker used to be a more complex process, but the maintainers now offer an installation script you can download and execute in a single command. Type the following into your VPS and you’ll have a functional Docker installation in a matter of minutes.
$ curl -sS https://get.docker.com/ | sudo sh
The script checks your operating system, downloads and installs package repositories, installs Docker itself alongside any dependencies, and starts the Docker service.
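A quick way to confirm the installation succeeded is to ask the CLI for its version. This is a small sketch, guarded so it degrades gracefully on a machine where the script has not yet run:

```shell
# Print the Docker version if the CLI is installed; otherwise print a hint.
if command -v docker >/dev/null 2>&1; then
    docker --version
else
    echo "docker not found - re-run the install script"
fi
```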
Step 2. Testing your Docker installation
The people behind Docker recommend testing your installation by running a basic hello-world container. If everything is working the way it should, you’ll see the following output:
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
Step 3. Some post-installation configurations
Now that Docker is properly installed, let’s take a moment to change a few configurations, which will make Docker a little easier to use on a day-to-day basis. We’ll do the following:
- Enable Docker to start automatically after a reboot.
- Allow the non-root user to manage Docker.
In CentOS, Debian, and Ubuntu 16.04,
systemd is responsible for managing which services start when the system boots up. That means you can enable this with a single command.
$ sudo systemctl enable docker
Note that individual containers come back after a reboot only if they were started with a restart policy (such as --restart=always); enabling the
docker service itself ensures the Docker daemon starts at boot. With a restart policy in place, any application you run via Docker will gracefully restart after boot, potentially minimizing downtime (as long as the services inside the container are set up to start at boot themselves).
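If you do want a specific container to survive reboots, an explicit restart policy is the usual approach. Here is a minimal sketch (the container name web and the nginx image are just placeholders, and the commands skip themselves when no Docker daemon is reachable):

```shell
# Start a throwaway container with a restart policy, then change the
# policy in place. Skipped entirely if the Docker daemon is unreachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker run -d --name web --restart=always nginx
    docker update --restart=unless-stopped web
    docker rm -f web                 # clean up the demo container
else
    echo "skipping: Docker daemon not available"
fi
```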
In order to give our non-root user access to the Docker management commands, we need to create a
docker group (it may already be created for you), and then add your primary user to that group.
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Log out of your VPS by typing
Ctrl+D and log back in. Then, you can test whether or not you can use the
docker command without prepending sudo:
$ docker run hello-world
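Group changes only apply to new login sessions, so it can be handy to check whether your current shell has picked up the docker group yet. A small sketch:

```shell
# List the groups of the current session and look for "docker".
if id -nG | grep -qw docker; then
    echo "docker group active"
else
    echo "log out and back in to pick up the new group"
fi
```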
While we’re at it, let’s also install Docker Compose, a tool that helps simplify the configuration and deployment of Docker containers and applications by using an easy-to-read
.yaml file. In some cases, this will be easier than writing out a lengthy command for the shell prompt.
As of June 13, 2018,
1.21.2 is the newest version of
docker-compose. You may want to check the releases page to see if there’s a new version, and then replace the
1.21.2 in the command below with the newer version number.
$ sudo -i
# curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
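As with Docker itself, it is worth confirming the binary landed in your PATH. This check is guarded in case the download step was skipped:

```shell
# Print the docker-compose version if it is installed.
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose --version
else
    echo "docker-compose not found - check that /usr/local/bin is in your PATH"
fi
```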
Step 4. Testing Docker with a basic LAMP stack
Now we can get to the exciting bit—actually starting up some Docker containers running actual applications. No more hello-world tests.
We’ll start by creating a very basic LAMP stack using the
php:apache container available from Docker. But, before that, let’s create a directory on the host to store our files, which we’ll link to the
/var/www/html directory within the container.
$ mkdir $HOME/apache && cd $HOME/apache
Then, we can create a small PHP file named
info.php that will display information about the PHP configuration. It’s a standard method of testing PHP-based installations.
$ printf '<?php\n phpinfo(); \n?>' > info.php
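To double-check the file before involving Docker at all, you can repeat the two steps above in one self-contained snippet and count the phpinfo() call:

```shell
# Recreate the test file and confirm it contains a phpinfo() call.
mkdir -p "$HOME/apache"
printf '<?php\n phpinfo(); \n?>' > "$HOME/apache/info.php"
grep -c 'phpinfo' "$HOME/apache/info.php"   # prints 1
```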
Finally, we have our
docker command. But, before you run it, check out the information just beneath the command so that you can understand exactly what it’s accomplishing.
$ docker run -d --name=apache -p 8080:80 -v $HOME/apache:/var/www/html php:apache
docker run specifies that we are going to create and start a new container, and the
-d option means we will “detach” from it, much the way one detaches from a
tmux session or an
ssh session. In cases where you want to immediately run commands inside the newly-created container, you can omit the -d option.
--name=apache gives the container a specific name. This is recommended, because your chosen names will be easier to manage and remember than the randomized defaults—comes in handy when you want to stop or delete a container.
-p 8080:80 will expose port
8080 to traffic arriving on the VPS, and will route that traffic to port 80 on the container. This makes it possible to expose different containers to different ports, and enable more complex configurations with an
nginx reverse proxy.
-v $HOME/apache:/var/www/html is a volume mapping. In this case, any files in the directory before the colon,
$HOME/apache, will be available in the
/var/www/html directory inside the container.
Finally, php:apache tells
docker which image to use. More images can be found on the Docker store.
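One way to see the volume mapping in action is to create a file on the host and list the container’s web root. This sketch assumes the apache container from the command above is running, and skips itself otherwise:

```shell
# Create a file on the host side of the mapping, then list the
# container side; the new file should appear in both places.
if command -v docker >/dev/null 2>&1 && docker ps --format '{{.Names}}' 2>/dev/null | grep -qx apache; then
    touch "$HOME/apache/mapped.txt"
    docker exec apache ls /var/www/html
else
    echo "skipping: apache container not running"
fi
```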
You should now be able to see that the container is running with the
docker ps command:
$ docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS         PORTS                  NAMES
d1fbdb7e0c5f   php:apache   "docker-php-entryp..."   3 seconds ago   Up 3 seconds   0.0.0.0:8080->80/tcp   apache
You can now also access your basic Apache web server by visiting
http://YOUR-SERVER-IP:8080/info.php in your favorite browser. If all has gone correctly, you’ll see something like the following:
|The PHP info page|
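You can also run the same check from the VPS itself with curl, no browser needed. The snippet skips itself if nothing is listening on the mapped port:

```shell
# Fetch the info page locally and look for the phpinfo output.
if curl -s --max-time 5 http://localhost:8080/info.php 2>/dev/null | grep -q 'PHP Version'; then
    echo "PHP info page is serving"
else
    echo "skipping: nothing answering on port 8080"
fi
```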
Now, for the sake of showing some more core
docker commands, let’s gracefully shut down this container, then delete it, followed by the image itself.
$ docker stop apache
$ docker rm apache
$ docker rmi php:apache
Step 5. And WordPress, for good measure
Let’s take the LAMP stack a step further with a full-blown WordPress installation, and this time, let’s also use
docker-compose to make the process a little bit more human-readable.
The first step is creating a new directory for this project.
$ mkdir wp_test && cd wp_test
Then, create a
docker-compose.yml file that will specify the configuration. This will create two containers: one running Apache/WordPress, and another running the
mysql instance, with data persisted between reboots and container shutdowns. Of course, for production use, you will want to change the passwords to be more secure.
version: '2'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    container_name: wp_test_db

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    container_name: wp_test

volumes:
  db_data:
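Before launching, docker-compose can lint the file and echo back the parsed configuration, which catches YAML indentation mistakes early. This sketch assumes it is run from the wp_test directory and skips itself otherwise:

```shell
# Validate docker-compose.yml and print the parsed configuration.
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    docker-compose config
else
    echo "skipping: docker-compose or docker-compose.yml not found"
fi
```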
To launch the container for the first time, use the
docker-compose up command.
$ docker-compose up -d
Now, you can check on these new containers using docker ps:
$ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                  NAMES
20570a5eb798   wordpress:latest   "docker-entrypoint..."   3 seconds ago   Up 2 seconds   0.0.0.0:8080->80/tcp   wp_test
c1872cb1443d   mysql:5.7          "docker-entrypoint..."   3 seconds ago   Up 3 seconds   3306/tcp               wp_test_db
Of course, the WordPress installation is now available on
http://YOUR-SERVER-IP:8080, for you to begin the famous 5-minute installation. And, if, for any reason, you need to shut down these containers while retaining the data, use docker-compose stop:
$ docker-compose stop
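For reference, stopping and resuming the whole stack is symmetrical. The sketch below skips itself unless run from a directory containing the docker-compose.yml; note that docker-compose down, by contrast, removes the containers as well, though the named db_data volume survives unless you add -v:

```shell
# Halt both containers but keep them, and their data, for later.
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    docker-compose stop
    docker-compose start   # bring the same containers back up
else
    echo "skipping: run this from the wp_test directory"
fi
```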
We hope you’re excited about taking full advantage of container technology on your VPS. By offloading services to containers, you can keep your base OS cleaner, with fewer attack vectors, and with less risk of various applications conflicting with one another.
Plus, it’s much safer to make mistakes with containers! All you need to do is stop the container, remove it, and try again, without worrying that you’re cluttering up your system or potentially breaking it.
Stay tuned for more Docker-centric tutorials in the weeks to come, such as using
nginx as a reverse proxy, so that you can, for example, direct traffic to
yourdomain.com to one container, and
yourotherdomain.com to a second container.
Until then, enjoy your containers! And, while it certainly is possible, we can’t necessarily recommend running Docker inside of Docker.