Tutorial: Getting Started with Docker on Your VPS

A note about tutorials: We encourage our users to try out tutorials, but they aren't fully supported by our team—we can't always provide support when things go wrong. Be sure to check which OS and version it has been tested with before you proceed.

If you want a fully managed experience, with dedicated support for any application you might want to run, contact us for more information.

Docker is a platform for creating and using “containers,” which are isolated packages of everything needed to run a particular piece of software. A Docker container is similar to the technology that makes a VPS possible: by splitting one server into many smaller accounts, we can run your applications independently of any other SSD Nodes user.

Containers are great because they allow you to run different applications on a single VPS without them interacting, and potentially conflicting, with one another. Let’s say you want to run two different WordPress-based websites from a single VPS. Without Docker, these two WordPress installations would share a single Apache installation and a single MySQL server. That means that if one crashes, it could bring down the other. And if the database server gets corrupted, both of your sites could be affected.

With containers, each instance of WordPress is kept entirely isolated, with separate Apache servers and configurations, and separate MySQL databases. If one crashes, it won’t affect the other in the slightest.

This improves stability, first and foremost, but also security. It can also ease development: do all of your work in a Docker container, then create an identical container on your VPS.

Let’s get started on installing Docker and taking the first steps into a container-powered VPS.


Prerequisites

  • A KVM-based VPS running any of our available OS options.
  • A non-root user account (commands requiring escalated privileges will use sudo).

Step 1. Installing Docker

Docker is available on all of our OS options (CentOS, Ubuntu, Debian), so we’ll quickly walk through the installation process for each of these. Once Docker is installed, the commands will be distro-agnostic, so the differences only matter during installation.


Ubuntu

First, run this series of commands to set up the Docker repository.

$ sudo apt-get install apt-transport-https ca-certificates curl
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update

Once the repository has been added, you can install the latest version of Docker CE.

$ sudo apt-get install docker-ce

Debian 8

The installation of Docker on Debian 7 and 8 differs slightly from Ubuntu, so we’ll cover each separately. To begin, let’s add the Docker repository.

$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
$ sudo apt-get update

Finally, you can install Docker CE.

$ sudo apt-get install docker-ce

Debian 7

And now, for Debian 7 Wheezy:

$ sudo apt-get install apt-transport-https ca-certificates curl python-software-properties
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
$ sudo apt-get update

You’ll also need to edit /etc/apt/sources.list to comment out the following line, since Docker doesn’t publish source packages for Wheezy:

deb-src [arch=amd64] https://download.docker.com/linux/debian wheezy stable

Save and exit the file, and then you can install Docker CE.

$ sudo apt-get install docker-ce


CentOS

Start by enabling the Docker CE repository.

$ sudo yum install yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum makecache fast

Then, you can install and enable Docker.

$ sudo yum install docker-ce
$ sudo systemctl start docker

Step 2. Testing the Docker installation

Note: Now that Docker is installed, we’re back to using the same commands on every OS option.

The people behind Docker recommend testing your installation by running the basic hello-world image to ensure everything is working the way it should. If it is, you’ll see output like the following:

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
Step 3. Some post-installation configurations

Now that Docker is properly installed, let’s take a moment to change a few configurations, which will make Docker a little easier to use on a day-to-day basis. We’ll do the following:

  • Enable Docker to start automatically after a reboot.
  • Allow the non-root user to manage Docker.
  • Install docker-compose.

Automatic start

In CentOS, Debian, and Ubuntu 16.04, systemd is responsible for managing which services start when the system boots up. That means you can enable this with a single command.

$ sudo systemctl enable docker

On Ubuntu 14.04, Docker is automatically configured to start on boot.

Containers themselves, however, will not restart after a reboot by default. To have an application come back up automatically, give its container a restart policy, such as the restart: always setting used in the docker-compose file later in this tutorial, or the --restart flag to docker run. With the docker service enabled and a restart policy in place, your Dockerized applications will restart gracefully after a boot, minimizing downtime (as long as the services inside the container are set up to start at boot themselves).
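
As a sketch (myapp is a hypothetical container name), a restart policy can be set when a container is created, or added to one that already exists:

```shell
# Create a container that Docker restarts automatically,
# unless you stop it yourself
docker run -d --restart unless-stopped --name=myapp php:apache

# Add or change the policy on an existing container
docker update --restart unless-stopped myapp
```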

Non-root access

In order to give our non-root user access to the Docker management commands, we need to create a docker group (it may already be created for you), and then add your primary user to that group.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER

Log out of your VPS by typing exit or Ctrl+D and log back in. Then, you can test whether or not you can use the docker command without prepending sudo.

$ docker run hello-world

Installing docker-compose

Compose is a tool that helps simplify the configuration and deployment of Docker containers and applications by using an easy-to-read .yml/.yaml file. In some cases, this will be easier than writing out a lengthy command for the shell prompt.

As of 10/20/2017, 1.17.1 is the newest version of docker-compose. You may want to check the releases page to see if there’s a new version, and then replace the 1.17.1 in the command below with the newer version number.

$ sudo -i
$ curl -L https://github.com/docker/compose/releases/download/1.17.1/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ exit
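
With the binary in place (and assuming /usr/local/bin is on your PATH), you can confirm the installation by asking docker-compose for its version:

```shell
# Should print the version you downloaded, e.g. docker-compose version 1.17.1
docker-compose --version
```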

Step 4. Testing Docker with a basic LAMP stack

Now we can get to the exciting bit—actually starting up some Docker containers running actual applications. No more Hello, world!

We’ll start by creating a very basic LAMP stack using the php:apache container available from Docker. But, before that, let’s create a directory on the host to store our files, which we’ll link to the /var/www/html directory within the container.

$ mkdir $HOME/apache && cd $HOME/apache

Then, we can create a small PHP file named info.php that will display information about the PHP configuration. It’s a standard method of testing PHP-based installations.

$ printf '<?php\n  phpinfo();\n?>\n' > info.php
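
If you find printf escape sequences hard to read, a heredoc writes the same file; this sketch recreates the directory so it stands alone:

```shell
# Create the web root (if needed) and write info.php via a heredoc;
# the quoted 'EOF' prevents the shell from expanding anything inside
mkdir -p "$HOME/apache"
cat > "$HOME/apache/info.php" <<'EOF'
<?php
  phpinfo();
?>
EOF
```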

Finally, we have our docker command. But, before you run it, check out the information just beneath the command so that you can understand exactly what it’s accomplishing.

$ docker run -d --name=apache -p 8080:80 -v $HOME/apache:/var/www/html php:apache

First, docker run specifies that we are going to create and start a new container, and the -d option runs it “detached” in the background, much the way one detaches from a tmux session or an ssh session. If you omit the -d, the container runs in the foreground and streams its output directly to your terminal.

We use --name=apache to give the container a specific name. This is recommended because a name you choose is easier to manage and remember than the randomized defaults, which comes in handy when you want to stop or delete a container.

-p 8080:80 publishes port 8080 on the VPS and routes traffic arriving there to port 80 inside the container. This makes it possible to expose different containers on different ports, enabling more complex configurations, such as putting an nginx reverse proxy in front of them.

-v $HOME/apache:/var/www/html is a bind mount. Any files in the host directory before the colon, $HOME/apache, will be available in the /var/www/html directory inside the container.

And finally, php:apache tells docker which image to use. More images can be found on the Docker store.
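
Once the container is up, docker exec lets you run commands inside it, which is handy for debugging; a sketch using the apache name chosen above:

```shell
# Open an interactive shell inside the running "apache" container
docker exec -it apache bash

# From the host, follow the container's output (Apache logs) as it arrives
docker logs -f apache
```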

You should now be able to see that the container is running with the docker ps command:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
d1fbdb7e0c5f        php:apache          "docker-php-entryp..."   3 seconds ago       Up 3 seconds        0.0.0.0:8080->80/tcp   apache

You can now also access your basic Apache web server by visiting http://YOUR-SERVER-IP:8080/info.php in your favorite browser. If all has gone correctly, you’ll see something like the following:

The PHP info page

Now, for the sake of showing a few more core docker commands, let’s gracefully stop this container, delete it, and then remove the image itself.

$ docker stop apache
$ docker rm apache
$ docker rmi php:apache

Step 5. And WordPress, for good measure

Let’s take the LAMP stack a step further with a full-blown WordPress installation, and this time, let’s also use docker-compose to make the process a little bit more human-readable.

The first step is creating a new directory for this project.

$ mkdir wp_test && cd wp_test

Then, create a docker-compose.yml file that will specify the configuration. This will create two containers: one running Apache/WordPress, and another running MySQL, with data persisted between reboots and container shutdowns. Of course, for production use, you will want to change the passwords to something more secure.

version: '2'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress
     container_name: wp_test_db

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8080:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
     container_name: wp_test

volumes:
    db_data:

To launch the container for the first time, use the docker-compose up command.

$ docker-compose up -d

Now, you can check on these new containers using docker ps.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
20570a5eb798        wordpress:latest    "docker-entrypoint..."   3 seconds ago       Up 2 seconds        0.0.0.0:8080->80/tcp   wp_test
c1872cb1443d        mysql:5.7           "docker-entrypoint..."   3 seconds ago       Up 3 seconds        3306/tcp               wp_test_db

Of course, the WordPress installation is now available at http://YOUR-SERVER-IP:8080, for you to begin the famous 5-minute installation. And if, for any reason, you need to shut down these containers, use docker-compose down; your data is retained in the db_data volume unless you also pass the -v flag.
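
As a quick sketch of the compose lifecycle commands used here:

```shell
# Stop the containers but keep them (and their data) in place
docker-compose stop

# Start them again later
docker-compose start

# Remove the containers entirely; the named db_data volume survives
# unless you also pass -v
docker-compose down
```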

Moving ahead

We hope you’re excited about taking full advantage of container technology on your VPS. By offloading services to containers, you can keep your base OS cleaner, with fewer attack vectors, and with less risk of various applications conflicting with one another.

Plus, it’s much safer to make mistakes with containers! All you need to do is stop the container, remove it, and try again, without worrying that you’re cluttering up your system or potentially breaking it.

Stay tuned for more Docker-centric tutorials in the weeks to come, such as using nginx as a reverse proxy, so that you can, for example, direct traffic to yourdomain.com to one container, and yourotherdomain.com to a second container.

Until then, enjoy your containers! And, while it certainly is possible, we can’t necessarily recommend running Docker inside of Docker.
