Docker Networking — Done the Right Way!


Networking has always been a crucial part of operating systems. When virtualization gained momentum, the networking stack was one of the most important aspects that engineers needed to get right. Docker is no exception: when we containerize an application, one of the important constraints is that the app itself shouldn't care that it is running inside a container. Everything from the storage stack to system calls should work as if it were a full-fledged operating system. Everything, including networking.

Docker offers various ways to provide networking to its containers. Each container can get its own networking stack, including an IP address. The containers can interact with each other as if they were different nodes on the network. But before we delve into the details of networking, let's clear up a few basics.

Docker for Windows or Mac vs Docker on Linux

Docker on Windows or Mac doesn't run on the operating system itself. Instead, Docker runs a virtual machine on top of Hyper-V on Windows or HyperKit on macOS.

This VM is connected via its own virtual network interface, which complicates matters. To keep things simple, we will start with Docker on Linux, which is the state of Docker you are most likely to encounter in production. It is also the setup where you have to be concerned about making your application available to the outside world in a secure way.

So, observing Docker networking in its natural habitat takes us to an SSDNodes VPS running Linux (Ubuntu 18.04 LTS) with Docker installed on top of it.


Here's the setup I will be using for this post, in case you want to follow along:

  1. Ubuntu 18.04 LTS server with a public IP on SSDNodes. Let's call this address Public_IP; it will be different for different users.

Initial Networking Setup

Let's set a baseline by looking at the network interfaces we have on our VPS before and after Docker is installed. Use the command ip addr to list them.

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:f3:e8:f7:c7 brd ff:ff:ff:ff:ff:ff
    inet <Public_IP> brd <broadcast address> scope global enp3s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:f3ff:fee8:f7c7/64 scope link
       valid_lft forever preferred_lft forever

We have two interfaces: one with the main public IP, called enp3s0, and a loopback interface, lo, used for debugging and other purposes. This is what your system has by default. Now you can go ahead and install Docker.

After installing Docker, even if you have not started any container, you will see one new entry.

$ ip addr
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:1e:3f:71:13 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:1eff:fe3f:7113/64 scope link
       valid_lft forever preferred_lft forever

This new interface, docker0, is added to the VPS, and the host has an IP address on it. Let's create an Ubuntu container and see what we can learn from it.

$ sudo docker run -dit --name container1 ubuntu:latest

After you create this container, running ip addr will show that a new interface has popped up, but it will not have any IPv4 address. This is because this interface is used by the container, not the host. However, the container does have an IP address, and we can find it by running docker inspect:

$ docker inspect container1 | grep IPAddress
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",

You can see that Docker has a virtual network of its own. The Docker host (our main VPS) is the gateway to the outside world (with an IP of 172.17.0.1), container1 has an IP address of 172.17.0.2, and new containers on this default network will get incremental IP addresses in the 172.17.0.0/16 subnet.

This is part of the default networking set up in Docker. Let's look at what other options we have here.

Networking Drivers

Networking in Docker is implemented using networking drivers. Each driver serves a different purpose, and there are a few of them. When you create a network, you specify which driver to use, and that helps you separate different sets of containers into different networks. Let's look at some of the most used drivers:

  1. Bridge - This is the most used network driver. It creates a network with its own subnet, where each container gets a private IP. We will be dealing only with bridge networks, since these are the most useful ones. The default setup we saw above also uses bridge networking.
  2. Host - This driver gives your container direct access to the host's network stack. For example, if you are running a webserver in a container with host networking, all the traffic on the host's port 80 goes directly to the container.
  3. Null - If you create a network using the null driver, processes in its containers will never see the light of day (or the Internet). Basically, you have no networking capabilities.
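You can try the last two drivers directly at container creation time with the --network flag. A quick sketch (the container names host-demo and isolated-demo are arbitrary):

```shell
# Host networking: the container shares the host's network stack,
# so a server bound to port 80 inside it listens on the host's port 80.
docker run -dit --name host-demo --network host ubuntu:latest

# Null networking: the container gets only a loopback interface.
docker run -dit --name isolated-demo --network none ubuntu:latest

# List the container's interfaces; this typically shows only "lo".
# (We read /sys/class/net because the stock ubuntu image may not ship iproute2.)
docker exec isolated-demo ls /sys/class/net
```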

By default, Docker creates three networks using the network drivers mentioned above, each named after its underlying driver. You can list them by running:

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
2c1de7bd40f1   bridge    bridge    local
00d5b332145a   host      host      local
558ee98b0614   none      null      local

Out of the three, it's the bridge network that containers connect to by default. The docker0 interface that we saw earlier connects the host machine to this network, with an IP address of its own, which in our case was 172.17.0.1.

If we inspect the default bridge network, we get an overwhelming JSON output printed on our screen, but a little patient reading shows which containers are part of this network. Since we created container1 earlier and this is the default network, we expect to see it here. Inspect the bridge network to list all the containers attached to it.

$ docker inspect bridge
 "Containers": {
            "1fa3d365508b14d3b2b7dbf41031de1e76e4bb7a11cd738a807d9e1881b94c61": {
                "Name": "container2",
                "EndpointID": "339014ea840c592f4ea8e7fcb9f05add1659fbf051e16a5a72f89e5c949a4b6b",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "",
                "IPv6Address": ""
            "d080b3d8eec29209bd625e93fa06ed0b860d1357cf3e7b5dc1c9fe0ef9401028": {
                "Name": "container1",
                "EndpointID": "a56031f89bc1e432e78b5fd0035e8aa139057e2be2967781535ae11d9c87600e",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "",
                "IPv6Address": ""

You can see that each container has a MAC address and a convenient IP (container1 got 172.17.0.2). The /16 at the end of the address means this network has 2^16 = 65,536 addresses; subtracting the reserved ones (the network and broadcast addresses and the gateway), you could theoretically connect about 65,533 containers to it. The output above already shows container2, created the same way as container1; create your own and inspect the bridge again to see the new one appear as a member. Try pinging those containers' IPs too, if you want to make sure they are reachable from the host or from other containers on the same network.
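To confirm reachability from the host, you can ping the bridge addresses directly. The addresses below assume Docker's default 172.17.0.0/16 pool; substitute whatever docker inspect reported for your containers:

```shell
# From the host: the docker0 bridge makes container IPs routable locally.
ping -c 2 172.17.0.2   # container1
ping -c 2 172.17.0.3   # container2
```

Pinging from one container to another also works, but requires a ping binary inside the image; the stock ubuntu image may not ship one (apt-get install iputils-ping fixes that).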

Creating a Docker Network

You can create your own Docker network using one of the three drivers listed above. We will, of course, be using the bridge driver.

$ docker network create --driver=bridge my-network
$ docker network ls
NETWORK ID     NAME         DRIVER    SCOPE
2c1de7bd40f1   bridge       bridge    local
00d5b332145a   host         host      local
d2f78256ff59   my-network   bridge    local
558ee98b0614   none         null      local

The above command created my-network. Now, when you create new containers, you can attach them to this network instead of the default bridge network. Set the --network flag to the name of your custom network for this.

$ docker run -dit --name container3 --network=my-network ubuntu
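A container does not have to be attached at creation time, either: docker network connect joins a running container to an additional network, and docker network disconnect removes it again. For example, to put the earlier container1 on my-network as well:

```shell
# Attach an existing, running container to the custom network.
# container1 then has an interface (and IP) on both networks.
docker network connect my-network container1

# Detach it again when you are done.
docker network disconnect my-network container1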

The first thing you will notice is that after the docker network create command, ip addr shows a new interface with yet another IP address for your host. At this point your host has four IPv4 addresses. The newest interface is named br- followed by the new network's ID.

$ ip a
8: br-d2f78256ff59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:f1:49:9c:89 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-d2f78256ff59
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f1ff:fe49:9c89/64 scope link
       valid_lft forever preferred_lft forever

You can also inspect the network and find that container3, which you created earlier, is not part of the default bridge network but of the newer my-network.

$ docker inspect my-network
  "Containers": {
    "f7823c06cca2d7836f8e58c0c3ccbe267c4368a581eed9d9f2212e22d147f40a": {
       "Name": "container3",
       "EndpointID": "e27a4dd8ea4d9d4e4a88e8681bc13d83a8cdf900347e1654a47b8625f4d3ebb9",
       "MacAddress": "02:42:ac:12:00:02",
       "IPv4Address": "",
       "IPv6Address": ""

So my-network is a subnet of its own, 172.18.0.0/16, separate from the default bridge.
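Docker picks the subnet for a user-defined bridge automatically (typically the next free 172.x.0.0/16 block), but you can pin it yourself with the --subnet and --gateway flags. The network name and address range below are just illustrative:

```shell
# Create a bridge network with an explicit address plan.
docker network create \
  --driver=bridge \
  --subnet=10.10.0.0/24 \
  --gateway=10.10.0.1 \
  pinned-network

# Containers attached to it get addresses from 10.10.0.0/24.
docker run -dit --name container4 --network=pinned-network ubuntu:latest
```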

Why Create New Networks?

You might be wondering why you should bother knowing all of this. Shouldn't the default network be sufficient? This is where we need to understand how Docker is intended to be used.

An application like, let's say, WordPress contains at least two pieces: a database and a webserver with WordPress installed on it. So to Dockerize your WordPress site, you would create an Nginx server with WordPress and connect it to a MySQL container. Now, let's say you want to run multiple instances of WordPress on your VPS, all isolated from one another. Your first line of defense in this scenario is to create different networks and have WordPress and MySQL for one instance on one network, and WordPress and MySQL for another instance on another network.

Members of different networks can't directly communicate with one another, giving us a way to isolate different environments. This results in easier management and improved security.
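As a sketch, two isolated WordPress stacks could look like this. The network and container names are made up, the official wordpress and mysql images are used for illustration, and in practice you would set proper passwords and volumes:

```shell
# One private network per site; each stack only sees its own peers.
docker network create site1-net
docker network create site2-net

# Site 1: database plus WordPress, published on host port 8081.
docker run -d --name site1-db --network site1-net \
  -e MYSQL_ROOT_PASSWORD=change-me mysql:5.7
docker run -d --name site1-wp --network site1-net \
  -e WORDPRESS_DB_HOST=site1-db -p 8081:80 wordpress

# Site 2: an identical, fully isolated stack on host port 8082.
docker run -d --name site2-db --network site2-net \
  -e MYSQL_ROOT_PASSWORD=change-me mysql:5.7
docker run -d --name site2-wp --network site2-net \
  -e WORDPRESS_DB_HOST=site2-db -p 8082:80 wordpress
```

site1-wp can reach site1-db, but has no route to site2-db at all; the only thing the two stacks share is the host's published ports.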


The official documentation recommends creating your own network instead of relying on the default one. While there are several reasons for this, one of the main ones concerns the exposed ports of a container.

With the default network, you have to specify, when creating a container, which ports you want to expose (most often this is specified in the Dockerfile). You don't have to do that on a network of your own: all the ports of all the containers are accessible from *within* the network. You can read about these subtle differences in greater depth in the official documentation.
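Another of those differences is automatic DNS: on a user-defined network, containers can reach each other by name, which the default bridge does not offer. A quick check, reusing container3 and my-network from above and adding a hypothetical container5:

```shell
# Containers on the same user-defined network resolve each other by name
# through Docker's embedded DNS server.
docker run -dit --name container5 --network=my-network ubuntu:latest

# getent ships in the ubuntu base image; this should print container3's IP.
docker exec container5 getent hosts container3
```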