How Docker Networking Works: Bridge, Host, and Overlay Networks
Last month, I spent two hours debugging why containers on the same host couldn't talk to each other. They were both running, logs looked fine, but requests just timed out. The problem? I'd created them on different Docker networks without realizing it.
That frustration sent me down a rabbit hole of Docker networking internals. I wanted to understand not just how to fix networking issues, but why they happen in the first place. What's actually happening when containers communicate? What's the docker0 interface I keep seeing in ip addr? And why do some containers need --network host while others work fine with the default?
Here's what I learned about how Docker networking really works under the hood.
The Docker Networking Basics: More Than You'd Think
When you run docker run nginx, Docker doesn't just start a process. It sets up an entire networking stack for that container. This happens so seamlessly that most of us never think about it—until something breaks.
Docker supports multiple network drivers, each solving different problems:
- bridge - The default. Isolated network for containers on a single host
- host - Container shares the host's network stack directly
- overlay - Multi-host networking for Docker Swarm
- macvlan - Containers get their own MAC addresses on the physical network
- ipvlan - Like macvlan but shares the host's MAC address
- none - No networking at all
For most applications, you'll use bridge networks (the default), occasionally host for performance-critical services, and overlay if you're running Swarm. The others are specialized for specific use cases.
Let's dig into how the three most common ones actually work.
Bridge Networks: Virtual Switches Inside Your Host
When you install Docker, it creates a network called bridge (also shown as docker0 in Linux network interfaces). This is a virtual network switch that lives entirely in software on your host machine.
Here's what happens when you start a container on the default bridge network:
docker run -d --name web nginx
Docker performs these steps in sequence (a hand-rolled sketch of the same plumbing follows the list):
- Creates a network namespace for the container (isolated network stack)
- Creates a virtual ethernet pair (veth pair - think of it as a virtual network cable)
- Connects one end to the container and the other to the docker0 bridge
- Assigns an IP address from the bridge network's subnet (usually 172.17.0.0/16)
- Sets up iptables rules for NAT and port forwarding
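If you're curious, here's a rough sketch of steps 1-4 done by hand with plain iproute2 commands. The namespace name, interface names, and the address below are made up for illustration; Docker does the equivalent internally and also manages the iptables rules for you.
$ sudo ip netns add demo-ns                                 # 1. isolated network namespace
$ sudo ip link add veth-host type veth peer name veth-ctr   # 2. virtual ethernet pair
$ sudo ip link set veth-ctr netns demo-ns                   #    move one end into the namespace
$ sudo ip link set veth-host master docker0 up              # 3. attach the other end to the bridge
$ sudo ip netns exec demo-ns ip addr add 172.17.0.99/16 dev veth-ctr   # 4. pick an unused address in the bridge subnet
$ sudo ip netns exec demo-ns ip link set veth-ctr up
$ sudo ip netns exec demo-ns ip route add default via 172.17.0.1       #    docker0 is the gateway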
Let me show you what this looks like in practice.
Seeing the Bridge Network
First, list Docker networks:
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
f7ab26d71dab   bridge    bridge    local
4e6b77c6a4a4   host      host      local
a9f1c5e5e5e1   none      null      local
The bridge network is created by default. Inspect it:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "IPAM": {
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "abc123...": {
                "Name": "web",
                "IPv4Address": "172.17.0.2/16",
                "MacAddress": "02:42:ac:11:00:02"
            }
        }
    }
]
The container got IP 172.17.0.2, and the gateway is 172.17.0.1 (the docker0 interface itself).
The Virtual Ethernet Pair
Now let's see the virtual ethernet devices. On your host:
$ ip addr show docker0
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
And if you list all interfaces, you'll see veth pairs:
$ ip addr | grep veth
5: veth7a3b4f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
The veth7a3b4f@if4 is the host side of the virtual cable. The @if4 indicates it's paired with interface 4 inside the container. To see the container side:
$ docker exec web ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
Inside the container, it's called eth0@if5—the @if5 shows it's paired with interface 5 on the host (the veth interface we saw earlier). This is how data travels between the container and the host.
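On a busy host with many containers, eyeballing the @ifN hints gets tedious. A quick trick for matching a container to its host-side veth is to compare interface indexes (the index values below are just the ones from this example):
$ docker exec web cat /sys/class/net/eth0/iflink   # index of eth0's peer on the host
5
$ ip -o link | grep '^5: '                         # the host interface with that index
5: veth7a3b4f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> ...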
How Packets Flow: The Journey of a Request
Let's trace what happens when the container makes an outbound HTTP request to example.com:
- Container sends packet: The nginx process sends a packet from 172.17.0.2 to 93.184.216.34 (example.com)
- Through veth pair: The packet travels through eth0 inside the container and crosses the veth pair to the host
- Hits docker0 bridge: It arrives at the docker0 bridge on the host
- NAT translation: An iptables NAT rule changes the source IP from 172.17.0.2 to the host's IP
- Routes to internet: The packet leaves through the host's physical network interface
- Response comes back: The reply returns to the host's IP; iptables remembers the connection and translates it back to 172.17.0.2
- Back through veth: The packet crosses the veth pair back into the container
This NAT (Network Address Translation) is crucial—it's why containers can access the internet even though they have private IP addresses that aren't routable on the internet.
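You can watch that per-connection state being kept. A small sketch, assuming conntrack-tools is installed on the host and curl is available inside the container:
$ docker exec web curl -s http://example.com > /dev/null   # generate some outbound traffic
$ sudo conntrack -L | grep 172.17.0.2                      # connection tracking entries for the container
Each entry lists the original tuple (src=172.17.0.2) alongside the reply tuple rewritten to the host's IP; that state is what lets iptables translate the response back.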
Seeing the iptables Rules
Docker sets up extensive iptables rules to make this work. Check the NAT table:
$ sudo iptables -t nat -L -n
Chain POSTROUTING (policy ACCEPT)
target      prot opt source              destination
MASQUERADE  all  --  172.17.0.0/16       0.0.0.0/0
That MASQUERADE rule does the NAT—any packet from the 172.17.0.0/16 network going anywhere gets its source address rewritten to the host's IP.
For published ports (like -p 8080:80), Docker adds DNAT rules:
$ docker run -d -p 8080:80 --name web2 nginx
$ sudo iptables -t nat -L DOCKER -n
Chain DOCKER (2 references)
target  prot opt source      destination
DNAT    tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:8080 to:172.17.0.3:80
This rule says: any TCP packet to port 8080 on any interface gets redirected to 172.17.0.3:80 (the container).
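A quick way to confirm the mapping from the host (output abbreviated):
$ docker port web2
80/tcp -> 0.0.0.0:8080
$ curl -s http://localhost:8080 | head -n 1   # reaches the container through that DNAT rule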
Container-to-Container Communication
Here's where it gets interesting. If you have two containers on the default bridge network, they can reach each other by IP but not by container name:
$ docker run -d --name web nginx
$ docker run -d --name api node-app
# Inside the api container:
$ docker exec api ping 172.17.0.2 # works (web's IP)
$ docker exec api ping web # fails - no DNS resolution
This is a quirk of the default bridge network. Docker doesn't provide automatic DNS resolution on it for backward compatibility reasons.
The solution? Create a custom bridge network:
$ docker network create my-network
$ docker run -d --name web --network my-network nginx
$ docker run -d --name api --network my-network node-app
# Now DNS works:
$ docker exec api ping web # works! Resolves to web's IP
On custom bridge networks, Docker runs an embedded DNS server (at 127.0.0.11 inside each container) that resolves container names to IPs. This is the recommended approach for container communication.
How the Embedded DNS Works
When a container on a user-defined network makes a DNS query:
- The query goes to 127.0.0.11:53 (Docker's embedded DNS server)
- Docker checks if the name matches a container on the same network
- If yes, it returns that container's IP
- If no, it forwards the query to the host's DNS servers
You can see this in the container's /etc/resolv.conf:
$ docker exec web cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
This embedded DNS server is actually implemented in the Docker daemon itself, not as a separate process. It's one of the clever tricks that makes Docker networking feel seamless.
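The embedded DNS also resolves network aliases, which lets a container answer to an extra, shared name. A small sketch with made-up names (web-blue and the frontend alias aren't from the examples above):
$ docker run -d --name web-blue --network my-network --network-alias frontend nginx
$ docker exec api ping frontend   # the embedded DNS resolves the alias to web-blue's IP
If several containers share an alias, lookups return the IPs of all of them, which can act as a very basic form of DNS round robin.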
Host Network: Removing the Isolation
Sometimes you want a container to use the host's network directly, with no isolation. This is what --network host does:
$ docker run -d --network host nginx
Now the container doesn't get its own network namespace, no veth pair is created, and no NAT happens. The nginx process binds directly to port 80 on the host's network interfaces.
Inside the container:
$ docker exec <container> ip addr
# Shows the same interfaces as the host!
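One way to confirm there's nothing in between is to look at listening sockets directly on the host (a quick sketch using ss from iproute2):
$ sudo ss -tlnp | grep ':80 '   # nginx appears as an ordinary host process listening on port 80
No docker-proxy, no DNAT rule: the socket belongs to the nginx process itself.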
When to Use Host Networking
I use host networking in these scenarios:
Performance-critical applications: Eliminates the overhead of NAT and virtual network devices. For high-throughput applications, this can matter.
Network monitoring tools: Tools like tcpdump or wireshark need to see all network traffic, which requires host network access.
Binding to all interfaces: When you need the service accessible on all host IPs without knowing them in advance.
But there are downsides:
- No port isolation: Only one container can bind to port 80
- No network isolation: Container can access host's network stack
- Not portable: Won't work the same way on Docker Desktop for Mac/Windows (they use VMs)
I generally avoid host networking unless I have a specific reason for it. The isolation provided by bridge networks is usually worth the minimal overhead.
Overlay Networks: Connecting Hosts
Overlay networks solve a different problem: how do containers on different hosts communicate? This is essential for Docker Swarm (Docker's built-in orchestration) and was an "aha" moment for me when I first set it up.
An overlay network creates a virtual network that spans multiple Docker hosts. Containers on this network can communicate as if they're on the same local network, even if the hosts are in different data centers.
How Overlay Networks Work
When you create an overlay network in Swarm:
$ docker network create --driver overlay my-overlay
Docker sets up:
- VXLAN tunnels between hosts - encapsulates container traffic in UDP packets
- Distributed key-value store - tracks which containers are on which hosts
- Routing mesh - routes traffic to the right host automatically
Here's the key insight: when a container sends a packet to another container on the overlay network, Docker:
- Checks the distributed store to find which host the destination container is on
- Encapsulates the packet in a VXLAN header (wraps it in another packet)
- Sends it over the physical network to the target host
- On the receiving host, Docker unwraps the VXLAN packet and delivers it to the container
This VXLAN encapsulation is why overlay networks work across hosts—the underlying network just sees normal UDP traffic between hosts.
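If you want to see the encapsulation for yourself, capture the VXLAN traffic on a node's physical interface (a sketch; eth0 and UDP 4789 are the usual defaults, yours may differ):
$ sudo tcpdump -ni eth0 udp port 4789
Each captured packet is an outer host-to-host UDP datagram whose payload is the inner container-to-container frame.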
A Practical Example
Let's say you have two hosts running Docker Swarm:
# On manager node:
$ docker swarm init
$ docker network create --driver overlay app-network
$ docker service create --name web --network app-network --replicas 3 nginx
# Docker automatically spreads the 3 replicas across available hosts
# They can all communicate via the overlay network
Inside any replica:
$ docker exec <container> ping web # Resolves to the service's virtual IP (VIP)
Docker's DNS resolves the service name to a virtual IP, and the built-in load balancer behind that VIP distributes connections across the three replicas; the routing mesh does the same for ports published on any node. This is what makes service discovery and load balancing work in Swarm.
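Two handy lookups from inside any container on the overlay network (a sketch; assumes nslookup is available in the image):
$ docker exec <container> nslookup web         # the service name resolves to the single VIP
$ docker exec <container> nslookup tasks.web   # tasks.<service> resolves to the individual replica IPs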
The Trade-off: Complexity vs Flexibility
Overlay networks add complexity:
- Require a key-value store (Docker uses its built-in Raft consensus)
- VXLAN encapsulation adds overhead (about 50 bytes per packet)
- Debugging is harder: traffic you capture on the wire is VXLAN-encapsulated, and it is only encrypted if you opt in (see the example at the end of this section)
But they solve a real problem. Without overlay networks, you'd need to manually set up VPNs or configure complex routing between hosts.
For single-host deployments, stick with bridge networks. For multi-host, overlay networks are worth the complexity.
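One more note on the encryption point above: if you do need the container traffic itself encrypted on the wire, overlay networks support it as an opt-in flag (Docker sets up IPsec tunnels between the nodes, at some CPU cost):
$ docker network create --driver overlay --opt encrypted secure-net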
The macvlan and ipvlan Networks: When You Need Real IPs
Sometimes you want containers to appear as physical devices on your network, with real MAC addresses and IPs from your LAN's own subnet. That's what macvlan and ipvlan are for.
macvlan: Full Layer 2 Networking
With macvlan, each container gets its own MAC address:
$ docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 my-macvlan
$ docker run -d --network my-macvlan nginx
The container appears as a separate device on your physical network with its own MAC address. Your network switch sees it as a distinct device.
Use case: Legacy applications that expect to be on a physical network, or when you need containers to have addresses on your existing LAN subnet. (Note that Docker's own IPAM hands out these addresses from the subnet you configure; the driver doesn't talk to your DHCP server.)
Limitation: Many WiFi networks and cloud providers block macvlan (they filter traffic with unrecognized MAC addresses).
ipvlan: Shared MAC, Different IPs
ipvlan is similar but all containers share the host's MAC address:
$ docker network create -d ipvlan \
--subnet=192.168.1.0/24 \
-o parent=eth0 my-ipvlan
This works better on networks that restrict MAC addresses (like some WiFi networks), but containers still get real IPs from your network's subnet.
I've rarely needed these in production—bridge and overlay handle most use cases. But they're good to know about when you encounter that one weird legacy app.
The none Network: No Networking at All
Finally, there's the none network:
$ docker run -d --network none nginx
This container has no network interfaces except loopback. It can't communicate with anything, not even other containers.
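A quick check confirms it:
$ docker exec <container> ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
That's it: no eth0, no default route, nowhere to go.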
When would you use this?
- Batch processing that reads from and writes to volumes
- Testing code that shouldn't make network calls
- Maximum isolation for untrusted code
I once used this for a data processing container that read CSV files from a volume and wrote results back. It didn't need network access, and --network none ensured it couldn't accidentally make external API calls.
Debugging Docker Networking: My Go-To Commands
Here are the commands I use most often when debugging networking issues:
See all networks and what's connected:
docker network ls
docker network inspect bridge
Check a container's network config:
docker inspect <container> | grep -A 20 NetworkSettings
See the container's view of the network:
docker exec <container> ip addr
docker exec <container> ip route
docker exec <container> cat /etc/resolv.conf
Test connectivity between containers:
docker exec <container1> ping <container2>
docker exec <container1> nc -zv <container2> <port>
Check iptables rules (on host):
sudo iptables -t nat -L -n -v
sudo iptables -t filter -L -n -v
See veth pairs on the host:
ip link show | grep veth
ip link show master docker0
Check Docker's embedded DNS:
docker exec <container> nslookup <other-container>
docker exec <container> dig <other-container>
Common Networking Pitfalls I've Encountered
Problem: Containers can't resolve each other by name
Cause: Using the default bridge network, which doesn't have automatic DNS.
Solution: Create a custom bridge network and use that instead.
Problem: Published port not accessible from outside
Cause: Port is only published on localhost (-p 127.0.0.1:8080:80) or firewall blocking.
Solution: Use -p 8080:80 to publish on all interfaces, check firewall rules.
Problem: Container can reach some containers but not others
Cause: Containers are on different networks.
Solution: Connect both containers to the same network with docker network connect.
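For reference, that looks like this (my-network is the user-defined network from earlier; the container keeps its existing attachments too):
$ docker network connect my-network <container>
$ docker network inspect my-network   # the container now shows up under this network's Containers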
Problem: Host networking doesn't work on Docker Desktop
Cause: Docker Desktop runs Docker inside a VM, so --network host connects to the VM's network, not your Mac/Windows network.
Solution: Use port publishing instead, or understand you're in a VM.
Problem: Overlay network containers can't communicate
Cause: Firewall blocking VXLAN (UDP port 4789), or hosts not in Swarm mode.
Solution: Open UDP 4789 between hosts, ensure all hosts have joined the swarm.
What I Learned About Docker Networking
Working through Docker networking taught me several things:
Containers are just Linux processes: The networking "magic" is just Linux kernel features—network namespaces, veth pairs, iptables. Docker orchestrates these, but the primitives are standard Linux.
Default isn't always best: The default bridge network has surprising limitations (no DNS). Creating custom networks is almost always better.
Isolation has layers: You can choose how isolated you want networking to be—from none (completely isolated) to host (no isolation at all).
Debugging requires understanding: You can't effectively debug networking without understanding the underlying mechanisms. Knowing about veth pairs and iptables rules makes troubleshooting so much faster.
Different problems need different drivers: There's no "one size fits all" network driver. Bridge for single-host, overlay for multi-host, host when you need performance, macvlan for legacy apps.
Further Exploration
If you want to dive deeper into Docker networking, here are some areas worth exploring:
Network plugins: Docker's networking is built on libnetwork and the CNM (Container Network Model), which supports custom network drivers via plugins. Kubernetes uses a different standard, CNI (Container Network Interface), with plugins like Calico, Flannel, Weave, etc.
IPv6 support: Docker can run dual-stack networks with both IPv4 and IPv6.
Network security: Docker supports encrypted overlay networks and network policies.
Service mesh: Tools like Istio and Linkerd add another layer on top of Docker networking for advanced traffic management.
eBPF networking: Modern kernels support eBPF for more efficient packet processing. Some Docker network drivers are moving to eBPF.
Wrapping Up
Docker networking felt like magic until I understood what was happening under the hood. Now when I see a docker0 interface or a veth pair, I know exactly what they're doing. And when networking breaks (and it will), I have the mental model to debug it effectively.
The key insight for me was realizing that Docker networking is just clever orchestration of standard Linux networking primitives. Network namespaces provide isolation, veth pairs connect containers to the host, iptables handles NAT and firewalling, and VXLAN tunnels span multiple hosts.
Understanding these fundamentals made me a better Docker user. I make better decisions about which network driver to use, I can debug issues faster, and I understand the trade-offs between isolation and performance.
Next time you run docker run, take a moment to appreciate the networking stack Docker just set up for your container. It's doing a lot more than you might think.