This post will demonstrate how and when the iroute directive is used in OpenVPN.
In most cases iroute is not needed; in fact, many users have probably never used it (or are even aware it exists, for that matter). It usually comes into play when the networks behind the VPN nodes need to communicate with each other. Let's imagine a topology like this:
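(The original diagram is not reproduced here; the following is a rough sketch reconstructed from the addresses used in the configs below.)

```
    network A                    network B
  192.168.1.0/24              192.168.2.0/24
        |                           |
       gwA -------- VPN ---------- gwB
    10.0.0.1 \   10.0.0.0/24    10.0.0.2
    (server)  \
               \--- VPN ---------- gwC
                                10.0.0.3
                                    |
                               network C
                             192.168.3.0/24
```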
Let's suppose that you want communication between networks A and B, and between A and C, as indicated by the dotted arrows. The config files are something like this:
gwA # cat /etc/openvpn/server.conf
# gwA
local 172.20.0.1
port 1194
proto udp
dev tun
topology subnet
mode server
tls-server
ifconfig 10.0.0.1 255.255.255.0
route 192.168.2.0 255.255.255.0 10.0.0.2
route 192.168.3.0 255.255.255.0 10.0.0.3
client-config-dir ccd
# snip rest of config

gwA # cat /etc/openvpn/ccd/gwB
ifconfig-push 10.0.0.2 255.255.255.0
push "route 192.168.1.0 255.255.255.0 10.0.0.1"

gwA # cat /etc/openvpn/ccd/gwC
ifconfig-push 10.0.0.3 255.255.255.0
push "route 192.168.1.0 255.255.255.0 10.0.0.1"
gwB # cat /etc/openvpn/client.conf
# gwB
remote 172.20.0.1 1194
proto udp
dev tun
topology subnet
# snip rest of config
gwC # cat /etc/openvpn/client.conf
# gwC
remote 172.20.0.1 1194
proto udp
dev tun
topology subnet
# snip rest of config
You might think that having all the necessary routes in place, as per the above configs, would be enough to allow the desired communication, right? Well, let's bring up the VPN and try pinging from a computer in network A to one in network C:
box-in-a # ping 192.168.3.1
PING 192.168.3.1 (192.168.3.1) 56(84) bytes of data.
^C
--- 192.168.3.1 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7012ms
Let's try from network C to network A:
box-in-c # ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
^C
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 4017ms
How come? Why isn't it working? Forwarding is enabled on all the gateways, and the routes are in place:
gwA # ip route show
192.168.3.0/24 via 10.0.0.3 dev tun0
192.168.2.0/24 via 10.0.0.2 dev tun0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.254
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.1
[snip]

gwC # ip route show
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.3
192.168.3.0/24 dev eth0 proto kernel scope link src 192.168.3.254
192.168.1.0/24 via 10.0.0.1 dev tun0
[snip]
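Incidentally, if you want to double-check the forwarding part on a Linux gateway, a quick way (assuming a modern Linux with /proc mounted) is:

```shell
# 1 means IPv4 forwarding is enabled, 0 means it is disabled
cat /proc/sys/net/ipv4/ip_forward
# equivalently: sysctl net.ipv4.ip_forward
```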
In the C to A direction, packets appear to be correctly forwarded over the VPN at gwC. So let's have a look at the logs on gwA:
Sun Nov 15 13:25:35 2009 clientc/172.31.0.1:54788 MULTI: bad source address from client [192.168.3.1], packet dropped
Sun Nov 15 13:25:36 2009 clientc/172.31.0.1:54788 MULTI: bad source address from client [192.168.3.1], packet dropped
So there is something wrong after all (172.31.0.1 is gwC's external public IP).
At this point, some background on how OpenVPN works internally is in order.
When OpenVPN receives a packet or frame to forward on the tun/tap interface, it encrypts it and encapsulates it into one or more UDP datagrams, which are then sent to the (usually public) IP address of the remote VPN node. There, the peer OpenVPN instance receives the datagrams on its public IP, decapsulates and decrypts them, and writes the resulting packet to its local tun/tap interface, where it is finally seen by the OS. The process of course also works in the opposite direction.
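You can observe both views of the same traffic on a live gateway with something like the following (the interface names are assumptions; 1194 is the port used in our configs):

```
# cleartext packets, as the OS hands them to (or receives them from) OpenVPN
tcpdump -ni tun0 icmp

# the same traffic, encrypted and encapsulated in UDP, on the wire
tcpdump -ni eth0 udp port 1194
```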
If the IP addresses involved belong only to the VPN itself, OpenVPN has no trouble associating a given VPN IP with the public IP address of the remote peer (at least as long as the addresses were pushed by the server to the clients rather than statically assigned).
However, when non-VPN packets are involved, OpenVPN needs more information. In our A-to-C example, when gwA receives the packet with src=192.168.1.1 and dst=192.168.3.1, the routing table sends it to the tun0 interface, and thus to OpenVPN. Now, how does OpenVPN know which remote peer that destination IP sits behind? That is exactly the piece of information it needs in order to pick the destination address for the encapsulating UDP packets.
Similarly, in the C-to-A direction, when gwA's OpenVPN sees a packet with src=192.168.3.1 and dst=192.168.1.1 arriving from the tunnel, it needs to know which peer that source address belongs to, otherwise it cannot send replies back later. So it's just a different aspect of the same problem.
Since there can be (and in our case there is) more than one peer, OpenVPN needs to know which network sits behind each of them. You might think (as I naively did) that it could somehow infer this information from the kernel routing table; for example, since gwA has this route:
192.168.3.0/24 via 10.0.0.3 dev tun0
it could assume that 192.168.3.0/24 is behind 10.0.0.3, and thus use gwC's public IP to send encapsulated traffic destined for 192.168.3.0/24. Well, it doesn't work that way: you have to explicitly tell OpenVPN which network sits behind each client. This is where the iroute directive comes into play.
What iroute does, essentially, is tell OpenVPN to create an "internal" OpenVPN route to that network via a specific peer. This is naturally a per-client configuration fragment (each client can have different networks behind it), so the right place to put it on the server is in the client config directory. Let's update our config on gwA:
gwA # cat /etc/openvpn/ccd/gwB
ifconfig-push 10.0.0.2 255.255.255.0
push "route 192.168.1.0 255.255.255.0 10.0.0.1"
iroute 192.168.2.0 255.255.255.0

gwA # cat /etc/openvpn/ccd/gwC
ifconfig-push 10.0.0.3 255.255.255.0
push "route 192.168.1.0 255.255.255.0 10.0.0.1"
iroute 192.168.3.0 255.255.255.0
There's no explicit mention of a gateway, but for example having the directive
iroute 192.168.3.0 255.255.255.0
in gwC's client config file already implies that 192.168.3.0/24 is reachable through gwC (thus via 10.0.0.3 and its associated public IP). The same goes for gwB. With this final piece of information, OpenVPN is able to route traffic to those remote networks. Let's have a look at gwA's log when the clients connect:
Sun Nov 15 16:30:28 2009 gwC/172.31.0.1:38107 MULTI: Learn: 10.0.0.3 -> gwC/172.31.0.1:38107
Sun Nov 15 16:30:28 2009 gwC/172.31.0.1:38107 MULTI: primary virtual IP for gwC/172.31.0.1:38107: 10.0.0.3
Sun Nov 15 16:30:28 2009 gwC/172.31.0.1:38107 MULTI: internal route 192.168.3.0/24 -> gwC/172.31.0.1:38107
Sun Nov 15 16:30:28 2009 gwC/172.31.0.1:38107 MULTI: Learn: 192.168.3.0/24 -> gwC/172.31.0.1:38107
....
Sun Nov 15 16:30:41 2009 gwB/172.17.0.1:58645 MULTI: Learn: 10.0.0.2 -> gwB/172.17.0.1:58645
Sun Nov 15 16:30:41 2009 gwB/172.17.0.1:58645 MULTI: primary virtual IP for gwB/172.17.0.1:58645: 10.0.0.2
Sun Nov 15 16:30:41 2009 gwB/172.17.0.1:58645 MULTI: internal route 192.168.2.0/24 -> gwB/172.17.0.1:58645
Sun Nov 15 16:30:41 2009 gwB/172.17.0.1:58645 MULTI: Learn: 192.168.2.0/24 -> gwB/172.17.0.1:58645
Let's verify that communication is indeed working:
box-in-c # ping -c 4 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=62 time=21.0 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=62 time=4.83 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=62 time=14.9 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=62 time=2.29 ms
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3044ms
rtt min/avg/max/mdev = 2.298/10.790/21.081/7.597 ms
After discussing iroute, it's worth mentioning two important gotchas.
You can avoid using iroute altogether by SNATing traffic at the client. In our example, gwC would SNAT the traffic coming from 192.168.3.0/24 that needs to be forwarded over the VPN, using something like:
gwC # iptables -t nat -A POSTROUTING -s 192.168.3.0/24 -o tun0 -j MASQUERADE
A similar command would be used on gwB.
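If you prefer a fixed source address over MASQUERADE (which re-resolves the outgoing interface's address for every connection), an equivalent rule using the client's static VPN IP would be, for example:

```
gwC # iptables -t nat -A POSTROUTING -s 192.168.3.0/24 -o tun0 -j SNAT --to-source 10.0.0.3
```

Either way, the effect is the same: traffic from network C reaches the server with a source address (10.0.0.3) that OpenVPN already knows, so no iroute is needed.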
It's up to you to decide whether to do this or to use iroute. Generally speaking, NAT adds some configuration overhead at each client, whereas iroute can be controlled entirely at the server (and some clients might not be accessible for configuration, or might not even support NAT).
Don't assume that you don't need iroute just because there is only one client: you still need it. However, if you are sure that there will only ever be one client and the setup is not going to change, you can work around iroute by explicitly using local and remote on both peers, with a mostly static configuration. Something like this:
# server
tls-server
local x.x.x.x
remote y.y.y.y
port 1194
ifconfig 10.0.0.1 255.255.255.0
push "ifconfig 10.0.0.2 255.255.255.0"
....
# client
client
local y.y.y.y
remote x.x.x.x
port 1194
...
This way there is no ambiguity: OpenVPN is forced to use the address given in remote for all traffic, so in this setup you can just set external (i.e., routing-table) routes and traffic will be forwarded without the need for iroute.
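For instance, reusing networks A and C from the earlier example purely as an illustration, plain route directives (which only add kernel routing-table entries) on each side would be enough; note there is no iroute anywhere:

```
# server side: reach the network behind the client via its VPN IP
route 192.168.3.0 255.255.255.0 10.0.0.2

# client side: reach the network behind the server via its VPN IP
route 192.168.1.0 255.255.255.0 10.0.0.1
```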