OpenSSH-based VPNs

This is a poorly documented yet really useful feature of OpenSSH. It allows you to connect two tun/tap interfaces together, creating a layer-2 or layer-3 network between remote machines. The result is an OpenVPN-like VPN (but much simpler and, admittedly, less scalable).

Preparation

The ssh server must be configured to support tunnels. In practice, this means adding the directive

PermitTunnel yes

to the /etc/ssh/sshd_config file and restarting the ssh server daemon. You can also use the keyword ethernet or point-to-point instead of yes if you want to permit only a specific type of tunnel (see below for the details).
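For example, on a typical Linux server the whole preparation might look like the following (the exact restart command depends on the distribution and init system, so treat these as examples):

```shell
# Run as root on the ssh server.
echo "PermitTunnel yes" >> /etc/ssh/sshd_config

# Check the configuration for syntax errors before restarting:
sshd -t

# Restart the daemon; use whichever applies to your system, e.g.:
systemctl restart sshd      # systemd (the service may be named "ssh" on Debian/Ubuntu)
/etc/init.d/ssh restart     # SysV-style init scripts
```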

Usage

As I said before, the tunnel can be layer-2 (ethernet) or layer-3 (point to point). To specify what kind of tunnel you want on the client side, you have two choices:

  • Use the -o Tunnel=<tunnel_type> option when running ssh to connect to the server; or
  • Use the Tunnel <tunnel_type> directive in the /etc/ssh/ssh_config client configuration file (or, equivalently, in the per-user ~/.ssh/config file).

In both cases, the <tunnel_type> can be ethernet for an L2 link, or point-to-point for an L3 (ie, IP) link.
OpenSSH assumes an interface name of "tap<N>" if you create an ethernet tunnel, and "tun<N>" if you create an IP tunnel, where <N> is the device number given with the -w option. AFAICT (but corrections are welcome) there is currently no way of specifying a different interface name.
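For example, a per-user client configuration equivalent to passing the option on the command line might look like this (the host alias "remote" and the device numbers are just placeholders):

```
# ~/.ssh/config
Host remote
    Tunnel ethernet
    # Pin the tunnel devices too (same as -w 4:6 on the command line):
    TunnelDevice 4:6
```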
Please note that, for the following examples to work, you must disable any iptables rule that might prevent tun/tap interfaces from sending or receiving traffic.

Creating an ethernet tunnel

Let's say you want to create an L2 tunnel between host local and host remote (the ssh server is running on remote), and you want to connect interface tap4 on local to interface tap6 on remote.
First of all, the necessary tap interfaces must already be present on both hosts, so you have to create them beforehand, for example with the following commands:

local # tunctl -t tap4
Set 'tap4' persistent and owned by uid 0

remote # tunctl -t tap6
Set 'tap6' persistent and owned by uid 0

tunctl is part of the uml-utilities package (the name might be slightly different depending on the distribution, but in any case it's the set of utilities that come with User Mode Linux). Another command that can create tun/tap interfaces is openvpn, for example:

local # openvpn --mktun --dev tap4
Thu Nov 12 21:59:13 2009 TUN/TAP device tap4 opened
Thu Nov 12 21:59:13 2009 Persist state set to: ON

remote # openvpn --mktun --dev tap6
Thu Nov 12 22:11:51 2009 TUN/TAP device tap6 opened
Thu Nov 12 22:11:51 2009 Persist state set to: ON

Of course, if you want a regular user to be able to use the interfaces, use the -u <owner> option with tunctl, or the --user <owner> option with openvpn. To remove the interface, use

# tunctl -d <ifname>

or

# openvpn --rmtun <ifname>

respectively.
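On systems with a reasonably recent iproute2, the ip command itself can create and remove tun/tap interfaces, so neither tunctl nor openvpn is strictly needed (the user name below is just an example):

```shell
ip tuntap add dev tap4 mode tap              # create a tap (L2) interface
ip tuntap add dev tun4 mode tun              # create a tun (L3) interface
ip tuntap add dev tap4 mode tap user alice   # create it owned by a regular user
ip tuntap del dev tap4 mode tap              # remove the interface
```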

Now, to test the tunnel, let's assign IP addresses to the interfaces:

local # ifconfig tap4 10.0.0.1 netmask 255.255.255.0 up

remote # ifconfig tap6 10.0.0.2 netmask 255.255.255.0 up

A better way is to use the ip command, part of the iproute2 package:

local # ip link set tap4 up
local # ip addr add 10.0.0.1/24 dev tap4
local # ip route add 10.0.0.0/24 dev tap4   # this might not be necessary

remote # ip link set tap6 up
remote # ip addr add 10.0.0.2/24 dev tap6
remote # ip route add 10.0.0.0/24 dev tap6  # this might not be necessary

Now we can connect the two interfaces using OpenSSH:

local # ssh -o Tunnel=ethernet -w 4:6 remote
Password: *********
remote #
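If you don't need an interactive shell on the server, the same tunnel can be set up in the background with standard ssh options:

```shell
# -f backgrounds ssh after authentication, -N runs no remote command;
# the tunnel stays up until the ssh process is killed.
ssh -f -N -o Tunnel=ethernet -w 4:6 remote
```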

Test the tunnel:

local # ping -c 4 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=79.5 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=37.3 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=39.5 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=38.5 ms

--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 37.311/48.759/79.407/17.792 ms
local #

Although not apparent from the above, since we are using an L2 tunnel, full ethernet frames (ie, up to 1514 bytes long if using the default MTU) are passing through the encrypted tunnel.

Creating a point to point IP tunnel

The scenario is the same as above, but this time we want to set up an IP tunnel. This means that IP packets (up to 1500 bytes long if using the default MTU), and not ethernet frames, will be flowing through the tunnel.
The steps needed to create and bring up the interfaces are the same as the previous example. Just change the names of the interfaces to tun4 and tun6 in all commands.
To connect the two interfaces with OpenSSH, use the following command:

local # ssh -o Tunnel=point-to-point -w 4:6 remote
Password: *********
remote #

Test the tunnel:

local # ping -c 4 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=36.4 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=39.0 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=37.4 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=44.8 ms

--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 36.405/39.439/44.847/3.261 ms
local #

The output looks exactly the same as the previous example, but, as I said above, the difference is in the type and size of packets passing through the tunnel. Now the tunnel is carrying IP packets rather than ethernet frames.

Some example applications

The applications that follow are certainly not the only useful ones for OpenSSH tun/tap tunnels. They are only meant to show some examples of what can be done, and can (and should) be extended as you like.

VPN using a point-to-point tunnel

Suppose your LAN at work uses IP subnet 172.16.10.0/24, so it's not directly reachable from the Internet. You are at home, and the only thing you can do is log in via ssh to a box on the LAN which has a public IP address (this will probably be a router or a firewall, but not necessarily). What you can do is set up an L3 tunnel between your box and the remote host, and access the internal LAN without having to resort to nasty tricks.
You need to choose an IP subnet that will be used for the point-to-point link between your box and the remote host (better: between the tun interface on your box and the tun interface on the remote host); for this example, we'll use 192.168.0.0/30, which leaves only two usable IP addresses. The tun1 interface on your box will use 192.168.0.1, and the tun1 interface on the remote box will use 192.168.0.2.
Create the interfaces on your box and on the remote host as described in the Creating a point to point IP tunnel example above, assigning the correct interface names and IP addresses (local tun1: 192.168.0.1, remote tun1: 192.168.0.2). Here, of course, local is your home box and remote is the remote host. Now connect to the remote host:

homebox # ssh -o Tunnel=point-to-point -w 1:1 remotehost
Password: *********
remotehost #

So far you just have a working and pingable link between your box's tun1 interface and the remote tun1 interface. To access the internal remote LAN, you have to add some static routes. At a minimum, you need to tell your home box that network 172.16.10.0/24 is reachable through interface tun1, and you must enable routing on the remote box (if it's not already enabled):

homebox # ip route add 172.16.10.0/24 via 192.168.0.2 dev tun1

remotehost # echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
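Note that the echo above enables forwarding only until the next reboot. The same thing can be done with the sysctl tool, and made persistent via its configuration file (the exact file location may vary by distribution):

```shell
# Equivalent to the echo into /proc above:
sysctl -w net.ipv4.conf.all.forwarding=1

# To make it survive reboots:
echo "net.ipv4.conf.all.forwarding = 1" >> /etc/sysctl.conf
```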

Now you should be able to ping a host internal to the LAN from your home box:

homebox # ping -c 4 172.16.10.44
PING 172.16.10.44 (172.16.10.44) 56(84) bytes of data.
64 bytes from 172.16.10.44: icmp_seq=1 ttl=64 time=37.8 ms
64 bytes from 172.16.10.44: icmp_seq=2 ttl=64 time=36.4 ms
64 bytes from 172.16.10.44: icmp_seq=3 ttl=64 time=40.5 ms
64 bytes from 172.16.10.44: icmp_seq=4 ttl=64 time=40.2 ms

--- 172.16.10.44 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 36.449/38.754/40.530/1.696 ms

This means that you can now access and use your work LAN's internal resources, such as HTTP servers, mail servers, network disks, etc., all using their private IP addresses, as they appear on the LAN.
Note that this example assumes that the hosts in the remote LAN use the remote host as their default gateway (which will generally be true if it's a router or a firewall). If this is not true, then they will receive the packets you send them from home, but they won't be able to reply, since they have no route for the VPN link (192.168.0.0/30). A possible solution is to add a route to 192.168.0.0/30 via the remote box to each internal host you need to access, but it's a bit awkward. A better solution is to use NAT towards the internal LAN on the remote host, so packets from your home box will appear on the remote LAN as originating from the remote box (which other hosts in the LAN know how to reach). For example, do something like the following:

remotehost # iptables -t nat -A POSTROUTING -s 192.168.0.0/30 -o eth0 -j MASQUERADE

(if the remote host's LAN interface is not eth0, substitute with the appropriate interface name). This should get things going even when the remote box is not the default gateway for hosts in the remote LAN.
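To confirm the rule is in place and actually matching traffic, you can list the NAT table with packet counters:

```shell
# The MASQUERADE rule's packet/byte counters should increase
# as tunnel traffic leaves through eth0.
iptables -t nat -v -n -L POSTROUTING
```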

IPv6 at home

Again, suppose your LAN at work has IPv6 connectivity (either native or tunneled). Your home ISP does not (yet) offer IPv6 connectivity, but you want or need to use IPv6 from home. What we are going to do is connect to the work LAN at the ethernet level, so our home box will be able to receive the ethernet multicast messages needed to perform IPv6 autoconfiguration. To do this, we'll use a tap interface and an OpenSSH ethernet tunnel.
The remote machine we connect to need not be the default gateway on the remote LAN. In fact, we can choose any machine that is reachable via ssh (over IPv4 of course), is IPv6-enabled, and has ethernet bridging support in the kernel. On the home box, we just need to create a tap interface:

homebox # tunctl -t tap1
Set 'tap1' persistent and owned by uid 0
homebox # ip link set tap1 up

On the remote host, we need to create a tap interface, and bridge it with the host's interface on the LAN (we assume eth0 in this example). WARNING: don't run the following commands through a remote session (eg, ssh) connected through interface eth0, since we need to remove IP addresses from the eth0 interface for a brief period of time, and you'll be disconnected. Creating a script that does all the operations at once might work, but you have been warned. First, create the tap interface:

remotehost # tunctl -t tap1
Set 'tap1' persistent and owned by uid 0
remotehost # ip link set tap1 up

Now, create the bridge (br0) and add interfaces tap1 and eth0 to it. Before doing that, though, we must remove all the IP addresses from eth0, and put the interfaces in promiscuous mode. We assume the IPv4 address of the remote host's eth0 interface is 10.0.0.1/24 (with a default gateway of 10.0.0.254), and that its IPv6 address is 2001:db8:1:1::1/64, but you must use the actual addresses.

remotehost # ip addr del 10.0.0.1/24 dev eth0
remotehost # ip -6 addr del 2001:db8:1:1::1/64 dev eth0
remotehost # ip -6 route del 2001:db8:1:1::/64 dev eth0
remotehost # ip -6 route del default dev eth0
remotehost # ip link set eth0 promisc on
remotehost # ip link set tap1 promisc on
remotehost # brctl addbr br0
remotehost # brctl addif br0 eth0 tap1
remotehost # ip addr add 10.0.0.1/24 dev br0
remotehost # ip route add default via 10.0.0.254 dev br0

We don't re-add IPv6 addresses to interface br0, since it will autoconfigure them again as soon as a router advertisement packet is received.
What we have now is an ethernet bridge (or switch, if you prefer) that joins together the ethernet segments connected to the interfaces eth0 and tap1. This is what enables us to send ethernet frames (including router advertisements and neighbor discovery) to the home box. For more information about the Linux kernel bridge, read the official documentation. Note that we don't even need to assign IPv4 addresses to the tap interfaces.
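The earlier warning suggested doing all the bridge operations at once; a minimal sketch of such a script, using the same example names and addresses, might look like this (run it locally on remotehost, e.g. from the console, not over an ssh session that runs through eth0):

```shell
#!/bin/sh
# One-shot bridge setup on remotehost. The addresses are the example
# ones used above; substitute your own, and also delete any IPv6
# addresses/routes from eth0 as shown earlier.
tunctl -t tap1
ip link set tap1 up
ip addr del 10.0.0.1/24 dev eth0
ip link set eth0 promisc on
ip link set tap1 promisc on
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 tap1
ip link set br0 up
ip addr add 10.0.0.1/24 dev br0
ip route add default via 10.0.0.254 dev br0
```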
Now, connect the tap interface on the home box to the tap interface on the remote box:

homebox # ssh -o Tunnel=ethernet -w 1:1 remotehost
Password: *********
remotehost #

If we execute ifconfig tap1 a few times on the home box, we can (hopefully) see the RX packet counter incrementing (meaning that ethernet frames are being received from the remote LAN):

homebox # ifconfig tap1
tap1      Link encap:Ethernet  HWaddr 00:FF:AF:04:C7:92
inet6 addr: fe80::2ff:afff:fe04:c792/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:24 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:6 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:1652 (1.6 Kb)  TX bytes:0 (0.0 b)

homebox # ifconfig tap1
tap1      Link encap:Ethernet  HWaddr 00:FF:AF:04:C7:92
inet6 addr: fe80::2ff:afff:fe04:c792/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:37 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:6 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:3970 (3.8 Kb)  TX bytes:0 (0.0 b)

After a while (depending on the configuration of the IPv6 router on the remote LAN) the tap interface on your home box will receive a router advertisement frame (through the OpenSSH ethernet tunnel), and will autoconfigure its IPv6 address and default gateway:

homebox # ifconfig tap1
tap1      Link encap:Ethernet  HWaddr 00:FF:AF:04:C7:92
inet6 addr: 2001:db8:1:1:2ff:afff:fe04:c792/64 Scope:Global
inet6 addr: fe80::2ff:afff:fe04:c792/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:64 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:6 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:5862 (5.7 Kb)  TX bytes:78 (78.0 b)

Note the single packet sent, which represents a neighbor solicitation frame (used for duplicate address detection).
That's it. Now try pinging an IPv6 site from home:

homebox # ping6 -c 4 -n www.ipv6.org
PING www.ipv6.org(2001:6b0:1:ea:202:a5ff:fecd:13a6) 56 data bytes
64 bytes from 2001:6b0:1:ea:202:a5ff:fecd:13a6: icmp_seq=1 ttl=51 time=150 ms
64 bytes from 2001:6b0:1:ea:202:a5ff:fecd:13a6: icmp_seq=2 ttl=51 time=115 ms
64 bytes from 2001:6b0:1:ea:202:a5ff:fecd:13a6: icmp_seq=3 ttl=51 time=116 ms
64 bytes from 2001:6b0:1:ea:202:a5ff:fecd:13a6: icmp_seq=4 ttl=51 time=111 ms

--- www.ipv6.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 111.829/123.484/150.095/15.474 ms

Figuring out the path taken by IPv6 packets and frames sent from the home box to the IPv6 Internet is left as an exercise for the reader :-)

Conclusions

Of course, you can do many other things with OpenSSH VPN tunnels beyond those described here. This document is meant to be just an introduction to the feature, which I feel is not covered terribly well by the official documentation and man pages.
One thing to keep in mind is that OpenSSH VPNs, like those described in this document, run over TCP (unlike OpenVPN, which by default uses UDP). This might lead to some problems, as described here: Why TCP Over TCP Is A Bad Idea. However, this is usually an issue only if your connection is slow, and YMMV.

6 Comments

  1. pabut says:

    Also ... make certain you disable SELINUX .... that led to some frustrating moments till I tried that. I'm not an SELINUX fan but if anyone knows how to enable tunneling while SELINUX is running I'd love to hear about it.

  2. Theunis says:

    ok I have found my problem :)

    silly me, as you said - The tun device must be owned by the user (resp. local and remote) or the group to which the user belongs

    my permissions on both machines was:

    $ ls -la /dev/net/tun
    crw-rw---- 1 root vpn 10, 200 Jun 5 09:25 /dev/net/tun

    when I created the tun0 device using tunctl I did this:

    tunctl -n -u root -g vpn -t tun0

    This seemed to have caused the problem. Because I for some reason specified that I had to set the user to be root too, thinking it had to be exactly the same as the permissions on /dev/net/tun. Instead of only setting group id to vpn. So it appears that I masked it to that specific user + group. DOH!

    when I ran tunctl -d tun0 as root to remove the tun0 device and ran it with:

    tunctl -n -g vpn -t tun0 on both the client and server, and it worked as an ordinary user. how strange... by leaving out -u username

    I also got it working by doing the following on both machines:

    as root:

    tunctl -d tun0
    chown username:vpn /dev/net/tun
    tunctl -n -u username -g vpn -t tun0

    this then makes it specific for that only

    then logging on as ordinary user to remote machine as ordinary user works too.

    FYI for those that have been knocking their head against the wall like I did :)

  3. Theunis says:

    remote machine: 2.6.32-openvz-belyayev
    client machine: 2.6.31-gentoo-r6

    iproute2 tool ip
    remote: iproute2-ss080725
    client: iproute2-ss091226

    remote machine also contains PermitTunnel yes in /etc/ssh/sshd_config, sshd was restarted too.

    tun device driver part of the kernel (remote)
    tun device was inserted using modprobe tun (client)

    ownership was adapted on both client and server
    chown root:vpn /dev/net/tun

    crw-rw---- 1 root vpn 10, 200 Nov 8 09:30 /dev/net/tun

    users on both client and server belongs to the group: vpn

    my iproute2 package does not seem to support creating tuntap devices on both client and server

    I used tunctl (tunctl-1.5) command with tunctl -n -u root -g vpn -t tun0 on both machines and added ips like mentioned in your post above. This does work when I run as root on the client and ssh root@remote-server -w 0:0, but not as ordinary user that belongs to the vpn group. Even if the devices are up and configured.

    What version of iproute2 do you have and what is your kernel version and what openssh server version do you have?

    I also added this to my udev rules to make the permissions persistent:

    # cat /etc/udev/rules.d/50-udev.rules
    KERNEL=="tun", NAME="net/%k", GROUP="vpn", MODE="0660"

    So far I only noticed your ip command could create tuntap devices + your permissions seems to be 0666 where as mine is 0660

    output when using (root) client : ssh root@remote -w 0:0

    $ ssh remote-server -C -w 0:0
    Password:
    Last login: Mon Nov 8 09:57:31 SAST 2010 from client on pts/1
    # ping 10.0.0.1 -c 1
    PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
    64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=183 ms

    --- 10.0.0.1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 183.090/183.090/183.090/0.000 ms
    # exit

    Now for ordinary user (client) to ordinary user on (remote)
    devices still exist and configured on client and server.

    $ ssh remote-server -C -w 0:0 (using a key)
    Tunnel device open failed.
    Could not request tunnel forwarding.

    $ ping 10.0.0.1 -c 1
    PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

    --- 10.0.0.1 ping statistics ---
    1 packets transmitted, 0 received, 100% packet loss, time 0ms

    $ ping 10.0.0.2 -c 1
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms

    --- 10.0.0.2 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms

    $ exit

    So it appears I can only ping the ip on the remote machine when I'm logged onto the remote machine as the ordinary user.

    here is what happens when I run ssh as root on the client machine and connect to remote as ordinary user

    # ssh username@remote-machine -C -o Tunnel=point-to-point -w 0:0
    Password:
    channel 0: open failed: administratively prohibited: open failed
    Last login: Mon Nov 8 10:14:43 SAST 2010 from client on pts/1

    output of when I run as ordinary user on local and connect to root at remote machine:

    $ ssh root@remote-machine -C -w 0:0
    Password:
    Tunnel device open failed.
    Could not request tunnel forwarding.
    Last login: Mon Nov 8 10:09:53 SAST 2010 from client on pts/1

    Hope this does give some insight. into my problem. My client however does not have PermitTunnel yes in /etc/sshd/sshd_config.

    Again just stressing, it all works when running as root@client to root@remote

    both machines are restarted since added to vpn group, by typing in groups on both machines it confirms that the ordinary users are part of the vpn group.

  4. Theunis says:

    I tried doing it as an ordinary user, but I keep on getting this:

    Tunnel device open failed.
    Could not request tunnel forwarding.

    created group called vpn, on client and remote
    assigned on both hosts the permissions:
    crw-rw---- 1 root vpn 10, 200 Nov 3 14:55 /dev/net/tun

    used the command : tunctl -n -u root -g vpn -t tun0
    successfully creates the device, but an ordinary user is still unable to bind to this device.

    it works only if the client and remote logs on as root.
    client: OpenSSH_5.5p1-lpk, OpenSSL 1.0.0a 1 Jun 2010
    remote: OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009

    • waldner says:

      There are quite a few conditions that have to be true for normal users to be able to use the tunnel:

      - The remote server must have PermitTunnel yes in /etc/ssh/sshd_config (this is true regardless of the user)
      - The tun/tap devices must already exist
      - The local user must have write permission on /dev/net/tun on the local machine, and the remote user must have write permission on /dev/net/tun on the remote machine
      - The tun device must be owned by the user (resp. local and remote) or the group to which the user belongs

      And of course, even if the above conditions are met, a normal user still can't add IP addresses to the tun/tap device. So I would add a fifth condition if IP addresses are wanted:

      - IP addresses must be preconfigured on the local and remote tun/tap interfaces.

      The following works for me (note that my local normal user is member of the group "users" on both machines):

      root@local # chown :users /dev/net/tun && chmod o-rw /dev/net/tun
      root@local # ls -l /dev/net/tun
      crw-rw---- 1 root users 10, 200 Nov 5 22:27 /dev/net/tun
      root@local # ip tuntap add dev tun7 mode tun group users && ip link set tun7 up

      root@remote # chown :users /dev/net/tun && chmod o-rw /dev/net/tun
      root@remote # ls -l /dev/net/tun
      crw-rw---- 1 root users 10, 200 Nov 5 22:29 /dev/net/tun
      root@remote # ip tuntap add dev tun7 mode tun group users && ip link set tun7 up

      waldner@local $ ssh -w 7:7 waldner@remote
      Last login: Sat Nov 6 00:52:21 GMT 2010 from 10.8.0.210 on pts/8

      As said, to use the tunnel at the IP level, you also need to assign IP addresses to the two interfaces as part of the preparation (not shown above).