Geolocation-aware DNS with Bind

So it might happen that you offer a service on a number of identically configured servers that are geographically distributed (for example, VPN servers, streaming media servers, mirrors for software downloads), and you want it to be reachable through a single URL, but at the same time you want to send each user to the server that is "closest" to them ("closest" according to geolocation information).

Probably the cleanest, most scalable and most resilient way to implement this is anycast routing: associate the URL with a single IP address and announce that same IP address into the Internet from all the locations, thus letting users reach the location that is closest to them in routing terms. For example, some root DNS servers use this technique. It also transparently redirects users to another server when one location is down and the advertisement is withdrawn from there. However, this requires running dynamic routing (BGP) at all the locations, or your ISP's cooperation to announce the addresses on your behalf. Furthermore, you'll probably need a large enough block of IP addresses to announce, which might not be the case if you don't own many public IPs. Finally, the BGP method is not problem-free anyway: routing convergence in the Internet at large can take time, and some devices might filter advertisements of the same addresses coming from different locations (hopefully those are very few or none).

So another, less optimal but probably easier, way to get a similar result, assuming you control the authoritative DNS server(s) for the name in the URL, is to have the DNS server look at the source IP address of the queries it gets, and serve different replies based on that. For this example, we're going to assume that there are three servers, one in Europe (IP 192.168.0.1), one in Asia (IP 10.0.0.1), and one in the US (IP 172.16.0.1), and they should all be accessible using the name mirror.example.com. So ideally our goal here is to have the DNS server resolve the name to 192.168.0.1 if the query comes from a European IP, 10.0.0.1 if it comes from an Asian IP, and 172.16.0.1 if it comes from America.

Note that our DNS server will almost never be queried directly by the end clients that need to access the service; rather, it will be queried by other DNS servers resolving the name on behalf of their clients. However, it's reasonable to assume that end clients will generally use DNS servers that are geographically close to them (for example, their company's or their ISP's DNS). Sure, there will be exceptions, but the worst that can happen is that, say, an Asian client is sent to the European server, so it's not really something to worry about; geoIP information cannot be 100% accurate anyway, so those things would happen in any case.

Bind views

Bind has just the perfect feature to implement the strategy described above, and it's called views. Views allow you to define different "virtual" configurations within the same server, and to specify who should see which configuration. A typical use of views is to provide so-called split (or split-horizon) DNS service. For example, in an enterprise where you want internal clients to be able to use and resolve internal names that should not be visible outside, you define a view for the internal clients that serves a zone containing the internal names, and another view for external clients that serves a zone with only the official names that should be visible from the Internet. The term split DNS seems to be used for any scenario where the server can give different answers to the same queries depending on some characteristics of the query, so our geographic DNS project can indeed be classified as an example of split DNS.

As mentioned, the key point is that the server uses the source IP address of the query to select which of the defined views it consults to answer. This means there must be a way to associate a view with one or more source IP addresses, and that is indeed the case: when a client sends a query, the view that matches its IP address is used. (There are other selectors available, but they are not relevant here.)

In practice, this means doing something like this in named.conf:

view "internal" {
  match-clients { 10.0.0.0/8; 192.168.33.0/24; };
  // some configuration fragment we want the internal clients to see
  // ...
};

view "internet" {
  match-clients { any; };
  // some (possibly different) configuration fragment we want Internet clients to see
  // ...
};

The good news is that lists of IP addresses can be given names (ACLs, in Bind terms), and the names rather than the literal addresses can then be used in the match-clients directive. Also note that it is match-clients that selects the view; the allow-query directive only restricts who may query within a view, and is not a substitute for it.
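For instance, the earlier fragment could be rewritten with a named ACL (the name internal-nets is made up for this example):

```
acl "internal-nets" { 10.0.0.0/8; 192.168.33.0/24; };

view "internal" {
  match-clients { internal-nets; };
  // ...
};
```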

So back to our scenario, the idea is to define three views, one per server, each of which will serve a different IP address for mirror.example.com. To do this, we'll create three zone files, each resolving the name to the IP address of the server in the particular region (Europe, Asia, America).

Sources of geolocation information

So what needs to be done is to generate suitable ACLs for each view. Roughly speaking, we should be able to associate all the European IPs with the European view, and so on. That is quite a lot of data (we will limit the example to IPv4, but it shouldn't be too hard to extend it to IPv6). What we need is some big database that associates an IPv4 address or (much better) address block with a specific country. With that and some text processing, it should be possible to automatically create a number of (more or less huge) ACLs, one per country. Those ACLs could then be aggregated by region (Europe, Asia, America) and associated with the relevant view. Queries coming from countries that are not exactly located in one of the three regions (e.g. African countries, Greenland and the like) can either be sent to a default catch-all view, or those countries can be listed in one of the three main views, perhaps based on distance.

That said, there are a few sources of geolocation information; see for example MaxMind or WIPmania. They provide geolocation files in various formats; for our purposes, we will use WIPmania's textual format, downloadable from the "CIDR" link on the above page, which is probably the easiest to parse. Essentially, the file contains many lines in the format

IP address/mask country;

for example

139.92.66.0/24 EU;
139.92.67.0/24 SE;
139.92.68.0/22 FR;
139.92.72.0/21 FR;
139.92.80.0/22 FR;
...
158.193.0.0/16 SK;
158.194.0.0/16 CZ;
158.195.0.0/16 SK;
158.196.0.0/16 CZ;
158.197.0.0/16 SK;
158.198.0.0/16 JP;
158.201.0.0/16 JP;
...

ACL generation

The information in the above format can very easily be turned into a list of Bind ACLs with this Perl script using hashes (note: I'm not a Perl programmer):

#!/usr/bin/perl

# do_acl.pl
# Outputs Bind ACLs from geoIP information

use warnings;
use strict;

my %ip;   # each hash element is an array reference

open my $h, '<', $ARGV[0] or die "Error opening input file: $!";

while (<$h>) {
  # skip lines that don't look like "address/mask CC;"
  next unless /^(\S+) (\S+);$/;
  push @{$ip{$2}}, $1;
}

close $h or die "Error closing file: $!";

# Traverse the hash and print one ACL per country
for my $country (keys %ip) {
  print "acl \"$country\" {\n";
  print "  $_;\n" for @{$ip{$country}};
  print "};\n\n";
}

The script can be run like this:

$ do_acl.pl worldip.conf > geo_acl.conf

If everything went fine, geo_acl.conf should contain something like this:

acl "GL" {
  88.83.0.0/19;
  194.177.224.0/19;
};
acl "DJ" {
  41.189.225.0/24;
  41.189.232.0/23;
  193.251.143.0/24;
  193.251.167.0/26;
  193.251.167.64/27;
  193.251.167.96/28;
  193.251.224.0/25;
  193.251.224.128/26;
  193.251.224.192/28;
  193.251.224.208/29;
  196.201.192.0/20;
  213.187.131.168/29;
};
acl "JM" {
  63.75.234.0/23;
  65.183.0.0/20;
  66.36.201.0/24;
  ...

The order of the ACLs may differ from what is shown, since they come out of a hash, but that's not really important.

Putting it all together

So now we come to the interesting bits. Essentially, we must decide which countries are closest to which server, and put the ACLs for those countries in the relevant server's view. That might sound like a lot of work, but it only has to be done once. Here's a minimal example of a named.conf:

...
include "geo_acl.conf";

view "europe" {
  match-clients { FR; DE; IT; AT; ...other european ACLs here... };
  zone "example.com" {
    type master;
    file "/path/to/db-europe.example.com";
  };
};

view "america" {
  match-clients { US; CA; MX; ...other american ACLs here... };
  zone "example.com" {
    type master;
    file "/path/to/db-america.example.com";
  };
};

view "asia" {
  match-clients { JP; IN; BD; ...other asian ACLs here... };
  zone "example.com" {
    type master;
    file "/path/to/db-asia.example.com";
  };
};

And, as one might expect, the zone files are something like

# cat db-asia.example.com
...
mirror   IN   A   10.0.0.1
...

# cat db-america.example.com
...
mirror   IN   A   172.16.0.1
...

# cat db-europe.example.com
...
mirror   IN   A   192.168.0.1
...

Even if you decided to list all the countries in some view, it's probably a good safety belt to define one more view after those above as a catch-all default, in case a query comes from an IP that is not in any ACL:

// "unknown" clients are sent to the asian server...
view "default" {
  match-clients { any; };
  zone "example.com" {
    type master;
    file "/path/to/db-asia.example.com";
  };
};

In this example these clients are sent to the Asian server, but any other server can of course be used. You could also omit the default view and simply put the view you want to use as the default last (e.g. "asia" here), using match-clients { any; }; there. I prefer to be explicit and have a "default" view, also because it makes it slightly easier to automate the generation of the configuration with a script, starting from some mapping between countries and areas.
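To sketch what such automation could look like, here is a minimal Python example; the region names, country lists, zone name and file paths are illustrative assumptions, not part of any real deployment. It emits one view block per region plus the explicit catch-all "default" view:

```python
# generate_views.py - sketch: emit named.conf view blocks from a
# region-to-countries mapping (all names/paths are made up for illustration)

REGIONS = {
    "europe":  ["FR", "DE", "IT", "AT"],
    "america": ["US", "CA", "MX"],
    "asia":    ["JP", "IN", "BD"],
}

def make_view(name, clients, zone_region):
    """Return one named.conf view block as a string."""
    return (
        f'view "{name}" {{\n'
        f"  match-clients {{ {clients} }};\n"
        f'  zone "example.com" {{\n'
        f"    type master;\n"
        f'    file "/path/to/db-{zone_region}.example.com";\n'
        f"  }};\n"
        f"}};\n"
    )

def make_config(regions, fallback="asia"):
    """Build all regional views plus an explicit catch-all default view."""
    blocks = [make_view(r, " ".join(c + ";" for c in cs), r)
              for r, cs in regions.items()]
    # unmatched clients are sent to the fallback region's zone file
    blocks.append(make_view("default", "any;", fallback))
    return "\n".join(blocks)

if __name__ == "__main__":
    print(make_config(REGIONS))
```

A real version would reference the country ACLs generated by do_acl.pl rather than hard-coded lists, but the structure stays the same.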

Testing

Probably the perfect site to test this kind of setup is http://just-ping.com. Just enter the name you want to test in the text box (i.e. mirror.example.com for this example) and click "ping!". The website tries to ping (and thus resolve) the name you supply from several locations around the world, so you can check that the same name resolves to different IPs in different locations. If you see that happening, it's working!
You can also use the site to find out which other sites use geolocation DNS techniques; for example, if you try pinging www.yahoo.com or www.google.com, you'll see that those names resolve to different IPs based on the location. (And, by the way, it's nice to see how www.google.com actually resolves:

www.google.com.         16258   IN      CNAME   www.l.google.com.
www.l.google.com.       60      IN      CNAME   www-tmmdi.l.google.com.
www-tmmdi.l.google.com. 59      IN      A       216.239.59.104
www-tmmdi.l.google.com. 59      IN      A       216.239.59.147
www-tmmdi.l.google.com. 59      IN      A       216.239.59.99
www-tmmdi.l.google.com. 59      IN      A       216.239.59.103

note the CNAME chain)

Caveats

Provide explicit names

It's useful to also publish specific DNS names that explicitly select a given server, for example mirror-europe.example.com, mirror-asia.example.com and mirror-america.example.com, and have them resolve to the IP of the specific server regardless of the source of the query. That is handy for testing, and the alternate names can also be given to users as a last resort when the generic name isn't working (for example, because the server in their area, to which they would be sent, is down). So one could do

# cat db-asia.example.com
...
mirror-america   IN   A     172.16.0.1
mirror-asia      IN   A     10.0.0.1
mirror-europe    IN   A     192.168.0.1
mirror           IN   A     10.0.0.1
...

or even

# cat db-asia.example.com
...
mirror-america   IN   A     172.16.0.1
mirror-asia      IN   A     10.0.0.1
mirror-europe    IN   A     192.168.0.1
mirror           IN   CNAME mirror-asia.example.com.
...

and equivalent things for the other zone files.

Zones not in any view

It's important to note that when views are used, there cannot be zones "outside" of any view, or Bind will complain. You'll know this is the problem if you see log messages like these:

Dec 10 20:41:17 kermit named[1126]: loading configuration from '/etc/bind/named.conf'
Dec 10 20:41:18 kermit named[1126]: /etc/bind/named.conf:7: when using 'view' statements, all zones must be in views

The solution is to put all the zones in each view, including the root hint "." zone, the "localhost" zone, the "127.in-addr.arpa" zone and friends.

Slave servers

What we have so far is just a single DNS server, and good practices dictate that at least two (or even three) should be available (see RFC 2182, section 5, "How many secondaries?"). One could either use multiple master servers, or set up slaves. Setting up a slave server when views are involved isn't too difficult, but there are some rules to follow if you want all the views replicated in the slave.

The problem with a "normal" slave is that the master sees it as just another client, so when a zone transfer is requested, the master sends the zone corresponding to the view that matches the slave's IP address, and only that zone. With old versions of Bind, working around the problem required assigning multiple IPs to the slave, and setting things up so that it issued multiple zone transfer requests to the master, using a different source IP for each request. For this to work, the master needed to explicitly associate the slave's various source IPs with the various views.

With reasonably recent versions of Bind, there is a much cleaner mechanism available, based on TSIG (Transaction SIGnature) keys. Although TSIG keys were introduced mainly for dynamic DNS, they work just fine for the purpose of triggering the transfer of zones in different views.

TSIG relies on the communicating parties sharing a secret. It "signs" DNS transactions by calculating an HMAC over the messages and appending it, so the receiver can authenticate them by recalculating the HMAC and seeing if it matches the transmitted one. While that is certainly good in itself, the important thing for our goal is that a TSIG-signed message includes the name of the key used to sign it, and that name can be used as a special ACL and thus associated with a specific view.
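The signing idea itself is easy to illustrate. This Python sketch shows only the generic mechanism (an HMAC-MD5 over a message with a shared secret), not the actual TSIG wire format, which the protocol defines precisely; the secret is one of the example values from the configs below:

```python
import base64
import hmac

# shared secret, base64-encoded as it appears in named.conf (example value)
SECRET = base64.b64decode("YWJjZGVmZw==")  # decodes to b"abcdefg"

def sign(message: bytes) -> bytes:
    """Compute an HMAC-MD5 over the message with the shared secret."""
    return hmac.new(SECRET, message, "md5").digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Recompute the HMAC and compare it with the transmitted one."""
    return hmac.compare_digest(sign(message), mac)
```

Real TSIG additionally covers the key name and a timestamp (which is why clock synchronization between master and slave matters); hmac-md5 is shown here only because the configs below use it.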

Essentially, in our example three keys can be defined, one per view, and the servers are configured to use those keys when communicating with their peer. This way, the master will send different zones depending on the key with which the slave signs the *XFR request. Conversely, the slave will accept NOTIFY messages from the master and assign them to the view corresponding to the TSIG key the master used. The TSIG RFC suggests naming a key after the servers that will use it; here, also considering that all the keys are used between the same pair of servers, the convention is relaxed for simplicity and the name just identifies the view.

Here is an example of how to do that in our scenario, assuming the master and the slave DNS have addresses 10.10.10.10 and 10.20.20.20 respectively:

; On the master
key "key_europe" {
    algorithm hmac-md5;
    secret "YWJjZGVmZw==";
};

key "key_asia" {
    algorithm hmac-md5;
    secret "aGlqa2xtbg==";
};

key "key_america" {
    algorithm hmac-md5;
    secret "b3BxcnN0dQ==";
};

acl all_keys { key key_america; key key_europe; key key_asia; };

view "europe" {
  // this view matches either european IPs, or whoever uses the key key_europe
  match-clients { key key_europe; !all_keys; FR; DE; IT; AT; ...other european ACLs here... };
  // use this key to NOTIFY the right view on the slave
  server 10.20.20.20 { keys key_europe; };
  zone "example.com" {
    type master;
    file "/path/to/db-europe.example.com";
  };
};

view "asia" {
  // this view matches either asian IPs, or whoever uses the key key_asia
  match-clients { key key_asia; !all_keys; JP; IN; BD; ...other asian ACLs here... };
  // use this key to NOTIFY the right view on the slave
  server 10.20.20.20 { keys key_asia; };
  zone "example.com" {
    type master;
    file "/path/to/db-asia.example.com";
  };
};

view "america" {
  // this view matches either american IPs, or whoever uses the key key_america
  match-clients { key key_america; !all_keys; US; CA; MX; ...other american ACLs here... };
  // use this key to NOTIFY the right view on the slave
  server 10.20.20.20 { keys key_america; };
  zone "example.com" {
    type master;
    file "/path/to/db-america.example.com";
  };
};

; On the slave
key "key_europe" {
    algorithm hmac-md5;
    secret "YWJjZGVmZw==";
};

key "key_asia" {
    algorithm hmac-md5;
    secret "aGlqa2xtbg==";
};

key "key_america" {
    algorithm hmac-md5;
    secret "b3BxcnN0dQ==";
};

acl all_keys { key key_america; key key_europe; key key_asia; };

view "europe" {
  match-clients { key key_europe; !all_keys; FR; DE; IT; AT; ...other european ACLs here... };
  // use this key to get the right view from the master when requesting *XFR
  server 10.10.10.10 { keys key_europe; };
  zone "example.com" {
    type slave;
    masters { 10.10.10.10; };
    file "/path/to/db-europe.example.com";
  };
};

view "asia" {
  match-clients { key key_asia; !all_keys; JP; IN; BD; ...other asian ACLs here... };
  // use this key to get the right view from the master when requesting *XFR
  server 10.10.10.10 { keys key_asia; };
  zone "example.com" {
    type slave;
    masters { 10.10.10.10; };
    file "/path/to/db-asia.example.com";
  };
};

view "america" {
  match-clients { key key_america; !all_keys; US; CA; MX; ...other american ACLs here... };
  // use this key to get the right view from the master when requesting *XFR
  server 10.10.10.10 { keys key_america; };
  zone "example.com" {
    type slave;
    masters { 10.10.10.10; };
    file "/path/to/db-america.example.com";
  };
};

So with the above configuration, the slave uses the key "key_europe" when talking to the master; the master recognizes it as a client for the view "europe", and transfers the zone in that view. In the same way, the other keys trigger the transfer of the zones in the respective views. Similarly, the master uses the key "key_europe" when sending NOTIFY messages to the slave, which allows the slave to match the received NOTIFY message with the view "europe" (and again, the same for the other keys and views). Note the special ACL all_keys: it is absolutely essential for zone transfers with views to work correctly. If you omit the !all_keys clause in the match-clients directive, then depending on the actual location of the servers, a server might present a TSIG key that doesn't match the one for that view while its IP address matches a geographic ACL for that view; the wrong view would then be selected and problems will most certainly occur (very hard to troubleshoot, by the way). See this thread on bind-users for more information.

Remember that the "secret" string must be a valid base64-encoded string (here they are just "abcdefg", "hijklmn" and "opqrstu" encoded; you'll want to set them to something a bit stronger, perhaps using dnssec-keygen as described for example here).
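Any sufficiently long random byte string, base64-encoded, will do as a secret; as a quick alternative to dnssec-keygen, a few lines of Python can generate one (the function name here is made up):

```python
import base64
import secrets

def make_tsig_secret(nbytes: int = 32) -> str:
    """Return a random base64-encoded string usable as a TSIG secret."""
    return base64.b64encode(secrets.token_bytes(nbytes)).decode("ascii")

if __name__ == "__main__":
    print(make_tsig_secret())
```

Recent Bind versions also ship a tsig-keygen utility that produces a ready-to-paste key statement.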

GSLB?

The setup described here could be seen as a (very) poor man's version of DNS GSLB (Global Server Load Balancing). However, it lacks most of the more sophisticated features of GSLB, in particular the fact that a GSLB solution actively monitors the services and dynamically adjusts its configuration based on load, latency and availability, to name a few.
However, even if we are poor, our solution could be partially automated, perhaps with scripts that monitor the services and, in case of problems or failures, dynamically trigger a reconfiguration of the DNS server(s) and a zone reload. The new information would still suffer from the TTL propagation delay associated with DNS records, but in most cases this would be better than manual intervention only.
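As a sketch of that idea (the server list reuses this article's example addresses; the health check and function names are assumptions for illustration), a monitoring script might pick the first reachable server and emit the record the default view should serve:

```python
# failover sketch: pick the first healthy server and emit the A record
# for the generic name; a real script would then rewrite the zone file,
# bump the serial, and run "rndc reload" (all assumed steps, not shown)

SERVERS = [
    ("asia",    "10.0.0.1"),
    ("europe",  "192.168.0.1"),
    ("america", "172.16.0.1"),
]

def pick_server(servers, is_healthy):
    """Return (name, ip) of the first server for which is_healthy(ip) is True."""
    for name, ip in servers:
        if is_healthy(ip):
            return name, ip
    raise RuntimeError("no healthy server available")

def record_for(ip):
    """Zone-file line resolving the generic name to the chosen server."""
    return f"mirror   IN   A   {ip}"
```

In production, is_healthy would be an actual probe (a TCP connect or an application-level check) run periodically from each DNS server's location.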

Complete the configuration

The above configuration fragments are for demonstration purposes only, and they lack many parts that you'd want in a real configuration, such as logging definitions and directives controlling whether (and from whom) zone transfers, dynamic updates and recursive queries are allowed.
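For instance, on the master one might lock down transfers per view along these lines (a sketch building on the example above; adapt keys and addresses to your setup):

```
view "europe" {
  match-clients { key key_europe; !all_keys; FR; DE; IT; AT; };
  server 10.20.20.20 { keys key_europe; };
  // only transfer requests signed with this view's TSIG key are honoured
  allow-transfer { key key_europe; };
  zone "example.com" {
    type master;
    notify yes;
    file "/path/to/db-europe.example.com";
  };
};
```

and similarly allow-notify { key key_europe; }; on the slave (and the same for every other view/key), so that unsigned or wrongly signed requests are refused.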

9 Comments

  1. Peter says:

    Thanks very much for the wonderful tutorial, it really helped a lot!!

  2. Very nice overview on how to use Bind views to get request based DNS queries going. Thanks for taking the time to write this up as it was helpful and informative. Cheers!

  3. Dino says:

    Damn, you're great!!!
    Really cool.

    Extra cool!! Great stuff. Thank you!

  4. Manish says:

    Hi

    Great tutorial. I have followed the same procedure, but when I put the domain in a view statement it doesn't resolve, even using the localhost IP on the same server, and if the view statement is removed then it works fine. I also tried "match-clients { 0.0.0.0; 127.0.0.1; any; };" in the view, but it still doesn't resolve. Nothing shows up in the error logs, and the named and zone config files are fine. I am using bind 9.6.3

    dig @127.0.0.1 www.domain.com

    ; <<>> DiG 9.6.3 <<>> @127.0.0.1 www.domain.com
    ; (1 server found)
    ;; global options: +cmd
    ;; connection timed out; no servers could be reached

    Any idea, what could be wrong and where I am missing something.

    • waldner says:

      Well, it may be too easy but check that you don't have any firewall rule that prevents queries to 127.0.0.1. Then, you can try running bind in the foreground with a high verbosity level (eg named -g -d 5 or so) to see if something obvious comes up, either during startup or during the query.

  5. Andreas says:

    Great guide!

    However, there are a few things that needs to be corrected.

    First, you cannot have the "server 10.10.10.10 ..." command within a zone, you need to move it to be in the view.
    Secondly, you are missing the "master" argument in the zones on the slave.

    Also, adding "notify no" on the zones on the slave could be a good idea, as well as allow-transfer and notify yes on the master server.

    Plus, setting up a new key for common zones so that zone transfers for that works as well.

    • waldner says:

      Hi Andreas,

      Yes you are right, I've made the necessary corrections. I was sure it was like that already, but obviously I was wrong :)
      Regarding "notify no" (and lots of other basic security-hardening directives): yes I agree that it's needed; I did not include them in the config because they are not strictly related to the point I want to make here. But I did add a note at the end of the article to remind readers that the configuration shown is "for demonstration purposes only, and lack many parts that you want to have in a real configuration, such as the definition of logging, whether zone transfers, dynamic updates, recursive queries are allowed and from whom, etc."
      Regarding common zones, yes again; it's just that I don't have those in this example.

      Thanks!

      • Andreas says:

        Thanks for the great article by the way, the setup is working like a charm!

        When I set it up, I noticed that I had to add the allow-transfer and notify yes parameters, or else the slave servers wouldn't update their zone information at once. Perhaps I'll get spare time to write an article of my own some day :)

        Again, good work with the guide!

        • waldner says:

          Hi Andreas,

          maybe you can also do something like

          allow-transfer { key key_europe; };

          on the master and similarly

          allow-notify { key key_europe; };

          on the slave(s) (and the same for every other view/key). That way any attempt to perform those operations that is not signed with the correct TSIG key will be refused, thus automatically leaving only authorized servers able to perform them.

          Also you may consider using a multi-master setup (which is what I did in the end). That greatly simplifies the configuration (each server has exactly the same configuration file, which is good for automatic deployment) and you can lock it down even more as you can deny transfers and notifies altogether since they are simply not needed in that case. (You'll probably also deny recursive queries and dynamic updates, but that is true of a master-slave setup as well). Also, slaves have trouble if the master is unavailable for a long time (at some point they stop responding), whereas multi-master of course doesn't have that problem.

          But of course everything depends on your specific situation and needs.