How to improve the root – Run it locally

Image shows the locations of the root server IP Anycast instances.
Source: https://root-servers.org/

Current State of DNS Root Servers

The DNS root server system uses IP Anycast. There are 13 root server identities, operated by 12 independent organisations, with a total of 1084 instances all over the world. Let’s look at some of the problems in the context of the root server system.
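A quick way to see the 13 root server identities behind those instances is a couple of dig queries (assuming the dig utility from BIND is available):

# List the 13 root server names via your recursive resolver
dig . NS +short

# Fetch the root NS set, with glue addresses, directly from a root server
dig @a.root-servers.net . NS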

Decrease the round trip time to the root servers

The round trip time to the root servers depends on multiple factors: the availability of a root server instance within the country, and optimal routing. While the first can be addressed by installing an instance of a root server in the country, the second is harder to address. Routing determines whether traffic to the root server from the last mile reaches the local instance or takes a transit route to an instance outside the country.

If the traffic transits outside the country, the result is increased latency and poor DNS resolution performance.
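One way to check which Anycast instance is answering, and the round trip involved, is to query the server identity over the CHAOS class (a sketch using dig; note that not every root server operator exposes hostname.bind or id.server):

# Ask the answering i-root instance to identify itself
dig @i.root-servers.net hostname.bind CH TXT +short

# dig's "Query time" statistic is a rough round-trip indicator
dig @k.root-servers.net . SOA +noall +stats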

Case in point: in the context of India, Netnod, the root server operator managing i.root-servers.net, has an Anycast IPv4 node in Mumbai.

A traceroute from AS9498 to i.root-servers.net shows that traffic is not hitting the local instance but taking the transit route.

traceroute from AS9498 to i.root-servers.net
The above image has been taken from a RIPE Atlas measurement.

Similarly, RIPE NCC is the root server operator managing k.root-servers.net. Again, in the context of India, there are Anycast IPv6 nodes in Mumbai and Noida.

A traceroute from AS9498 to k.root-servers.net shows that traffic is not hitting the local instance but taking the transit route.

traceroute from AS9498 to k.root-servers.net
The above image has been taken from a RIPE Atlas measurement.

If you aren’t aware of the RIPE Atlas project, check the earlier post.

Prevent snooping of queries

In the case of traditional DNS, or DNS over port 53 (Do53), the traffic is unencrypted. In response to privacy concerns, and to secure DNS traffic between the client and the recursive resolver, the IETF standardised DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT). While both protocols secure the communication between the client and the recursive resolver, traffic between the recursive resolver and the root servers is still in the open, i.e. unencrypted.
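For illustration, here is what the client-to-resolver leg looks like with and without encryption, using kdig from the Knot DNS utilities (the resolver address is only an example; the resolver must support DoT on port 853):

# Plain Do53 query – visible on the wire
kdig @9.9.9.9 example.com A

# The same query over DNS-over-TLS – encrypted between client and resolver,
# but the resolver's own queries towards the root servers remain unencrypted
kdig @9.9.9.9 +tls example.com A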

Faster negative responses to queries for non-existent domains

I would like to point you to the earlier posts on Chromium based browsers and DNS, and Junk to the root as that would set the context for this one.

The recent study by ICANN OCTO reveals that a vast majority of the queries to the root servers are for names which do not exist in the root zone. By providing faster negative responses for non-existent domains closer to the stub resolver, we can avoid sending these junk queries to the root servers entirely.
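For example, with a recursive resolver that serves a local copy of the root zone (configured later in this post), a query for a junk TLD gets its NXDOMAIN straight from the loopback instead of travelling to a root server (the name below is just an illustrative random string):

# With a local root zone, this NXDOMAIN never leaves the machine
dig @127.0.0.1 pwpsfrn. A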

Increase the resiliency of the root server system

In the context of DNS, the primary intention of using IP Anycast is to have the topologically closest server provide the answer. This model fails when routing is suboptimal, as seen in the earlier traceroute examples to the root servers.

An additional benefit of using IP Anycast is that, given optimal routing, the impact of a DDoS attack is limited, as it stays confined to certain areas. In the past, IP Anycast has helped mitigate attacks on the root server system, where the attack remained limited in scope to certain Anycast instances of the root servers and caused a saturation of network connections only there.

On the other hand, the Mirai botnet attack on Dyn’s infrastructure tells us that a large-scale attack can cause congestion across Anycast instances, resulting in unavailability of services.

Finally, we get to a set of broader questions – how do we increase resiliency against a DDoS on the root server system? And since the root server system doesn’t penalise abuse (period), should we continue abusing it?

A probable solution, as proposed in RFC 7706, is to run a local copy of the full root zone on the loopback address. What this essentially suggests is that the copy of the root zone served on the loopback acts as the upstream for the recursive resolver, and the recursive resolver should validate the zone data from that upstream using DNSSEC.

In order to implement this, one first needs a copy of the root zone. The following root servers currently allow transfer of the root zone using AXFR over TCP,

Sl. No    Root Server
1         b.root-servers.net
2         c.root-servers.net
3         d.root-servers.net
4         f.root-servers.net
5         g.root-servers.net
6         k.root-servers.net
7         lax.xfr.dns.icann.org & iad.xfr.dns.icann.org (L-root server)
Root Server Operators which support transfer of the root zone
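Pulling the zone with dig is straightforward (any of the servers above should work; k.root-servers.net is used here only as an example):

# Transfer the full root zone over TCP and save it locally
dig @k.root-servers.net . AXFR > root.zone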

The process of manually pulling the root zone has an operational issue – one needs to periodically check whether the root zone at the upstream has changed, and then update the copy of the root zone configured to run on the loopback.
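A simple way to detect a change is to compare SOA serials between the upstream and the local copy:

# Serial of the root zone at the upstream
dig @k.root-servers.net . SOA +short

# Serial of the copy served on the loopback – re-transfer when they differ
dig @127.0.0.1 . SOA +short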

Even though RFC 7706 is Informational, recursive resolver software such as ISC BIND, Unbound and Knot Resolver have built-in support.
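As a sketch of the Unbound approach – an auth-zone that is used only for upstream lookups and never served to clients directly (the master addresses are examples and should be verified against root-servers.org before use):

# unbound.conf excerpt – local copy of the root zone (RFC 7706 style)
auth-zone:
    name: "."
    master: 192.33.4.12        # c.root-servers.net
    master: 192.5.5.241        # f.root-servers.net
    master: 193.0.14.129       # k.root-servers.net
    fallback-enabled: yes      # fall back to normal recursion if the zone is unusable
    for-downstream: no         # do not answer clients from this zone directly
    for-upstream: yes          # use it instead of sending queries to the root servers
    zonefile: "root.zone"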

Slaving of the root zone – ISC BIND 9.16.3 (stable)

Image of an excerpt from named.conf showing the slaving of the root zone configuration
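In the spirit of that excerpt, a minimal named.conf sketch of slaving the root zone looks roughly like this (RFC 7706, Appendix B style; the addresses again need to be verified, and BIND 9.14+ also offers mirror zones as a simpler alternative):

// named.conf excerpt – slave (secondary) copy of the root zone
zone "." {
    type slave;
    file "root.zone";
    notify no;
    masters {
        192.33.4.12;       // c.root-servers.net
        192.5.5.241;       // f.root-servers.net
        193.0.14.129;      // k.root-servers.net
    };
};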

Part II of this post will contain operational instructions for running a local copy of the root zone and document some of the pitfalls of doing so.

Jumping on the webinar bandwagon – Introduction to FreeBSD

Note – The FreeBSD Logo and the mark FreeBSD are registered trademarks of The FreeBSD Foundation and are used by Swapneel Patnekar with the permission of The FreeBSD Foundation.

With COVID-19 having disrupted NOG meetings, conferences and onsite trainings, I have decided to jump on the webinar bandwagon and experiment a bit.

On 8th June, I presented an introduction to FreeBSD to students and faculty from different colleges. 84 registered and about half of them attended the webinar.

The initial plan was to keep a time limit of 1 hour for the webinar including Q&A, but it extended to roughly 1 hour and 30 minutes.

In the webinar, I focussed on the FreeBSD operating system, but also provided a brief introduction to the FreeBSD Project and the FreeBSD Foundation. For a major part of the session, I demonstrated the installation of the FreeBSD operating system inside VirtualBox. The demo gods were with me that day.

Based on some of the feedback on social media, as well as some which I received directly from people I know, the webinar seems to have been well received.

There were a good number of questions, and many attendees showed interest in learning more advanced FreeBSD topics like Jails & ZFS. I definitely intend to address that in the coming weeks. Along with FreeBSD, stay safe!

DNS RPZ (Response Policy Zones) – Using DNS as a layer of defence – Part I

Update (06/08/2020): APNIC has published this post on their blog. Robbie Mitchell from APNIC was of great help in correcting a few things and polishing the article. You can read Part I on the APNIC blog here.

DNS (Domain Name System) is the crucial and ubiquitous fabric of the Internet. On the surface, users rely on accessing websites, apps, email and so on; underneath, it is the DNS database which provides the map for the Internet.

It’s fair to say that everything on the Internet begins with a DNS query. This means that the DNS is used for legitimate purposes as well as abused by bad actors.

Adding a layer of security to a flat network

In the context of COVID-19, where most of us are working from home, the security of the devices and data being accessed from a hostile home network has become a major talking point over the last couple of months. The home network is atypical of an enterprise network from a security perspective and, apart from its inherent flaws, it’s a flat network.

flat network is a computer network design approach that aims to reduce cost, maintenance and administration.[1] Flat networks are designed to reduce the number of routers and switches on a computer network by connecting the devices to a single switch instead of separate switches. Unlike a hierarchical network design, the network is not physically separated using different switches.

The topology of a flat network is not segmented or separated into different broadcast areas by using routers.

Wikipedia

Here is a representation of a flat network design,

The constraints of a flat network are,

  • No segmentation of traffic – Single broadcast domain
  • Easy & rapid propagation of malicious traffic within the network

One of the layers of security that can be brought into a flat network at an economical cost is DNS itself. Before we look into how that can be implemented, here is a DNS primer for what happens when a domain name is accessed in a network.
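As a rough, hands-on version of that primer, dig's +trace option follows the same chain of referrals a recursive resolver walks – from the root servers, to the TLD servers, to the authoritative name servers for the domain:

# Follow the delegation chain from the root down, as a recursive resolver would
dig +trace www.example.com A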

Shift of the recursive resolvers

The part which does the most heavy lifting in this resolution process is the Recursive DNS Server, or recursive resolver. At the very beginning of the Internet, users themselves ran recursive resolvers on their own machines or in their networks. This model slowly shifted to the network operators (ISPs) offering recursion as a bundled, free-of-cost offering along with Internet service. With the advent of the cloud/quad DNS providers, the model has moved DNS resolution even further away from the user. To name a notable few: Google Public DNS (8.8.8.8, 8.8.4.4), Cloudflare (1.1.1.1, 1.0.0.1), Quad9 (9.9.9.9) etc.

While each of these open resolver services promotes faster DNS resolution, in reality they are still further away from the user in round-trip terms. Even though all of these open resolver services use IP Anycast, their proximity to the user cannot compete with a local resolver. In plain terms, the recursive resolver in the user’s network, or even the resolver provided by the Internet Service Provider, will always be closer.

The one definitive advantage that the cloud/quad DNS open resolvers provide is the availability of a large cache.

If you aren’t convinced yet on running your own DNS resolver instead of outsourcing it to the cloud/quad DNS providers, I would urge you to read Why should I run my own DNS resolver?

And most importantly, if you want to leverage DNS Response Policy Zones (DNS Firewall) to add a layer of security in your network, you need to run a recursive resolver.

What are DNS Response Policy Zones (RPZ)?

  • It’s currently an Internet-Draft and not a standard yet. The latest draft is available here.
  • It’s vendor neutral – BIND, Unbound and PowerDNS Recursor support it.
  • It allows policy to be applied to DNS queries – queries for bad domains can be answered differently from the rest.
  • It’s an economical solution – a Raspberry Pi can act as a recursive resolver with DNS RPZ for an entire network, an especially useful and low-cost option for home networks, SOHO etc.

Just like the functioning of a firewall, RPZ is made up of TRIGGERS & ACTIONS.
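Here is a small, hedged sketch of what an RPZ zone file looks like (the domain names are placeholders): the owner name of each record is the trigger, and the record data encodes the action.

$TTL 300
@                   IN SOA  localhost. hostmaster.localhost. ( 1 3600 600 86400 300 )
                    IN NS   localhost.

; QNAME trigger -> NXDOMAIN action
badsite.example         CNAME .

; wildcard QNAME trigger -> NODATA action
*.tracker.example       CNAME *.

; QNAME trigger -> Local Data action (redirect to a walled garden)
phish.example           CNAME walledgarden.example.net.

Attaching such a zone to the resolver – for example via BIND’s response-policy statement – is what Part II will walk through.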

This is all good but without threat intelligence data, a DNS Firewall doesn’t add any value.

Threat intelligence RPZ feeds

While there are many threat intelligence providers which provide a DNS RPZ feed, below are some of the free/community ones,

Part II of this post will contain instructions for configuring a RPZ feed in ISC BIND9.

Junk to the root

DNS root servers are the heart of the DNS infrastructure. Although there are just 13 named root servers, they comprise 1084 Anycast instances operated by 12 independent root server operators.

A recent study by ICANN OCTO, Analysis of the Effects of COVID-19-Related Lockdowns on IMRS Traffic, shed some light on DNS traffic patterns before and during COVID-19. While the study looked at the ICANN Managed Root Server (IMRS), i.e. a few instances of the L-Root Server (l.root-servers.net), I wouldn’t be surprised if the pattern is similar for other root servers as well.

One stark observation in the study was the amount of DNS traffic for non-existent TLDs. As every DNS transaction begins with a query to the root server and goes down the delegation chain, queries for non-existent records are also sent to the root servers.
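dig’s +trace option makes this visible: for a name under a non-existent TLD, the trace stops at the root, which is where the NXDOMAIN comes from (the name below is just a random example string):

# The root servers themselves answer NXDOMAIN for a non-existent TLD
dig +trace www.zkgtcrxrpfjcjxr. A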

Topping the chart are browsers based on Chromium. That is not surprising, since on startup Chromium based browsers send three queries for random strings of 7-15 characters to check if the browser is sitting behind a captive portal. Check my earlier blog post Chromium based browsers & DNS for more information on the topic.

So, I sent in a question to the Ask Mr. DNS podcast, asking whether they knew of a formal specification or guideline on the consequences of excessively abusing the root servers. And guess what,

Oh, and the guys (or Matt, really) answer a really good question from Swapneel Patnekar about an ICANN paper on the effects of COVID-19 on the root name servers.

I would urge you to listen to the entire episode as it contains juicy bits by Kim Davies about the Root Key Signing Key Ceremony, but if you’re the impatient lot & !DNS Geek, skip to 31:48 to tune in for my few seconds of fame 😀

Chromium based browsers & DNS

While this is not something new, it perhaps has more significance because Chromium based browsers now hold an ever-increasing market share of more than 60%.

Chromium based browsers have a very uncanny method to check if the web browser is sitting behind a captive portal. And if you’re running a recursive resolver in your network with a large user base running Chromium based browsers (Google Chrome, Brave etc), it might even startle you if you observe the recursive resolver logs.

Here is a snippet from my unbound resolver as soon as I start Google Chrome on the machine(192.168.0.188),

Jun  3 11:16:31 root unbound: [1283:0] info: 192.168.0.188 pwpsfrn. A IN
Jun  3 11:16:31 root unbound: [1283:0] info: 192.168.0.188 yeytluindg. A IN
Jun  3 11:16:31 root unbound: [1283:0] info: 192.168.0.188 zkgtcrxrpfjcjxr. A IN

A research project at USC, What’s In A Name?, goes into some detail on the classification.

Here is the summary of the study,

Though the root server system handles this application-specific load sufficiently, it is clear that Chrome’s trick of using randomly generated names to discover whether it’s behind a captive portal contributes significantly to the traffic received at the root zone.

What’s in a name? – Wes Hardaker