How to improve the root – Run it locally

Image shows the locations of the root server IP Anycast instances.
Source: https://root-servers.org/

Current State of DNS Root Servers

The DNS root server system uses IP Anycast. There are 13 root servers, operated by 12 independent root server operators, with a total of 1,084 instances all over the world. Let's look at some of the problems in the context of the root server system.

Decrease the round trip time to the root servers

The round trip time to the root servers depends on multiple factors: the availability of a root server instance within the country, and optimal routing. While the first can be addressed by installing an instance of a root server in the country, the second is harder to address. Routing determines whether traffic to the root server from the last mile reaches the local instance or takes a transit route to an instance outside the country.

If the traffic is transiting outside the country, the result is increased latency and poor performance in the context of DNS resolution.

Case in point: in India, Netnod, the root server operator managing i.root-servers.net, has an Anycast IPv4 node in Mumbai.

A traceroute from AS9498 to i.root-servers.net shows that traffic is not hitting the local instance but taking the transit route.

traceroute from AS9498 to i.root-servers.net
The above image has been taken from a RIPE Atlas measurement.

Similarly, RIPE NCC is the root server operator managing k.root-servers.net. Again, in the context of India, there are Anycast IPv6 nodes in Mumbai and Noida.

A traceroute from AS9498 to k.root-servers.net shows that traffic is not hitting the local instance but taking the transit route.

traceroute from AS9498 to k.root-servers.net
The above image has been taken from a RIPE Atlas measurement.

If you aren’t aware of the RIPE Atlas project, check the earlier post.
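
One quick way to check which Anycast instance is answering, independent of RIPE Atlas, is to ask the server to identify itself. Most root server operators answer the hostname.bind (or id.server) CHAOS TXT query; the exact instance naming scheme varies by operator, so treat the commands below as a sketch:

dig @i.root-servers.net hostname.bind chaos txt +short
dig @k.root-servers.net hostname.bind chaos txt +short

If the reply names an instance outside the country rather than, say, the Mumbai or Noida node, the traffic is taking the transit route.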

Prevent snooping of queries

In the case of traditional DNS, or DNS over port 53 (Do53), the traffic is unencrypted. In response to privacy concerns, and to secure DNS traffic between the client and the recursive resolver, the IETF standardised DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT). While both protocols secure the communication between the client and the recursive resolver, traffic between the recursive resolver and the root servers is still in the open, i.e. unencrypted.
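
For illustration, here is what a Do53 query versus a DoT query to a public resolver looks like from the command line. The +tls option needs dig from BIND 9.18 or newer (kdig from Knot DNS offers a similar +tls flag), and the resolver IP is only an example:

# Traditional Do53 – query and response travel in cleartext over port 53
dig @9.9.9.9 example.com A

# DNS over TLS – the same query, encrypted to port 853
dig +tls @9.9.9.9 example.com A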

Faster negative responses to queries for non-existent domains

I would like to point you to the earlier posts on Chromium based browsers and DNS, and Junk to the root, as those set the context for this one.

The recent study by ICANN OCTO reveals that a vast majority of the queries to the root servers are for names which do not exist in the root zone. By providing faster negative responses for non-existent domains to the stub resolver, we can avoid sending these junk queries to the root servers entirely.

Increase the resiliency of the root server system

In the context of DNS, the primary intention of using IP Anycast is to have the topologically closest server provide the answer. This model fails if there is suboptimal routing as seen in the examples of traceroute to the root servers earlier.

An additional benefit of using IP Anycast is that, assuming optimal routing, the impact of a DDoS attack is limited because it gets confined to certain areas. In the past, IP Anycast has helped mitigate attacks on the root server system, where the attack was limited in scope to certain Anycast instances of the root servers and saturated their network connections.

On the other hand, the Mirai botnet attack on Dyn's infrastructure tells us that a large-scale attack can cause congestion across Anycast instances, resulting in unavailability of services.

Finally, we get to a set of broader questions – how do we increase resiliency against a DDoS attack on the root server system? And since the root server system doesn't penalise abuse (period), should we continue abusing it?

A probable solution, as proposed in RFC 7706, is to run a local copy of the full root zone on the loopback address. Essentially, the copy of the root zone served on the loopback acts as the upstream for the recursive resolver, and the recursive resolver validates the zone data from that upstream using DNSSEC.

In order to implement this, one first needs a copy of the root zone. The following root servers currently allow transfer of the root zone using AXFR over TCP:

Sl. No    Root Server
1         b.root-servers.net
2         c.root-servers.net
3         d.root-servers.net
4         f.root-servers.net
5         g.root-servers.net
6         k.root-servers.net
7         lax.xfr.dns.icann.org & iad.xfr.dns.icann.org (L-root server)

Root servers which support transfer of the root zone
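To pull the zone manually, an AXFR with dig against any of the servers above works (dig uses TCP for AXFR); the root zone is also published for download at https://www.internic.net/domain/root.zone. A rough sketch:

dig @b.root-servers.net . AXFR > root.zone
# or, from the L-root transfer hosts
dig @lax.xfr.dns.icann.org . AXFR > root.zone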

The process of manually pulling the root zone has an operational issue – one needs to periodically check whether the root zone at the upstream has changed, and then update the copy of the root zone configured to be served on the loopback.
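
A rough sketch of that periodic check, comparing the SOA serial at the upstream against the locally served copy (the file path and the use of rndc assume BIND and are only illustrative):

#!/bin/sh
# Compare the root zone serial at the upstream with the local copy
UPSTREAM=$(dig @b.root-servers.net . SOA +short | awk '{print $3}')
LOCAL=$(dig @127.0.0.1 . SOA +norecurse +short | awk '{print $3}')

if [ "$UPSTREAM" != "$LOCAL" ]; then
    # Re-transfer the zone and reload the resolver
    dig @b.root-servers.net . AXFR +onesoa > /usr/local/etc/namedb/root.zone
    rndc reload .
fi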

Even though RFC 7706 is Informational, recursive resolver software such as ISC BIND, Unbound and Knot Resolver has built-in support for it.

Slaving of the root zone – ISC BIND 9.16.3 (stable)

Image of an excerpt from named.conf showing the slaving of the root zone configuration
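
For reference, a minimal sketch of what such a configuration can look like in BIND 9.14 and later (this is not the exact excerpt from the screenshot). The mirror zone type transfers and DNSSEC-validates the root zone using a built-in list of primaries:

zone "." {
    type mirror;
};

RFC 7706 itself (Appendix B) shows the older approach – a slave/secondary zone for "." listening on the loopback address, with the transfer sources from the table above listed explicitly.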

Part II of this post will contain operational instructions for running a local copy of the root zone and document some of the pitfalls of doing so.

Jumping on the webinar bandwagon – Introduction to FreeBSD

Note – The FreeBSD Logo and the mark FreeBSD are registered trademarks of The FreeBSD Foundation and are used by Swapneel Patnekar with the permission of The FreeBSD Foundation.

With COVID-19 having disrupted NOG meetings, conferences and onsite trainings, I have decided to jump on the webinar bandwagon and experiment a bit.

On 8th June, I presented an introduction to FreeBSD to students and faculty from different colleges. 84 registered and about half of them attended the webinar.

The initial plan was to keep a time limit of 1 hour for the webinar, including Q&A, but it extended to roughly 1 hour and 30 minutes.

In the webinar, I focussed on the FreeBSD operating system but also provided a brief introduction to the FreeBSD project and the FreeBSD Foundation. For a key part of the session, I demonstrated installing the FreeBSD operating system inside VirtualBox. The demo gods were with me that day.

Based on some of the feedback on social media, as well as some which I received directly from people I know, the webinar seems to have been well received.

There were a good number of questions, and many attendees showed interest in learning more advanced FreeBSD topics like Jails & ZFS. I definitely intend to address those, along with more FreeBSD content, in the coming weeks. Stay safe!

DNS RPZ (Response Policy Zones) – Using DNS as a layer of defence – Part I

Update (06/08/2020): APNIC has published this post on their blog. Robbie Mitchell from APNIC was a great help in correcting a few things and polishing the article. You can read Part 1 on the APNIC blog here.

DNS (Domain Name System) is the crucial and ubiquitous fabric of the Internet. On the surface, users rely on accessing websites, apps, email and so on; underneath, it's the DNS database which provides the map for the Internet.

It's fair to say that everything on the Internet begins with a DNS query. This means that DNS is used for legitimate purposes as well as abused by bad actors.

Adding a layer of security to a flat network

In the context of COVID-19, where most of us are working from home, the security of the devices and data being accessed from a hostile home network has become a major talking point over the last couple of months. The home network is atypical compared to an enterprise network from a security perspective and, apart from its inherent flaws, it's a flat network.

A flat network is a computer network design approach that aims to reduce cost, maintenance and administration. Flat networks are designed to reduce the number of routers and switches on a computer network by connecting the devices to a single switch instead of separate switches. Unlike a hierarchical network design, the network is not physically separated using different switches.
The topology of a flat network is not segmented or separated into different broadcast areas by using routers.

Wikipedia

Here is a representation of a flat network design,

The constraints of a flat network are,

  • No segmentation of traffic – Single broadcast domain
  • Easy & rapid propagation of malicious traffic within the network

One of the layers of security that can be brought into a flat network at an economical cost is DNS. Before we look into how that can be implemented, here is a DNS primer on what happens when a domain name is accessed in a network:
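
If you prefer the command line to a diagram, dig's +trace option walks the same resolution chain, starting at the root servers (illustrative only – it bypasses the local cache):

dig +trace www.example.com A

# The output walks through the delegation chain:
#   .             referral from a root server to the .com TLD servers
#   com.          referral from a .com server to example.com's name servers
#   example.com.  final A record answer from the authoritative server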

Shift of the recursive resolvers

In the above diagrammatic representation, the part doing the most heavy lifting is the recursive DNS server, or recursive resolver. At the very beginning of the Internet, users themselves ran recursive resolvers on their machines or in their networks. This model slowly shifted to network operators (ISPs) offering it as a bundled, free-of-cost offering along with the service. And the model has moved DNS resolution even further away from the user with the advent of the cloud/quad DNS providers – to name a notable few, Google Public DNS (8.8.8.8, 8.8.4.4), Cloudflare (1.1.1.1, 1.0.0.1) and Quad9 (9.9.9.9).

While each of these open resolver services promotes faster DNS resolution, in reality they are still further away from the user in terms of round trip time. Even though all of these open resolver services use IP Anycast, their proximity to the user cannot compete with a local resolver. In obvious terms, the recursive resolver in the user's network, or even the resolver provided by the Internet Service Provider, will always be closer.
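
You can see this for yourself by comparing the query time reported by dig against a cloud resolver and against your local or ISP resolver (the IP addresses below are just examples – repeat the queries a few times so caching doesn't skew the comparison):

dig @8.8.8.8 example.com A +noall +stats | grep "Query time"
dig @192.168.0.1 example.com A +noall +stats | grep "Query time"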

The one definitive advantage that the cloud/quad DNS open resolvers provide is the availability of a large cache.

If you aren’t convinced yet on running your own DNS resolver instead of outsourcing it to the cloud/quad DNS providers, I would urge you to read Why should I run my own DNS resolver?

And most importantly, if you want to leverage DNS Response Policy Zones (DNS Firewall) to add a layer of security in your network, you need to run a recursive resolver.

What is DNS Response Policy Zones (RPZ)?

  • It's currently an Internet-Draft and not a standard yet. The latest draft is available here.
  • It's vendor neutral – BIND, Unbound and PowerDNS Recursor support it.
  • It allows policy to be applied to DNS queries – bad domains can be given differentiated treatment.
  • It's economical – a Raspberry Pi can act as a recursive resolver with DNS RPZ for an entire network, an especially useful and low-cost solution for home networks, SOHO setups etc.

Just like the functioning of a firewall, RPZ is made up of TRIGGERS & ACTIONS.
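
As a flavour of what that looks like, here is a minimal sketch of an RPZ zone file – the domain names are placeholders, not a real threat feed. The owner name is the trigger (the query name being matched) and the record data encodes the action:

$TTL 300
@                   IN SOA  localhost. admin.localhost. ( 1 3600 900 604800 300 )
                    IN NS   localhost.

; TRIGGER (query name)        ACTION
badexample.test      CNAME .            ; return NXDOMAIN
*.badexample.test    CNAME .            ; ...and for all its subdomains
tracker.test         CNAME *.           ; return NODATA
phish.test           A     192.0.2.100  ; local data – redirect to a walled garden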

This is all good but without threat intelligence data, a DNS Firewall doesn’t add any value.

Threat intelligence RPZ feeds

While there are many threat intelligence providers which provide a DNS RPZ feed, below are some of the free/community ones,

Update: Please refer to this blog post for an updated list of feeds.

Part II of this post will contain instructions for configuring a RPZ feed in ISC BIND9.

Junk to the root

DNS root servers are the heart of the DNS infrastructure. Although there are just 13 of them, the actual number comprises 1,084 Anycast instances operated by 12 independent root server operators.

A recent study by ICANN OCTO, Analysis of the Effects of COVID-19-Related Lockdowns on IMRS Traffic, sheds some light on DNS traffic patterns before and during COVID-19. While the study looked at the ICANN Managed Root Server (IMRS) instances, i.e. a few instances of the L-Root server (l.root-servers.net), I wouldn't be surprised if the pattern is similar for other root servers as well.

One stark observation in the study was the amount of DNS traffic for non-existent TLDs. As every DNS transaction begins with a query to the root server and goes down the delegation chain, queries for non-existent records are also sent to the root servers.

Topping the chart are browsers based on Chromium. Not surprising, since Chromium based browsers send three random strings of 7–15 characters on startup to check if the browser is sitting behind a captive portal. Check my earlier blog post Chromium based browsers & DNS for more information on the topic.
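
You can see what this looks like from the root's side by taking a random label of that sort and querying a root server directly:

dig @l.root-servers.net zkgtcrxrpfjcjxr. A +dnssec

# The root server can only answer NXDOMAIN, along with NSEC and RRSIG records
# proving that no such TLD exists in the root zone.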

So, I had sent in a question to the Ask Mr. DNS podcast, asking if they knew of a formal specification or guidelines on the consequences of excessively abusing the root servers. And guess what,

Oh, and the guys (or Matt, really) answer a really good question from Swapneel Patnekar about an ICANN paper on the effects of COVID-19 on the root name servers.

I would urge you to listen to the entire episode as it contains juicy bits by Kim Davies about the Root Key Signing Key Ceremony, but if you’re the impatient lot & !DNS Geek, skip to 31:48 to tune in for my few seconds of fame 😀

Chromium based browsers & DNS

While this is not something new, it perhaps has more significance because of the ever-increasing market share (more than 60%) of Chromium based browsers.

Chromium based browsers have a very uncanny method to check if the web browser is sitting behind a captive portal. And if you’re running a recursive resolver in your network with a large user base running Chromium based browsers (Google Chrome, Brave etc), it might even startle you if you observe the recursive resolver logs.

Here is a snippet from my Unbound resolver as soon as I start Google Chrome on the machine (192.168.0.188):

Jun  3 11:16:31 root unbound: [1283:0] info: 192.168.0.188 pwpsfrn. A IN
Jun  3 11:16:31 root unbound: [1283:0] info: 192.168.0.188 yeytluindg. A IN
Jun  3 11:16:31 root unbound: [1283:0] info: 192.168.0.188 zkgtcrxrpfjcjxr. A IN

A research project at USC, What's In A Name?, goes into some detail on the classification.

Here is the summary of the study,

Though the root server system handles this application-specific load sufficiently, it is clear that Chrome’s trick of using randomly generated names to discover whether it’s behind a captive portal contributes significantly to the traffic received at the root zone.

What’s in a name? – Wes Hardaker

33,384 open resolvers in India

The Shadowserver Foundation releases and updates a scan report containing results for open resolvers on the Internet. Open resolvers respond to DNS queries from anyone on the Internet, and they are bad for the Internet primarily because they act as a catalyst in DNS amplification attacks.

A Domain Name Server (DNS) Amplification attack is a popular form of Distributed Denial of Service (DDoS), in which attackers use publicly accessible open DNS servers to flood a target system with DNS response traffic. The primary technique consists of an attacker sending a DNS name lookup request to an open DNS server with the source address spoofed to be the target’s address. When the DNS server sends the DNS record response, it is sent instead to the target.

Source

At the time of writing this, from an India perspective, there are 33,384 open resolvers. The number was 72,736 a couple of weeks ago.

Of that number, at the time:

ASN        AS Name                                  Count
AS9829     BSNL-NIB National Internet Backbone      77,736

So, what's going on here? Most likely, it's a broken configuration in the CPE (Customer Premises Equipment) of AS9829 which is allowing DNS requests on the WAN IP address and performing recursion.

Most of the cheap CPE devices that are bundled with the Internet connection run dnsmasq, and the firmware never sees an update.

Interestingly, when I compare this with my own measurements, the number of IP addresses responding on port 53 in my results is much higher – 260,886. However, I haven't filtered the responses for IP addresses which are performing recursion; there could be IP addresses in the results which are configured as authoritative name servers, and that's perfectly valid.

If, for some reason, you are running a DNS resolver exposed to the Internet, I strongly suggest that you restrict access by IP address/network.
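
A minimal sketch of what that restriction can look like in BIND's named.conf (Unbound has the equivalent access-control: directive) – the prefixes are examples, adjust them to your own networks:

acl "trusted" {
    127.0.0.0/8;
    192.168.0.0/16;    // replace with your own prefixes
};

options {
    recursion yes;
    allow-recursion { trusted; };
    allow-query-cache { trusted; };
};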

A better approach perhaps is to configure the DNS resolver software on an RFC 1918 private IP address and set up WireGuard/OpenVPN. With this approach, the resolver is never exposed to the Internet, while devices can still send DNS queries through the WireGuard/OpenVPN tunnel.

If you found this blog post useful, you might find Shodan geoping and geodns – check ping & DNS resolution interesting.

Educational & Research Institutions in India having their own ASN

A few months ago, Pranesh had asked if there are any universities in India that have their own ASN.

I think the answer warrants a few more details.

AS132785    Shiv Nadar University
AS137282    KIIT University
AS133552    B.M.S College Of Engineering
AS38872     Indian School of Business
AS137617    Indian Institute Of Management, Ahmedabad
AS136304    Institute Of Physics, Bhubaneswar
AS138231    Indian Institute Of Information Technology, Allahabad
AS137956    Indian Institute of Technology, Ropar
AS134901    Indian Institute Of Science Education And Research
AS132749    Indraprastha Institute of Information Technology, Delhi
AS2697      ERNET (Education and Research Network) India (also peers with AS55824 – NKN Core Network)

ASNs part of the NKN (National Knowledge Network) Core Network (AS55824)

AS59163     GLA University
AS138155    Jawaharlal Nehru University
AS55566     Inter University Centre for Astronomy and Astrophysics
AS134023    Aligarh Muslim University
AS132995    South Asian University
AS58758     Tata Institute of Fundamental Research (also has AS4755 as IPv4 peer)
AS134934    Institute For Stem Cell Biology And Regenerative Medicine (also has AS45820 as IPv4 peer)
AS134322    Tata Institute of Fundamental Research (also has AS9498 as IPv4 peer)
AS132524    Tata Institute of Fundamental Research (also has AS18101 as IPv4 peer)
AS23770     Tata Institute of Fundamental Research (also has AS45820 as IPv6 peer)
AS137136    Indian Agricultural Statistics Research Institute
AS136005    Raman Research Institute
AS135730    Datta Meghe Institute Of Medical Sciences
AS133723    Institute for Plasma Research
AS133313    Saha Institute of Nuclear Physics
AS133273    Tata Institute of Social Sciences
AS133002    Indian Institute of Tropical Meteorology
AS132780    Indian Institute of Technology, Delhi
AS131226    Indian Institute Of Technology, Roorkee

While the data on NKN's website mentions about 1,622 connected institutions, apart from the list above, the majority of them do not have their own ASN.

I will visit this post every few months and update the data.

RIPE Atlas software probe – Host one in your network

tl;dr This post outlines information on the RIPE Atlas software probe. Also, have a look at Shodan geodns and geoping for running measurements from vantage points.

RIPE Atlas is a global network of devices, called probes and anchors, that actively measure Internet connectivity. RIPE Atlas users can also perform customised measurements to gain valuable data about their networks. 

At the time of writing, 12,000+ probes were connected. The total number of probes connected may be higher, as probes go offline due to Internet disconnections and power issues, especially in underdeveloped/developing countries.

All this while, the RIPE Atlas probes have been hardware devices.

That changed sometime in February 2020, when the RIPE NCC released a software version of the RIPE Atlas probe. This is super useful (apart from the fact that the hardware probe costs money to manufacture and ship – and, most importantly, Indian customs 😢), as you can run the software probe on a Raspberry Pi along with many other supported platforms (CentOS 7, CentOS 8, Debian 9, Debian 10 and Docker).

For more information about installing the software probe and registration, please click the following link.

Here is a video that was recorded by RIPE NCC as part of a webinar that I did for them.

If anyone needs any help in installing/registering the probe, feel free to ping 🙂