I run Technitium DNS Server at home in a container. It supports DoH, DoT, multiple upstream resolvers (and multiple upstream queries), adblock support, an API, and a slew of other goodies. If you're self-hosting an internal resolver I highly recommend checking it out. I prefer it to pihole.
Technitium is a full-fledged authoritative and recursive DNS resolver with adblock, whereas pihole is focused more squarely on the adblock experience. Technitium supports full zone management, DNSSEC, and all DNS record types. Pihole is limited to a handful of types, can't do custom zones, and the management is somewhat clunky IMO. Technitium has full DNS logging, statistics, conditional forwarding, and a full REST API for management.
I think Pihole is a great project, but Technitium caters better to power users and people that want/need more complex control over their internal DNS infrastructure (while still remaining relatively simple from a management perspective).
Simply put, Pihole is adblock with some DNS sprinkled in, Technitium is a DNS server first with adblock support. I don't think you can go wrong either way, but if you're going to need more advanced DNS capabilities at home, roll Technitium to save yourself a migration down the line.
I also love Technitium, and I'm excited about some of the features that are coming soon: PostgreSQL for logs, and an aggregated view of activity across multiple servers. The developer is also very active on Reddit and Patreon.
I’ve been running pihole for 10-15 years, don’t even remember. Will take a look at Technitium. Looks like they have a Mac build, so I can just try it before attempting to replace pihole.
Looking online, seems some people run both, not sure if that’s worth it though.
> I’ve been running pihole for 10-15 years, don’t even remember. Will take a look at Technitium. Looks like they have a Mac build, so I can just try it before attempting to replace pihole.
Likewise. I remember setting it up for the first time in the mid 2010s when Pis went mainstream. I've also used AdGuard Home, but I keep a pi-hole running for 'high availability.'
I set up authoritative nameservers at home using unbound, which appears to be considerably easier than configuring BIND, but I still can't say that I fully understand it. DNS (and networking in general) is a bit of a dark art.
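For anyone in the same boat, a minimal sketch of serving a handful of internal names straight from unbound (the zone name and addresses below are made-up placeholders, not the parent's actual setup):

```
# unbound.conf fragment -- answer an internal zone locally instead of recursing for it
server:
    local-zone: "homelab.example." static
    local-data: "router.homelab.example. IN A 192.168.1.1"
    local-data: "nas.homelab.example. IN A 192.168.1.10"
    # reverse lookups for the same hosts
    local-data-ptr: "192.168.1.1 router.homelab.example"
    local-data-ptr: "192.168.1.10 nas.homelab.example"
```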
You can't go too far wrong with unbound and it is seriously fast and light.
Real men cry into their text editors with BIND and PowerDNS but you do get the whole toy box with these beasties. I've whizzed up many BIND daemons. I once ran a pair of PDNS servers with a MySQL replicated back end.
I currently have an internet-exposed and rather locked down PDNS for ACME DNS-01 (Let's Encrypt). The CA consortium are insisting on SSL certs going down to 40-odd-day lifetimes within about three years. I look after quite a few SSL certs for my customers. Anyway.
For home labbers, you might consider a Pi Hole (doesn't have to run on a Pi - a VM will do) or, a bit more hard core: https://technitium.com/dns/ (web GUI - yay!) pfSense has Unbound built in and I think OPNSense does too - both are fine choices of router. OpenWRT probably has unbound in it.
When I say, you can't go too far wrong with unbound, I mean it. If it works then it is almost certainly configured correctly.
I am just using adguard home as my dns server (installed as a plugin in opnsense). Am I naively doing something wrong, or is that a relatively decent choice as well?
Not doing anything wrong; different flavours for different folks. I tried AdGuard Home but found myself liking PiHole a little more. They're both excellent, and both are open source. I'd suggest that anyone who says AdGuard Home or PiHole is better is being about as objective as saying "strawberry is the most superior flavour of ice cream". :)
That said! I haven't used AdGuard Home in a very long time, might be time for me to revisit.
DNS really is pretty easy. The problem is that Bind zone files are an absolutely godawful interface which makes it seem 10x harder than it actually is. I'm not too upset about it (because it's free software and I figure I can't complain too much), but compare to other DNS solutions and it's night and day how easy they are. For example, running a Windows DNS server (while not something you'd do at home) is dead simple because Microsoft has polished up the user experience. I'm sure some more polished alternatives exist for Linux too, I'm just not familiar enough with what's out there to point to one as an example.
I have this as well, but run a heavily locked down and isolated BIND server with NSD and Unbound for external authoritative and internal caching DNS respectively.
It's easy to feed an RBL to unbound to do pi-hole type work. I use pf to transparently redirect all external DNS requests to my local unbound server, but I still get the BIND automation around things like DNSSEC, DHCP DDNS and ACME cert renewals.
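Roughly what that can look like, as a sketch: the blocklist becomes a file of local-zone lines that unbound includes, and one pf rule catches clients that try to use outside resolvers (OpenBSD-style pf syntax assumed; macros and paths are placeholders):

```
# unbound.conf fragment -- RBL-style blocking via generated local-zone entries
server:
    include: /var/unbound/etc/blocklist.conf
# blocklist.conf is generated from a hosts-style list, e.g.
#   awk '/^0\.0\.0\.0/ {printf "local-zone: \"%s.\" always_nxdomain\n", $2}' hosts.txt
# giving lines like:
#   local-zone: "ads.example.net." always_nxdomain

# pf.conf fragment -- transparently redirect any outbound port 53 traffic from the
# LAN to the local unbound instance (older FreeBSD-style pf uses rdr rules instead)
pass in quick on $lan_if inet proto { tcp, udp } to any port 53 rdr-to 127.0.0.1 port 53
```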
The sheer luxury of two B channels at 64 kbit/s each and, if you were cunning, the D channel at 16k (I wasn't cunning and didn't bother)! Yay, double phone charges if you raised the second channel. That was a BRI. A PRI was lots of channels (30) and an even more eye-watering bill.
A customer dumped their BRI, which was acting as a backup to SIP, about six months ago. That's the last one I know of.
A trick some ISPs used in the '90s was a "data over voice" call, which ran at 56K but was charged at voice rates instead of data rates. That meant the call was generally free. The improved latency of ISDN made a huge difference compared to a 56K modem.
It can be tricky with certain sites to track down the correct domains to whitelist without giving a whole swath of ad domains the keys to the kingdom. Getting weather.com working was a bit of a bear in this regard (I know the information they present is available elsewhere ad-free, but I find the way they package that information convenient and I'm nothing if not lazy).
Because ISC DHCP was discontinued, I switched to dnsmasq, which also caused me to switch my home DNS server from unbound to dnsmasq so that local dhcp hostname registration would continue to work.
I always thought of dnsmasq as a bit of a toy, but I have to admit I've been impressed. So far it's worked flawlessly, and I'm especially impressed that you can reconfigure it without restarting the process.
My only complaint is not specific to dnsmasq, and that's with ipv6. Devices assign themselves essentially random addresses, so it's impossible to correlate DNS lookups from those addresses with what actual device is making the request. The obvious solution to this, a fully managed DHCP6 setup, does not seem to be well supported by dnsmasq, but it wouldn't matter even if it was because so many devices don't support DHCP6, only slaac. So the whole thing is a bit of a mess.
> The obvious solution to this, a fully managed DHCP6 setup, does not seem to be well supported by dnsmasq
I'm using dnsmasq for DHCPv6 and it seems to work fairly well for me. "dig <device-name> AAAA" returns the correct addresses for my DHCPv6-supporting devices.
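For reference, a sketch of the sort of dnsmasq config that hands out DHCPv6 leases plus names (interface, prefix and ranges are examples; the mode keywords are the ones from the dnsmasq man page):

```
# dnsmasq.conf fragment -- DHCPv4 + DHCPv6 + router advertisements,
# so leased hosts get A/AAAA records under their hostnames
domain=lan
expand-hosts
enable-ra
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-range=::100,::1ff,constructor:eth0,ra-names,12h
```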
> but it wouldn't matter even if it was because so many devices don't support DHCP6, only slaac.
This should theoretically work with "--dhcp-range=slaac,ra-names" [0], but it doesn't seem to actually do anything for me.

[0]: https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html#:~:t...
I just made my own router in the last month for the first time and chose isc-dhcpd. My understanding is it would be more accurate to call the software "finished" - the codebase is very mature and the DHCP protocol isn't exactly a moving target. It does everything I need in a LAN DHCP server, and it integrates very easily with BIND. Given I expect to never need to update this thing besides basic security updates to FreeBSD/pf, is there a strong reason for switching?
> The obvious solution to this, a fully managed DHCP6 setup, does not seem to be well supported by dnsmasq, but it wouldn't matter even if it was because so many devices don't support DHCP6, only slaac.
DNS & DHCP are generally short lived transactions that are very easy to restart and retry, so as long as it restarts very quickly that seems like a reasonable trade off in implementation complexity to be honest.
We found out the scaling issues with dnsmasq when we had about 20k blade servers hitting it for dhcp. UDP traffic caused it to fall over on a fairly beefy server. Switching to Kea solved the issue.
At home I have an openbsd box as my network gateway running unbound and nsd. Unbound handles the caching and recursion, nsd handles the local name resolution.
I have a small utility (made up of two shell scripts and a python script) which watches /var/db/dhcpd.leases for changes and parses it to produce the zonefiles for nsd.
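Not the parent's actual scripts (those are linked further down in the thread), but the parsing step is roughly this simple. A sketch in Python with made-up zone and path names:

```
#!/usr/bin/env python3
# Sketch: parse OpenBSD dhcpd's leases file and emit A records for an nsd zone fragment.
import re
import sys

LEASES = "/var/db/dhcpd.leases"
DOMAIN = "home.example"   # placeholder internal zone

def parse_leases(path):
    hosts, ip = {}, None
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*lease\s+([\d.]+)\s*\{", line)
            if m:
                ip = m.group(1)
                continue
            m = re.search(r'client-hostname\s+"([^"]+)"', line)
            if m and ip:
                hosts[m.group(1).lower()] = ip   # later leases override earlier ones
    return hosts

if __name__ == "__main__":
    for name, addr in sorted(parse_leases(LEASES).items()):
        sys.stdout.write(f"{name}.{DOMAIN}.\tIN\tA\t{addr}\n")
```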
Y’know the script approach sounds like a good idea.
I also have an OpenBSD box similar to what you describe, but I run ISC dhcpd and BIND because it’s the only setup that does old-school dynamic DNS where the dhcp server sends zone updates to BIND when a lease happens.
But I hate BIND, and this setup doesn't work with DHCPv6 (no idea why, it should in principle…). Maybe I should just do the “script to read the leases and generate the zone file” approach instead.
The world has been waiting for a DHCP and content DNS server that simply share a common database back-end, meaning no notifications/updates/scripts, for decades. See https://news.ycombinator.com/item?id=44395279 for more.
I host my own name servers and have a custom dynamic DNS script with email notifications and everything. Honestly, I wish I could replace the domain registrar as well and just have my own public TLD. For 50 million USD (give or take), the .sh TLD could be mine.
He fails to mention how many systems (Android, iOS, Firefox) feel entitled to ignore the LAN's choice of DNS server and use DoH or similar solutions, which means split DNS no longer works when your resolver is getting its results from Cloudflare et al., with NXDOMAIN or external IP addresses as a result.
> I really should use the official .internal TLD (Top Level Domain) for my homelab network, but I decided against it. This introduces the risk of name resolution problems, should someone offer a public .jhw TLD in future. It’s a risk I am willing to accept in exchange for using a 3 letter TLD at home. Don’t be like me! Use .internal instead. With that out of the way, let’s continue.
Why not a subdomain under one of the public domains he already has?
For interactive use you'd typically only use part of the domain anyway, with a correctly set up search list. It also has the advantage of easily making some hosts available via IPv6 to the outside - or, with split-horizon DNS and a gateway host, exposing specific services, where inside connections go directly to the specific host and outside ones go via a reverse proxy.
Overall he's just describing a typical simple internal DNS setup - from the title I was expecting him to talk about how he got a stable authoritative DNS server for his public domain running at home (and how he got around the "two nameservers" requirement).
On the plus side, that made me realize that my current home connection _is_ stable enough to host one of my three authoritative DNS servers, which should save me about 7 EUR per month.
My preference is to register a publicly resolvable domain and then just only use it internally. Then you can still get publicly trusted TLS certificates for it, in case you want them.
Doesn’t stop you from using your own private CA, either, but at least you have the option.
Given how modern browsers are increasingly hostile to long-lived, self-signed certs, I've resigned myself to paying the .com tax every year for a real domain. There are so many ACME clients now (e.g. HomeAssistant has a plugin) that it's fairly easy to have legitimate certs on internal devices. A side benefit is having a subdomain that can be used as a dynamic DNS record.
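As one concrete example of that workflow (assuming the public domain's DNS is hosted at Cloudflare and the certbot-dns-cloudflare plugin is installed; names and paths are placeholders), a DNS-01 issuance for a purely internal hostname looks like:

```
# the name never has to resolve publicly; only the TXT challenge record does
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d homeassistant.home.example.com
```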
Cloudflare (and probably others) let you enter non-routable IPs into their DNS, so myhomeserver.mydomain.com can point to 192.168.1.45 on your LAN without having to run your own DNS/hosts.
Are they? Browsers treat long-lived self-signed certs pretty much exactly how they always have, from what I’ve seen: if you’ve trusted the cert in your system trust store, it just works. If you haven’t, you get a red warning page and have to click to proceed.
The key concept is learning from the mistakes of others, instead of repeating them. The past several decades provide numerous examples of people picking "internal" top-level domain names that they were 100% positive no-one else would ever use … until someone else did, sometimes as a result of the exact same thinking.
> I didn't see a specific RFC that reserved .lan

There is no RFC AFAIK, but it has certainly seen some adoption over the past decade. Mikrotik devices use `router.lan` as a default domain name for their routers, for instance. Home labbers on YouTube seem to like to use `.lan`, too.
Would it be fair to think there is a chance `.lan` might get an RFC of its own, given the popularity? Or is that completely irrelevant when it comes to RFCs? Hard to tell what the reasoning is there - `.home.arpa` seems excessively long and inconvenient.
Would be a real shame and a bummer if `.lan` ends up becoming public :')
I agree it would be great to get some of the vendor pushed / common domains put into an accepted standard.
In my interaction with IETF standards they are created / implemented in two ways:
1. They set the forward direction for a new technology before it is wide spread.
2. They wait for a technology to become popular / accepted and start to set standards from that baseline.
Both are reasonable paths of implementation given the pace of change in technology.
I doubt .lan, .local, .home, etc. will either become public or become a standard just based on the existing devices that default to these domains and the documentation or books that might reference them as example domains.
I personally find Bind to be such an awful DNS server to configure. It's a bit like setting up Arch or Gentoo; tons of configuration so you can get down to the details and learn about every single part of the system, but ultimately there are only a few fields that you generally need to touch.
My DNS server of choice remains PowerDNS. I also find the API easier to use with certbot and the available web UIs.
And it would have done the same job the person was looking for. This binds to all interfaces, avoids explicitly respecifying the default paths as a lot of the config lines on the site do, logs what most people care to log to syslog, and forwards requests from any private subnet or the local machine. Alternatively, the distro probably comes with a default file with any distro specific customization you may wish to align to and just needs these 3 lines added.
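The parent's actual snippet isn't reproduced in this thread, but the kind of short unbound config being described is roughly this (upstream addresses are just examples):

```
# unbound.conf -- bind everywhere, answer the LAN and localhost, forward upstream
server:
    interface: 0.0.0.0
    access-control: 127.0.0.0/8 allow
    access-control: 10.0.0.0/8 allow
    access-control: 172.16.0.0/12 allow
    access-control: 192.168.0.0/16 allow

forward-zone:
    name: "."
    forward-addr: 9.9.9.9
    forward-addr: 149.112.112.112
```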
For the next 8% where people operate "real" dns servers I agree the zone definition syntax is a bit verbose (especially if you're doing many domains or reverse lookup zones) but not necessarily that complicated. The last 2% probably care about all of the syntax that starts to look like mumbo-jumbo which bind documentation focuses on. Oh, I will complain about bind expecting you to manually increment serial numbers in your zonefiles though... but most deployments like this (or even ones acting as the nameserver for some domains) don't actually need that anyways.
No complaints about choosing PowerDNS though. Hard to go wrong with it for this either.
Do you really want to do forwarding in 2025 when there's all kinds of DNS censorship going on? I'd think you want your DNS server to do the recursive lookup itself.
If you're talking about "censorship from what the forwarder will resolve": You're free to pick any forwarders you'd like (in this case), not just ones that censor, and a forwarder is likely to perform better for most people's use.
If you're talking about "censorship of unencrypted DNS traffic in general": The censorship (or security/privacy risk or whatever your reason for caring it's unencrypted) doesn't care if you're sending traffic to a root nameserver for recursive resolution or traffic to a forwarder. What you need is something like encrypted DNS over another commonly encrypted channel that won't be blocked (e.g. DoH), which actually fits better with using a forwarder since most servers you'll recurse to don't support such transports.
Recursive resolution of public domains is really not as useful as it may sound for most people. The folks it perhaps helps the most are those interested in having the fewest external dependencies in their infrastructure. I have another comment about how to maximize that goal more than just recursing to the public root servers.
All public resolvers censor - some more than others. That's why you should run your own resolver. If you're already running unbound, just delete your forwarder configuration and it will be a resolver by default (I think).
Have you had issues with the .jhw TLD on Apple devices? I have my own DNS for my homelab with CoreDNS with house.hill as my domain. My house is on a hill. But .hill is not a TLD, and both my macbook and iphone stopped resolving it quite a while ago.
No. Both MacOS and iOS happily resolve and connect to the machines in my homelab.jhw domain. I did add the root cert of my CA (Certificate Authority) to the trust store on MacOS and iOS, so I can also enjoy TLS connections. Scroll to the "Add the certificate" part of https://jan.wildeboer.net/2025/08/Create-SMIME-Cert-stepca/ for the HOWTO that worked for me.
That generally suggests they're not pointing at the resolver you have set to handle that domain. Otherwise your Apple devices can't tell a valid TLD from an invalid one: they just launch the DNS lookup and let the server tell them.
The exception to this is .local, which you shouldn’t use for internal systems because it will confuse the heck out of them in weird ways, because .local is by RFC not meant to be used in that way.
If you have Advanced Tracking and Fingerprinting Protection enabled for Safari, it will ignore your system resolver. iCloud Private Relay also ignores it unless DNS is set using configuration profiles.
Doesn't the use of .internal and the like preclude the use of ACME/certbot for your internal HTTPS services? Unless you want the pain of running your own internal CA, but then some OSes complain about internal CAs these days.
Yes on the preclusion, because ACME is based on you proving you are in some way in control of the public domain you're trying to get a cert for, but using ACME/certbot for internal homelabs is not the same walk in the park as it is for publicly exposed servers anyways.
The easiest solution I've found is to not play the game. I.e. just use HTTP for your homelab, and if the service doesn't let you use anything but HTTPS then bind it to 127.0.0.1 and set up Caddy to reverse proxy and ignore the cert in a few lines. If you want to expose things externally and do happen to own a domain then set up a single external *.yourdomain.tld record which points to your public IP, bind an instance of Caddy to it, and reverse proxy to the internal HTTP only services. The internal service DNS entries can still use .internal so you won't have to deal with split-horizon DNS either.
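A sketch of that Caddy setup (hostnames and ports below are placeholders; a true wildcard cert on the external side would additionally need a DNS-challenge plugin, so a single named host is shown instead):

```
# Caddyfile sketch

# Internal service, plain HTTP on a .internal name
http://jellyfin.internal {
    reverse_proxy 127.0.0.1:8096
}

# Internal service that insists on HTTPS: bound to loopback, self-signed cert ignored
http://unifi.internal {
    reverse_proxy https://127.0.0.1:8443 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}

# Externally exposed name with a real certificate, proxying to an internal HTTP service
service.yourdomain.tld {
    reverse_proxy 192.168.1.20:8080
}
```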
I tried BIND, Dnsmasq, unbound, adguard.
In the end I picked CoreDNS, running outside Kubernetes. It can do everything (including wildcards and ACLs), requires few resources and is easy to configure.
One interesting aspect of this is how using BIND puts focus upon serial numbers.
Serial numbers were a bane 3 decades ago. When Daniel J. Bernstein invented djbdns, xe made the software (tinydns-data) auto-generate the serial numbers from the last modification timestamp of the source file, and made several observations on the subject of serial numbers that are well known, or at least easy to work out for oneself with a modicum of thought.
This article warns the reader thrice, in all capitals, about remembering to manually increment serial numbers. That's still, after all these years, reasonable advice for BIND users and still a habit to form if one uses BIND. The numbering scheme used here will only allow 100 changes per day, of course.
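For readers who haven't met it, the convention being referred to is a date-encoded serial in the zone's SOA record, along these lines (a hypothetical homelab zone, not the article's actual file):

```
$TTL 3600
@   IN  SOA ns1.homelab.example. admin.homelab.example. (
        2025091401 ; serial, YYYYMMDDnn -- hence at most 100 edits per day
        3600       ; refresh
        900        ; retry
        604800     ; expire
        300 )      ; negative-caching TTL
    IN  NS  ns1.homelab.example.
```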
But nothing in the article, nor the planned parts 2 and 3 described in the ensuing FediVerse discussion, actually needs zone serial numbers to be incremented at all, as there's no replication of any kind, let alone "zone transfer", here, and parts 2 and 3 (if they end up as stated, they not having been published yet) will not encompass replication either. It's heavily stressed because it has been a common pitfall for BIND users for decades; but ironically the article series does not actually need it.
It's a shame, really, because the things that should be emphasized as much if not more here, are reduced in comparison. internal. still not being an IANA special-use domain name is one. (See RFC 8375 for home.arpa., which is a special-use domain name.) The way that this setup will leak 192.168.0.0/16 reverse lookups outwith 192.168.1.0/24 to the world at large, and 172.16.0.0/12 lookups outwith 172.16.0.0/16, is another. (named.rfc1912.zones does not cover any of the requisite domain apices, RFC 1912 not being RFC 1918, and is in any case a RedHatism that one cannot rely upon on even, say, Debian, let alone on a non-Linux-based operating system.) The pitfalls of using a superdomain that one does not own, e.g. homelab.jhw. here, is a third that is glossed over. (Anyone who has tried to set up an organization's domain naming will know of the pitfalls that this entails; this is as much of a bad habit to avoid gaining in the first place as updating BIND's serial numbers is a habit to learn.)
Furthermore, making "everything at home just work, even with no internet connection" involves something further, missing from this and from the described forthcoming parts 2 and 3: a private root content DNS server. There's a surprising amount of stuff that relies on at minimum getting negative answers from the . content DNS servers for top-level domains that do not exist, and various blackholed domains.
I'm not sure why he didn't use a subdomain of his wildeboer.net domain for his home lab? I put all that stuff under lab.example.com (where example.com is an actual domain that I own, of course.) One nice thing about this is you can then use letsencrypt with a DNS-01 challenge and get real TLS certs for it.
Author here. I used the homelab.jhw mainly as part of my tests and experiments with my own certificate authority and to avoid going into split horizon DNS setup.
I did this as well. I have terraform interconnected so that I make TF entries in my Unifi repo which become DHCP reservations, and the AWS terraform makes DNS records for them.
"This way works well for most people but, your ISP can see and control what website you can visit even when the website employ HTTPS security. Not only that, some ISPs can redirect, block or inject content into websites you visit even when you use a different DNS provider like Google DNS or Cloudflare DNS. Having Technitium DNS Server configured to use DNS-over-TLS, DNS-over-HTTPS, or DNS-over-QUIC encrypted DNS protocols with forwarders, these privacy & security issues can be mitigated very effectively."
And if using Technitium as encouraged/promoted/facilitated,^1 now "DNS provider like Google DNS or Cloudflare DNS [or IBM Quad9 DNS]... _can_ see and control what website you can visit even when the website employ[s] HTTPS security" and "_can_ redirect, block or inject content into websites you visit"
If the ISP seeing "what website you can visit" is the issue, then Technitium is not going to help. A DNS lookup does not necessarily mean a "visit" but even assuming it does, then the ISP can already see the "visit" (the actual TCP connection not the DNS lookup) via the SNI extension in the TLS ClientHello
If the issue is ISP "control [over] what website you can visit", then Technitium _might_ help
But if the issue is _any_ third party "control [over] what website you can visit", then all that has been done with Technitium is to substitute a "DNS provider" other than the ISP as the third party
Technitium uses the word "can" to describe the issue: "your ISP can see and control..." and "some ISPs can redirect, block or inject..."
Maybe this is purely coincidental, who knows
The fact is some ISPs _have_ redirected, blocked or injected content by utilising their control over DNS
It has happened and, certainly with redirection at least, it is ongoing. Why use the word "can"? ISPs do it
As far as we know neither Google nor Cloudflare has done it
It may or may not have happened. But either way, they _can_ do it
There is nothing that stops third parties from doing it in the future, whether it's an ISP or an advertising services and surveillance company or a TLS MiTM CDN
Excerpt from RFC 1035:
"Name servers manage two kinds of data. The first kind of data held in sets called zones; each zone is the complete database for a particular "pruned" subtree of the domain space. This data is called authoritative. A name server periodically checks to make sure that its zones are up to date, and if not, obtains a new copy of updated zones from master files stored locally or in another name server. The second kind of data is cached data which was acquired by a local resolver. This data may be incomplete, but improves the performance of the retrieval process when non-local data is repeatedly accessed. Cached data is eventually discarded by a timeout mechanism."
The "DNS provider" is not a source for authoritative data
Many self-proclaimed "technical" people use it this way; hopefully they understand exactly what they are doing
For example, according to RFC 1035 the operator of the nameserver for example.com controls the DNS data for example.com, not the ISP, not Google, not Cloudflare or any other third party "DNS provider"
The internet user can run his own authoritative nameserver and/or a cache^2 and decide what the DNS data for example.com should be served when his computers ask for it; he could thereby eliminate the need for third parties
He could obtain that DNS data from the nameserver(s) for example.com
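To make that concrete, this is the difference between asking a cache and asking the authoritative source directly (dig shown; example.com's listed nameservers are used purely as an illustration):

```
# ask whatever cache/resolver you normally use who is authoritative for the zone
dig +short NS example.com

# then ask one of those servers directly, bypassing every cache
# (a.iana-servers.net is what the query above currently returns for example.com)
dig @a.iana-servers.net example.com A +norecurse
```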
He could also obtain it from an "upstream" third party cache, such as an ISP, Google, or Cloudflare; i.e., he could rely on a third party
But according to RFC 1035 only the nameserver for example.com is the authoritative source^3
1. "Self-host" is not the default
DnsServer-master/Apps/AdvancedForwardingApp/dnsApp.config
2. These do not have to be the same program, as is the case with Pi-Hole (dnsmasq) and Technitium
3. As it happens, many website operators use third parties for authoritative DNS service
If you decide not to use a forwarder, the DNS server will be truly independent.
The DNS server will contact the root servers for the TLD nameservers of a domain, then the TLD nameservers, and then the actual authoritative nameserver for the particular domain.
No forwarder needed.
This means you bypass any DNS based filtering any DNS ‘forwarder’ may have in place.
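You can watch a resolver do exactly that walk with dig:

```
# root servers -> .com servers -> the domain's own nameservers, no forwarder involved
dig +trace www.example.com A
```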
I've always felt it makes sense to either use a forwarder you trust or just operate the root zone yourself. Going to the root zone dynamically is certainly the most technically correct, but if your goals involve either "independence" or "retaining some measure of the performance of using forwarders while still resolving things directly yourself" then you can just pull the root zone daily and operate your own root server https://www.iana.org/domains/root/files. Of course, IANA would rather you just use DNS as technically correct as possible because, well, that's what they exist for, but they don't attempt to roadblock operating your own copy of the root.
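A sketch of the "pull it daily" approach (the root zone file itself is linked from the IANA page above; check there for the current location before relying on it):

```
# crontab entry: refresh the local copy once a day, then reload the nameserver
# ("rndc reload" assumes BIND; substitute your server's reload command)
17 4 * * *  curl -fsSo /var/named/root.zone https://www.internic.net/domain/root.zone && rndc reload
```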
It's hard to go much deeper than that in practice, as the zonefiles for TLDs are massively larger, massively more dynamic (i.e. syncing once a day isn't usually enough), and much harder to get hold of (if at all, sometimes).
Regardless of how you go about not using a forwarder, if that's the path you choose then I also heavily recommend considering setting up some additional things like cached entry prefetching so recently used expiring entries don't get "hitches" in latency.
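If unbound is the resolver doing the recursion, that prefetching is a two-line change:

```
# unbound.conf fragment -- refresh popular cache entries before they expire,
# so recently used names don't take a latency hit when their TTL runs out
server:
    prefetch: yes
    prefetch-key: yes   # same idea for DNSKEYs used in DNSSEC validation
```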
* https://news.ycombinator.com/item?id=44318136

There are actually several additional subdomains of arpa. that one can also replicate, not on that list, which are largely invariant.
And really it's not about technical correctness. It has been known how to set up private roots since the 20th century. Some of us have had them for almost that long. Even the IETF has glacially slowly now come around to the view that the idea is a good one, with there now being an RFC on the subject.
The underlying problem for most of that time has been that they're difficult to do with BIND, at least a lot more difficult to do than with other content DNS server softwares, if one clings, as exhibited even here in the headlined article, to a single server vainly wearing all of the hats at once.
All of the people commenting here that they use unbound and nsd, or dnscache and tinydns, or PowerDNS and the PowerDNS Recursor, have already overcome the main BIND Think obstacle that makes things difficult.
It's technically incorrect in that IANA would like your DNS server to use the DNS protocol's built-in system of record querying and expiry rather than pull a static file at your own interval (IIRC the root servers generally don't support AXFR for performance reasons?), as there is no predefined fixed schedule for root zone updates. Practically, root zone changes are absolutely glacial and minuscule (the "real" root servers only get 1-2 updates per day anyways), so pulling the file once per day is effectively good enough to never care that it's not how DNS would intend you to get the record updates.
Setting this up in bind should be no more difficult than adding a `zone "."` entry pointing to this file, the named.conf need not be more than ~a dozen lines long. It's easy to make bind config complicated though (much like this article), but I'm not sure that was the barrier vs just being comfortable enough about DNS to be aware the endeavour is even something one could seek to do - let alone set out to.
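Something like this, as a sketch (paths are examples; newer BIND also has a dedicated `type mirror;` for validated local root-zone copies, per RFC 8806):

```
// named.conf -- serve the locally fetched root zone copy and recurse from it
options {
    directory "/var/named";
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.0.0/16; };
};

zone "." IN {
    type master;        // "type primary" in newer BIND
    file "root.zone";   // the file fetched in the earlier sketch
};
```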
The general root servers generally don't support AXFR, but if you want to AXFR the root, you can do so from lax.xfr.dns.icann.org or iad.xfr.dns.icann.org.
Root hints are enough for most use cases. In 30 years of running my own DNS servers, I never once needed to replicate the root zone. Unless you have a totally crap internet connection you're not going to notice those extra lookups.
I used to do that, but that has the downside of sending all your DNS requests unencrypted over the network. By using a forwarder you have the option to use DoT or DoH.
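With unbound, for instance, forwarding over DoT is just a forward-zone with TLS turned on (Quad9 shown as an example upstream; the CA bundle path varies by distro):

```
# unbound.conf fragment -- forward everything upstream over DNS-over-TLS
server:
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 149.112.112.112@853#dns.quad9.net
```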
There is work coming at the IETF to help with this.
- Draft: DELEG (a new way of doing delegations, replacing the NS/DS records).
- A draft to follow: Using the extensible mechanisms of DELEG to allow you to specify alternative transports for those nameservers (eg: DoH/DoT/DoQ).
This would allow a recursive server to make encrypted connections to everything it talks to (that has those DELEG records and supports encrypted transports) as part of resolution.
Of course, traffic analysis still exists. If you are talking to the nameservers of bigtittygothgirls.com, and the only domains served by those name servers are bigtittygothgirls ...
Cool setup, but it seems quite complex and a lot to manage. I'm a UniFi fanboy and they support custom DNS records now. Just configure DHCP to send out your router's IP as DNS and you're done. Need device static IP assignment? That's also done at the UniFi router instead of having to configure client-side static IPs.
> EVERY HOSTNAME RECORD ENDS WITH A . YOU WILL FORGET THIS. YOU WILL FIX THIS.
> WITH A DOT AT THE END. DO NOT FORGET THE DOT!
> WITH THE DOT AT THE END. AND DID YOU UPDATE THE SERIAL? :)
I just love these comments. You made my day! :D
> It's easy to feed an RBL to unbound to do pi-hole type work, I use pf to transparently redirect all external DNS requests to my local unbound server [...]
I'm surprised this isn't a more common stack.
The trouble starts when you want to provide ALL domains I guess. I wonder what database would be best for that; just MySQL with int to name table?
The trouble with DNS is that you need a fixed external IP that has port 53 open.
Not easy to get at home cheaply.
Dynamic routing is fun :)
To me a huge benefit of unbound is that it lets you return whatever you want for wildcards.
Including TLD wildcards.
Seychelles DNS has been hijacked as a whole and only serves malware? Null route the entire .sc.
.ru ? Nah, that won't resolve at my place.
etc.
Then unbound is at ease, even on an old Raspberry Pi, with blocklists made of hundreds of thousands of lines.
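The TLD-level version of that is just a local-zone per TLD (which TLDs you block is obviously your own policy call):

```
# unbound.conf fragment -- refuse to resolve entire TLDs
server:
    local-zone: "sc." always_nxdomain
    local-zone: "ru." always_nxdomain
```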
> but it wouldn't matter even if it was because so many devices don't support DHCP6, only slaac.
Relevant reading: https://issuetracker.google.com/issues/36949085
Android Public Tracker - Support for DHCPv6 (RFC 3315) - Status: wontfix
https://manned.org/man/dnsmasq#head6
> maybe I should just do the “script to read the leases and generate the zone file” approach instead.
https://paste.rs/vgr7t.txt
Enjoy
https://arstechnica.com/information-technology/2024/02/doing...
I never took the plunge - dnsmasq, adguardhome, and let's encrypt for me..
The author has a randomized username but is actively posting today. If you see this, 1vuio0pswjnm7, thanks for the tip!
[0] https://news.ycombinator.com/item?id=31385133
> I really should use the official .internal TLD (Top Level Domain) for my homelab network

Why not .lan? Is the key word "official"?
* https://jdebp.uk/FGA/dns-use-domain-names-that-you-own.html
* https://news.ycombinator.com/item?id=45144631
You can find the case of dev. in particular discussed in umpteen places here on Hacker News over the years.
https://datatracker.ietf.org/doc/html/rfc8375
Thanks, didn't even know it existed.