We don't. If we did, we'd have it by now. It's been over 25 years of making appeals like this.
It's a fun site! I'm not entirely sure why the protagonist is a green taco, but I can see why a DNS provider would make a cartoon protocol explainer. It's just that this particular protocol is not as important as the name makes it sound.
It is important. This is unfortunate rhetoric that is harming the safety of the internet.
"For instance, in April 2018, a Russian provider announced a number of IP prefixes (groups of IP addresses) that actually belong to Route53 Amazon DNS servers."
By BGP hijacking Route53, attackers were not only able to redirect a website to different IPs globally, but also to generate SSL certificates for that website. They used this to steal $152,000 in cryptocurrency. (I know, I know, "crypto", but this can happen to any site: banking, medical, infrastructure.)
Also, before you say it: RPKI doesn't solve this either, although it is a step in the right direction. DNSSEC is a step in the right direction as well.
[1] https://www.cloudflare.com/learning/security/glossary/bgp-hi...
DNSSec has caused so many outages at this point it's a joke.
You have to be so insanely careful and plan everything to the nth degree, otherwise you break everything: https://internetnz.nz/news-and-articles/dnssec-chain-validat...
The idea is important. What it aims to protect is important. The current implementation is horrible: far too complex and fraught with so many landmines that no one wants to touch it.
If Geoff Huston is suggesting it might be time to stick a fork in DNSSec because it's done, then IMHO it's well cooked. https://blog.apnic.net/2024/05/28/calling-time-on-dnssec/
A common reason, perhaps the vast majority of cases, is that people mix up which key they publish and which key they are actually signing with. I don't doubt there are a lot of things they could do to improve the protocol, but this very common problem is fairly difficult to solve at the protocol level.
I remember back in the day when people discouraged the use of encrypted disks because of what could happen if the user lost their password. No disk encryption algorithm can solve the issue if the user does not have the correct password, and so the recommendation was to not use it. Nowadays people usually have TPMs or key management software to manage keys, so they can forget the password and still access their encrypted disks.
DNSSEC software is still not developed enough to automatically include basic tests and verification tools to make sure people don't simply mix up keys; it assumes people write those themselves. Too often this happens after incidents rather than before (I've heard this in so many war stories). It also doesn't help that DNS is full of caching and cache invalidation. A lot of the insane step-by-step plans come from working around TTLs, the lack of verification and basic tooling, and the fact that much of the work is done manually.
> No disk encryption algorithm can solve the issue if the user does not have the correct password, and so the recommendation was to not use it.
The problem as stated is accurate; it's the framing that makes it wrong.
No disk encryption algorithm can simultaneously protect and not protect something encrypted. What you're missing is the protocol and practices around that, and those are far less limited.
There are heaps of encryption around these days, there are people losing access to their regular keys, and yet there are procedures that recover access to their data without entirely removing the utility of having encrypted data/disks.
A TPM is absolutely not a reliable way to store your key. Think about how often you get asked for a BitLocker recovery code, and imagine if every time that happened, you lost all your data.
> You have to be so insanely careful and plan everything to the nth degree
If you're on the root/TLD end of things I agree. Certainly not if you're a domain owner or name server administrator on the customer end of things.
Yeah I'm by no means saying the implementation is good. RPKI is a joke as well in my opinion. But it's all we have right now.
I am saying it is dishonest to discount the real security threat of not having DNSSEC.
Right OK, I fully agree with you then.
What parts do you agree about? Someone making an argument that we should return to the drawing board and come up with a new protocol, one that doesn't make the "offline signers and authenticated denial" tradeoffs DNSSEC makes, would probably be saying something everybody here agrees with --- though I still don't think it would be one of the 5 most important security things to work on.
But the person you're replying to believes we should hasten deployment of DNSSEC, the protocol we have now.
I would love to go back to the drawing board and solve the security pitfalls in BGP & DNS. I wish the organizations and committees involved had done a better job back then.
Sadly, we live in this reality for now, so we do what we can with what we have. We have DNSSEC.
You understand that it is a little difficult for people to take seriously a claim that you're interested in going back to the drawing board while at the same time very stridently arguing that hundreds of millions of dollars of work should go into getting a 1994 protocol design from 4% deployment to 40% deployment. The time to return to the drawing board is now.
I don't read that reply as them saying we should hasten deployment of DNSSEC. If that was the intention of the comment then no, I don't agree with that aspect of it.
I am saying I agree with the statement "I am saying it is dishonest to discount the real security threat of not having DNSSEC."
I believe we do need some way to secure/harden DNS against attacks, we can't pretend that DNS as it stands is OK. DNSSEC is trying to solve a real problem - I do think we need to go back to the drawing board on how we solve it though.
They definitely believe we should hasten deployment of DNSSEC --- read across the thread. For instance: Slack was taken down for a half a day owing to a deployment of DNSSEC that a government contract obligated them to undertake, and that commenter celebrated the contract.
It's fine that we all agree on some things and disagree on others! I don't think DNS security is a priority issue, but I'm fine with it conceptually. My opposition is to the DNSSEC protocol itself, which is a dangerous relic of premodern cryptography designed at a government-funded lab in the 1990s. The other commenter on this thread disagrees with that assessment.
slightly later
(My point here is just clarity about what we do and don't agree about. "Resolving" this conflict is pointless --- we're not making the calls, the market is. But from an intellectual perspective, understanding our distinctive positions on Internet security, even if that means recognizing intractable disputes, is more useful than just pretending we agree.)
>DNSSec has caused so many outages at this point it's a joke.
So has failing to renew TLS certificates. So what?
Unrenewed certs have not caused the kind of extended mass outages DNSSec has; it's not close.
If Let's Encrypt goes down, half of the Internet will stop working. Doubly so when we switch to certs that are valid for just a couple of days.
That is... not how certificates work. The part of Let's Encrypt that has to work continuously ships as part of your browser root store.
The proposal is to make LE certs 9 days long or something, which means that if LE is down for even a short time, thousands or millions of certs will expire.
The eventual plan is to limit certs to 48 hours (AFAIR), right now they're already allowing 6-day certs: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is... In this scenario, if Let's Encrypt goes down for just a couple of days, a lot of certs will expire.
There are also operational risks, as Let's Encrypt has to have their secret key material in close proximity to web-facing services. Of course, they use HSMs, but it might not be enough of a barrier for nation-state level attackers.
The offline signing feature of DNSSEC allows the root zone and, possibly, the TLDs to be signed fully offline.
That's why in my ideal world I want to keep DNSSEC as-is for the root zone and the TLD delegation records, but use something like DoH/DoT for the second-level domains. The privacy impact of TLD resolution is pretty much none, and everything else can be protected fully.
That is not why DNSSEC has offline signers. DNSSEC has offline signers because when the protocol was designed, its authors didn't believe computers would be able to keep up with the signing work. Starting sometime in the middle of the oughts, people started to retcon security rationales onto it, but that's not the purpose of the design.
I'm sorry, this is just such an incredibly fine-tuned threat model for me to take it seriously.
You start with a BGP hijack, which lets you impersonate anybody, but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about. You then use that specific control to get a CA to forge a certificate for you (and if the CA is capable of using any information to detect that this might be a forgery, the attack breaks).
And of course, the proposed solution doesn't do anything to protect against other kinds of DNS hijacking--impersonating somebody to the nameserver and getting the account switched over to them.
> I'm sorry, this is just such an incredibly fine-tuned threat model for me to take it seriously.
You claim it is fine-tuned, but it has happened in the real world. It is actually even better for attackers that it is "obscure", because that means it is harder to detect.
> but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about.
Yes, all layers of the stack need to be secure. I am not making assumptions about the other layers - this thread is about DNS.
> if the CA is capable of using any information to detect that this might be a forgery
They are not. The only mitigation is "multi-perspective validation", which only addresses a subset of this attack.
> And of course, the proposed solution doesn't do anything to protect against other kinds of DNS hijacking
Yes, because other kinds of DNS hijacking are solved by HTTPS/TLS. If TLS and CAs are broken, nothing is secure.
> You claim it is fine-tuned, but it has happened in the real world.
Sure, but it seems like his comment is still responsive; if DNSSEC is deployed, they perform a BGP hijack & can impersonate everyone, and they just impersonate the server after the DNS step?
If that's the threat model you want to mitigate, it seems like DNSSEC won't address it.
> and they just impersonate the server after the DNS step?
Yes, there are different mitigations to prevent BGP hijacking of the webserver itself. Preventing a rogue TLS certificate from being issued is the most important factor. CAA DNS records can help a bit with this. DNS itself, however, is most easily secured by DNSSEC.
There are a lot of mitigations to prevent BGP hijacks that I won't get too much into. None are 100%, but they are good enough to ensure multi-perspective validation refuses to issue a TLS certificate. The problem is that if those same mitigations are not deployed on your DNS servers (or you outsource DNS and they have not deployed these mitigations) it is a weak link.
I don't see you responding to the question. You're fixating on protections for DNS servers, because that is the only circumstance in which DNSSEC could matter for these threat actors, not because they can't target the address space of the TLS servers themselves (they can), but because if you concede that they can do this, DNSSEC doesn't do anything anymore; attackers will just leave DNS records intact, and intercept the "authentic" server IPs.
So far your response to this has been "attackers can't do this to Cloudflare". I mean, stipulated? Good note? Now, can you draw the rest of the owl?
I am focusing on DNS because this thread is about DNSSEC. The topic of doing it to the TLS servers themselves is a tangent not relevant to this thread.
No, I'm sorry, that's not the case. You're focusing on DNS servers as the target for BGP4 attacks because if you didn't, you wouldn't have a rebuttal for the very obvious question of "why wouldn't BGP4 attackers just use BGP4 to intercept legitimate ALPN challenges".
The thread is right here for everyone to read.
Yes, DNSSEC is designed to prevent DNS MITM via integrity. BGP hijacks lead to MITM. I am not sure where the confusion is.
The weird thing you're doing where you pretend attackers won't just target ALPN challenges with BGP4?
> You start with a BGP hijack, which lets you impersonate anybody, but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about.
An attacker impersonating a DNS server still won't be able to forge the DNSSEC signatures.
No, they can't. Why would they bother? They'll just impersonate the IP the CA uses for the ALPN challenge.
Well, this won't work with DNSSEC. It's a good argument for it.
An attack against BGP where the attacker takes over traffic for an IP address isn't at all prevented by DNSSEC.
The sequence there is:
1. I hijack traffic destined for an IP address
2. Anything whose DNS resolves to that IP, regardless of whether or not they use DNSSEC, starts coming to me
In this model, I don't bother trying to hijack the IP of a DNS server: that's a pain because with multi-perspective validation, I plausibly have to hijack a bunch of different IPs in a bunch of different spots. So instead I just hijack the IP of the service I want to get a malicious cert for, and serve up responses to let me pass the ALPN ACME challenge.
Sure. But you won't have a TLS certificate for that address, if the host uses a DNS-based ACME challenge and prohibits the plain HTTP challenge: https://letsencrypt.org/docs/caa/
So DNSSEC still offers protection here.
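To make that restriction concrete, here is a hypothetical sketch of what such a CAA policy can look like, shown as `dig` output for an imaginary zone; the validationmethods parameter is the RFC 8657 mechanism described in the Let's Encrypt CAA docs, and the specific values here are illustrative only.

```
# Hypothetical CAA policy: only Let's Encrypt may issue, and only via the
# dns-01 challenge, so an attacker who merely controls the web server's IP
# cannot pass an HTTP/ALPN-style challenge for this name.
$ dig +short CAA example.com
0 issue "letsencrypt.org; validationmethods=dns-01"
0 issuewild ";"
0 iodef "mailto:security@example.com"
```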
Ok, so deploying DNSSEC would specifically solve the threat model of an attacker who can perform a BGP hijack of IP addresses, but doesn’t want to hijack multiple DNS server IPs because that’s more work, for a domain that has CAA records and disallows validation by ALPN.
That feels like a pretty narrow gain to justify strapping all this to all my zones and eating the operational cost and risk that if I mess it up, my site stops existing for a while
There are two things mixed up. "We need secure DNS" != "we need DNSSEC".
There is a huge demand for securing DNS-related things, but DNSSEC seems to be a poor answer. DoH is a somewhat better answer, whatever shortcomings it may have, and it's widely deployed.
I suspect that a contraption that would wrap the existing DNS protocol into TLS in a way that would be trivial to put in front of an existing DNS server and an existing DNS client (like TLS was trivial to put in front of an HTTP server) might be a runaway success. A solution that wins is a solution which is damn easy to deploy, and not easy to screw up. DNSSEC is not it, alas.
DoH does not solve anything that DNSSEC solves. They have almost no overlap.
Yes. But DoH was built in a way which is reasonably easy to adopt, and offers obvious benefits, hence it was adopted. DNSSEC lacks this quality, and I think this quality is essential.
That's a pretty damning indictment of DNSSEC.
But TLS relies on having a domain. If the domain in turn depends on TLS, you have a chicken-and-egg problem.
TLS internally does not depend on a domain in the DNS sense; it basically certifies a chain of signatures bound to a name. That chain can be verified, starting from the root servers.
The problem is more in the fact that TLS assumes creation of a long-living connection with an ephemeral key pair, while DNS is usually a one-shot interaction.
Encrypting DNS would require caching of such key pairs for some time, and refreshing them regularly but not too often. Same for querying and verifying certificates.
DoH is like VPNs, used by paranoids and criminals
> It is important. This is unfortunate rhetoric that is harming the safety of the internet.
DNSSEC was built for exactly one use case: we have to put root/TLD authoritative servers in non-Western countries. It is simply a method for attesting that a mirror of a DNS server is serving what the zone author intended.
What people actually want and need is transport security. DNSCrypt solved this problem, but people were bamboozled by DNSSEC. Later people realized what they wanted was transport security and DoH and friends came into fashion.
DNSSEC is about authentication & integrity. DNSCRYPT/DOH is about privacy. They solve completely different problems and have nothing to do with one another.
There is also no reason we can't have both.
If you have secure channels from recursers all the way back to authority servers (you don't, but you could) then in fact DoH-like protocols do address most of the problems --- which I contend are pretty marginal, but whatever --- that DNSSEC solves.
What's more, it's a software-only infrastructure upgrade: it wouldn't, in the simplest base case, require zone owners to reconfigure their zones, the way DNSSEC does. It doesn't require policy decisionmaking. DNS infrastructure operators could just enable it, and it would work --- unlike DNSSEC.
(Actually getting it to work reliably without downgrade attacks would be more work, but notably, that's work DNSSEC would have had to do too --- precisely the work that caused DANE-stapling to founder in tls-wg.)
I'd love to see DoH/DoT that uses a stapled DNSSEC-authenticated reply containing the DANE entry.
There's still a chicken-and-egg problem with getting a valid TLS certificate for the DNS server, and limiting DNSSEC just for that role might be a valid approach. Just forget that it exists for all other entry types.
Stapling is dead: nobody could agree on a threat model, and they ultimately ended up at an HPKP-style cached "this endpoint must staple DANE" model that TLS people rejected (reasonably).
But if you have DoH chaining all the way from the recurser to the authority, it's tricky to say what stapled DANE signatures are even buying you. The first consumers of that system would be the CAs themselves.
BGP attacks change the semantic meaning of IP addresses themselves. DNSSEC operates at a level above that. The one place this matters in a post-HTTPS-everywhere world is at the CAs, which are now all moving to multi-perspective validation.
As you should be aware, multi-perspective validation does not solve anything if your BGP hijack is accepted globally. You will receive 100% of the traffic.
DNSSEC does greatly assist with this issue: It would have prevented the cited incident.
1. Hijack the HTTP/HTTPS server. For some IP ranges, this is completely infeasible. For example, hijacking a Cloudflare HTTP/HTTPS range would be almost impossible, for technical reasons that I won't list here.
2. Hijack the DNS server. Because there's a complete apathy towards DNS server security (as you are showing) this attack is very frequently overlooked. Which is exactly why in the cited incident attackers were capable of hijacking Amazon Route53 with ease. *DNSSEC solves this.*
If either 1 or 2 work, you have yourself a successful hijack of the site. Both need to be secure for you to prevent this.
In summation, you propose a forklift upgrade of the DNS requiring hundreds of millions of dollars of effort from operators around the world, introducing a system that routinely takes some of the most sophisticated platforms off the Internet entirely when its brittle configuration breaks, to address the problem of someone pulling off a global hijack of all the Route53 addresses.
At this point, you might as well just have the CABForum come up with a new blessed verification method based on RDAP. That might actually happen, unlike DNSSEC, which will not. DNSSEC has lost signed zones in North America over some recent intervals.
I do like that the threat model you propose is coherent only for sites behind Cloudflare, though.
"I do like that the threat model you propose is coherent only for sites behind Cloudflare, though."
The threat model I proposed is coherent for Cloudflare because they have done a lot of engineering to make it almost impossible to globally BGP hijack their IPs. This makes multi-perspective validation actually help. Yes, other ISPs are much more vulnerable than Cloudflare; is there a point?
You are not saying DNSSEC doesn't serve a real purpose. You are saying it is annoying to implement and not widely deployed as such. That alone makes me believe your argument is a bit dishonest and I will abstain from additional discussion.
No, I'm saying it doesn't serve a real purpose. I've spent 30 years doing security work professionally and one of the basic things I've come to understand is that security is at bottom an economic problem. The job of the defender is to asymmetrically raise costs for attackers. Look at how DNS zones and certificates are hijacked today. You are proposing to drastically raise defender costs in a way that doesn't significantly alter attacker costs, because they aren't in the main using the exotic attack you're fixated on.
If we really wanted to address this particular attack vector in a decisive way, we'd move away, at the CA level, from relying on the DNS protocol browsers use to look up hostnames altogether, and replace it with direct attestation from registrars, which could be made _arbitrarily_ secure without the weird gesticulations DNSSEC makes to simultaneously serve mass lookups from browsers and this CA use case.
But this isn't about real threat models. It's about a tiny minority of technologists having a parasocial relationship with an obsolete protocol.
Counterpoint: no it isn't, which is why virtually nobody uses it. Even the attack this thread centers on --- BGP hijacking of targeted DNSSEC servers to spoof CA signatures --- is a rounding error sidenote compared to the way DNS zones actually get hijacked in practice (ATO attacks against DNS providers).
If people were serious about this, they'd start by demanding that every DNS provider accept U2F and/or Passkeys, rather than the halfhearted TOTP many of them do right now. But it's not serious; it's just motivated reasoning in defense of DNSSEC, which some people have a weird stake in keeping alive.
You are again ignoring the fact that DNSSEC would have prevented a $152,000 hack. Yes, we are aware organizations are not always serious about security. For those that are though, DNSSEC is a helpful tool.
No, it isn't. It attempts and mostly fails to address one ultra-exotic attack, at absolutely enormous expense, principally because the Internet standards community is so path-dependent they can't take a bad cryptosystem designed in the mid-1990s back to the drawing board. You can't just name call your way to getting this protocol adopted; people have been trying to do that for years, and the net result is that North American adoption fell.
The companies you're deriding as unserious about security in general spend drastically more on security than the companies that have adopted it. No part of your argument holds up.
Citation? A BGP hijack can be done for less than $100.
"You can't just name call your way to getting this protocol adopted"
I do not care if you adopt this protocol. I care that you accurately inform others of the documented risks of not adopting DNSSEC. There are organizations that can tolerate the risk. There are also organizations that are unaware because they are not accurately informed (due to individuals like yourself), and it is not covered by their security audits. That is unfortunate.
> Counterpoint: no it isn't, which is why virtually nobody uses it. Even the attack this thread centers on --- BGP hijacking of targeted DNSSEC servers to spoof CA signatures
Wait, wait, wait. How can you hijack a DNSSEC server? Its keys are enrolled in the TLD, and you can't spoof the TLD server, because its keys in turn are enrolled in the root zone. And the root zone trust anchor is statically configured on all machines.
And at least Let's Encrypt actually verifies DNSSEC before issuing certificates. IIRC it will become mandatory for all CAs soon. DNSSEC for a domain plus restrictive CAA rules should ensure that no reputable CA would issue a rogue cert.
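If you want to watch that chain of trust being checked locally, BIND's `delv` tool validates from the root trust anchor down. A rough sketch, assuming a recent delv build that ships the root key; output abbreviated and the answers are only examples:

```
# A signed zone validates all the way from the root trust anchor.
$ delv example.com A
; fully validated
example.com.  86400  IN  A  ...

# An unsigned zone still resolves, but delv flags that nothing was validated.
$ delv google.com A
; unsigned answer
google.com.  300  IN  A  ...
```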
"Most domains". Yes, it is possible that nobody bothers to DNS hijack your domains. Sadly I've worked for organizations where it did happen, and now they have DNSSEC.
I invite anybody who thinks this is a mic drop to pull down the Tranco research list of most popular/important domains on the Internet --- it's just a text file of zones, one per line --- and write the trivial bash `for` loop to `dig +short ds` each of those zones and count how many have DNSSEC.
For starters you could try `dig +short ds google.com`. It'll give you a flavor of what to expect.
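For anyone who wants to reproduce that count, a minimal sketch of the loop, assuming the Tranco list has already been trimmed to one zone per line in a file named zones.txt (the filename is made up):

```
# Count how many zones publish a DS record in their parent zone, i.e. have a
# DNSSEC delegation. Zones where dig prints nothing are unsigned.
signed=0; total=0
for zone in $(cat zones.txt); do
  total=$((total+1))
  [ -n "$(dig +short DS "$zone")" ] && signed=$((signed+1))
done
echo "$signed of $total zones have a DS record"
```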
And you still can't seem to make your mind up on whether this is because DNSSEC is still in its infancy or if it's because they all somehow already studied DNSSEC and ended up with the exact same opinion as you. I'm gonna go out on a limb and say that it's not the latter.
What do I have to make my mind up about? I worked on the same floor as the TIS Labs people at Network Associates back in the 1990s. They designed DNSSEC and set the service model: offline signers, authenticated denial. We then went through DNSSEC-bis (with the typecode roll that allowed for scalable signing, something that hadn't been worked out as late as the mid-1990s) and DNSSEC-ter (NSEC3, whitelies). From 1994 through 2025 the protocol has never seen double-digit percentage adoption in North America or in the top 1000 zones, and its adoption has declined in recent years.
You're not going to take my word for it, but you could take Geoff Huston's, who recently recorded a whole podcast about this.
That quote is interesting because all of the period reporting I’ve seen says that the attackers did NOT successfully get an HTTPS certificate and the only people affected were those who ignored their browsers’ warnings.
How about another incident in 2022? Attackers BGP hijacked a dependency hosting a JS file, generated a rogue TLS certificate, and stole $2 million. Keep in mind: these are incidents we know about, not including incidents that went undetected.
Noteworthy: "Additionally, some BGP attacks can still fool all of a CA’s vantage points. To reduce the impact of BGP attacks, we need security improvements in the routing infrastructure as well. In the short term, deployed routing technologies like the Resource Public Key Infrastructure (RPKI) could significantly limit the spread of BGP attacks and make them much less likely to be successful. ... In the long run, we need a much more secure underlying routing layer for the Internet."
You know why I'm not coming back at you with links about registrar ATOs? Because they're so common that nobody writes research reports about them. I remember after Laurent Joncheray wrote his paper about off-path TCP hijacking back in 1995; for awhile, you'd have thought the whole Internet was going to fall to off-path TCP hijacking. (It did not.)
The argument against DNSSEC is that if you are trying to find some random A record for a server and want to know if it is the right one, TLS does that fine, provided you reasonably trust that domain control validation works, i.e. that CAs see authentic DNS.
An argument for DNSSEC is any service configured by SRV records. It might be totally legitimate for the srv record of some thing or other to point to an A record in a totally different zone. From a TLS perspective you can't tell, because the delegation happened by SRV records and you only know if that is authentic if you either have a signed record, or a direct encrypted connection to the authoritative server (the TLS connection to evil.service.example would be valid).
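A hypothetical illustration of that delegation problem, with made-up names:

```
# The SRV record for a service under example.com delegates to a host in a
# completely different zone. TLS to that host will be "valid", but nothing in
# the handshake tells you whether this delegation is what example.com's owner
# actually published -- only a signed record (or an encrypted path to the
# authoritative server) can tell you that.
$ dig +short SRV _xmpp-client._tcp.example.com
0 5 5222 chat.some-hosting-provider.example.
```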
Yes but it is still possible to execute BGP hijacks that capture 100% of traffic, rendering multi-perspective validation useless. RPKI sadly only solves naive "accidental" BGP hijacks, not malicious BGP hijacks. That's a different discussion though.
I agree and apparently so does the CA/B forum: SC085: Require DNSSEC for CAA and DCV Lookups is currently in intellectual property review.
DCV is CA/B speak for domain-control validation; CAA = these are my approved CAs.
This seems to be optional in the sense that: if a DNS zone has DNSSEC, then validation must succeed. But if DNSSEC is not configured it is not required.
If the threat model is BGP hijacking, is DNSSEC actually the answer? If you can hijack a leaf server, can't you hijack a root server? As far as I can tell, the root of DNSSEC trust starts with DNSKEY records for the "." zone that are rotated quarterly. This means every DNSSEC-validating resolver has to fetch updates to those records periodically, and if I can hijack even one route to one of the fixed anycast IPs of [a-m].root-servers.net then I can start poisoning the entire DNSSEC trust hierarchy for some clients, no?
Now, this kind of attack would likely be more visible and thus detected sooner than one targeted at a specific site, and just because there is a threat like this doesn't mean other threats should be ignored or downplayed. But it seems to me that BGP security needs to be tackled in its own right, and DNSSEC would just be a leaky band-aid for that much bigger issue.
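For reference, the trust anchor in question is the root KSK that validating resolvers are statically configured with (key tag 20326 at the time of writing); what they periodically refresh is the root DNSKEY RRset, roughly like this (output trimmed and approximate):

```
# The root zone's DNSKEY RRset, with signatures. A validating resolver checks
# the KSK (flags 257) against its statically configured trust anchor; the ZSK
# (flags 256) is the one that rotates regularly.
$ dig . DNSKEY +dnssec +multi @a.root-servers.net
.  172800 IN DNSKEY 256 3 8 ( AwEAAb... )  ; ZSK
.  172800 IN DNSKEY 257 3 8 ( AwEAAa... )  ; KSK; key id = 20326
.  172800 IN RRSIG DNSKEY 8 0 172800 ( ... )
```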
The much-more-important problem is that the most important zones on the Internet are, in the DNSSEC PKI, irrevocably coupled to zones owned by governments that actively manipulate the DNS to achieve policy ends.
This is a problem with TLS too, of course, because of domain validation. But the WebPKI responded to that problem with Certificate Transparency, so you can detect and revoke misissued certificates. Not only does nothing like that exist for the DNS, nothing like it will exist for the DNS. How I know that is, Google and Mozilla had to stretch the bounds of antitrust law to force CAs to publish on CT logs. No such market force exists to force DNS TLD operators to do anything.
My (ahem) "advocacy" against DNSSEC is understandably annoying, but it really is the case that every corner of this system you look around, there's some new goblin. It's just bad.
I agree that the problem lies with BGP and we definitely need a solution. You can also say the problem is with TLS CA verification not being built on a solid foundation. Even with that said, solving those problems will take time, and DNSSEC is a valid precaution for today.
Not saying they are malicious actors, but an easy answer would be any public WiFi anywhere. They all intercept DNS; less than 1% intercept SNI.
It is also public knowledge that certain ISPs (including Xfinity) sniff and log all DNS queries, even to other DNS servers. TLS SNI is less common, although it may be more widespread now, I haven't kept up with the times.
Popular web browsers send SNI by default regardless of whether it is actually needed. For example, HTTPS-enabled websites not hosted at a CDN may have no need for SNI. But popular web browsers will send it anyway.
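One way to check whether a particular site actually needs SNI is to connect with and without it and compare the certificate you get back; a sketch, assuming an OpenSSL recent enough to have -noservername:

```
# Connect with SNI set explicitly and print the certificate subject.
$ openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject

# Connect with SNI disabled. A site on a shared CDN IP will typically hand
# back a default or mismatched certificate here; a single-tenant host often
# returns the same certificate as above, i.e. it never needed SNI.
$ openssl s_client -connect example.com:443 -noservername </dev/null 2>/dev/null \
    | openssl x509 -noout -subject
```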
Every single ISP in the world. It was a well-documented, abused channel.
They not only intercepted your traffic for profiling but also injected redirects to their branded search. Honestly curious if you're just too young or were one of the maybe 10 people who never experienced this.
Sending traffic to a third party like Quad9 is much safer than to a company that has your name/address/credit card.
We very definitely do have IPv6. I'm using IPv6 right now. Last numbers I saw, over 50% of North American hits to Google were IPv6. DNSSEC adoption in North America is below 4%, and that's by counting zones, most of which don't matter --- the number gets much lower if you filter it down to the top 1000 zones.
One can hope that someone will give the ISPs in my country a metaphorical hefty kick up the arse, especially as some of the more niche ones have been happily providing IPv6, and business customers can get IPv6, and of course other countries are happily embracing IPv6. So I wouldn't say never.
But the clear evidence is that past promises of it arriving at those major ISPs are very hollow indeed.
It's not the same with DNSSEC in the U.K., though. Many WWW hosting services (claim to) support that right now. And if anything, rather than there being years-old ineffective petition sites clamouring for IPv6 to be turned on, it is, even in 2025, the received wisdom to look to turning DNSSEC off in order to fix problems.
One has to roll one's eyes at how many times the-corporation-disables-the-thread-where-customers-repeatedly-ask-for-simple-modern-stuff-for-10-years is the answer. It was the answer for Google Chrome not getting SRV lookup support. Although that was a mere 5 years.
As a joke, it’s not easily distinguishable from trolling and since IPv6 is approaching half of all traffic, more in many areas, the humor value is limited.
> […] since IPv6 is approaching half of all traffic, more in many areas, the humor value is limited.
And yet every article on IPv6 has entire brigades of folks going on about how IPv6 is DOA, and "I've been hearing about it for thirty years, where is it?", and "they should have done IPv4 just with larger addresses, adoption would have been much faster and more compatible".
I'm simply pointing out the parallel argument that was made between DNSSEC and IPv6.
Having also been online for thirty years, I’ve seen those jokes soooo many times but only one of them is funny. DNSSEC’s 0.3% usage[1] is within a rounding error of zero but IPv6 is close to half of all internet traffic in many countries. It’s not funny so much as not updating your priors for decades, like joking about how Windows crashes constantly or saying Python is a hobbyist language.
It's a parallel argument, but it's not a good one, because IPv6 is now around ~50% of traffic depending on the service and details, and DNSSEC was introduced earlier and doesn't seem to be going anywhere.
IPv6 probably could have been done better and rolled out faster, and whoever works on IPvNEXT should study what went wrong, but eventually it became better than alternative ways of dealing with a lack of IPv4 addresses, and it started getting real deployment.
If you want the net to support end to end connectivity we need IPv6. Otherwise you'll end up with layers and layers of NAT and it will become borderline impossible.
A lot of protocols get unstable behind layers of NAT too, even if they're not trying to do end to end / P2P. It adds all kinds of unpredictable timeouts and other nonsense.
Unfortunately the basic thought process is "it is a security feature, therefore it must be enabled". It is very hard to argue against that, but it is pretty similar to "we must have the latest version of everything as that is secure" and "we should add more passwords with more requirements and expire them often". It's one of those security theatres that optimizes for reducing accountability, but may end up with almost no security gains and huge tradeoffs that may even end up compromising security through secondary effects (how secure is a system that is down?).
Note that without DNS security, whoever controls your DNS server, or is reliably in the path to your DNS server, can issue certificates for your domain. The only countermeasure against this is certificate transparency, which lets you yell loudly that someone's impersonating you but doesn't stop them from actually doing it.
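For the detection half of that, the public CT logs are queryable; crt.sh, for example, exposes a simple search API. A sketch, assuming jq is installed and using a placeholder domain:

```
# List certificates that CT logs have seen for a domain. An issuer or a
# timestamp you don't recognize is the signal that someone may have obtained
# a certificate for your zone behind your back.
$ curl -s "https://crt.sh/?q=example.com&output=json" \
    | jq -r '.[] | "\(.not_before)  \(.issuer_name)  \(.common_name)"' \
    | sort -u | tail -n 20
```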
In this case, there's an avalanche of money and resources backing up the problem domain DNSSEC attempts to make contributions in, and the fact that it's deployed in practically 0% of organizations with large security teams is telling.
I would say it is more a testament to the unfortunate state of cybersecurity. These "theoretical" attacks happen. Everyone just thinks it won't be them.
My rebuttal is that the DNSSEC root keys could hit Pastebin tonight and in almost every organization in the world nobody would need to be paged. That's not hyperbole.
You are mostly right, but I would hope that certain core security companies and organizations would get paged. Root CAs and domain registrars and such should have DNSSEC validation.
Unfortunately, DNSSEC is a bit expensive in terms of support burden, additional bugs, reduced performance, etc. It will take someone like Apple turning DNSSEC validation on by default to shake out all the problems. Or it will take an exploitable vulnerability akin to SIM-swapping to maybe convince Let's Encrypt and similar services reliant on proof-by-DNS that they must require DNSSEC signing.
SIM-swapping is a much more important attack vector than on-path/off-path traffic interception, and is closer to how DNS hijacking happens in practice (by account takeover at registrars).
If that happened, we'd revert to pre-DNSSEC security levels: an attack would still be hard to pull off (unless you own a root DNS server or are reliably in the path to one). It's like knowing the private key for news.ycombinator.com - it still doesn't do anything unless I can impersonate the Hacker News server. But that was still enough of a risk to justify TLS on the web. Mostly because ISPs were doing it to inject ads.
If the problem is the path between registrar and CA, then deploying the fix to clients seems like absolute overkill.
Just create a secure path from CA to registrar. RDAP-based or DoH-based, or something from scratch, does not really matter. It will only need to cover a few thousand CAs and TLDs, so it will be vastly simpler than upgrading billions of internet devices.
One could argue the primary (not the only) risk addressed by DNSSEC is third party DNS service, i.e., shared caches accessible from the internet
If this is true, then one might assume DNSSEC is generally unnecessary if one is running their own unshared cache only accessible from the loopback or the LAN
Software like djb's dnscache, a personal favourite, has no support for DNSSEC
NLNet's unbound places a strong emphasis on supporting DNSSEC. The unbound documentation authors recommend using it
Dan Kaminsky showed us why we need DNSSEC. Without it, it's quite easy to MITM and/or spoof network traffic. Some governments like to do this, so they'll continue to make it difficult for DNSSEC to be fully adopted.
The original registrar, Network Solutions, doesn't even fully support DNSSEC. You can only get it if you pay them an extra $5/mo and let them serve your DNS records for you. So for $5/mo you get DNSSEC, but you defer control of your records to them, which isn't really secure.
It's trivial to spoof DNS even with DNSSEC set up, because DNSSEC is a server-to-server protocol. Your browser doesn't speak DNSSEC; it speaks plaintext DNS, and trusts a single bit in the response header that says whether the upstream caching resolver actually checked signatures.
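That single bit is the AD ("authentic data") flag in the DNS header, visible with dig against a validating resolver; a sketch with trimmed, approximate output (1.1.1.1 validates DNSSEC, and of course the flag is only as trustworthy as the path to the resolver, which is exactly the point above):

```
# Query a validating resolver for a signed name. The "ad" bit in the flags
# line is the only thing a stub resolver or browser gets to see; it never
# checks the RRSIGs itself.
$ dig @1.1.1.1 example.com A +dnssec
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
...
```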
Optional, alternative standards don't have visibility and don't get used.
Without a way to measure, nothing happens. There were once a few UX-hostile DNSSEC & DANE browser extensions, but these never worked well and were discontinued.
I’ve honestly never known which sites use DNSSEC and which don’t. Browsers don’t warn you when it’s missing, and most people probably wouldn’t even know where to look.
It’s hard to care about something like that, even if it really does matter behind the scenes.
This would theoretically be possible if browsers did DANE and didn't, because of middlebox fuckery, have to have a fallback path to the X.509 WebPKI because DNSSEC requests get dropped like 5% of the time. But because that is the case, no browser does DANE validation today, and when they did, many years ago, those DANE CA certs were effectively yet another CA; they actually expanded your attack surface rather than constricting it.
Even if that wasn't the case --- and it emphatically is --- you'd still be contending with a "personal CA" that in most cases would have its root of trust in a PKI operated by world governments, most of which have a demonstrated aptitude for manipulating the DNS.
We don't technically need ICANN or the whole DNS system anymore.
Anyone could quickly build a public cryptographically secure blockchain-based DNS system where people could optionally sync and query their own nodes (without even going over the wire). People could buy and own their domain names on-chain using cryptocurrency instead of repeatedly renting them from some centralized entity.
You could easily build this today by creating a Chrome Extension with a custom URL/address bar which bypasses the main one and makes a call to blockchain nodes instead of a DNS resolver; it would convert a domain name into an IP address by looking up the blockchain. This system could scale without limit in terms of reads as you can just spin up more nodes.
I mean it'd be so easy it's basically a weekend project if you use an existing blockchain as the base. Actually Ethereum already did something like this with .ETH domains but I don't think anyone built a Chrome Extension yet; or at least I haven't heard, though it's possible to enable in Brave browser via settings (kind of hidden away). Also, there is Unstoppable Domains.
People have been doing that since roughly 2010, so the failures are important to learn why it’s not a weekend project.
Adoption is critical for alternate roots, so the first question has to be how something gets enough users for anyone to feel it’s worth the trouble of using: the failure mode of DNS is that links break, email bounces, and tons of things which do server-side validation reject it, so this really limits usage.
The other big problem is abuse. Names are long term investments, so there are the usual blockchain problems of treating security as an afterthought but you also have the problem that third-parties have a valid need to override the blockchain (e.g. someone registers Disney.bit and points it at a porn site or serves malware from GoogleChrome.eth). Solving that means that you’re back to trusting the entity which created the system or maybe a group of operators, so the primary appeal is going to be if you can make it cheaper than owning a traditional domain.
Parts of the inevitable Thomas Ptacek DNSSEC rant remind me of the years of denialism from C++ people before the period when they were "concerned" about safety and the past few years of at least paying lip service to the idea that C++ shouldn't be awful...
One thing I like about Thomas’ history on this issue has been the focus on UX. I think that “can probably be used safely by an expert who understands the domain” as a failure mode is something we should spend more time thinking about as an architecture failure rather than a minor frictional cost.
Sure, although in this space Thomas was already entirely happy with the early Web PKI which is completely terrible for this - similar conditions apply.
At work this week I was hand-holding a DB engineer who was installing some (corporate, not Web PKI) certs and it reminded me of those bad old days. He's got a "certificate" and, because this isn't my first rodeo, I ask him to describe it in more detail before he just sends it to me to look at. Of course it's actually a PKCS#12 file, and so if he'd sent it, the key inside it would have been compromised - but he doesn't know that; the whole system was introduced to him as a black box, which renders him unable to make good decisions. Out of that conversation we got more secure systems (fixing some issues that pre-existed the fault I was there to help with) and an ally who understands what the technology is actually for and is now trying to help us deliver its benefits rather than just following rote instructions they don't understand.
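A quick way to tell whether a "certificate" someone hands you is really a PKCS#12 bundle with the private key inside; a sketch with a made-up filename:

```
# Summarize the contents of a PKCS#12 bundle without writing keys or certs
# to disk. If this prompts for an import password and lists a key bag
# alongside certificate bags, the file should never be emailed around.
$ openssl pkcs12 -info -in mystery-cert.p12 -noout
```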
Anyway, just as we went from "I dunno, I clicked a button and typed in the company credit card details, then I got emailed this PFX file to put on the server" to "Click yes when it asks if you want Let's Encrypt" with both an invisible improvement to delivered security and a simpler workflow, that's very much possible for DNSSEC if people who want to solve the problem put some work in, rather than contentedly announcing that it can't be fixed and we shouldn't worry about it.
I don’t know that I’ve read much support for “entirely happy” but the key difference is that DNSSEC is much harder to upgrade. You didn’t need to wait for my ISP to upgrade their DNS server before you could stop using PKCS12, and you definitely didn’t need me to upgrade my operating system.
The most important work to put in isn’t tweaking DNSSEC but changing it from the pre-PC Internet model where everyone completely trusted their network operators – pushing signature validation out to the client and changing the operating system APIs to make better user interfaces possible.
> possible for DNSSEC if people who want to solve the problem
But for several decades, DNSSEC proponents have been complaining about the vocal detractors, rather than actually addressing the problems that have been identified. That's a pretty significant track record on which to make a fair judgment about who is making the strongest case.
Someone above offered a link [1] that gives some pretty good reasons why nobody is stepping up to fix the problems.
(This is Geoff Huston, for what it's worth, who is an Internet operations luminary of the old breed, and a very long-time very enthusiastic DNSSEC proponent. He doesn't really "call time on" DNSSEC, though; the APNIC Ping podcast he did on this is worth listening to.)
We don't. If we did, we'd have it by now. It's been over 25 years of making appeals like this.
It's a fun site! I'm not entirely sure why the protagonist is a green taco, but I can see why a DNS provider would make a cartoon protocol explainer. It's just that this particular protocol is not as important as the name makes it sound.
It is important. This is unfortunate rhetoric that is harming the safety of the internet.
"For instance, in April 2018, a Russian provider announced a number of IP prefixes (groups of IP addresses) that actually belong to Route53 Amazon DNS servers."
By BGP hijacking Route53, attackers were not only able to redirect a website to different IPs, globally, but also generate SSL certificates for that website. They used this to steal $152,000 in cryptocurrency. (I know I know, "crypto", but this can happen to any site: banking, medical, infrastructure)
Also, before you say, RPKI doesn't solve this either, although a step in the right direction. DNSSEC is a step in the right direction as well.
[1] https://www.cloudflare.com/learning/security/glossary/bgp-hi...
DNSSec has caused so many outages at this point it's a joke.
You have to be so insanely careful and plan everything to the nth degree otherwise you break everything: https://internetnz.nz/news-and-articles/dnssec-chain-validat...
The idea is important. What it aims to protect is important. The current implementation is horrible, far too complex and fraught with so many landminds that no one wants to touch it.
If Geoff Huston is suggesting it might be time to stick a fork in DNSSec because it's done, then IMHO it's well cooked. https://blog.apnic.net/2024/05/28/calling-time-on-dnssec/
A common reason, if not the vast majority of cases, is that people mix up which key they publish and which key they are actually using. I don't doubt there are a lot of things they could do to improve the protocol, but this very common problem is fairly difficult to solve on a protocol level.
I remember back in the days when people discouraged people from using encrypted disks because of the situation that could happen if the user lost their passwords. No disk encryption algorithm can solve the issue if the user does not have the correct password, and so the recommendation was to not use it. Nowadays people usually have TPMs or key management software to manage keys, so people can forget the password and still access their encrypted disks.
DNSSEC software is still not really that developed that they automatically include basic tests and verification tools to make sure people don't simply mix up keys. They assume that people write those themselves. Too many times this happens after incidents rather than before (heard this in so many war stories). It also doesn't help that dns is full of caching and caching invalidation. A lot of the insane step-by-step plans comes from working around TTL's, lack of verification, basic tooling, and that much of the work is done manually.
> No disk encryption algorithm can solve the issue if the user does not have the correct password, and so the recommendation was to not use it.
This problem is accurate, but it's the framing that makes it wrong.
No disk encryption algorithm can simultaneously protect and not protect something encrypted, what you're missing is the protocol/practices around that, and those are far less limited.
There is heaps of encryption around these days, there are people losing access to their regular keys, and yet procedures that recover access to their data while not entirely removing the utility of having encrypted data/disks.
A TPM is absolutely not a reliable way to store your key. Think about how often you get asked for a BitLocker recovery code, and imagine if every time that happened, you lost all your data.
> You have to be so insanely careful and plan everything to the nth degree
If you're on the root/TLD end of things I agree. Certainly not if you're a domain owner or name server administrator on the customer end of things.
Yeah I'm by no means saying the implementation is good. RPKI is a joke as well in my opinion. But it's all we have right now.
I am saying it is dishonest to discount the real security threat of not having DNSSEC.
Right OK, I fully agree with you then.
What parts do you agree about? Someone making an argument that we should return to the drawing board and come up with a new protocol, one that doesn't make the "offline signers and authenticated denial" tradeoffs DNSSEC makes, would probably be saying something everybody here agrees with --- though I still don't think it would be one of the 5 most important security things to work on.
But the person you're replying to believes we should hasten deployment of DNSSEC, the protocol we have now.
I would love to go to back to the drawing board and solve the security pitfalls in BGP & DNS. I wish the organizations and committees involved did a better job back then.
Sadly, we live in this reality for now, so we do what we can with what we have. We have DNSSEC.
You understand that it is a little difficult for people to take seriously a claim that you're interested in going back to the drawing board while at the same time very stridently arguing that hundreds of millions of dollars of work should go in to getting a 1994 protocol design from 4% deployment to 40% deployment. The time to return to the drawing board is now.
I don't read that reply as them saying we should hasten deployment of DNSSEC. If that was the intention of the comment then no, I don't agree with that aspect of it.
I saying say I agree with the statement "I am saying it is dishonest to discount the real security threat of not having DNSSEC."
I believe we do need some way to secure/harden DNS against attacks, we can't pretend that DNS as it stands is OK. DNSSEC is trying to solve a real problem - I do think we need to go back to the drawing board on how we solve it though.
They definitely believe we should hasten deployment of DNSSEC --- read across the thread. For instance: Slack was taken down for a half a day owing to a deployment of DNSSEC that a government contract obligated them to undertake, and that commenter celebrated the contract.
It's fine that we all agree on some things and disagree on others! I don't think DNS security is a priority issue, but I'm fine with it conceptually. My opposition is to the DNSSEC protocol itself, which is a dangerous relic of premodern cryptography designed at a government-funded lab in the 1990s. The other commenter on this thread disagrees with that assessment.
slightly later
(My point here is just clarity about what we do and don't agree about. "Resolving" this conflict is pointless --- we're not making the calls, the market is. But from an intellectual perspective, understanding our distinctive positions on Internet security, even if that means recognizing intractable disputes, is more useful than just pretending we agree.)
>DNSSec has caused so many outages at this point it's a joke.
So has failing to renew TLS certificates. So what?
Unrenewed certs have not caused the kind of extended mass outages DNSSec has, it's not close.
If Let's Encrypt goes down, half of the Internet will stop working. Doubly so when we switch to certs that are valid for just a couple of days.
That is... not how certificates work. The part of LetsEncrypt that has to work continuously ships as part of your browser root store.
The proposal is to make LE certs 9 days long or something. Which means if LE is down for even a short time thousands and millions of certs will expire.
The eventual plan is to limit certs to 48 hours (AFAIR), right now they're already allowing 6-day certs: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is... In this scenario, if Let's Encrypt goes down for just a couple of days, a lot of certs will expire.
There are also operational risks, as Let's Encrypt has to have their secret key material in close proximity to web-facing services. Of course, they use HSMs, but it might not be enough of a barrier for nation-state level attackers.
The offline signing feature of DNSSEC allows the root zone and, possibly, the TLDs to be signed fully offline.
That's why in my ideal world I want to keep DNSSEC as-is for the root zone and the TLD delegation records, but use something like DoH/DoT for the second-level domains. The privacy impact of TLD resolution is pretty much none, and everything else can be protected fully.
That is not why DNSSEC has offline signers. DNSSEC has offline signers because when the protocol was designed, its authors didn't believe computers would be able to keep up with the signing work. Starting sometime in the middle of the oughts, people started to retcon security rationales onto it, but that's not the purpose of the design.
I'm sorry, this is just such an incredibly fine-tuned threat model for me to take it seriously.
You start with a BGP hijack, which lets you impersonate anybody, but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about. You then use that specific control to get a CA to forge a certificate for you (and if the CA is capable of using any information to detect that this might be a forgery, the attack breaks).
And of course, the proposed solution doesn't do anything to protect against other kinds of DNS hijacking--impersonating somebody to the nameserver and getting the account switched over to them.
> I'm sorry, this is just such an incredibly fine-tuned threat model for me to take it seriously.
You claim it is fine-tuned, but it has happened in the real world. It is actually even better for attackers that it is "obscure", because that means it is harder to detect.
> but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about.
Yes, all layers of the stack need to be secure. I am not making assumptions about the other layers - this thread is about DNS.
> if the CA is capable of using any information to detect that this might be a forgery
They are not. The only mitigation is "multi-perspective validation", which only addresses a subset of this attack.
> And of course, the proposed solution doesn't do anything to protect against other kinds of DNS hijacking
Yes, because other kinds of DNS hijacking are solved by HTTPS TLS. If TLS and CAs are broken, nothing is secure.
> You claim it is fine-tuned, but it has happened in the real world.
Sure, but it seems like his comment is still responsive; if DNSSEC is deployed, they perform a BGP hijack & can impersonate everyone, and they just impersonate the server after the DNS step?
If that's the threat model you want to mitigate, it seems like DNSSEC won't address it.
> and they just impersonate the server after the DNS step?
Yes, there are different mitigations to prevent BGP hijacking the webserver itself. Preventing a rogue TLS certificate from being issued is the most important factor. CAA DNS records can help a bit with this. DNS itself however is easiest solved by DNSSEC.
There are a lot of mitigations to prevent BGP hijacks that I won't get too much into. None are 100%, but they are good enough to ensure multi-perspective validation refuses to issue a TLS certificate. The problem is that if those same mitigations are not deployed on your DNS servers (or you outsource DNS and they have not deployed these mitigations) it is a weak link.
I don't see you responding to the question. You're fixating on protections for DNS servers, because that is the only circumstance in which DNSSEC could matter for these threat actors, not because they can't target the address space of the TLS servers themselves (they can), but because if you concede that they can do this, DNSSEC doesn't do anything anymore; attackers will just leave DNS records intact, and intercept the "authentic" server IPs.
So far your response to this has been "attackers can't do this to Cloudflare". I mean, stipulated? Good note? Now, can you draw the rest of the owl?
I am focusing on DNS because this thread is about DNSSEC. The topic of doing it in to the TLS servers themselves is a tangent not relevant to this thread.
No, I'm sorry, that's not the case. You're focusing on DNS servers as the target for BGP4 attacks because if you didn't, you wouldn't have a rebuttal for the very obvious question of "why wouldn't BGP4 attackers just use BGP4 to intercept legitimate ALPN challenges".
The thread is right here for everyone to read.
Yes, DNSSEC is designed to prevent DNS MITM via integrity. BGP hijacks lead to MITM. I am not sure where the confusion is.
The weird thing you're doing where you pretend attackers won't just target ALPN challenges with BGP4?
> You start with a BGP hijack, which lets you impersonate anybody, but assume that the hijacker is only so powerful as being able to impersonate a specific DNS server and not the server that the DNS server tells you about.
An attacker impersonating a DNS server still won't be able to forge the DNSSEC signatures.
No, they can't. Why would they bother? They'll just impersonate the IP the CA uses for the ALPN challenge.
Well, this won't work with DNSSEC. It's a good argument for it.
An attack against BGP where the attacker takes over traffic for an IP address isn't at all prevented by DNSSEC.
The sequence there is:
1. I hijack traffic destined for an IP address
2. Anything whose DNS resolves to that IP, regardless of whether or not they use DNSSEC, starts coming to me
In this model, I don't bother trying to hijack the IP of a DNS server: that's a pain because with multi-perspective validation, I plausibly have to hijack a bunch of different IPs in a bunch of different spots. So instead I just hijack the IP of the service I want to get a malicious cert for, and serve up responses to let me pass the ALPN ACME challenge.
Sure. But you won't have a TLS certificate for that address, if the host uses a DNS-based ACME challenge and prohibits the plain HTTP challenge: https://letsencrypt.org/docs/caa/
So DNSSEC still offers protection here.
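For concreteness, a minimal sketch of the CAA policy being described, using the Let's Encrypt CAA extensions documented at the link above (example.com is a placeholder; the `validationmethods` parameter is Let's Encrypt-specific):

```
# Hypothetical CAA record: allow only one CA to issue, and only via the DNS-01
# challenge, so a hijacker answering HTTP/ALPN challenges from an intercepted
# IP gets refused.
#
#   example.com.  3600 IN CAA 0 issue "letsencrypt.org; validationmethods=dns-01"
#
# Inspect what a zone currently publishes:
dig +short CAA example.com
```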
Ok, so deploying DNSSEC would specifically solve the threat model of an attacker who can perform a BGP hijack of IP addresses, but doesn’t want to hijack multiple DNS server IPs because that’s more work, for a domain that has CAA records and disallows validation by ALPN.
That feels like a pretty narrow gain to justify strapping all this to all my zones and eating the operational cost, plus the risk that if I mess it up, my site stops existing for a while.
There are two things mixed up. "We need secure DNS" != "we need DNSSEC".
There is a huge demand for securing DNS-related things, but DNSSEC seems to be a poor answer. DoH is a somewhat better answer, whatever shortcomings it may have, and it's widely deployed.
I suspect that a contraption that would wrap the existing DNS protocol into TLS in a way that would be trivial to put in front of an existing DNS server and an existing DNS client (like TLS was trivial to put in front of an HTTP server), might be a runaway success. A solution that wins is a solution which is damn easy to deploy, and not easy to screw up. DNSSEC is not it, alas.
DoH does not solve anything that DNSSEC solves. They have almost no overlap.
Yes. But DoH was built in a way which is reasonably easy to adopt, and offers obvious benefits, hence it was adopted. DNSSEC lacks this quality, and I think this quality is essential.
That's a pretty damning indictment of DNSSEC.
But TLS relies on having a domain. If the domain infrastructure in turn depends on TLS, you have a chicken-and-egg problem.
TLS internally does not depend on a domain in the DNS sense; it basically certifies a chain of signatures bound to a name. That chain can be verified starting from the trusted roots.
The problem is more in the fact that TLS assumes creation of a long-living connection with an ephemeral key pair, while DNS is usually a one-shot interaction.
Encrypting DNS would require caching of such key pairs for some time, and refreshing them regularly but not too often. Same for querying and verifying certificates.
DoH is like VPNs, used by paranoids and criminals
> It is important. This is unfortunate rhetoric that is harming the safety of the internet.
DNSSEC was built for exactly one use case: we have to put root/TLD authoritative servers in non-Western countries. It is simply a method for attesting that a mirror of a DNS server is serving what the zone author intended.
What people actually want and need is transport security. DNSCrypt solved this problem, but people were bamboozled by DNSSEC. Later people realized what they wanted was transport security and DoH and friends came into fashion.
DNSSEC is about authentication & integrity. DNSCrypt/DoH are about privacy. They solve completely different problems and have nothing to do with one another.
There is also no reason we can't have both.
If you have secure channels from recursers all the way back to authority servers (you don't, but you could) then in fact DoH-like protocols do address most of the problems --- which I contend are pretty marginal, but whatever --- that DNSSEC solves.
Yessir, that isn't possible yet but it would absolutely be a solution! I'd love to see it.
What's more, it's a software-only infrastructure upgrade: it wouldn't, in the simplest base case, require zone owners to reconfigure their zones, the way DNSSEC does. It doesn't require policy decisionmaking. DNS infrastructure operators could just enable it, and it would work --- unlike DNSSEC.
(Actually getting it to work reliably without downgrade attacks would be more work, but notably, that's work DNSSEC would have had to do too --- precisely the work that caused DANE-stapling to founder in tls-wg.)
I'd love to see DoH/DoT that uses a stapled DNSSEC-authenticated reply containing the DANE entry.
There's still a chicken-and-egg problem with getting a valid TLS certificate for the DNS server, and limiting DNSSEC just for that role might be a valid approach. Just forget that it exists for all other entry types.
Stapling is dead: nobody could agree on a threat model, and they ultimately ended up at an HPKP-style cached "this endpoint must staple DANE" model that TLS people rejected (reasonably).
But if you have DoH chaining all the way from the recurser to the authority, it's tricky to say what stapled DANE signatures are even buying you. The first consumers of that system would be the CAs themselves.
That's why I want to use it only for DoH/DoT queries. Which will be used by the CAs to issue the WebPKI certs.
This way, it can be used to guarantee the integrity of the resolution path without first getting a certificate from a CA.
I'm just saying, at that point, you're using a custom TLS protocol, because the real TLS protocol doesn't have a DANE-stapling extension.
BGP attacks change the semantic meaning of IP addresses themselves. DNSSEC operates at a level above that. The one place this matters in a post-HTTPS-everywhere world is at the CAs, which are now all moving to multi-perspective validation.
As you should be aware, multi-perspective validation does not solve anything if your BGP hijack is accepted globally: you will receive 100% of the traffic.
DNSSEC greatly assists with this issue: it would have prevented the cited incident.
A BGP attacker doesn't need to alter the DNS to intercept traffic; they're already intercepting targeted traffic at IP selectivity.
DNS can be used for non-IP lookups. TXT records and whatnot.
There are 2 ways to pull off this attack:
1. Hijack the HTTP/HTTPS server. For some IP ranges this is completely infeasible; hijacking a Cloudflare HTTP/HTTPS range, for example, would be close to impossible for technical reasons I won't go through here.
2. Hijack the DNS server. Because there's complete apathy towards DNS server security (as you are demonstrating), this attack is very frequently overlooked, which is exactly why in the cited incident attackers were able to hijack Amazon Route53 with ease. *DNSSEC solves this.*
If either 1 or 2 works, you have yourself a successful hijack of the site. Both need to be secure to prevent this.
In summation, you propose a forklift upgrade of the DNS requiring hundreds of millions of dollars of effort from operators around the world, introducing a system that routinely takes some of the most sophisticated platforms off the Internet entirely when its brittle configuration breaks, to address the problem of someone pulling off a global hijack of all the Route53 addresses.
At this point, you might as well just have the CABForum come up with a new blessed verification method based on RDAP. That might actually happen, unlike DNSSEC, which will not. DNSSEC has lost signed zones in North America over some recent intervals.
I do like that the threat model you propose is coherent only for sites behind Cloudflare, though.
"I do like that the threat model you propose is coherent only for sites behind Cloudflare, though."
The threat model I proposed is coherent for Cloudflare because they have done a lot of engineering to make it almost impossible to globally BGP hijack their IPs, which is what makes multi-perspective validation actually help. Yes, other ISPs are much more vulnerable than Cloudflare; is there a point?
You are not saying DNSSEC doesn't serve a real purpose. You are saying it is annoying to implement and not widely deployed as such. That alone makes me believe your argument is a bit dishonest and I will abstain from additional discussion.
No, I'm saying it doesn't serve a real purpose. I've spent 30 years doing security work professionally and one of the basic things I've come to understand is that security is at bottom an economic problem. The job of the defender is to asymmetrically raise costs for attackers. Look at how DNS zones and certificates are hijacked today. You are proposing to drastically raise defender costs in a way that doesn't significantly alter attacker costs, because they aren't in the main using the exotic attack you're fixated on.
If we really wanted to address this particular attack vector in a decisive way, we'd move away, at the CA level, from relying on the DNS protocol browsers use to look up hostnames altogether, and replace it with direct attestation from registrars, which could be made _arbitrarily_ secure without the weird gesticulations DNSSEC makes to simultaneously serve mass lookups from browsers and this CA use case.
But this isn't about real threat models. It's about a tiny minority of technologists having a parasocial relationship with an obsolete protocol.
It does certainly make it easier. Sure, we can survive without it, but cryptographic signing of dns records is useful for a number of things.
Counterpoint: no it isn't, which is why virtually nobody uses it. Even the attack this thread centers on --- BGP hijacking of targeted DNSSEC servers to spoof CA signatures --- is a rounding error sidenote compared to the way DNS zones actually get hijacked in practice (ATO attacks against DNS providers).
If people were serious about this, they'd start by demanding that every DNS provider accept U2F and/or Passkeys, rather than the halfhearted TOTP many of them do right now. But it's not serious; it's just motivated reasoning in defense of DNSSEC, which some people have a weird stake in keeping alive.
You are again ignoring the fact that DNSSEC would have prevented a $152,000 hack. Yes, we are aware organizations are not always serious about security. For those that are though, DNSSEC is a helpful tool.
No, it isn't. It attempts and mostly fails to address one ultra-exotic attack, at absolutely enormous expense, principally because the Internet standards community is so path-dependent they can't take a bad cryptosystem designed in the mid-1990s back to the drawing board. You can't just name call your way to getting this protocol adopted; people have been trying to do that for years, and the net result is that North American adoption fell.
The companies you're deriding as unserious about security in general spend drastically more on security than the companies that have adopted it. No part of your argument holds up.
"at absolutely enormous expense"
Citation? A BGP hijack can be done for less than $100.
"You can't just name call your way to getting this protocol adopted"
I do not care if you adopt this protocol. I care that you accurately inform others of the documented risks of not adopting DNSSEC. There are organizations that can tolerate the risk. There are also organizations that are unaware because they are not accurately informed (due to individuals like yourself), and it is not covered by their security audits. That is unfortunate.
The cost I'm talking about is the defender's.
> Counterpoint: no it isn't, which is why virtually nobody uses it. Even the attack this thread centers on --- BGP hijacking of targeted DNSSEC servers to spoof CA signatures
Wait, wait, wait. How can you hijack a DNSSEC server? Its keys are enrolled in the TLD, and you can't spoof the TLD server, because its keys in turn are enrolled in the root zone. And the root zone trust anchor is statically configured on all machines.
And at least Let's Encrypt actually verifies DNSSEC before issuing certificates. IIRC it will become mandatory for all CAs soon. DNSSEC for a domain plus restrictive CAA rules should ensure that no reputable CA would issue a rogue cert.
It absolutely will not. Most domains aren't hijacked by spoofing the DNS to begin with.
"Most domains". Yes, it is possible that nobody bothers to DNS hijack your domains. Sadly I've worked for organizations where it did happen, and now they have DNSSEC.
I invite anybody who thinks this is a mic drop to pull down the Tranco research list of most popular/important domains on the Internet --- it's just a text file of zones, one per line --- and write the trivial bash `for` loop to `dig +short ds` each of those zones and count how many have DNSSEC.
For starters you could try `dig +short ds google.com`. It'll give you a flavor of what to expect.
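Something like the loop being described, as a rough sketch (the filename `top-1k.txt` is a placeholder for wherever you saved the Tranco zones, one per line):

```
# Count how many zones publish a DS record in their parent, i.e. have DNSSEC
# enabled, by checking each zone from the downloaded list.
signed=0; total=0
while read -r zone; do
  total=$((total + 1))
  [ -n "$(dig +short DS "$zone")" ] && signed=$((signed + 1))
done < top-1k.txt
echo "$signed of $total zones publish a DS record"
```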
And you still can't seem to make your mind up on whether this is because DNSSEC is still in its infancy or if it's because they all somehow already studied DNSSEC and ended up with the exact same opinion as you. I'm gonna go out on a limb and say that it's not the latter.
What do I have to make my mind up about? I worked on the same floor as the TIS Labs people at Network Associates back in the 1990s. They designed DNSSEC and set the service model: offline signers, authenticated denial. We then went through DNSSEC-bis (with the typecode roll that allowed for scalable signing, something that hadn't been worked out as late as the mid-1990s) and DNSSEC-ter (NSEC3, whitelies). From 1994 through 2025 the protocol has never seen double-digit percentage adoption in North America or in the top 1000 zones, and its adoption has declined in recent years.
You're not going to take my word for it, but you could take Geoff Huston's, who recently recorded a whole podcast about this.
I've worked for these orgs on this exact problem.
It's the latter.
The primary DNSSEC standards, RFC 4033-4035, are 20 years old. It isn't "in its infancy."
That quote is interesting because all of the period reporting I’ve seen says that the attackers did NOT successfully get an HTTPS certificate and the only people affected were those who ignored their browsers’ warnings.
https://doublepulsar.com/hijack-of-amazons-internet-domain-s...
https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/
How about another incident in 2022? Attackers BGP hijacked a dependency hosting a JS file, generated a rogue TLS certificate, and stole $2 million. Keep in mind: these are incidents we know about, not including incidents that went undetected.
https://blog.citp.princeton.edu/2022/03/09/attackers-exploit...
Noteworthy: "Additionally, some BGP attacks can still fool all of a CA’s vantage points. To reduce the impact of BGP attacks, we need security improvements in the routing infrastructure as well. In the short term, deployed routing technologies like the Resource Public Key Infrastructure (RPKI) could significantly limit the spread of BGP attacks and make them much less likely to be successful. ... In the long run, we need a much more secure underlying routing layer for the Internet."
You know why I'm not coming back at you with links about registrar ATOs? Because they're so common that nobody writes research reports about them. I remember when Laurent Joncheray wrote his paper about off-path TCP hijacking back in 1995; for a while, you'd have thought the whole Internet was going to fall to off-path TCP hijacking. (It did not.)
The argument against DNSSEC is that if you are trying to find some random A record for a server, TLS already tells you whether you reached the right one, provided you reasonably trust that domain-control validation works, i.e. that CAs see authentic DNS.
An argument for DNSSEC is any service configured via SRV records. It can be totally legitimate for the SRV record of some service or other to point to an A record in a completely different zone. From a TLS perspective you can't tell, because the delegation happened via SRV records, and you only know it is authentic if you either have a signed record or a direct encrypted connection to the authoritative server (the TLS connection to evil.service.example would be valid).
So it depends what you expect out of DNS.
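To make the SRV point concrete, a hypothetical cross-zone delegation (all names made up): nothing in the TLS handshake with the target host tells you whether the delegation itself was genuine.

```
# The zone owner legitimately points an SRV record at a host in a different zone:
#
#   _imaps._tcp.example.com.  3600 IN SRV 10 0 993 mail.other-zone.example.
#
# The TLS certificate you see belongs to mail.other-zone.example, which proves
# nothing about whether example.com really delegated its mail there. A signed
# SRV RRset (or an authenticated channel to example.com's authoritative server)
# is what covers that gap.
dig +dnssec SRV _imaps._tcp.example.com
```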
TLS doesn't provide any security in this case, because TLS certificates are issued based on DNS. See Let's Encrypt.
Isn’t this why validation is done from multiple locations on different networks? That blocked the 2018 attack and RPKI has made it even harder since.
Yes but it is still possible to execute BGP hijacks that capture 100% of traffic, rendering multi-perspective validation useless. RPKI sadly only solves naive "accidental" BGP hijacks, not malicious BGP hijacks. That's a different discussion though.
I agree, and apparently so does the CA/B forum: "SC085: Require DNSSEC for CAA and DCV Lookups" is currently in intellectual property review.
DCV is CA/B speak for domain-control validation; CAA records declare which CAs are allowed to issue for a domain.
This seems to be optional in the sense that if a DNS zone has DNSSEC, validation must succeed; but if DNSSEC is not configured, it is not required.
If the threat model is BGP hijacking, is DNSSEC actually the answer? If you can hijack a leaf server, can't you hijack a root server? As far as I can tell, the root of DNSSEC trust starts with DNSKEY records for the "." zone that are rotated quarterly. This means every DNSSEC-validating resolver has to fetch updates to those records periodically, and if I can hijack even one route to one of the fixed anycast IPs of [a-m].root-servers.net then I can start poisoning the entire DNSSEC trust hierarchy for some clients, no?
Now, this kind of attack would likely be more visible and thus detected sooner than one targeted at a specific site, and just because there is a threat like this doesn't mean other threats should be ignored or downplayed. But it seems to me that BGP security needs to be tackled in its own right, and DNSSEC would just be a leaky band-aid for that much bigger issue.
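For reference, the chain a validating resolver walks can be inspected by hand: the root DNSKEY RRset is signed by the key matching the statically configured trust anchor, and each delegation below it is vouched for by a DS record in the parent. The queries below are just illustrative:

```
dig +dnssec DNSKEY .           # root keys plus the RRSIG over them
dig +dnssec DS com.            # delegation-signer record for .com, held and signed in the root zone
dig +dnssec DS example.com.    # DS for the leaf zone, held and signed in .com
```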
The much-more-important problem is that the most important zones on the Internet are, in the DNSSEC PKI, irrevocably coupled to zones owned by governments that actively manipulate the DNS to achieve policy ends.
This is a problem with TLS too, of course, because of domain validation. But the WebPKI responded to that problem with Certificate Transparency, so you can detect and revoke misissued certificates. Not only does nothing like that exist for the DNS, nothing like it will exist for the DNS. How I know that is, Google and Mozilla had to stretch the bounds of antitrust law to force CAs to publish on CT logs. No such market force exists to force DNS TLD operators to do anything.
My (ahem) "advocacy" against DNSSEC is understandably annoying, but it really is the case that every corner of this system you look around, there's some new goblin. It's just bad.
I agree that the problem lies with BGP and we definitely need a solution. You can also say the problem is with TLS CA verification not being built on a solid foundation. Even with that said, solving those problems will take time, and DNSSEC is a valid precaution for today.
Related:
- ECC vs. non-ECC memory and bitsquatting when people said "oh, it doesn't matter and it's too expensive for no benefit."
- http:// was, for years, normalized pre-PRISM.
- Unsecured DNS over 53/tcp+udp (vs. DoH today) is a huge spoofing and metadata collection threat surface.
"Unsecured DNS over 53/tcp+udp (vs. DoH today) is a huge spoofing and metadata collection threat surface"
Genuinely curious:
What actor, in 2025, would exist in your threat model for DoH ... but wouldn't simultaneously be sniffing SNI?
I can't think of any.
I cannot think of any good reason to be serious about DoH and DNS leakage in the presence of unencrypted SNI.
What am I missing?
Not saying they are malicious actors, but an easy answer would be any public WiFi anywhere. They all intercept DNS; less than 1% intercept SNI.
It is also public knowledge that certain ISPs (including Xfinity) sniff and log all DNS queries, even those sent to other DNS servers. TLS SNI collection is less common, although it may be more widespread now; I haven't kept up with the times.
Isn't the vast majority of TLS connections using SNI today?
Popular web browsers send SNI by default regardless of whether it is actually needed. For example, HTTPS-enabled websites not hosted at a CDN may have no need for SNI, but popular web browsers will send it anyway.
Yes TLS SNI is ubiquitous. I am referring specifically to TLS SNI metadata collection.
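If you want to see that metadata for yourself, a rough sketch (the interface name and the tshark field name are assumptions about your capture setup and dissector version):

```
# The server name rides unencrypted in the TLS ClientHello (absent ECH), so a
# passive on-path observer can log it even when DNS itself goes over DoH.
tshark -i eth0 -Y 'tls.handshake.extensions_server_name' \
  -T fields -e tls.handshake.extensions_server_name

# Generate a handshake that includes SNI (pair it with the capture above):
openssl s_client -connect example.com:443 -servername example.com </dev/null >/dev/null
```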
Citing a situation where DNS interception is good for the user isn't the best way to defend the claim that it's bad.
Every single ISP in the world. It was a well-documented, abused channel.
They not only intercepted your traffic for profiling but also injected redirects to their branded search. Honestly curious whether you're just too young or were one of the maybe 10 people who never experienced this.
Sending traffic to a third party like Quad9 is much safer than sending it to a company who has your name/address/credit card.
TLS 1.3 exists.
> We don't. If we did, we'd have it by now. It's been over 25 years of making appeals like this.
See also IPv6. ;)
Edit: currently at "0 points". People, it was a joke. Chill.
We very definitely do have IPv6. I'm using IPv6 right now. Last numbers I saw, over 50% of North American hits to Google were IPv6. DNSSEC adoption in North America is below 4%, and that's by counting zones, most of which don't matter --- the number gets much lower if you filter it down to the top 1000 zones.
Well for some value of "we" and some value of "have". (-:
> We very definitely do have IPv6. I'm using IPv6 right now.
I'm not. Neither is my home wireline PON ISP, even though they have it on their mobile network (but my previous ISP did).
Also, every time there's an IPv6 article on HN there are entire sub-threads of people saying it's never going to come along. ¯\_(ツ)_/¯
* https://news.ycombinator.com/item?id=44306792
One can hope that someone will give the ISPs in my country a metaphorical hefty kick up the arse, especially as some of the more niche ones have been happily providing IPv6, and business customers can get IPv6, and of course other countries are happily embracing IPv6. So I wouldn't say never.
But the clear evidence is that past promises of it arriving at those major ISPs are very hollow indeed.
It's not the same with DNSSEC in the U.K., though. Many WWW hosting services (claim to) support it right now. And if anything, rather than years-old, ineffective petition sites clamouring for it to be turned on the way they do for IPv6, the received wisdom, even in 2025, is to look at turning DNSSEC off in order to fix problems.
* https://codepoets.co.uk/2025/its-always-dns-unbound-domain-s...
* https://www.havevirginmediaenabledipv6yet.co.uk
One has to roll one's eyes at how many times the-corporation-disables-the-thread-where-customers-repeatedly-ask-for-simple-modern-stuff-for-10-years is the answer. It was the answer for Google Chrome not getting SRV lookup support, although that was a mere 5 years.
As a joke, it’s not easily distinguishable from trolling and since IPv6 is approaching half of all traffic, more in many areas, the humor value is limited.
> […] since IPv6 is approaching half of all traffic, more in many areas, the humor value is limited.
And yet every article on IPv6 has entire brigades of folks going on about how IPv6 is DOA, and "I've been hearing about it for thirty years, where is it?", and "they should have done IPv4 just with larger addresses, adoption would have been much faster and more compatible".
I'm simply pointing out the parallel argument that was made between DNSSEC and IPv6.
Having also been online for thirty years, I’ve seen those jokes soooo many times but only one of them is funny. DNSSEC’s 0.3% usage[1] is within a rounding error of zero but IPv6 is close to half of all internet traffic in many countries. It’s not funny so much as not updating your priors for decades, like joking about how Windows crashes constantly or saying Python is a hobbyist language.
1. https://radar.cloudflare.com/dns
It's a parallel argument, but it's not a good one, because IPv6 is now around ~50% of traffic depending on the service and details, and DNSSEC was introduced earlier and doesn't seem to be going anywhere.
IPv6 probably could have been done better and rolled out faster, and whoever works on IPvNEXT should study what went wrong, but eventually it became better than alternative ways of dealing with a lack of IPv4 addresses, and it started getting real deployment.
If you want the net to support end to end connectivity we need IPv6. Otherwise you'll end up with layers and layers of NAT and it will become borderline impossible.
A lot of protocols get unstable behind layers of NAT too, even if they're not trying to do end to end / P2P. It adds all kinds of unpredictable timeouts and other nonsense.
Reminds me of:
https://serverfault.com/questions/1018543/dns-not-resolving-...
Unfortunately the basic thought process is "it is a security feature, therefore it must be enabled." It is very hard to argue against that, but it is pretty similar to "we must have the latest version of everything because that is secure" and "we should add more passwords with more requirements and expire them often." It's one of those security theatres that optimizes for reducing accountability but may end up with almost no security gains and huge tradeoffs that can even compromise security through secondary effects (how secure is a system that is down?).
We need a lot of things we don't have.
Note that without DNS security, whoever controls your DNS server, or is reliably in the path to your DNS server, can issue certificates for your domain. The only countermeasure against this is certificate transparency, which lets you yell loudly that someone's impersonating you but doesn't stop them from actually doing it.
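The "yell loudly" part is at least cheap to do: CT logs are public, and you can watch for certificates issued for your names. A sketch using crt.sh, a public CT search frontend (its JSON output is its own interface, not part of the CT spec; jq is assumed to be installed):

```
# List certificates logged for a domain and inspect the first entry; anything
# here you didn't request is worth investigating, though detection doesn't
# undo the mis-issuance.
curl -s 'https://crt.sh/?q=example.com&output=json' | jq '.[0]'
```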
In this case, there's an avalanche of money and resources backing up the problem domain DNSSEC attempts to make contributions in, and the fact that it's deployed in practically 0% of organizations with large security teams is telling.
I would say it is more a testament to the unfortunate state of cybersecurity. These "theoretical" attacks happen. Everyone just thinks it won't be them.
My rebuttal is that the DNSSEC root keys could hit Pastebin tonight and in almost every organization in the world nobody would need to be paged. That's not hyperbole.
You are mostly right, but I would hope that certain core security companies and organizations would get paged. Root CAs and domain registrars and such should have DNSSEC validation.
Unfortunately, DNSSEC is a bit expensive in terms of support burden, additional bugs, reduced performance, etc. It will take someone like Apple turning DNSSEC validation on by default to shake out all the problems. Or it will take an exploitable vulnerability akin to SIM-swapping to convince Let's Encrypt and similar services reliant on proof-by-DNS that they must require DNSSEC signing.
SIM-swapping is a much more important attack vector than on-path/off-path traffic interception, and is closer to how DNS hijacking happens in practice (by account takeover at registrars).
Let's not fix anything other than the most popular attack, that'll be great for security.
(The biggest problem in this thread is that you can bypass the rate limit but nobody else can)
If that happened, we'd revert to pre-DNSSEC security levels: an attack would still be hard to pull off (unless you own a root DNS server or are reliably in the path to one). It's like knowing the private key for news.ycombinator.com - it still doesn't do anything unless I can impersonate the Hacker News server. But that was still enough of a risk to justify TLS on the web. Mostly because ISPs were doing it to inject ads.
We are demonstrably in "pre-DNSSEC" security levels today. DNSSEC has almost no serious adoption.
That’s false. Any organization that enables DNSSEC for their domains gains its security benefits and prevents any potential DNS hijacking.
At this point, your statements border on intentional dishonesty. Please be more truthful and responsible in your statements.
> That’s false. Any organization that enables DNSSEC for their domains gains its security benefits and prevents any potential DNS hijacking.
Hijacking by whom? Very few are really running a validating recursive resolver.
This is an engineering/technical discussion and you're not going to be able to name-call your way through it.
If the problem is the path between registrar and CA, then deploying the fix to clients seems like absolute overkill.
Just create a secure path from CA to registrar. RDAP-based, DoH-based, or something from scratch, it does not really matter. It would only need to cover a few thousand CAs and TLDs, so it would be vastly simpler than upgrading billions of internet devices.
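As a sense of scale for that idea: the registry side already exposes registration data over HTTPS via RDAP, and the per-TLD endpoints are enumerated in a single IANA bootstrap file, so the set of parties that would need to agree is genuinely small. A sketch (rdap.org is a public redirector; the domain and jq usage are just illustrative):

```
# Per-TLD RDAP base URLs, maintained by IANA:
curl -s https://data.iana.org/rdap/dns.json | jq '.services | length'

# Registration data for a domain, fetched via the public redirector:
curl -sL https://rdap.org/domain/example.com | jq '.status'
```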
"DNS resolvers are the ones in charge of tracking down this information for you."
If one uses them.
One can alternatively use iterative queries where no "DNS resolver", i.e., recursive resolver, is used.
Many years ago I wrote a system for iterative resolution for my own use, as an experiment. I learnt that it can be faster than recursive resolution.
People have since written software for iterative resolution, e.g., https://lizizhikevich.github.io/assets/papers/ZDNS.pdf
Unfortunately authoritative servers generally do not encrypt their responses. IMO this would be more useful than "DNSSEC".
"And that data is often provided by authoritative servers."
What are examples of data not provided by authoritative servers?
One could argue the primary (not the only) risk addressed by DNSSEC is third-party DNS service, i.e., shared caches accessible from the internet.
If this is true, then one might assume DNSSEC is generally unnecessary if one is running their own unshared cache, accessible only from the loopback or the LAN.
Software like djb's dnscache, a personal favourite, has no support for DNSSEC.
NLnet Labs' unbound places a strong emphasis on supporting DNSSEC; the unbound documentation authors recommend using it.
https://unbound.docs.nlnetlabs.nl/en/latest/getting-started/...
Or run "unbound" as your own local recursive resolver.
Dan Kaminsky showed us why we need DNSSEC. Without it, it's quite easy to MITM and/or spoof network traffic. Some governments like to do this, so they'll continue to make it difficult for DNSSEC to be fully adopted.
The original registrar, Network Solutions, doesn't even fully support DNSSEC. You can only get it if you pay them an extra $5/mo and let them serve your DNS records for you. So for $5/mo you get DNSSEC, but you defer control of your records to them, which isn't really secure.
https://community.cloudflare.com/t/dnssec-on-network-solutio...
It's trivial to spoof DNS even with DNSSEC set up, because DNSSEC is a server-to-server protocol. Your browser doesn't speak DNSSEC; it speaks plaintext DNS, and trusts a single bit in the response header that says whether the upstream caching resolver actually checked signatures.
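You can see that single bit for yourself: the AD ("authentic data") flag in the response header is set by a validating resolver, and a stub that honors it is trusting the resolver and the path to it, not DNSSEC. A quick check (assumes ietf.org is still a signed zone and that the queried resolver validates):

```
# A validating resolver should return "ad" among the flags for a signed zone:
dig +dnssec A ietf.org @1.1.1.1 | grep '^;; flags'
# With checking disabled (+cd) the resolver skips validation and the flag disappears:
dig +dnssec +cd A ietf.org @1.1.1.1 | grep '^;; flags'
```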
You can run your own DNSSEC-aware DNS server locally. It's not hard at all.
You can run a full-feed BGP4 bird configuration on your laptop too. Sounds awesome.
We don't. It's just another PKI, with operators you can never get rid of if they misbehave. That alone makes it impossible to start relying on it.
Everyone seems to miss the "single unbreakable pki" aspect that necessarily comes along with the design. :shrug:
DNSSEC offers Zero protection against state actors.
Anyone coming after you with acres of hardware will succeed.
Security is about being a sufficiently hardened target, so that the lazy attacker seeks easier prey.
Hardware has nothing to do with state actors and DNSSEC; the fact that DNSSEC is essentially a key escrow system does.
Optional, alternative standards don't have visibility and don't get used.
Without a way to measure, nothing happens. There were once a few UX-hostile DNSSEC & DANE browser extensions, but they never worked well and were discontinued.
Purveyors of functional DNSSEC: https://freebsd.org
I’ve honestly never known which sites use DNSSEC and which don’t. Browsers don’t warn you when it’s missing, and most people probably wouldn’t even know where to look.
It’s hard to care about something like that, even if it really does matter behind the scenes.
DNSSEC is very easy to set up on AWS Route53, and it lets you sign any TXT record you have, which can be very useful.
Because I can have my certificate authority in my DNS records, and my app can verify that the CA cert is from a trusted/verified source.
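A quick way to see what "signing a TXT record" buys in practice: with the zone signed, every RRset comes back with an RRSIG that a validating resolver checks, which is what would let an app trust material published this way (the name below is a placeholder):

```
# Ask for a TXT record with DNSSEC records included; a signed zone returns the
# TXT answer together with its RRSIG, and a validating resolver sets the "ad" flag.
dig +dnssec TXT _vendor-ca.example.com
```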
This would theoretically be possible if browsers did DANE and didn't, because of middlebox fuckery, have to have a fallback path to the X.509 WebPKI because DNSSEC requests get dropped like 5% of the time. But because that is the case, no browser does DANE validation today, and when they did, many years ago, those DANE CA certs were effectively yet another CA; they actually expanded your attack surface rather than constricting it.
Even if that wasn't the case --- and it emphatically is --- you'd still be contending with a "personal CA" that in most cases would have its root of trust in a PKI operated by world governments, most of which have a demonstrated aptitude for manipulating the DNS.
We don't technically need ICANN or the whole DNS system anymore.
Anyone could quickly build a public cryptographically secure blockchain-based DNS system where people could optionally sync and query their own nodes (without even going over the wire). People could buy and own their domain names on-chain using cryptocurrency instead of repeatedly renting them from some centralized entity.
You could easily build this today by creating a Chrome Extension with a custom URL/address bar which bypasses the main one and makes a call to blockchain nodes instead of a DNS resolver; it would convert a domain name into an IP address by looking up the blockchain. This system could scale without limit in terms of reads as you can just spin up more nodes.
I mean it'd be so easy it's basically a weekend project if you use an existing blockchain as the base. Actually Ethereum already did something like this with .ETH domains but I don't think anyone built a Chrome Extension yet; or at least I haven't heard, though it's possible to enable in Brave browser via settings (kind of hidden away). Also, there is Unstoppable Domains.
People have been doing that since roughly 2010, so the failures are worth studying to learn why it's not a weekend project.
Adoption is critical for alternate roots, so the first question has to be how something gets enough users for anyone to feel it’s worth the trouble of using: the failure mode of DNS is that links break, email bounces, and tons of things which do server-side validation reject it, so this really limits usage.
The other big problem is abuse. Names are long term investments, so there are the usual blockchain problems of treating security as an afterthought but you also have the problem that third-parties have a valid need to override the blockchain (e.g. someone registers Disney.bit and points it at a porn site or serves malware from GoogleChrome.eth). Solving that means that you’re back to trusting the entity which created the system or maybe a group of operators, so the primary appeal is going to be if you can make it cheaper than owning a traditional domain.
Parts of the inevitable Thomas Ptacek DNSSEC rant remind me of the years of denialism from C++ people before the period when they were "concerned" about safety and the past few years of at least paying lip service to the idea that C++ shouldn't be awful...
One thing I like about Thomas’ history on this issue has been the focus on UX. I think that “can probably be used safely by an expert who understands the domain” as a failure mode is something we should spend more time thinking about as an architecture failure rather than a minor frictional cost.
Sure, although in this space Thomas was already entirely happy with the early Web PKI which is completely terrible for this - similar conditions apply.
At work this week I was hand-holding a DB engineer who was installing some (corporate, not Web PKI) certs, and it reminded me of those bad old days. He's got a "certificate", and because this isn't my first rodeo, I ask him to describe it in more detail before he just sends it to me to look at. Of course it's actually a PKCS#12 file, so if he'd sent it, the key inside would have been compromised - but he doesn't know that; the whole system was introduced to him as a black box, which leaves him unable to make good decisions. Out of that conversation we got more secure systems (fixing some issues that pre-existed the fault I was there to help with) and an ally who understands what the technology is actually for and is now trying to help us deliver its benefits rather than just following rote instructions he doesn't understand.
Anyway, just as we went from "I dunno, I clicked a button and typed in the company credit card details, then I got emailed this PFX file to put on the server" to "Click yes when it asks if you want Let's Encrypt" with both an invisible improvement to delivered security and a simpler workflow, that's very much possible for DNSSEC if people who want to solve the problem put some work in, rather than contentedly announcing that it can't be fixed and we shouldn't worry about it.
I don’t know that I’ve read much support for “entirely happy” but the key difference is that DNSSEC is much harder to upgrade. You didn’t need to wait for my ISP to upgrade their DNS server before you could stop using PKCS12, and you definitely didn’t need me to upgrade my operating system.
The most important work to put in isn’t tweaking DNSSEC but changing it from the pre-PC Internet model where everyone completely trusted their network operators – pushing signature validation out to the client and changing the operating system APIs to make better user interfaces possible.
> possible for DNSSEC if people who want to solve the problem
But for several decades, DNSSEC proponents have been complaining about the vocal detractors, rather than actually addressing the problems that have been identified. That's a pretty significant track record on which to make a fair judgment about who is making the strongest case.
Someone above offered a link [1] that gives some pretty good reasons why nobody is stepping up to fix the problems.
[1] https://blog.apnic.net/2024/05/28/calling-time-on-dnssec/
(This is Geoff Huston, for what it's worth, who is an Internet operations luminary of the old breed, and a very long-time very enthusiastic DNSSEC proponent. He doesn't really "call time on" DNSSEC, though; the APNIC Ping podcast he did on this is worth listening to.)