Chromium's DNS-Hijacking Tests Accused of Causing Half of All Root Queries (zdnet.com)

ZDNet reports: In an effort to detect whether a network will hijack DNS queries, Google's Chrome browser and its Chromium-based brethren randomly conjure up three domain names of between 7 and 15 characters to test, and if two of those domains resolve to the same IP, the browser concludes the network is capturing and redirecting requests for nonexistent domains. The test runs on startup, and whenever a device's IP or DNS settings change.

Due to the way DNS servers pass locally unknown domain queries up to more authoritative name servers, the random domains used by Chrome find their way up to the root DNS servers, and according to Matthew Thomas, a principal engineer in Verisign's CSO applied research division, those queries make up half of all queries to the root servers. Data presented by Thomas showed that as Chrome's market share increased after the feature was introduced in 2010, queries matching the pattern used by Chrome increased in step.

"In the 10-plus years since the feature was added, we now find that half of the DNS root server traffic is very likely due to Chromium's probes," Thomas said in an APNIC blog post. "That equates to about 60 billion queries to the root server system on a typical day."

Thomas added that half of the root servers' DNS traffic is being used to support a single browser function, and that with DNS interception being "certainly the exception rather than the norm", the traffic would be a distributed denial of service attack in any other scenario.
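As a rough illustration of the check described above, the following Python sketch resolves a few random hostnames and looks for answers that land on the same IP. It is not Chromium's actual code; the 7-15 character length and the "two matching answers" rule are taken from the summary, and everything else (library choices, names) is an assumption.

```python
import random
import socket
import string

def random_hostname() -> str:
    """Return a random 7-15 character lowercase name, per the summary."""
    length = random.randint(7, 15)
    return "".join(random.choices(string.ascii_lowercase, k=length))

def nxdomain_is_hijacked(probes: int = 3) -> bool:
    """Resolve several names that should not exist; matching answers
    suggest the network is synthesizing responses for NXDOMAIN."""
    answers = []
    for _ in range(probes):
        try:
            # getaddrinfo raises socket.gaierror for a genuine NXDOMAIN.
            info = socket.getaddrinfo(random_hostname(), None)
            answers.append(info[0][4][0])  # first IP in the response
        except socket.gaierror:
            pass  # the expected outcome on an honest network
    # Two or more fabricated names resolving to the same address is the
    # tell-tale sign the summary describes.
    return any(answers.count(ip) >= 2 for ip in set(answers))

if __name__ == "__main__":
    print("NXDOMAIN hijacking detected:", nxdomain_is_hijacked())
```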

  • by gavron ( 1300111 ) on Monday August 24, 2020 @03:51AM (#60434693)

    Verisign bought Network Solutions 10 years ago. https://money.cnn.com/2000/03/... [cnn.com]
    Two years later they sold some of the services, leaving the root server business to themselves. https://www.networkworld.com/a... [networkworld.com]

    To finish off the trifecta, they signed an agreement with the US Department of Commerce and ICANN to charge for remaining services, and to increase that charge annually. https://www.icann.org/news/ann... [icann.org]

    And... now... 7 months later, they're complaining that people making use of their "free for all the world to use" DNS servers is somehow like a DoS.

    Is "Speedtest" a DoS? It's not usable bandwidth... it's just "send lots of bytes... receive lots of bytes.... time it all, and drop them all on the floor." Other than providing the latency and bandwidth, it's a DoS, right? Just like looking up false domain names to see when they resolve [in]correctly and then they NXDOMAIN.

    Sounds to me like someone wants to "alter deal; pray I don't alter it further".

    DNS queries to public DNS servers, when made for a purpose that is to the benefit of the user and as a result of the user's action, are no different from a Speedtest or any other metric used to ensure the underlying structure of the Internet has not been compromised.

    It's hard to see a DoS here... but then I'm not Verisign sucking on the US government's funds.

    E

    • by Zocalo ( 252965 ) on Monday August 24, 2020 @04:15AM (#60434715) Homepage
      To be fair to Verisign here, they are not the only operator of root DNS servers. Most (all?) of the RIRs operate them (the second link in TFS is to APNIC discussing the issue), as do several prominent Internet backbone related organizations like the ISC, many of which are non-profits. Root servers need to be *big*, and they don't need their normal load added to with this kind of crap, which will have ballooned with Chromium's rise in popularity over the last decade. Personally, I think this is entirely on the Chromium devs - for what they are trying to do, "random-domain" is functionally equivalent to "random-subdomain.google.com", and that would redirect the additional load much earlier to Google's own servers, which Google can absolutely afford and has the ability to run.

      Also, it's Chromium usage tracking data. Seriously, why are they *not* doing this already?
    • by BlacKSacrificE ( 1089327 ) on Monday August 24, 2020 @05:24AM (#60434807)

      While I don't know the specific ins and outs of this or any other $evilcorp's actions against another, I do sort of see their point.

      A speed test is typically run from a server that is local to your location to test edge performance. It is a service specifically set up and provided to do what's written on the box, with an expectation of fair use but no expectation of return. Critically, not every browser on the planet is using a speed test to look up goat porn. If MS Word started running speed tests against my service every time it was run or a document was saved (so Clippy could tell you how long it would take your spreadsheet to upload to the cloud, for example), I'd be pretty pissy as well. I didn't provide the service so MS could add an upsellable feature to their application, and hell yes I'd want a slice of that pie.

      Likewise, the DNS system is provided as a service, free (regardless of what is happening under the bonnet, none of us are paying a subscription for DNS resolution) for all and sundry to use, serves a singular purpose and, again, comes with an expectation that you won't build an application that winds up generating half of all DNS traffic just to operate. There is an entire rabbit hole of ongoing ethical red flags we could cite when considering Google's actions around the commodification of personal data and its attitude to the public internet, but the TL;DR is that it's just another example of them riding hard and deep over what is basically public infrastructure. These fuckers have enough iron in their sheds and flesh in their shops; there is absolutely no reason for them not to roll their own solution that does not require half the global root capacity to operate.

      • So far I think you have done the best job of articulating why Verisign may be pissed off with how Google goes about it. Furthermore, since Chrome is now enforcing its own DoH and bypassing the system DNS settings, does it still need to do this?

        • Furthermore, since Chrome is now enforcing its own DoH and bypassing the system DNS settings, does it still need to do this?

          Yes, because no one has had the sense to make tampering with transit illegal. DoH in Chrome is not enabled by default yet, so the most insidious and obnoxious DNS meddling, US ISPs redirecting bad hostnames to their own advertising pages, will still work. Google implemented this feature specifically to deal with that bullshit, and it works well enough that ISPs largely abandoned it. But the moment that behavior goes away, you can be sure Comcast and Spectrum and friends will return to that bad behavior.

        • DoH simply means they pass the initial request to an encrypted DNS server (e.g. Cloudflare's), which will then make requests to the hierarchy of other DNS servers anyway.

          So the exceptional load is still there, and impacting all the servers, as requesting a domain that does not exist means the caches will not be hit and the request must be passed on in the expectation that this is just a new domain name.

          • I think you missed my point there. Google's justification for all the extra traffic was to check if networks were hijacking DNS traffic. With DoH the request is encrypted until it gets to the trusted DNS server so it completely bypasses traditional DNS hijacking which relied on plaintext requests. So it makes their justification completely pointless and redundant.

    • Google is running their users' browsers as a botnet. If Microsoft or Apple were doing this you'd be screaming bloody murder.
      • Potentially Microsoft is also doing it, as Edge is now based on Chromium

      • Yup. If Google wants to DDoS the root servers, they should be paying to run them. This is classic Google behaviour: who cares if we trash a public resource as long as it furthers our agenda?
        • by ELCouz ( 1338259 )
          Same could be said of consumer routers flooding NTP servers with hardcoded NTP IPs. You can't just magically send NTP requests every minute, times X sold routers, and expect to remain undetected / free.
          • Yup, that's what I based my comments on. When D-Link engaged in NTP vandalism some years ago, they apologised and stopped doing it once it was drawn to their attention. When Netgear did the same thing they went to some lengths to try and fix the problem once they were informed about it.

            In contrast, what have Google done? According to the Chromium bug tracker [chromium.org], nothing.

    • by hardaker ( 32597 )

      I actually wrote a blog post about this [1] for APNIC earlier this year, after giving a talk about it [2] at DNS-OARC last year. The problem is a pain, as it generates a bunch of garbage queries that are otherwise useless. Yes, our infrastructure can (and must) handle them, but there is a cost in terms of time, energy, etc.

      [1]: https://blog.apnic.net/2020/04... [apnic.net]
      [2]: https://youtu.be/sh9Bbk_1bMQ?t... [youtu.be]

  • by YuppieScum ( 1096 ) on Monday August 24, 2020 @04:25AM (#60434741) Journal

    Pretty much every "free" WiFi service in hotels, coffee shops and bars does this, in order to force you to a sign-in page.

    Further, I just typed "kjhsdjfhkjhazdkjfjhkjhnasdjkhad.com" into my browser's address bar and got a search page from Level3.

    DNS interception happens all the time, ever since someone realised it could make money...

    • by Calydor ( 739835 )

      The hotels, coffee shops etc. are a different beast. They intercept ALL traffic if you're not signed in and direct you to the login page, and that is expected behavior when you're there. The issue being discussed here is testing what happens when you try to go somewhere that doesn't exist.

      • by ArmoredDragon ( 3450605 ) on Monday August 24, 2020 @08:04AM (#60435065)

        It's called a captive portal, and they don't intercept all traffic. DNS in particular they're supposed to leave alone, because returning the captive portal IP would result in poisoning the DNS resolver cache, and for 5 minutes after signing in the user would swear there's no connectivity because their browser's home page doesn't load.

        They do, however, intercept ports 80 and 443, so no matter what IP address your browser goes to, it gets a 303 redirect to the captive portal web page. Everything else is usually actively blocked (TCP port closed or UDP black hole) until the sign-in occurs.
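As a sketch of the HTTP-side behaviour described in the comment above, the probe below requests a URL expected to return a fixed status and treats any redirect as a sign of a captive portal. The probe host used here is the one commonly used by Android's connectivity check; that choice is an assumption, and any URL with a known fixed response would do. This is illustrative only, not how any particular OS or browser actually implements the check.

```python
import http.client

# Assumed probe endpoint: Android's connectivity-check URL, which normally
# answers 204 No Content. Any URL with a known fixed response would work.
PROBE_HOST = "connectivitycheck.gstatic.com"
PROBE_PATH = "/generate_204"

def behind_captive_portal() -> bool:
    conn = http.client.HTTPConnection(PROBE_HOST, 80, timeout=5)
    try:
        conn.request("GET", PROBE_PATH)
        resp = conn.getresponse()
        if resp.status == 204:
            return False              # got the real answer straight through
        if 300 <= resp.status < 400:  # e.g. the 303 described above
            print("Portal page:", resp.getheader("Location"))
        return True                   # any other answer means interception
    except OSError:
        return True                   # blocked, or no connectivity at all
    finally:
        conn.close()

if __name__ == "__main__":
    print("Captive portal detected:", behind_captive_portal())
```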

        • by _merlin ( 160982 ) on Monday August 24, 2020 @08:33AM (#60435131) Homepage Journal

          Not all of them, though. A hotel I stayed at in Shanghai (near Century Park) only blocked TCP when not signed in. UDP went through just fine. I could make SIP calls with an Australian SIP provider without signing in to the hotel WiFi.

        • Only if they botch the TTL. DNS record TTLs are measured in seconds; you can put a 1-second expiry on a result. However, that does nothing to already-cached records, so you still have to grab the ports.

        • by brunes69 ( 86786 )

          Ummm no.

          Most captive portals *DO INDEED* "intercept" DNS. They return responses with a short TTL so as not to corrupt the cache. "Intercept" is a loose word here because they aren't actually intercepting anything, as they are the primary resolver returned by DHCP.

          Back in the bygone days, many captive portals did not intercept DNS - but folks soon figured out that you could trivially use DNS tunneling to get free internet anywhere that DNS was allowed out. The widespread use of easy DNS tunneling caused captive portals to start intercepting DNS as well.

          • by skids ( 119237 )

            Incompetent captive portals intercept DNS. They'll continue breaking as more security is added, be it DNSSEC or DNS-over-TLS. Modern captive portals intercept HTTP traffic to specific hosts, causing the OS or the browser to pull up a captive portal agent or webpage.

            Not that that's a great solution, because each OS/browser does this in its own peculiar way and, worse, presents a moving target to captive portal developers. Especially Apple, which has a special browser just for this and its behavior is always changing.

          • That doesn't sound like a very good idea. Why on earth would you screw with TTLs and risk running into problems when there are much easier ways of effectively blocking tunnels? For example, blocking TXT records prior to sign-in, or even setting limits on subdomain lengths. DNS poisoning is bad mojo and stands to break a lot of things, even with a very short TTL.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Further, I just typed "kjhsdjfhkjhazdkjfjhkjhnasdjkhad.com" into my browser's address bar and got a search page from Level3.

      Kjhsdjfhkjhazdkjfjhkjhnasdjkhad is popular tourist spot in glorious nation of Kazhakstan.

    • The primary "force" is by forcing all outbound HTTP and HTTPS through a managed web proxy. There are powerful reasons to do this, managing the load and providing accountability for services in their shop.

    • There's a difference between DNS interception and proxying at the gateway to direct to a captive portal. Once you sign in, a typical hotel / WiFi hotspot does not intercept any traffic. Specifically, even when not signed in, the HTTP / HTTPS redirect is *NOT* accomplished by DNS interception.

      This is also why captive portals fail when attempting to access sites with HSTS.

    • Pretty much every "free" WiFi service in hotels, coffee shops and bars does this, in order to force you to a sign-in page.

      No they don't. Redirects are done at the network level. If you use naming services for redirects you will win a never-ending stream of support calls post sign-in.

      Further, I just typed "kjhsdjfhkjhazdkjfjhkjhnasdjkhad.com" into my browser's address bar and got a search page from Level3.

      Don't use level3's DNS servers and you won't have that problem.

  • by FudRucker ( 866063 ) on Monday August 24, 2020 @04:28AM (#60434749)
    How do I disable this awful feature?
    Is there something in chrome://flags/ that can be disabled?
  • Don't work around DNS hijacking. If DNS hijacking actually breaks stuff, then it won't be acceptable to hijack DNS. Also, use validating resolvers. DNSSEC should have made DNS hijacking ineffective long ago.
    • by alantus ( 882150 )
      It does break stuff, and it's not acceptable, but sometimes there's no choice.
    • Re: Let it break (Score:5, Interesting)

      by ArmoredDragon ( 3450605 ) on Monday August 24, 2020 @08:14AM (#60435079)

      DNSSEC is rarely implemented by ISPs, probably because most of them like to hijack NXDOMAIN responses, which validation would break, and AFAIK most OSes don't implement it client-side.

      You can, if you choose, implement it at your router. I personally use a combination of DNSSEC and DNS over TLS with 1.1.1.1 and 1.0.0.1, which support both, and then I configure my firewall to transparently redirect all DNS traffic to the router's DNS server, thus preventing individual devices from using their own DNS server. Especially Google devices, which like to always use 8.8.8.8 and 8.8.4.4.

      You can do all of this with OpenWRT.
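As a quick way to check whether the resolver a machine ends up using actually validates DNSSEC, the sketch below looks up a deliberately mis-signed test name: a validating resolver should answer SERVFAIL (so the lookup fails), while a non-validating one hands back an address. The dnssec-failed.org test zone is assumed to still be maintained for this purpose, and a failed lookup could also just mean no connectivity; this is a sanity-check sketch, not part of the OpenWRT setup described above.

```python
import socket

def resolver_validates_dnssec(test_name: str = "dnssec-failed.org") -> bool:
    """Look up a deliberately mis-signed zone. A validating resolver answers
    SERVFAIL, so the lookup fails; a non-validating one returns an IP."""
    try:
        socket.getaddrinfo(test_name, None)
        return False   # got an address, so the broken signature was ignored
    except socket.gaierror:
        return True    # lookup refused, as a validating resolver should

if __name__ == "__main__":
    print("Resolver validates DNSSEC:", resolver_validates_dnssec())
```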

  • by Doub ( 784854 ) on Monday August 24, 2020 @05:34AM (#60434821)
    My shitty ISP hijacks DNS requests to display its own branded "this domain doesn't exist" page. I never noticed Chrome doing anything about it before I switched to the Google DNS server to avoid the hijack altogether.
    • A VPN would help too.
    • by Robert Goatse ( 984232 ) on Monday August 24, 2020 @09:20AM (#60435303)
      That's what Verizon FiOS does with non-existent domains. Ever since I set up a Pi-hole device with DNS pointing to OpenDNS servers, I don't see those ads any more.
    • I never noticed Chrome doing anything about it

      Chrome doesn't do anything about it other than reporting to Google that it's happening on your network. Although another user has said that Chrome won't remember URLs in your history when it detects it's on such a network, even if your ISP serves a legit landing page. Take that comment for what you will. ... Actually, maybe you could verify that behavior for us, since you have a shitty ISP. I can't test this here.

    • by ftobin ( 48814 )

      Just as a suggestion, I'd recommend using Quad9 (9.9.9.9) over Google's DNS. Quad9 is a 501(c)(3), has a no-logging policy, and also does malware-domain filtering. Cloudflare has a similar filtering system, but apparently Quad9 is the best one out there. Quad9 supports DNSCrypt, DoH, and DoT.

  • Google could change the Chrome algorithm slightly so that the 3 domains looked up are all like random-domain.google.com -- that would greatly reduce the load on the root DNS servers.

    • by pjt33 ( 739471 )

      Someone else suggested the same thing almost two hours before you. It's not a solution, because it's much easier for the malicious actor to see that they're being tested and so avoid detection.

    • by hardaker ( 32597 )

      They specifically don't want to do this since then the ISPs that are doing NXDOMAIN rewriting will detect that domain too easily and change their tools to recognize that domain and special case it.

  • If DNS were a real secure protocol, these requests wouldn't be needed. Until then, there's no option.
  • by Sqreater ( 895148 ) on Monday August 24, 2020 @06:48AM (#60434919)
    They should test-on-suspicion rather than just test dumbly.
    • And what's the basis for suspicion? Being online at all? There's no other way to tell.

      • I'm sure they could come up with something if they tried.
      • If someone enters a word in the combined URL and search bar, Chrome needs to know if that word is supposed to be part of a domain name (i.e. a hostname) or something the user is searching for. They can't just look it up and conclude it's a hostname if it resolves, because everything resolves on NXDOMAIN-hijacking networks. So what they do now is look up three randomly constructed hostnames that they are fairly sure don't exist, and if they get something back anyway, they know NXDOMAIN gets hijacked on that network.
        • Did you even read the summary?

          Google's Chrome browser and its Chromium-based brethren randomly conjure up three domain names of between 7 and 15 characters to test,

          You are making up a completely different and unrelated scenario. If your domain doesn't have a TLD, then it doesn't go to the root servers (and if it did, whose root would it go to?). A keyword can do a local DNS lookup if there's only one word - but you pretty much just have to trust local DNS for that. They don't exist outside the LAN anyway.

          • Yes, and I read the article too, long before it got to Slashdot. The story is misleading because it is written from the root server operator's perspective. Chrome doesn't ask for "jnvwpawnw.". It asks for "jnvwpawnw", without the dot. It requests the address of a hostname, not a TLD. It doesn't intentionally query the root zone; that's just an artifact of the test it's doing. The resolver looks up that single word, doesn't find it and passes the request on to the root servers, because there is no indication of where else it could be answered.
    • That's a great way of not knowing how many people in your country have Coronavirus. Especially since, how do you form suspicion? Like coronavirus, few networks will show any detectable symptoms that redirects are happening.

      Unless you want Google doing *even more* analysis of the URLs you're typing in. How do they tell the difference between two mistyped URLs and two sites that hit a common CDN?

  • by johnjones ( 14274 ) on Monday August 24, 2020 @08:05AM (#60435067) Homepage Journal

    If they correctly used DNSSEC then they would be able to see if the domain was being intercepted...

    • It's not up to Chrome to use DNSSEC. The best Google could do is implement DoH and bypass the OS's settings altogether, something which is about as popular as shit on a blanket here.

      • by green1 ( 322787 )

        And yet, that's exactly what Google has done. So why do they still do this when they don't use your resolver anyway?

  • by Anonymous Coward

    the traffic would be a distributed denial of service attack in any other scenario.

    A factor of two is denial of service? It's pretty bad, but I usually think of a DDoS as a factor of ten or a hundred, not two.

  • You are complaining about something. You are blaming it on somebody else. Perhaps if you fixed your shit, this wouldn't have to happen. I have no patience for people who say OMG the universe is conspiring against me! Destroy the universe!
  • by azcoyote ( 1101073 ) on Monday August 24, 2020 @09:16AM (#60435281)
    Can't a malicious actor simply wait until the fourth request in a group to start its action? I would wager that an intelligent programmer or machine learning could determine what it looks like when Chrome starts up. The DNS traffic is bound to change and increase suddenly, especially on a Windows PC. After it's relatively certain that Chrome has already performed its check, then start your hijacking.
    • The algos for this stuff are constantly changing. It's an arms race. I see my regular captive portal change behavior every few months. Then I find new ways to address the issue. Then they change how it works. Then I ...

    • Possibly, but it would be even easier to have, say, a dozen IP addresses for the catch-all, and then use them round-robin style. There's a very low chance that Chrome would get the same IP address for all three requests.
    • Can't a malicious actor simply wait until the fourth request in a group to start its action?

      Well they can NOW! Tomorrow's article: Google Chrome accounts for 3/4 of all DNS root queries.

  • So Chrome is melting Greenland?
  • I'm sure that the extra root server traffic has been more than offset by Google's own popular caching DNS servers. Even server administrators who would normally query the root DNS servers directly instead of using their ISP's caching DNS are likely to choose Google instead for speed.

  • what are my options for protecting myself from DNS hijack?

    is this method employed by any other software, outside of Chrome?

    is there a better method of detecting hijacks?

    is there any other method of detecting hijacks?

    • by green1 ( 322787 )

      The most common entity doing a DNS hijack these days is the browser. Both Chrome and Firefox hijack DNS requests and send them to their own choice of server instead of the one programmed in your OS or supplied by your router/ISP.

  • I use a web browser to render pages on servers I connect to. I do not expect it to have anything to do with DNS resolution, for which I have other mechanisms. If my DNS resolution is hijacked, that's a problem, but it's my problem, not Chrome's. Do what you're supposed to do, Chrome, and let me worry about the rest of my system.
  • But allow user override via flags. Problem solved?
  • ...would be to grab three random DNS entries already in cache to see if they return the same IP address. Only do the three random names if they can't find three good addresses. The DNS names in cache would presumably not have to go up to the root servers as much.
