
How CDNs and Alternative DNS Services Combine For Higher Latency

The_PHP_Jedi writes "Alternative DNS services, such as OpenDNS and Google Public DNS, are used to bypass the sluggishness often associated with local ISP DNS servers. However, as more websites, particularly smaller ones, use content distribution networks via embedded ads, widgets, and other assets, the effectiveness of non-ISP DNS servers may be undermined. Why? Because CDNs rely on the location of a user's DNS server to determine the closest server with the hosted content. Sajal Kayan published a series of test results which demonstrates the difference, and also provided the Python script used so you can test which is the most effective DNS service for your own Internet connection."
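The linked script isn't reproduced here, but the gist of the test is easy to sketch: resolve a CDN-served hostname through several recursive resolvers, then ping whichever edge server each one returns. A rough Python approximation (not the author's actual script), assuming dnspython 2.x is installed and Linux-style ping output; the resolver IPs and CDN hostname are just examples:

    import subprocess
    import dns.resolver  # pip install dnspython

    RESOLVERS = {
        "Google Public DNS": "8.8.8.8",
        "OpenDNS": "208.67.222.222",
        "ISP (example)": "192.0.2.53",     # replace with your ISP's resolver
    }
    CDN_HOSTNAME = "profile.ak.fbcdn.net"  # any CDN-served name will do

    def resolve_with(server_ip, hostname):
        """Return the first A record for hostname using only the given resolver."""
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server_ip]
        return r.resolve(hostname, "A")[0].address  # dnspython 1.x: r.query(...)

    def avg_ping_ms(ip, count=5):
        """Average RTT in ms, parsed from the system ping command's summary line."""
        out = subprocess.run(["ping", "-c", str(count), "-q", ip],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "min/avg/max" in line:
                return float(line.split("=")[1].split("/")[1])  # the avg field
        return None

    for name, server in RESOLVERS.items():
        edge_ip = resolve_with(server, CDN_HOSTNAME)
        print(f"{name:20s} -> {edge_ip:15s} avg ping: {avg_ping_ms(edge_ip)} ms")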

Comments Filter:
  • by ironicsky ( 569792 ) on Saturday May 29, 2010 @09:42AM (#32388814) Homepage Journal
    Why drag us lovely CDNs into this?
  • by Mondo1287 ( 622491 ) on Saturday May 29, 2010 @09:55AM (#32388888)
    Maybe I'm missing something here, but shouldn't it be the application's responsibility to provide a geographically correct host name to the client, not the responsibility of DNS? It seems like poor application design to rely on DNS for this. Your app should determine the host based on the IP of the client, not give the client an arbitrary host name and then rely on DNS to provide your geographically correct server.
    • Re: (Score:3, Informative)

      by jcinnamond ( 463196 )

      I think you're missing the point. Geographically aware DNS is used to send you to your nearest deployment of an application. Deciding after you've arrived is too late.

      • Re: (Score:3, Interesting)

        by Zerth ( 26112 )

        Like you couldn't redirect on GET instead of serving up the app?

        • Sure you can! If you don't mind effing up the URL bar and possibly generating certificate warnings.

          It's neither a clean nor a transparent way to do it.

      • Re: (Score:3, Interesting)

        by Trepidity ( 597 )

        There are various tricks you can do to decide later, if you have significant content other than the raw HTML page itself, though they do require some server-side processing. The initial HTML request will be based on DNS, but once the user has hit your servers you have their IP, so you can rewrite the URLs of embedded content / AJAX requests / whatever so that they hit a geographically nearby server.
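        A rough sketch of that trick in Python, purely illustrative: the regional asset hostnames and the lookup_region() helper are hypothetical stand-ins for real deployments and a real GeoIP lookup.

        # Once the first request arrives you know the real client IP, so
        # embedded asset URLs can be rewritten to point at a nearby host.
        ASSET_HOSTS = {
            "us-east": "img-east.example.com",
            "eu":      "img-eu.example.com",
        }

        def lookup_region(client_ip):
            # Placeholder for a real GeoIP database lookup.
            return "eu" if client_ip.startswith("81.") else "us-east"

        def rewrite_asset_urls(html, client_ip):
            """Point asset references at the region closest to the client."""
            host = ASSET_HOSTS[lookup_region(client_ip)]
            return html.replace("//assets.example.com/", "//%s/" % host)

        page = '<img src="//assets.example.com/logo.png">'
        print(rewrite_asset_urls(page, "81.2.3.4"))
        # -> <img src="//img-eu.example.com/logo.png">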

      • I think you're missing the point. Geographically aware DNS is used to send you to your nearest deployment of an application. Deciding after you've arrived is too late.

        Well, depending on the protocol, you could just be redirected to the closest server by other means. For example, an HTTP server could redirect to another server by name.

        We recently ran into issues of trying to rely on the DNS server for establishing geographic location, when we realised that the DNS server making the address look up could be five serv

        • The issue here is actually network hops. Regardless of geographic distance, fewer router-to-router hops from the client to the content server is generally better. If a CDN can put its servers near your ISP's upstream feed point - or, even better, inside your ISP's network - then you theoretically get better download performance if you use the ISP's DNS. A third-party DNS relay may refer you to a CDN server that is more hops away from your client.

    • So once you've connected to the server, it's then going to have to redirect you somewhere else, since it's only once you connect that it gets your IP and can figure out whether you need to be redirected.

      DNS is the step before that, so you can do it one stage earlier.

      Or your app has to have some list of IP ranges so it knows where it's at and where to connect to, which means you have to keep that list up to date, since IPs move across the nation wildly. The IP I had 2 months ago is now being used in Texas,

      • by rekoil ( 168689 )

        Or you can use DNS for a first guess to the closest site, then use a redirect at the server (which, unlike DNS, sees the real client IP) to correct egregiously bad geo-DNS decisions. This way, a redirect is only done if it's likely that the overhead of the redirect itself will be offset by the faster page load from the "correct" site.
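        A minimal sketch of that hybrid approach as a plain WSGI app; region_of() and the site URLs are hypothetical placeholders for a real GeoIP lookup and real deployments. Only a client that geo-DNS routed badly pays for the extra redirect.

        from wsgiref.simple_server import make_server

        THIS_SITE = "us-west"
        SITE_URLS = {"us-west": "http://us-west.example.com",
                     "eu": "http://eu.example.com"}

        def region_of(client_ip):
            # Placeholder for a real GeoIP lookup.
            return "eu" if client_ip.startswith("81.") else "us-west"

        def app(environ, start_response):
            region = region_of(environ.get("REMOTE_ADDR", ""))
            if region != THIS_SITE:
                # DNS guessed wrong: bounce the client to the correct site.
                target = SITE_URLS[region] + environ.get("PATH_INFO", "/")
                start_response("302 Found", [("Location", target)])
                return [b""]
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"served locally from " + THIS_SITE.encode()]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()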

  • How many of the resources hosted by CDNs are things which we're already stopping with various ad blocking techniques, and how many are content we actually care about?

    • by Burdell ( 228580 ) on Saturday May 29, 2010 @10:16AM (#32389024)

      It isn't just ads. For example, Microsoft, Apple, Symantec, and Red Hat use CDNs for distributing software updates (that's just a few companies I know of off the top of my head). Basically, CDNs keep the Internet working, saving server load at the source and bandwidth across the Internet and at the providers.

      • Sounds like a system of mirrors to me. Now, the more relevant question: why is it using DNS to try to determine my location?
        • by b4k3d b34nz ( 900066 ) on Saturday May 29, 2010 @10:29AM (#32389122)

          The whole point of a CDN (the middle initial) is distribution, theoretically to a broad area.

          For example, without a CDN, you have 3 servers, all located in San Francisco. The guy who lives in Florida (or Russia, or South America) who requests content from your server will receive it much more slowly than the guy who lives in Vegas.

          With a CDN, there will be servers all over the nation (and preferably around the world, if you serve internationally) that are physically closer to the requestor and can serve with lower latency. The servers within the CDN farm utilize reverse DNS lookup to balance and serve traffic from the correct place.

          • content from your server will receive it much more slowly than the guy who lives in Vegas.

            Not likely. There's more latency, sure, and Australia has shitty connections to the rest of the world, but outside of that there really isn't a bandwidth limitation on the backbone, just latency issues, which TCP handles just fine.

            • Respectfully, I disagree, since we see exactly the opposite of what you're saying with our CDN setup. Packet loss over long distances causes slow retransmission if data has to go across the world. More switching = slower transmission as well, so if you can reduce the number of stops along the way, you're going to see both lower latency and less retransmission.

              Also, I never said bandwidth, I just said it would get there more slowly, and that's due to the number of computers sitting between me and the theoreti

        • by rekoil ( 168689 )

          That's *exactly* what a CDN is, although generally they're implemented as caching proxies as opposed to true mirrors (i.e. content is pulled into a site the first time it's accessed, then served from that site from that point on). Just about every large web property uses CDNs run by Akamai, Limelight, Internap, Level3, and others, and most of the largest sites (Google, Yahoo, et al.) operate their own in-house.

          DNS is used because *most* of the time, the location of your DNS resolver is a good hint of the client'

      • by Professor_UNIX ( 867045 ) on Saturday May 29, 2010 @10:29AM (#32389110)

        This is exactly the problem. Most people have probably never heard of a little company called Akamai, but chances are if you're downloading content from a large site, you're using Akamai's content delivery network. Go view a trailer on Apple's site, for instance, and you'll see the host is actually served off edgesuite.net (which is Akamai). They use a distributed system of caching mirror servers to serve up content from a server geographically close to you.

        The one reason I use an open DNS server instead of my cable provider's (Cox Cable) servers is that Cox has an Akamai server on its network and it was horribly overloaded. I was getting 512 Kbps any time I was trying to download something from Apple. I switched my DNS to a combination of Level3's and Cisco's open DNS servers, started hitting another Akamai server outside Cox, and started getting 15 Mbps. It was night and day: going from barely being able to watch a standard-definition movie trailer on Apple's site while it buffered, played, buffered, played, etc., to being able to watch a 1080p HDTV stream with the buffer way ahead of my realtime viewing.

      • Re: (Score:3, Insightful)

        by pjt33 ( 739471 )

        Ok, saving network capacity I can buy as a benefit. I'm not sure that latency - the focus of TFS - is a real issue when downloading software updates, though.

      • Basically, CDNs keep the Internet working

        You are also one of the guys who claims it's going to collapse under its own weight next year too, I'm sure.

        If you think the limited amount of bandwidth is the issue, then you really don't know anything about what the backbones are capable of.

        Just because your local ISP is too cheap to upgrade its circuits to fill its own needs doesn't mean the Internet is 'out of bandwidth'.

        • by Burdell ( 228580 )

          Nope, I don't think the Internet is going to "collapse". I've run servers and networks for ISPs for over 14 years, so I think I understand just a little about how things work. We have had Akamai servers on our network for about 10 years, and they save us a good chunk of bandwidth (as much as 15% some days) and give our users a better experience (faster and smoother downloads).

    • by michael_cain ( 66650 ) on Saturday May 29, 2010 @10:26AM (#32389088) Journal

      Seven or so years ago, before I retired from one of the large cable companies, CDNs were hosting the relatively static parts for a surprisingly large number of broadly popular sites. I had an opportunity to see the list when we were approached by the then-largest CDN, who wanted to place servers in many of our head-end locations for the obvious performance benefit. I was the one who pointed out that all of our internal DNS requests were routed to one of two data centers, one on the East Coast and one on the West, creating exactly the situation described in the OP: the CDN would have no idea where the original request came from, so would be unable to direct the end user to the appropriate server.

      I was one of the few engineers who argued for less centralization in our network. I wanted broader distribution for reliability purposes: at that time, the massive centralized mail servers had a tendency to fail at the drop of a hat. But it would also have given us the ability to work with companies like the CDNs in order to provide better service.

      • It's a problem of implementation. Few sites disclose that they're wanting to do it, and I had no idea until I installed noscript and started to have to enable a huge number of sites to view what should be relatively simple ones. On top of which each one can have vulnerabilities. I'm not suggesting that you were wrong, there are definite reasons why centralization is asking for trouble. But by the same token, I don't think the companies engaging in that process are as transparent, honest and responsible abou
      • Supposedly my ISP of 2 years ago consolidated its servers into 2 data centers about 3 years ago. About 2.5 years ago, their DNS servers in one center went offline and the servers in the other center were overloaded, so they went down, too. Fortunately, my local head end still had a machine they could use for DNS and did so (allowing only local subscribers, of course). The next major DNS outage they had, the local head end no longer had that resource, and their routers were forcing all DNS requests to their server

        • by xous ( 1009057 )
          So you're competent enough to connect to a VPN and remember an IP address, but you couldn't just change your resolvers to a known set of public resolvers?
          • As I said in my post, the ISP's routers were redirecting DNS requests to the ISP's own DNS servers. I don't know if that one still does this, but my current ISP does not.

  • Google Public DNS (Score:4, Informative)

    by The MAZZTer ( 911996 ) <megazzt&gmail,com> on Saturday May 29, 2010 @10:01AM (#32388928) Homepage
    Automatically routes your DNS request to a Google server close to you. So there's no problem here.
    • Re: (Score:3, Informative)

      by arkhan_jg ( 618674 )

      But if you look at TFA, that doesn't actually work in practice - looking at, for example, the Swedish EC2 host pinging:
      internap:
      using local DNS gives a ping of 36.3 ms, OpenDNS 40 ms, and Google DNS 189 ms!
      akamai:
      the locally-resolved IP pings at 13.2 ms, OpenDNS at 51.7 ms, and Google DNS at 36 ms.

      In both cases, using local DNS gives a substantially faster-responding server with both CDN networks tested, presumably one that is physically closer to the testing machine. Using Google DNS and OpenDNS both result in getting les

      • Have some perspective. The biggest difference in your numbers is about the time it takes you to blink twice. Why don't you think about that for a few seconds, which is several orders of magnitude longer than it takes to find and start receiving data from your non-optimal CDN.
        • Yeah, the latency is a minor issue, particularly for video content where actual bandwidth and jitter matter. Adding up the latency for lots of GETs on a single web page might be noticeable. In the bigger scheme of things, having your traffic travel a longer path ends up costing someone more money. Traffic taking longer paths increases the bandwidth use on cross-country fibers, may involve farming out to other backbones for delivery, etc. It's the same notion (or at least used to be) of local calls bein

        • Comment removed based on user account deletion
  • I don't really know what benefits CDN could give me.

    Anyway, I solved the sluggish ISP DNS problem by simply installing bind9 and being done with it. Setting up a DNS server on a modern system is really child's play; no need for the OpenDNS stuff.

    (install bind9; remove the ISP's DNS IP. Done - around 1 minute)

    • I solved the sluggish ISP DNS problem with simply installing bind9

      So instead of one problem, you now have two? ;-)

      Most of the complaints I read on Slashdot invariably seem to be related to a loss of control. Seems to me that if you object to how others do things, taking charge and doing it yourself when possible is the only logical solution. For the technically inclined, that typically amounts to a few extra bucks per month along with, as you pointed out, some minimal work.

  • Slashdot.org is serving static assets from the hostname a.fsdn.com, which is served via the Akamai CDN. I count 19 requests to http://a.fsdn.com/* [fsdn.com] on a single pageload of the homepage. These static files are currently served by a server within my ISP's network rather than some server on the other side of the globe... Akamai uses DNS routing.
  • by poptix_work ( 79063 ) on Saturday May 29, 2010 @10:18AM (#32389034) Homepage

    While some shoddy CDN companies may reroute you at the DNS level, many are actually smarter about it. Smart systems will redirect you to a 'closer' system via a different URL for media files, or utilize anycast BGP routing so that you always take the shortest path to one of their nodes.

    As for 'who serves stuff on CDNs that I want to see anyway' -- everyone. From porn sites to Google to Youtube, they're all one type or another of CDN.

    • Re: (Score:3, Informative)

      Ok so by "shoddy CDN companies" you mean every CDN anyone here has ever heard of? And the vast majority of enterprises that have hot/hot (public) datacenters?

      Using anycast for serving content is a guarantee of fail. Great for DNS, less than ideal for HTTP. How serious a failure depends on how important a reliable and consistent end-user experience is. Using geolocation based on the actual source address for content within the pages is a very intelligent thing to do in addition to doing it at the LDNS level in

  • Two things make those numbers fairly irrelevant: CDNs are optimized for delivering content to end users, not datacenters (where most machines are non-Windows anyway, so you don't even need AV updates). And what matters in the end aren't ping times, but actual request latency.

  • Uptime (Score:3, Insightful)

    by h4rr4r ( 612664 ) on Saturday May 29, 2010 @10:29AM (#32389114)

    Considering TWC can't keep their DNS servers up reliably, using them is not even an option.

  • Use NoScript and/or RequestPolicy, which let you allow the CDNs you want and block those you don't. They have the additional side benefit of blocking tracking cookies and other such nastiness from companies you don't like (DoubleClick, Google Analytics, etc.).
  • This is not accurate (Score:5, Informative)

    by davidu ( 18 ) on Saturday May 29, 2010 @10:32AM (#32389146) Homepage Journal

    I'm the founder of OpenDNS (and long-time slashdot reader).

    This article is not very accurate for a number of reasons. First, both my service (OpenDNS) and Google's are co-located in the same sorts of POPs as all of the major CDNs, which means this problem is largely avoided. The author of the blog post used a tiny sample size and tested mainly from EC2 instances, neither of which helps his cause.

    1) EC2 instances are BY DESIGN not co-located in the same place as major peering infrastructure because that real estate costs more. They are one or two hops away. People use EC2 for compute power, not for routing performance. So he needs to use something like Keynote or Gomez to test from home connections. If he had, he'd see it doesn't impact anything, and often improves performance, especially in the US. We don't have POPs in Asia yet, though they are coming this year, and when we do, we'll improve things for him.

    2) Akamai is the only CDN where this will ever be perceptible because their deployments are so dense. They have 3000+ POPs, which means they are also able to target more precisely. But this is being worked on RIGHT NOW in the IETF -- http://tools.ietf.org/html/draft-vandergaast-edns-client-ip-01 [ietf.org]

    Anyways, this is really not the issue the author makes it out to be, and the edge cases are being worked on.

    Thanks,
    David
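    For the curious, the mechanism in that draft (the idea that later became the edns-client-subnet option) can be poked at from Python. A rough sketch, assuming a recent dnspython that ships dns.edns.ECSOption; the client subnet and CDN hostname below are just examples:

    import dns.edns
    import dns.message
    import dns.query

    # Pretend to be a resolver forwarding on behalf of a client in 198.51.100.0/24:
    # the query carries a truncated form of the client's address so the CDN's
    # authoritative server can pick a nearby edge.
    ecs = dns.edns.ECSOption("198.51.100.0", srclen=24)
    query = dns.message.make_query("profile.ak.fbcdn.net", "A",
                                   use_edns=0, options=[ecs])

    # Send it to a resolver that understands the option and print the answer
    # (Google's public resolver is documented to support ECS).
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    for rrset in response.answer:
        print(rrset)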

    • Re: (Score:3, Interesting)

      by funfail ( 970288 )

      Hi David. Isn't it possible for you to just cooperate with Akamai and resolve according to the client location based on IP address?

      • by cynyr ( 703126 )
        To flame you: why cooperate with online advertisers? I really wish that I could block ads pre-render instead of post-render in Chrome; it would make webpages nice and snappy and save me some bandwidth.
        • Pre-render blocking does indeed save you some bandwidth and can make pages render more quickly. That's the main reason I use FF. However, some folks may WANT to do post-render blocking. The reason (a bit on the shady side, though) is that downloading the ad and just not seeing it will:

          1) Give the site you are visiting revenue
          2) Cost the advertiser bandwidth even though you don't have to see the ad.

          Some folks have gone to Chrome preferentially so that they can block post-render and allow their favorite sit
      • by davidu ( 18 ) on Saturday May 29, 2010 @11:35AM (#32389528) Homepage Journal
        yep! This is the exact goal of the IETF draft I linked. Unfortunately the old guard of DNS (Vixie, et al.) are not supporting it because they fear it raises insurmountable privacy concerns. Most disagree, since the ultimate authority will see the client's IP eventually, but that's the current hold-up. Not sure if it can be resolved to everyone's satisfaction. :-(
        • Most disagree since the ultimate authority will see the clients IP eventually,

          I think the concern could be about unnecessary disclosure to 4th parties. (The 1st responder DNS being the 3rd party and the "ultimate authority" being the 2nd party)

        • by gad_zuki! ( 70830 ) on Saturday May 29, 2010 @08:11PM (#32393662)

          > Unfortunately the old guard of DNS (Vixie, et al) are not supporting it because they fear it raises insurmountable privacy concerns.

          Old guard? You'll find end users are also very much concerned with privacy. Rewriting the DNS spec solely for CDNs is ridiculous. Want location services in the web browser? Add it to HTTP. Let the browser makers figure out the implementation.

          Not to mention, there's nothing open about your service. It's simply free. There's nothing open source about it, and you openly violate the DNS spec with your typo-domain crap. Sorry, the Internet doesn't need "open" DNS to ruin DNS. You've done enough already. Thankfully, Google offers free DNS at 8.8.8.8.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      awesome! Thank you for your reply. BUT wouldn't giving the client IP away in the DNS request reduce privacy?

      • Re: (Score:3, Interesting)

        by davidu ( 18 )
        That's the argument opponents make. I don't buy it for a variety of reasons. Hard to write it on my iPhone but will blog about it soon.
      • Because the organization that runs the authoritative DNS isn't going to see your source IP a fraction of a second later when you make the connection to their (in this case) web server?

        • Re: (Score:3, Insightful)

          by davidu ( 18 )

          Well, the critics argue that the Internet != the WWW. Which is true. If you are sending email, the destination SMTP server and its corresponding authoritative DNS server would never normally see the client's original IP. Despite the fact that TONS of things, from routing and performance to anti-spam measures, would benefit from this, we're creating a vector of privacy leakage that possibly didn't previously exist in all scenarios.

          None of this considers the fact that very few DNS operators would actually e

          • None of this considers the fact that very few DNS operators would actually even implement this standard.

            I take it that you and the other big DNS providers use custom DNS software? Still, I wonder how many others use BIND (though I suppose they might be part of the "old guard", so would not implement the feature).

          • Actually this isn't even true - when you initially submit it to the first MTA, that connection always was from 'you.' That server would be (in the worst case) the first one that used DNS for the receiving domain. It would also be a host trying to directly connect to the MX.

            I can't think of a single protocol this isn't true for, so how would anyone except whoever you've chosen as your LDNS provider and the parties authoritative for your destination ever see this information?

            I know, I'm preaching to the choi

    • by arkhan_jg ( 618674 ) on Saturday May 29, 2010 @12:04PM (#32389726)

      You know, I thought I'd actually try it out for myself with a rough and ready test. I have an ISP that gives me multiple real IP addresses, so I stuck my PC on the DMZ with a real IP and tested each of the DNS servers as the sole DNS server in Windows, without using either my local dnsmasq cache or the one on my router. Obviously, I flushed Windows' own DNS cache between each ping test. The results are below; make of them what you will.

      I also tested all DNS providers with both primary and secondary servers; since the secondary servers always gave me the same IP address as the primary, they're not included. Ping times are a simple 0DP average of two sets of 10 pings (and there were no odd spikes, with my connection otherwise idle).

      First though, the response times of the DNS servers themselves, average uncached - tested using GRC's DNSBench.
      aaisp is my own ISP, BT is a large ISP in my country, and 4.2.2.3 is the one I'm using at the moment, having previously tested it as fastest.

      google (8.8.8.8): 156 ms
      opendns (208.67.222.222): 176 ms
      aaisp (217.169.20.20): 115 ms
      BT (194.72.9.34): 71 ms
      level 3 (4.2.2.3): 95 ms

      Then, testing which CDN server each DNS server sends me to, and the ping times of those servers - I used the same CDN DNS names as the article;

      First, cdn.thaindia.com (internap):

      google resolves as 64.7.222.130, ping 167ms
      opendns resolves as 77.242.194.130, ping 15ms (!)
      aaisp resolves as 64.20.60.99, ping 82 ms
      BT resolves as 64.20.60.106, ping 81ms
      level 3 resolves as 64.20.60.106, ping 81ms

      Then profile.ak.fbcdn.net (akamai):

      google resolves it as 92.122.217.75, ping 22 ms
      opendns resolves as 195.59.150.152, ping 15 ms
      aaisp resolves as 92.122.208.106, ping 13 ms
      BT resolves as 88.221.94.242, ping 14 ms
      level 3 resolves as 195.59.150.144, ping 15 ms

      However you slice it, Google's public DNS is a bad choice for me. It takes longer to resolve addresses, and it sends me to non-optimal CDN servers. OpenDNS is a mixed bag: slower resolution than the rest, but it sends me to easily the most optimal cdn.thaindia.com server (shame about the redirected-NXDOMAIN problem). Yet BT is the fastest DNS resolver of all, and it still returns decent results. Go figure; I thought they'd be overloaded and, well, crap.

      I'm definitely going to have to do further testing for my own personal use, using whole-page rendering on my favourite sites to see what is actually the best option for me personally, as DNS resolver speed clearly isn't the whole story in this CDN world.

    • by davidu (18) writes: on Saturday May 29, @11:32AM (#32389146)

      I'm the founder of OpenDNS (and long-time slashdot reader).

      Oh yeah, how long-time? I've been reading /. for 928499 seconds, whereas you've apparently just clicked here 18 seconds ago. What a newbie! They should have some kind of filter for first-time posters...

        ) = hides under hat

    • Since you're here ...

      So, you want to explain why you hijack Google traffic?

      Why does www.google.com CNAME to google.navigation.opendns.com ... which is in your address space and apparently just passes through to the real Google servers?
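      One quick way to check this for yourself, as a sketch assuming dnspython 2.x (older versions use resolver.query() instead of resolve()):

      import dns.resolver

      r = dns.resolver.Resolver(configure=False)
      r.nameservers = ["208.67.222.222"]   # OpenDNS
      answer = r.resolve("www.google.com", "A")

      # Print the full answer section; a CNAME pointing at
      # *.navigation.opendns.com means the resolver is interposing
      # its own servers in front of Google.
      for rrset in answer.response.answer:
          print(rrset)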

      • Re: (Score:3, Interesting)

        by BitZtream ( 692029 )

        If anyone would like to see this for themselves on any servers they use, check out namebench

        http://code.google.com/p/namebench/ [google.com]

        It tests to figure out which DNS servers you should use, mostly from a speed perspective, but it also does all sorts of neat checks for DNS hijacking like OpenDNS does.

        It's bad enough to do NXDOMAIN hijacking, but flat-out stealing Google traffic and running it through their own servers is just bullshit.

    • Re: (Score:2, Interesting)

      by sajalkayan ( 1213718 )
      I'm the author of the blog post and am inclined to reply. David, I don't mean any disrespect to OpenDNS. It is an awesome service and I use it myself when nothing else works. I don't have anything against OpenDNS. If you for some strange reason want to discredit data from EC2 instances, please see the data from the Thai and Swedish ISPs. Both are personal internet connections of people residing in the respective countries, from their homes. Now that you have really discredited me, I have to work harder
      • by davidu ( 18 )
        I always encourage testing and was actually happy to see you posted your scripts. You can email me at david at opendns dot com if you want to discuss further. I just don't think your testing methodology backs up your rather bold claims.
  • by pongo000 ( 97357 ) on Saturday May 29, 2010 @10:40AM (#32389194)

    ...so those in the know can select the nameserver(s) closest to them [opennicproject.org] without having to depend upon a 3rd party to determine (sometimes erroneously) what servers are closest.

    • Selecting the closest name servers doesn't mean those DNS servers are returning the closest servers for the names you're looking up.

      Short of some sort of hijacking or a really shitty service, you should be using your ISP's name servers. Even on Slashdot, it's highly unlikely that you really know more than the admins upstream.

      Yes, 15 or 20 Slashdot readers do, but the rest of you are just armchair admins who really don't know what you're doing, regardless of how many forums you've read.

  • ...to see who has the balls to announce to the /. world that they don't know what CDN stands for!

    I win! [wikipedia.org]

  • I'm using OpenDNS, and since yesterday Google keeps offering to translate everything into Dutch (I'm in the UK).
  • I use Google DNS to bypass the interstitial ad results page my ISP pops up with any "incompletely typed" (i.e. I didn't type .com/.net/etc.) or mistyped URL.

    Since I rarely if ever click on widgets, ads or other assets, I doubt that any lag time in response would make a material difference to me (nor, I suspect, would it to many others).

  • I love how "geographically aware" applications will happily direct me to Japan or Taiwan when the link from America is far faster. Why the hell should something route me to Japan when I start from Thailand? Or route to Taiwan from China? WTF? I suppose in some people's tiny minds, this makes sense, but in reality the USA link is usually much faster.
  • ...but my current ISP redirects all NXDOMAIN results to their ad page, and the only "opt-out" is a browser cookie that turns that page into an error page. At least Verizon offered an alternative DNS server with that misfeature disabled. I can't wait until my one-year contract is up.

  • Shouldn't the script begin with
    #!/usr/bin/python

    Also, I needed to install dnspython as a dependency.

"A child is a person who can't understand why someone would give away a perfectly good kitten." -- Doug Larson

Working...