Google and OpenDNS Work On Global Internet Speedup

Many users have written in with news of Google and OpenDNS working together on The Global Internet Speedup Initiative. They've reworked their DNS servers so that they forward the first three octets of your IP address to the target web service. The service then uses your geolocation data to make sure that the resource you’ve requested is delivered by a local cache. From the article: "In the case of Google and other big CDNs, there can be dozens of these local caches all around the world, and using a local cache can improve latency and throughput by a huge margin. If you have a 10 or 20Mbps connection, and yet a download is crawling along at just a few hundred kilobytes, this is generally because you are downloading from an international source (downloading software or drivers from a Taiwanese site is a good example). Using a local cache reduces the strain on international connections, but it also makes better use of national networks which are both lower-latency and higher-capacity."
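
In concrete terms, the reworked resolvers attach only a truncated prefix of the client's address to the query they forward upstream. A minimal sketch of that truncation step using Python's standard library (the /24 prefix length is the "first three octets" from the summary; the addresses are documentation examples, not real clients):

import ipaddress

def client_subnet_hint(client_ip: str, prefix_len: int = 24) -> str:
    """Truncate a client address to the prefix a resolver would forward.

    Only the network part (the first three octets for a /24) is passed
    along to the authoritative server; the host part is discarded.
    """
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return str(net)

print(client_subnet_hint("203.0.113.77"))      # -> 203.0.113.0/24
print(client_subnet_hint("203.0.113.77", 16))  # -> 203.0.0.0/16

For real queries this prefix rides along as an EDNS option in the resolver's upstream query; the sketch only shows the truncation, which is also the heart of the trade-off: the target service learns your network, not your exact host.
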
  • by Anonymous Coward

    Akamai has been doing this the proper way for years now.
    I'd prefer this be implemented by the content provider, not by the ISP.

    • by PPH ( 736903 )

      If I understand TFS correctly (and sometimes I don't), it is the target web service that is selecting the local cache for you based on your IP address. So that shouldn't be a problem. Now, if OpenDNS selects it, there could be a trust problem (someone could hijack OpenDNS or they could turn evil). But that's not how I read the description.

      What I'm not understanding is the speedup part. When I open an HTTP connection to some web service, they should already have 'my' IP address (issues of NAT, tunneling, TO

      • When I open an HTTP connection to some web service, they should already have 'my' IP address

        By the time you make an HTTP connection, you've already chosen which mirror of the web service to use. According to the article, this spec would allow DNS servers, such as an ISP's DNS server resolving on behalf of the ISP's customers, to use the prefix of each user to determine which mirror to recommend. It's like a network-topology-aware version of round-robin DNS.
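
        To make "topology-aware round-robin" concrete, here is a toy sketch contrasting the two behaviours; the prefix-to-region table and the mirror addresses are invented purely for illustration:

        import itertools
        import ipaddress

        # Invented mirrors and prefix-to-region table, purely for illustration.
        MIRRORS = {"eu": "198.51.100.10", "us": "192.0.2.10", "asia": "203.0.113.10"}
        PREFIX_REGIONS = {ipaddress.ip_network("203.0.113.0/24"): "asia",
                          ipaddress.ip_network("198.51.100.0/24"): "eu"}

        _rotation = itertools.cycle(MIRRORS.values())

        def round_robin_answer() -> str:
            """Classic round-robin DNS: every client gets the same rotation."""
            return next(_rotation)

        def topology_aware_answer(client_prefix: str) -> str:
            """Answer with the mirror for the region the client's prefix maps to."""
            net = ipaddress.ip_network(client_prefix)
            return MIRRORS[PREFIX_REGIONS.get(net, "us")]   # default region: us

        print(round_robin_answer())                         # rotates, ignores location
        print(topology_aware_answer("203.0.113.0/24"))      # -> 203.0.113.10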

        • By the time you make an HTTP connection, you've already chosen which mirror of the web service to use.

          Wrong. There are many requests and responses made in any transaction, and HTTP has support for this strange thing called "redirect". Even if your first connection is to a server in Djibouti, you may be redirected to a server in Canada, and then that one may again redirect you to a server in Sweden, where you'll finally be given the resource you requested.
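
          Each hop in such a chain is a separate connection to a separate host. A rough way to see the chain is to follow the Location headers by hand instead of letting the library do it silently; a sketch with the standard library (the URL is hypothetical):

          import urllib.error
          import urllib.parse
          import urllib.request

          class NoRedirect(urllib.request.HTTPRedirectHandler):
              """Stop urllib from following redirects so each hop can be logged."""
              def redirect_request(self, req, fp, code, msg, headers, newurl):
                  return None

          def trace_redirects(url: str, limit: int = 10) -> list:
              opener = urllib.request.build_opener(NoRedirect)
              hops = [url]
              for _ in range(limit):
                  try:
                      opener.open(url)
                      break                              # 2xx: final server reached
                  except urllib.error.HTTPError as err:
                      if err.code // 100 != 3 or "Location" not in err.headers:
                          break                          # real error, not a redirect
                      url = urllib.parse.urljoin(url, err.headers["Location"])
                      hops.append(url)
              return hops

          print(trace_redirects("http://example.com/some/driver.zip"))  # hypothetical URL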

          • Which means EVERY HTTP server has to be set up like that.

            Here, Google's DNS & OpenDNS are set up like that & they serve millions of users. Which is slightly more effective AND efficient.

            • by Score Whore ( 32328 ) on Tuesday August 30, 2011 @01:33PM (#37256408)

              You aren't understanding what is going on here. All Google and OpenDNS are doing is providing the authoritative DNS server with the IP address of the client. Google/OpenDNS know nothing about any possible caching or local servers. They are just making it possible for the final DNS server (assuming whoever owns the domain you are resolving supports some kind of CDN) to send you to a nearby server.

              What likely really happened here is this:

              Akamai: Hey Google! Your compulsion to violate everyone's privacy is fucking up the Internet by breaking all of the CDN/Geo-based services.
              Google: But... but... we must know everything about everyone otherwise we'll not be able to sell them to our customers.
              Akamai: Whatever dude, you're causing 50% congestion on the backbone links because your shit hides the actual address of the client.
              Google: Well, we've got sackfuls of PhDs. We'll find a solution that allows us to keep spying and selling and still allows the CDNs to work.
              Akamai: Look you jackasses, the system that exists today works. Your crap is just causing problems and labor for everyone else.
              Google: Na-na-na-na-na-na-na-na I can't hear you.
              Akamai: Seriously. You're being a dick.
              Google: No. We've just invented this awesome thing that is going to Speed-Up! The InterNets!
              Akamai: Jesus Christ! The system that was there before you broke it works. Why the fuck do you have to keep breaking shit?
              Google: Because we can and people are stupid. And we are rich. So STFU.
              Akamai: Fuck. There's no reasoning with these clowns.

              • by afidel ( 530433 )
                I hope this is made into an RFC and MS adopts it. We run our own primary DNS servers because AT&T has been WAY too slow to respond to security issues with their DNS resolvers and AFAIK they still don't properly handle DNSSEC requests. We tried using Google as forwarders but at least at the time they were much slower than running our own primaries (especially once cache warmed). The one drawback has been the fact that we don't always get directed to the most local server by content delivery networks (you
              • I'm curious: what are Google doing to "fuck up the Internet by breaking all of the CDN/Geo-based services". So far as I know they are not doing anything to hide my network addresses, or the route between my machines and other hosts, from other servers. If they were poisoning routing tables or anything like to divert traffic through them and so hide the information other services use for geolocation, I'm sure there would have been a rather loud outcry of "we don't think that motto means what you think it mea
                • They're not doing it to hide your address, that's just an effect of what they are doing. Google's resolvers are nowhere near the querying client. CDN and GeoIP based services are now unable to determine the nearest server based on the IP address of the resolver. Consider: If I am a customer of Xmission and I use Xmission's name servers then I will get directed to an Akamai host that is physically located in Xmission's datacenter and the content will traverse over their LAN. However if I use Google's resolve

                  • Ah, I see, I'd not thought of that. That would explain some services occasionally thinking I'm in the states, or somewhere else entirely, these days as I've started using Google's resolvers.

                    Though I think it is the CDN's problem not Google's (or OpenDNS's). They've relied on a property that is not guaranteed but is commonly the case (that DNS requests come from a topologically similar place as other subsequent requests), and it is now not quite as commonly the case. As a programmer I know that if you rel
          • I refuse most redirects. Unless and until I examine them, redirects are pretty much dead-ends. I've not yet found such an addon for Chromium, but Firefox has had it for a long time now. Call me paranoid, but I really don't like content being downloaded to my machine from places that I've never heard of. I want all my content to come from the server on which I landed when I clicked the link. Cross site scripting is also blocked on my machine. FFS, do you have any idea what those two vectors are capable

          • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Tuesday August 30, 2011 @01:41PM (#37256522) Homepage Journal

            Even if your first connection is to a server in Djibouti, you may be redirected

            Which costs a TCP setup and teardown to Djibouti.

            to a server in Canada, and then that one may again redirect you to a server in Sweden

            Which costs a TCP setup and teardown to Canada.
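
            Back-of-the-envelope, with made-up round-trip times and counting roughly one RTT for each TCP handshake plus one for the request that comes back as a redirect (TLS would add more):

            # Made-up round-trip times (seconds) for each hop in the redirect chain.
            hops = {"Djibouti": 0.300, "Canada": 0.080, "Sweden": 0.150}

            total = 0.0
            for host, rtt in hops.items():
                handshake = rtt            # SYN / SYN-ACK / ACK: ~1 RTT before any data
                request = rtt              # GET -> 3xx (or the final 200): another RTT
                total += handshake + request
                print(f"{host:>8}: ~{(handshake + request) * 1000:.0f} ms")

            print(f"before the first payload byte: ~{total * 1000:.0f} ms")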

        • by DavidTC ( 10147 )

          No, this isn't for ISP DNS, which baffled me at first also. ISP DNS servers almost always use the same 'outgoing path' as your traffic, and hence get handed the same mirror that you yourself would have been handed.

          In fact, this often gives better results than 'correct' geolocation, as DNS servers are usually closer to the network boundary. If the geolocation shows I'm in Tennessee, they might direct me to a mirror in Nashville...but what if my ISP actually connects to the internet via Atlanta, and now inst

  • How is this different from P2P?
    • Very. This is about real-time, latency-optimized content delivery. P2P, using residential low-speed links and desktop computers, is simply not suited for the task.

      What it is not is new. Distributed content caches were all the rage at the end of the last millennium - everybody remembers Squid, I guess? - and so was DNS geo load balancing (including fancy boxes with large price tags), and all that stuff. Ever fatter pipes have always reduced the need for this sort of solution and my guess is that it wi

      • Ever fatter pipes have always reduced the need for this sort of solution and my guess is that it will continue to be the case.

        That would be correct if ISPs were upgrading their big lines at the same rate they're upgrading their customer-facing lines.

        Unfortunately, they're letting their peering lines get oversubscribed, but selling space to CDNs.

        • by EdZ ( 755139 )
          Just like the flip-flopping every few years between THIN CLIENTS ARE THE SOLUTION! and ALL POWER TO THE WORKSTATIONS! (at the moment we're in a thin-clients + remote-server phase, 'the cloud'), there's a general flip-flopping between localised caching when the content served grows larger than the available bandwidth (the current case with streaming video) and centralised content servers once more bandwidth comes online.

          Such is the circle of life!
      • by m50d ( 797211 )
        IIRC the main problem with multicast was that it required all the routers in between to support it, and why would you upgrade something that was already working? Maybe with IPv6 we'll get working multicast, since that requires replacing/upgrading all your routers anyway.
        • No, the problem with multicast is that the big providers wanted more money based on bandwidth & multicast would severely cut bandwidth need.

          As you can see with AT&T and Comcast, they already use multicast internally, as do almost all video over IP providers. They just don't want to share it.

        • I always thought multicast meant that you'd have to be downloading the same bits as everyone else at the exact same time so you'd have to sit around and wait for a multicast session to open up to start receiving your data. It would be great for broadcast television where everyone tunes in to a specific channel to get what they want, but it would be fairly useless if someone connected 5 minutes after the multicast started unless the data packet they were receiving supported the ability to pick up 5 minutes

        • Your first statement is true. As for IPv6, I don't see IPv6 as making a case for multicast globally, perhaps unfortunately. Even the contrary may be true: Since deploying IPv6 costs large sums of money, multicast will have to wait. Also, multicast and IPv6 have little in common that can create synergies; it's not as if IPv6 brought by design a multicast proposal that was radically more convenient than with IPv4.
      • by lennier ( 44736 )

        Ever fatter pipes in North America have always reduced the need for this sort of solution and my guess is that it will continue to be the case.

        Fixed that for you. The rest of the world, not so much, and we grind our teeth at having to use American web apps because they're sloppy bandwidth hogs just like American cars are petrol hogs. Caching is a good thing. Pity AJAX seems practically engineered to break caching as its primary purpose.

  • by lucm ( 889690 ) on Tuesday August 30, 2011 @12:11PM (#37255354)

    > The service then uses your geolocation data to make sure that the resource you’ve requested is delivered by a local cache

    This will make censorship much easier. No more corrupt foreign data in [your favorite oppressive country].

    • Heh, before I even got to the comments I thought "I know someone is gonna talk about how this makes it easier for privacy violations". Not to say that you're wrong though....
    • by vlm ( 69642 )

      > The service then uses your geolocation data to make sure that the resource you’ve requested is delivered by a local cache

      This will make censorship much easier. No more corrupt foreign data in [your favorite oppressive country].

      If my assumption is correct that they're basically using cnames to generate geo-locatable new host names, you could just distribute the knowledge that in Afghanistan / GB / GER, you simply need to visit 4.4.4.youtube.com to gain "usa" access to youtube.

    • What? It makes it easier to avoid censorship. Remind yourself of what Google did with China and the .hk redirect.
    • Makes it almost impossible to remain anonymous by proxy or Tor if you use the service, or if DNS servers start to enforce the behavior Google is suggesting in their IETF brief.

      Seems like when you do a DNS lookup, your octets get sent by yourself or the proxy, and then Google and friends see a request for that domain in Analytics or whatnot from the proxy. Not going to take much to attach one piece of information to the other, or pull together patterns from Analytics from the DNS responses for the proxy t

  • by King_TJ ( 85913 ) on Tuesday August 30, 2011 @12:19PM (#37255456) Journal

    Isn't this little more than an expensive band-aid for the underlying bandwidth problem? Delivering content from strategically located caches is an OLD concept, and it's always been trouble-prone, with some sites not receiving updated content in a timely manner and others getting corrupted.

    Quite frankly, I wish some of the big players with vested commercial interests in a good-performing internet (like Google, Amazon, or Microsoft) would pitch in on some investment funding to upgrade the infrastructure itself. I know Google has experimented with it on a small scale, running fiber to the door in a few U.S. cities. But I'm talking about thinking MUCH bigger. Fund a non-profit initiative that installs trans-Atlantic cables and maintains them, perhaps? If a nation wants to censor/control things, perhaps they'd reject such a thing coming to their country, but that's ok.... their loss. Done properly, I can see it guaranteeing a more open and accessible internet for all the participants (since presumably, use of such circuits, funded by a non-profit, would include stipulations that the connections would NOT get shut off or tampered with by government).

    • Quite frankly, I wish some of the big players with vested commercial interests in a good-performing internet (like Google, Amazon, or Microsoft) would pitch in on some investment funding to upgrade the infrastructure itself.

      You'd have several hundred lawsuits from several dozen companies that have a vested interest in keeping control over the existing infrastructure. You'd have antitrust investigations being called for, contract lawsuits against the cities that promised them monopoly access, and several billion dollars poured into lobbying to make sure that it cannot and will not happen.

      And keep in mind, on the low tiers the vast majority of laid fiber is dark, just waiting for someone to actually plug into it and use it. An

      • by vlm ( 69642 )

        And I'd be shocked if that didn't include the transatlantic lines as well.

        They're all lit all the time. Maybe just as second-string "if we lose a strand, your strand becomes their protection circuit" capacity, but they'll be lit. I used to be in the telecom business.

        Where you find dark fiber is on shorter local hops where there simply isn't the current demand, at any reasonable cost.

        You'll never completely satisfy demand between the mainland US and Hawaii. You can supersaturate demand to the point of dark fiber between little-city and cow-village.

        • by afidel ( 530433 )
          I don't think that's true at all, many transoceanic links have dark pairs. I know the Google-cable was going to be only half lit at installation. The existence of dark fiber on transoceanic links is driven by many of the same economics as dark fiber on land, only magnified since the cables are so much more expensive, the installation makes trenching look cheap, and the lead times are measured in quarters instead of weeks.
    • by slamb ( 119285 ) * on Tuesday August 30, 2011 @01:04PM (#37256028) Homepage

      Isn't this little more than an expensive band-aid for the underlying bandwidth problem?

      Keep in mind that Google, Amazon, Akamai, etc. had already created geographically distributed networks to reduce latency and bandwidth. Improving the accuracy of geolocated DNS responses through a protocol extension is basically free and makes these techniques even more effective.

      Also, Google cares a lot about latency. A major component of that is backbone transit latency, and once you have enough bandwidth to avoid excessive queueing delay or packet loss, I can imagine only four ways to significantly reduce it: invent faster-than-light communications, find a material with a lower refractive index than the optical fibers in use today, wait for fewer round trips, or reduce the distance travelled per trip. This helps with the last. Building more fiber wouldn't help with any of those and would also be a lot more expensive.
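
      For a sense of the distance term, a back-of-the-envelope sketch (distances are illustrative; light in fiber travels at roughly two thirds of c, about 200,000 km/s):

      C_FIBER_KM_PER_S = 200_000     # ~2/3 of c: propagation speed in glass

      def min_rtt_ms(distance_km: float) -> float:
          """Physical lower bound on round-trip time over fiber, ignoring
          routing detours, queueing and serialization delays."""
          return 2 * distance_km / C_FIBER_KM_PER_S * 1000

      for label, km in [("nearby cache", 50),
                        ("cross-country", 4_000),
                        ("trans-Pacific", 9_000)]:
          print(f"{label:>14}: >= {min_rtt_ms(km):.1f} ms per round trip")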

      Full disclosure: I work for Google (but not on this).

    • by lennier ( 44736 ) on Tuesday August 30, 2011 @05:14PM (#37258884) Homepage

      Isn't this little more than an expensive band-aid for the underlying bandwidth problem?

      Not really. There is always a finite quantity of bandwidth. It only becomes a "problem" when you have applications which assume infinite bandwidth or are forced to assume this for legal or political rather than technical reasons.

      Like, oh let's just say for example, streaming video.

      Streaming is the anti-caching. It's a terrible technical non-solution to a legal problem. It clogs the tubes and wastes bandwidth by design just to retain control over the obsolete idea of "broadcasting" so that copyright control and advertisements can be retrofitted into the stream.

      But doing video right would require re-engineering our entire economy -- which will have to happen sooner or later when the IP crash comes -- so we'd rather just break the Internet by design and then attempt to retrofit some kind of weird fixup after the fact to make some preferred partners work sort-kinda okay.

    • by SuperQ ( 431 ) *

      Sorry, but you have no clue what this is about. This has very little to do with bandwidth and much more to do with latency.

      Say you live in Germany.

      From California to Germany you're talking about 150ms minimum. If you're working with an interactive site like Gmail or Google maps the experience is going to suck. Say it takes the server 100ms to do the work to respond to your request. That's already faster than most website servers are designed to respond to. This means that 250ms is the time it takes to

  • by Anonymous Coward

    A network where only big players can afford fast delivery is not neutral. CDNs starve the actual network in favor of local caches. Money that would have to go to bandwidth improvements now goes to the CDNs, which in turn are only used by global players. This leaves us with an anemic network that can deliver Youtube clips quickly but chokes on broadband communication between individuals on different continents. And if that weren't bad enough, they abuse DNS to do it.

    If I type google.com into my browser, I ac

  • Sorry, I may just have woken up on the wrong side of the bed this morning... but wouldn't doing as TFA suggests just open the door to another MITM attack method?
    • Not really... but it would provide another Man-on-the-end attack method, where the owner of the DNS gets much more information and control regarding the endpoint. Someone who has a malicious DNS host would have much more control over how to direct traffic based on geolocation before you even reach their server. People from a specific location could be routed via a middleman to sniff/poison the data via this method.

      A man in the middle already knows your entire IP, so they gain pretty much nothing here that

      • An added bonus for DNS providers is that they get a bit more telemetry on where lookups for domains are coming from.

  • What I like about this entire thing is that we can save on bandwidth, which should also lead to some power savings overall. Not a lot, but it's another drop, and those drops could add up to something useful in the future.

  • I realize that IPv4 is going to be with us for quite some time, but is this going to be worth the effort? It requires a bit of jiggery-pokery to repoint your DNS, the kind of thing that appeals to the Slashdot crowd but which your grandma will never, ever pull off. ISPs could help, but will they do so before IPv6 makes it irrelevant?

    • by vlm ( 69642 )

      I realize that IPv4 is going to be with us for quite some time, but is this going to be worth the effort? It requires a bit of jiggery-pokery to repoint your DNS, the kind of thing that appeals to the Slashdot crowd but which your grandma will never, ever pull off. ISPs could help, but will they do so before IPv6 makes it irrelevant?

      It's done on their side. I'm reading between the lines and trying to unfilter the dumbed down journalist-speak, but I think I could implement the same thing by configuring about 16 million bind "views" for each of a.b.c in the source ip address range a.b.c."whatever". Then bind gives a different cname for the domain inside each view, like if my src addrs is 1.2.3.4 then their bind server barfs out a cname of 3.2.1.www.whatever.com whenever someone asks for www.whatever.com and some other sorta thingy decid
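
      Sketched as code rather than as actual bind views, that guess boils down to deriving a per-/24 synthetic name from the querying address; the reversed-octet naming is just the guess above, reproduced as-is:

      def synthetic_cname(src_addr: str, qname: str = "www.whatever.com") -> str:
          """Roughly what a per-/24 'view' would hand back: a CNAME derived from
          the first three octets of the querying address, reversed."""
          a, b, c, _ = src_addr.split(".")
          return f"{c}.{b}.{a}.{qname}"

      print(synthetic_cname("1.2.3.4"))   # -> 3.2.1.www.whatever.com, as above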

      • by LDAPMAN ( 930041 )

        "The problem with caching is random access drive speed etc has not been increasing as fast as bandwidth used. So where a slightly tuned up desktop made a decent tolerable usable cache around 2000, around 2010 to make the cache better than directly access, you need monsterous gear and complicated setups. Rapidly it becomes cheaper to buy more bandwidth."

        Modern caching systems primarily return data from memory, not from disk. Even if you were to pull the data from disk, the disk is normally an order of magni

    • I realize that IPv4 is going to be with us for quite some time, but is this going to be worth the effort? It requires a bit of jiggery-pokery to repoint your DNS, the kind of thing that appeals to the Slashdot crowd but which your grandma will never, ever pull off. ISPs could help, but will they do so before IPv6 makes it irrelevant?

      It's described in IPv4 terms, but extending it to work with IPv6 addresses should be simple enough. The trickiest part will be finding the golden CIDR mask to replace IPv4 /24. Giving up /64 is too much, since it identifies most ISP customers uniquely, and /48 has similar issues. Probably something near /32 or /40 would be appropriate, although you could probably do a lot with as little as /20.
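
      The truncation itself generalizes directly; with Python's ipaddress module, for example, the candidate prefix lengths floated here come out as (the address is a documentation example):

      import ipaddress

      addr = "2001:db8:a:6bb:21f:29ff:fe87:88c0"      # documentation address

      for plen in (64, 48, 40, 32, 20):
          net = ipaddress.ip_network(f"{addr}/{plen}", strict=False)
          print(f"/{plen}: {net}")
      # A /64 or /48 narrows things to roughly one customer or site, which is
      # why something coarser like /32 or /40 is suggested above.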

      Other than that, the described technique is still fully relevant because IPv6 doesn't change the game in any othe

  • by vlm ( 69642 ) on Tuesday August 30, 2011 @12:23PM (#37255504)

    Description seems a little simplified. Sounds like an Akamai presentation from over a decade ago.

    So is this a commercial competitor to Akamai, or a non-commercial competitor, or a freeware / public competitor, or is it something somewhat different, like a squid proxy set up for transparent caching from 2002 or so?

    Speaking of squid, it's 2011, is squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and squid is probably the most famous.

    • Re:Akamai? (Score:5, Informative)

      by Binary Boy ( 2407 ) on Tuesday August 30, 2011 @12:35PM (#37255678)

      Speaking of squid, it's 2011, is squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and squid is probably the most famous.

      http://wiki.squid-cache.org/Features/IPv6 [squid-cache.org]

    • Re: (Score:3, Informative)

      by lurp ( 124527 )

      None of the above. It's a scheme to pass your IP address to CDNs such as Akamai so that they can select an edge server that's closer to you. Absent this, CDNs select an edge server closest to your DNS provider — that's fine if you're using your ISP's DNS, but in the case of an OpenDNS or Google Public DNS, that's likely a poor choice.

    • Speaking of squid, it's 2011, is squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and squid is probably the most famous.

      Hell, I'd be happy if it used HTTP 1.1! It's only been a standard since June 1999 [ietf.org].

    • Asterisk?

    • Here's how I'm reading this.

      So Google already have local caches, and respond to DNS queries with your local cache's IP address. Just like Akamai and other content networks do.

      But some global DNS services like OpenDNS will do the lookup once, getting an IP for a cache that is local to their service, and return that IP address to all users.

      So Google and OpenDNS came up with a DNS protocol extension so that end users get the right address.

    • So is this a commercial competitor to Akamai, or a non-commercial competitor, or a freeware / public competitor, or is it something somewhat different, like a squid proxy set up for transparent caching from 2002 or so?

      None of the above. Services like Akamai rely on the IP address of the request to deliver the data from the closest point. This is fine if you use your local city's local ISP's DNS server, as Akamai will see the IP address of your city correctly and send you to the correct server. However this is quite problematic if you use 8.8.8.8 or OpenDNS as your primary DNS provider because in the past there has been no way to determine from the request *where* you are located. Akamai would deliver any request to a user

    • No, it's not. All they have done is fixed their DNS so things like Akamai (and Google's own distributed system) can function as designed.

      In the past if you switched over to Google's DNS or OpenDNS, you ended up with a slower Internet experience in some ways, as the CDNs would send data from the node located closest to the DNS server, rather than the client. If you were a user in Europe, you might very well be receiving a YouTube stream from within the US. This has changed, as the CDN can tell the location o

  • by shentino ( 1139071 ) <shentino@gmail.com> on Tuesday August 30, 2011 @12:40PM (#37255732)

    I'm all for this if and only if all protocols are fully complied with.

    HTTP gives plenty of leeway and in fact was designed with caching in mind. So long as the involved parties do not violate the protocol, I'm ok with it. Cache control directives must be honored, for example. No silent injection of random crap.
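
    As a simplified illustration of what honoring cache control means for a shared cache sitting in the path (the directive names are from HTTP, the rest is a toy):

    def may_store(cache_control: str) -> bool:
        """Simplified: a shared cache must not store responses marked no-store
        or private (no-cache may be stored but must be revalidated on reuse)."""
        directives = {part.strip().split("=")[0].lower()
                      for part in cache_control.split(",") if part.strip()}
        return not ({"no-store", "private"} & directives)

    print(may_store("public, max-age=3600"))   # True  - fine to keep a copy
    print(may_store("private, max-age=0"))     # False - for the end user only
    print(may_store("no-store"))               # False - never write it down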

    The DNS protocol must also be honored to the extent that deviations from same have not been expressly authorized. OpenDNS offers typo correction and filtering services on an opt-in basis. NXDOMAIN hijacking and whatnot is foul play.

    Just don't fuck with the protocols, and you can do whatever you want.

    • by kasperd ( 592156 )

      HTTP gives plenty of leeway and in fact was designed with caching in mind. So long as the involved parties do not violate the protocol, I'm ok with it.

      This initiative is not going to change HTTP at all. The client may end up at a different replica of the HTTP service, but each of them is equally valid. The only difference is how fast you get the answer.

      The DNS protocol must also be honored to the extent that deviations from same have not been expressly authorized.

      The protocol was designed in an extensi

  • How about we get rid of all the linkages to crap sites like facebook, digg, myspace, etc. These are the links that are almost always responsible for slow page loads for me. I rarely have any issue at all with downloads or videos, local or international. Interesting too when using noscript and some webpages literally twitch when something like facebook is blocked as they try over and over to reload and reconnect. And as others have said, these local caches are just another security risk waiting to be ex

  • Alternately... (Score:4, Insightful)

    by Score Whore ( 32328 ) on Tuesday August 30, 2011 @12:55PM (#37255884)

    Google could just not provide a service that inserts itself into the DNS path. The problem isn't "the internet" or DNS, it's that Google's DNS servers have no relationship to the client systems. If people were using DNS servers that had some relationship to their network -- such as the one provided by their ISP -- then this wouldn't be an issue.

    Plus not using Google's DNS gives you a little more privacy. Privacy of course being defined as not having every activity you do on the internet being logged by one of Google's many methods of invading your space (DNS, analytics, search, advertising, blogger, etc.)

    • When your ISP DNS randomly redirects you to a webpage advertising their products, or typing www.google.com in the address bar leads to an ISP page with ads and a small link to www.google.com, 8.8.8.8 does save you a lot of hassle.

      • by SuperQ ( 431 ) *

        Or when ISP DNS servers are so badly managed that they take seconds to respond to lookups.

    • You forget the reason why these services popped up to begin with. In general, your ISP often can't be trusted. There are ISPs out there who run DNS servers which break some of the fundamental functionality of DNS used by various applications, such as the ability to honestly say "no server found" rather than directing you to a search page.

      Then there are ISPs who are too incompetent to run a DNS server, providing an undersized box which can't cope with basic traffic. It's sad that in some cases Google's DNS serve

  • While this is a commendable effort, the biggest offender when it comes to horrible load times is the embedded advertising content. Most of these ad providers -- I'm looking at you, Google -- deliver their content at a snail's pace, delaying the delivery of the actual content.

    Case in point: I used Chrome's network developer tools to analyze load times for the various elements on http://www.slashdot.org

    The top 5 longest durations go to *.doubleclick.net and range from 242 to 439ms, with the total page load time

  • by davidu ( 18 ) on Tuesday August 30, 2011 @01:07PM (#37256064) Homepage Journal
    Happy to answer questions about this work.
    • What's the process to go about using this if I'm currently using round robin for say 10 servers? Do I need to switch to a DNS server that can obey your extra tag and select the correct closest IP?

  • Messing with DNS is doing it the Wrong Way. All of these CDN services are based on HTTP. When you're using them, that's an HTTP server you're talking to. It's perfectly capable of geolocating you by IP, and it can either hand you back links to a local CDN, or redirect you to another server.
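
    What's being described would look roughly like this on the origin side; the prefix-to-mirror table and hostnames are invented, and a real service would use a proper GeoIP database rather than string prefixes:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Invented prefix-to-mirror table; a real service would use a GeoIP database.
    MIRROR_BY_PREFIX = {"203.0.113.": "http://asia.cdn.example.net",
                        "198.51.100.": "http://eu.cdn.example.net"}

    class GeoRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            ip = self.client_address[0]
            mirror = next((m for prefix, m in MIRROR_BY_PREFIX.items()
                           if ip.startswith(prefix)),
                          "http://us.cdn.example.net")   # fallback mirror
            self.send_response(302)                      # bounce to the local cache
            self.send_header("Location", mirror + self.path)
            self.end_headers()

    HTTPServer(("", 8080), GeoRedirect).serve_forever()  # blocks; Ctrl-C to stop

    The DNS-based approach avoids exactly that extra round trip to the origin before the client learns where the nearby copy lives.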

    Why the hell must we mess with DNS to do this? This is a solution which only works if you use Google DNS, OpenDNS, or sometimes if you use your local ISP's DNS. What if you're just running bind for your local net vs the r

    • Because it works, and there is more out there than just HTTP. This same approach will work for any protocol that uses DNS to resolve domain names. It also doesn't require a ton of server-side hacks that would need to be implemented by all vendors. Easy fix, and quick to deploy, requiring no end-user code/system changes. Seems like a no-brainer to me.

      • by pslam ( 97660 )

        Because it works, and there is more out there than just HTTP. This same approach will work for any protocol that uses DNS to resolve domain names.

        Except that this is only used for HTTP. I do not know of any non-HTTP examples.

    • by slamb ( 119285 ) *

      All of these CDN services are based on HTTP. When you're using them, that's an HTTP server you're talking to. It's perfectly capable of geolocating you by IP, and it can either hand you back links to a local CDN, or redirect you to another server.

      Then it's not possible to geolocate that first HTTP request.

      What if you're just running bind for your local net vs the root servers? Bzzt. Doesn't work.

      It should work, although it may not be necessary. I see six possibilities:

      • Your local bind is configured to send q
    • by anom ( 809433 )

      If you're running bind for your local net, then you don't need this, as your DNS resolver is already located close to you. The problem arises when DNS resolvers are utilized that are not "close" to the clients they serve, and therefore CDNs will often end up picking a CDN replica close to your resolver rather than close to you.

      Obviously this problem grows as does the distance between you and your resolver -- if you're using a huge resolving service like Google DNS or OpenDNS, then you are much more likely t

      • by pslam ( 97660 )

        And here we have the real reason why this is being promoted:

        3. And IMO most importantly, this removes the server selection choice from being under the sole control of the CDN provider. If this stuff is logic'd through the main HTTP page of the website, the CDN must expose its server selection strategy to the client, which is likely proprietary business knowledge.

        It breaks DNS. It certainly breaks my local DNS installation, for starters. It also means that *everyone* must use this DNS hack because service

        • by anom ( 809433 )

          "It breaks DNS" seems like a pretty strong comment to me and I'm not following how exactly it's going to do this. If you have a local DNS installation (I assume you're talking about dns /resolvers/ here?) that local machines use, there is absolutely no need for you to implement this, as any CDN basing a server selection choice on your local DNS installation will be well-guided. Your resolver won't send the applicable EDNS option, and the authoritative DNS server won't care that it's not there -- it'll jus

  • Maybe it's about time the Windows operating system was given the ability to cache DNS queries locally BY DEFAULT. It would speed up lookups for the user's most-visited sites, and take the load off the internet from repeatedly running requests for sites the user keeps on visiting.
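
    The core of such a cache is tiny; a toy sketch with a fixed TTL (a real cache would honor the TTL carried in each DNS answer instead):

    import socket
    import time

    _cache = {}                      # hostname -> (expires_at, address)

    def cached_lookup(name: str, ttl: float = 300.0) -> str:
        """Resolve a hostname, reusing an earlier answer until the TTL expires."""
        now = time.monotonic()
        hit = _cache.get(name)
        if hit and hit[0] > now:
            return hit[1]                        # still fresh: no network traffic
        addr = socket.gethostbyname(name)        # falls through to the OS resolver
        _cache[name] = (now + ttl, addr)
        return addr

    print(cached_lookup("example.com"))
    print(cached_lookup("example.com"))          # second call served from the cache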

  • So...the summary states "they've reworked their DNS servers so that they forward the first three octets of your IP address to the target web service". Uh, doesn't my browser send my WHOLE IP address to the web service when I make an HTTP request anyway? How is this different/better?

    If what they meant to say is that the resolver sends the first three octets of my ip address to the destination's name servers when doing a recursive lookup, then how is this any better than using any old DNS? In other words
  • If you have a 10 or 20Mbps connection, and yet a download is crawling along at just a few hundred kilobytes

    I would love to have a connection that "crawls along at just a few hundred kilobytes (a second)". Most of the time, when it crawls, it does so at a few tens of kilobytes a second (sometimes even less than that).

  • Evil and OpenDNS Work On Global Internet Speedup.

    I think from now on simply replacing the word Google with Evil should be an auto-correct feature.
    • by Kozz ( 7764 )

      Evil and OpenDNS Work On Global Internet Speedup.

      I think from now on simply replacing the word Google with Evil should be an auto-correct feature.

      And who, exactly, is forcing you to use OpenDNS?

  • There are two factors that affect the performance of web (HTTP) lookups: latency and bandwidth. Latency depends on the distance between client and server. You won't be able to send data faster than the speed of light. Bringing the data closer to the client helps to reduce latency, especially for small lookups. Bandwidth becomes the limiting factor when you transfer (large amounts of) data over under-dimensioned pipes. In general, I'd be a much happier person if people would use HTTP caching headers (E
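
    For what it's worth, the cheapest win those headers buy is the conditional request: hand back the validator you were given, and a well-behaved server answers 304 with no body when nothing changed. A standard-library sketch (treating example.com as a stand-in for a static, cacheable resource):

    import urllib.error
    import urllib.request

    URL = "http://example.com/"                  # stand-in for a cacheable resource

    first = urllib.request.urlopen(URL)
    body, etag = first.read(), first.headers.get("ETag")

    if etag:
        req = urllib.request.Request(URL, headers={"If-None-Match": etag})
        try:
            body = urllib.request.urlopen(req).read()   # changed: full body again
        except urllib.error.HTTPError as err:
            if err.code != 304:
                raise
            # 304 Not Modified: nothing re-sent, the cached body stays valid
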
  • An Internet speedup would involve adding the ability to carry more bytes per second, analogous to changing your delivery vehicles from donkey carts to vans. This is just improving the logistics of the donkey cart-based delivery service.

  • by Animats ( 122034 ) on Tuesday August 30, 2011 @02:36PM (#37257128) Homepage

    What we need is less junk in web pages. The amount of dreck in web pages has gotten completely out of control. The home page of Slashdot has 3424 lines of HTML, not counting included files. This is not unusual. I'm seeing pages over 4000 lines long for newspaper stories. Pages with three to five different tracking systems. Hidden content for popups that's sent even when the popup isn't being loaded. Loading of ten files of CSS and Javascript for a routine page.

    CSS was supposed to make web pages smaller, but instead, it's made them much, much bigger. Being able to include CSS from common files was supposed to improve caching, but instead, many content management systems create unique files of CSS for each page.

    And you get to pay for downloading all this junk to your smartphone. It doesn't matter what route it takes through the backbone; eventually all that junk crosses the pay-per-bit bandwidth-throttled air link to the phone.

    Where bandwidth really matters is video. There, the video player already negotiates the stream setup. That's the place to handle choosing which source to use. Not DNS.

  • I was preparing to hate this (I don't trust Google that much & OpenDNS do some questionable things), but I can't find anything wrong with this from a privacy or openness perspective. I think there have already been huge DNS pools that return a random set of records, so the idea of DNS being universal and reproducible is gone, if that idea ever existed (and I see no reason why that would be a problem).

    Isn't this what anycast is supposed to solve? For example when using 6to4, one can specify a single, gl

    • Anycast works great for protocols like 6to4, DNS or NTP where each packet is a complete transaction and it really doesn't matter if all your packets end up at the same server. It isn't so good for TCP, where the client and server have shared state. In the worst case where a route is flapping, you may never maintain a connection for more than a few seconds as your packets get repeatedly sent to different servers.

  • (not related to my previous post:) How does this work with DNS caches? Say Ebay implemented this and gave back A records depending on the client IP address. If user A was using Google DNS in Norway and requested DNS records for Ebay, then Google could
    • a) Cache the reply and return those for all other users (B,C,D...). This would mean that all users of that Google DNS server, maybe over all of Europe, would get the A records for Ebay's Norway server (I don't think Ebay has a server in Norway, but never
  • Why would they use this rather involved mechanism instead of anycast [wikipedia.org] IP addressing? TFAs don't go into why they've gone to the effort of reimplementing something which essentially already exists.
  • by Nethead ( 1563 ) <joe@nethead.com> on Tuesday August 30, 2011 @09:42PM (#37260990) Homepage Journal

    How are the first three bytes of 2001:470:a:6bb:21f:29ff:fe87:88c0 going to tell them anything about my location?

    • By octets, it was clear that they were referring to IPv4, since the IPv6 'segments' you have - 2001, 470, etc are not octets. As a post above asked, is it even worth the effort?

      The second word in your address - 470 - tells one that it's issued by ARIN, and so a search on domains under ARIN would pinpoint the subscriber of that particular address, and whether any of the sub-nets fall outside ARIN's geography.

      • by Nethead ( 1563 )

        2001:470:a:6bb::/64 is an H-E tunnel (and seemingly oddly, a host address.) I can't remember if they do swippage on it, or even if there is SWIP [wikipedia.org] for IPv6.
