
Google Proposes DNS Extension 271

Posted by CmdrTaco
from the you-know-my-name dept.
ElusiveJoe writes "Google, along with a group of DNS and content providers, hopes to alter the DNS protocol. Currently, a DNS request can be sent to a recursive DNS server, which sends out requests to other DNS servers from its own IP address, thus acting somewhat like a proxy server. The proposed modification would allow authoritative nameservers to expose your IP address (rather than, say, the address of your ISP's DNS server) in order to 'load balance traffic and send users to a nearby server.' Or it would allow any interested party to look at your DNS requests. Or it would send a user from Iran or Libya to a 'domain name doesn't exist' server."
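
For the curious, here is a minimal Python sketch of the idea (purely illustrative; the helper names are made up and this is not the draft's actual wire format): today the authoritative nameserver sees only the recursive resolver's address, while under the proposal it would also receive a truncated form of the client's address.

    # Purely illustrative sketch; not the draft's actual wire format.
    import ipaddress

    def truncate_to_24(client_ip):
        """Keep only the first three octets (a /24 prefix), as the proposal describes."""
        return str(ipaddress.ip_network(client_ip + "/24", strict=False))

    def query_today(client_ip, resolver_ip, qname):
        # The authoritative server sees only the recursive resolver's address.
        return {"qname": qname, "seen_by_authority": resolver_ip}

    def query_with_extension(client_ip, resolver_ip, qname):
        # The resolver additionally forwards the client's truncated /24 prefix.
        return {"qname": qname, "seen_by_authority": resolver_ip,
                "client_subnet": truncate_to_24(client_ip)}

    print(query_today("203.0.113.45", "8.8.8.8", "www.example.com"))
    print(query_with_extension("203.0.113.45", "8.8.8.8", "www.example.com"))
    # The second call adds 'client_subnet': '203.0.113.0/24' -- your /24, not your full address.
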
  • by Saishuuheiki (1657565) on Thursday January 28, 2010 @02:12PM (#30937140)
    If you read the entire post by Google, you'll notice they are suggesting only the first 3 octets of the IP address are transmitted. Now while this could theoretically be used to censor regions of users, it could not be used to expose you (since it isn't the complete IP address).
    • by Monkeedude1212 (1560403) on Thursday January 28, 2010 @02:20PM (#30937346) Journal

      Doesn't that theoretically nail you down to somewhere within 252-ish machines? (Assuming IPv4).

      The first 3 octets seem like they could be enough to personally identify you based on your DNS Search records.

      • by TBone (5692) on Thursday January 28, 2010 @03:18PM (#30938874) Homepage
        No, it narrows you down to somewhere within 252-ish public IP addresses (even considering IPv6, which has a standard rest-of-the-address used to "encapsulate" IPv4). Very few people on broadband services across most of the world (I'll even go so far as to say the majority of users don't) truly appear to the outside world as an actual unique IP address, which is to say with you and the guy at the desk/apartment/house/whatever next to you having discrete and separate network addresses. Your connection is generally going to be NAT-translated in some form or another from a private-network-space IP address to a public address. You will appear, to the world, to be generally the same "computer" as several users around you in the network.
      • by peragrin (659227) on Thursday January 28, 2010 @03:20PM (#30938924)

        Well, if you're like my house it's closer to 1 in 765. NATs are wonderful for that: they can determine the IP, but not which of the four users across 9 computers with Internet access it was.

    • by gstoddart (321705) on Thursday January 28, 2010 @02:23PM (#30937420) Homepage

      If you read the entire post by Google, you'll notice they are suggesting only the first 3 octets of the IP address are transmitted. Now while this could theoretically be used to censor regions of users, it could not be used to expose you (since it isn't the complete IP address).

      No, but given that at most an additional 255 (or is it 254?) users besides you can be coming from that range, it's not as though someone couldn't correlate this to you over time.

      I'm not convinced this doesn't have privacy implications, or that we're not better off with our requesting DNS server being the one that is shown. I don't necessarily want web sites to know where I'm coming from.

      Cheers

      • by Talisein (65839) on Thursday January 28, 2010 @02:34PM (#30937760) Homepage

        Web sites already know where you're coming from. They have your IP address. Every single one of them, unless you're using a proxy. The problem is that they can't easily redirect you to the server closest to you once you've already resolved their address. The only party in the whole system that potentially does not know your IP when you're browsing the web is the authoritative DNS server; and in the usual case the same people who run the authoritative DNS server also run the web server, so while they don't get your IP when you do the DNS lookup, they will when you eventually land on the site.

      • by gparent (1242548) on Thursday January 28, 2010 @03:07PM (#30938570)

        No, but given that only an additional 255 (or is it 254?) users besides you can be coming from that range, it's not like over time someone can't correlate this to you.

        Could be 256.

      • by LordLimecat (1103839) on Thursday January 28, 2010 @03:07PM (#30938580)
        It's 254, assuming it's not being NATed in any way. And the IP addresses change randomly for most users, at random intervals.

        Somehow all these people are super concerned with THIS idea, but have no qualms about everything they do online being logged in weblogs. But then, it's Google (or Microsoft, or Apple), so we have to bash them; they're too successful to be allowed to have good, non-evil ideas!
    • by Vainglorious Coward (267452) on Thursday January 28, 2010 @02:24PM (#30937458) Journal

      only the first 3 octets of the IP address are transmitted...could not be used to expose you

      Combining this with the information from the already quite pervasive tracking google does, I can't imagine that identifying your one-of-256-addresses is anything other than trivial.

    • by TheRaven64 (641858) on Thursday January 28, 2010 @02:26PM (#30937528) Journal

      The first three octets limit you to a maximum of 256 machines. In practice, most addresses are assigned in /24s, so a couple of those are used up by the router and broadcast addresses. Most broadband ISPs don't recycle addresses often, so you end up with the same IP for weeks, if not months, at a time. Of the other 200 people on your /24, how many are online at the same time as you? Maybe 10-20? Of those, how many have sufficiently similar surfing patterns that, when you combine the DNS results with tracking data from all sites that use Google Analytics, they can't be distinguished from you?

      If Google can't track your Internet usage from the first three octets of your IP address and DNS results then they haven't got nearly as much expertise in data mining as you'd need to operate a successful search engine.
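
      For anyone following the 252/254/255/256 back-and-forth above, a quick check with Python's standard ipaddress module (a sketch assuming a plain /24 with the usual network and broadcast addresses reserved):

        import ipaddress

        net = ipaddress.ip_network("198.51.100.0/24")
        print(net.num_addresses)         # 256 addresses in total
        print(len(list(net.hosts())))    # 254 once network and broadcast are excluded
        # Reserve one more for the router/gateway and you're down to 253 usable hosts.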

      • by natehoy (1608657) on Thursday January 28, 2010 @04:09PM (#30939876) Journal

        Of course, since this is only to give them enough information so you can access a Google server nearby as opposed to one somewhere else, they'll have your FULL IP ADDRESS about 1/100 of a second later.

        Google doesn't need this to track you. In fact, this information is less useful than what they already have. This is about Google (and anyone else who has distributed datacenters) being able to make better decisions about which datacenter to send you to. This saves them bandwidth charges, which adds up to BIG money. That alone is plenty of reason why Google wants this, and everyone who manages multiple distributed datacenters should too.

    • by poetmatt (793785) on Thursday January 28, 2010 @02:32PM (#30937700) Journal

      Even the first 2 octets can be enough to reliably identify you with some digging. What do you think 3 is gonna do?

      • by Saishuuheiki (1657565) on Thursday January 28, 2010 @02:49PM (#30938124)
        Isn't this a moot discussion anyway? Generally speaking, they're going to get your IP address anyway when you connect to their server; so why does it matter if they get your IP a little earlier, when you're looking up their server?

        I guess there could be some way to track what sites you're looking up from different tiers of DNS servers. But if you were using Google DNS, they'd have your entire DNS history anyway, and if you were using another, they'd only get your IP when you connect to google.com.
    • by Anonymous Coward on Thursday January 28, 2010 @02:35PM (#30937794)

      I'm not worried about the "evil" aspect of it. This just doesn't sound like what DNS should be used for.

    • by D Ninja (825055) on Thursday January 28, 2010 @03:11PM (#30938688)

      Thank you! Came in here to say this. Did the submitter even read the article?

      And for those interested:

      Our proposed DNS protocol extension lets recursive DNS resolvers include part of your IP address in the request sent to authoritative nameservers. Only the first three octets, or top 24 bits, are sent providing enough information to the authoritative nameserver to determine your network location, without affecting your privacy.

    • by Imagix (695350) on Thursday January 28, 2010 @04:10PM (#30939914)
      And that defeats the purpose. The internet got away from classes of IPs and went to classless delegation for a reason. Now they want to bring it back. And if the concern was really for geolocation purposes, then the ISP can simply put a recursive nameserver close to the clients (say only 1 hop up from the client). Since all of the client's traffic must pass by that hop anyway, that DNS will be close enough to determine where the client is.
    • by tlambert (566799) on Thursday January 28, 2010 @04:48PM (#30940756)

      To: DNSEXT (DNS Extension Working Group, Internet Engineering Task Force)
      From: Paul Vixie
      Date: Thu, 28 Jan 2010

      "I don't think that's a general enough solution to be worth standardizing.
      please investigate the larger context of client identity, beyond the needs
      of CDN's."

      I also agree with his later statement in the same thread:

      "it may be too dangerous in any form but that's a separate issue."

      -- Terry

    • by Ungrounded Lightning (62228) on Thursday January 28, 2010 @05:51PM (#30941968) Journal

      Now while this could theoretically be used to censor regions of users, it could not be used to expose you (since it isn't the complete IP address)

      Sure it could expose me. I have my own Class Cs, two of 'em. When I'm on one, the first three octets point straight to me.

      When I'm running from my DSL I have an eight-IP address block (broadcast / broken-broadcast / modem / five usable), so the first three octets point to a group of 32 such customers (256/8 blocks in the /24), of which I'm one. For DSL users with one usable address it points to a group of 64 users, of which they're one. For unfettered PPP (such as dialup), where the IP addresses can be arbitrary, it's still one in 256.

      Sorry, guys. One-in-64 (or even one-in-256) is too close to home for me.

      Doubly so because, once it's down to one-in-256, some governments will be willing to bust up to 255 innocents to get one guy they REALLY don't like. I don't like the idea, when I'm on the road, of being one of the innocent up-to-255 when some terrorist, spy, or whatever uses a dialup and we "win the lottery" and end up with the same first-three-octets.

  • Bad summary (Score:3, Informative)

    by Talisein (65839) on Thursday January 28, 2010 @02:12PM (#30937148) Homepage

    The proposal says they would only use the first three octets. And users could just use a different DNS server if they had a restrictive one that blacklisted Iran or whatever.

  • by Anonymous Coward on Thursday January 28, 2010 @02:12PM (#30937150)

    The summary isn't even close to correct. What the hell is going on with Slashdot these days?

  • How's that evil? (Score:5, Insightful)

    by Anonymous Coward on Thursday January 28, 2010 @02:18PM (#30937324)

    What a load of crap. There is no way to exploit that. If someone wants to block certain IP ranges, it is much more efficient to do so at the HTTP level (or whatever the protocol in use is), rather than in DNS.

    Even if this gets introduced, every DNS server will continue supporting the old (without 'IP forwarding') way of doing things, so it's easy enough to pick a DNS server which doesn't forward your IP. Everything will work just as it does now (you won't have the potential speed advantage you might get with the new system though).

    Whoever wrote TFS doesn't know the first thing about how networks work. Looking at what just happened in China, do you think that Google of all companies really wants to endanger your privacy?

    The reason why Google offers public DNS servers and why they came up with this is because they want to make the internet faster for everyone. And they're doing it in an open, backwards-compatible way.

    This is a good idea and should be implemented.

    • by slyborg (524607) on Thursday January 28, 2010 @03:14PM (#30938786)

      > The reason why Google offers public DNS servers and why they came up with this is because they want to make the internet faster for everyone.

      BAHHAHAHAHAHAHAAAHAA...Yes, Google only wants rainbows and ponies for ALL the good children!

      My good AC, I actually think you aren't a Google astroturfer, but how naive can you be? Google is a public corporation whose fiduciary duty is to make money for its shareholders, not to make the intertubes flow more smoothly, unless that causes Google to make more money.

      Google's beef with China was that China ripped off Google source code. Before this, they had no problems at all turning over email of human rights activists and censoring results in China. Their newfound interest in Chinese information freedom is the result of their rage at being made to look stupid and weak by the Chinese government.

  • This is important! (Score:5, Insightful)

    by HaeMaker (221642) on Thursday January 28, 2010 @02:19PM (#30937338) Homepage

    This is extraordinarily important for efficient operation of the internet. If people want to block you, they can, DNS or no DNS. However, for global load balancing, this is vital. You want to connect to a server near you, not near your DNS server.

    This will not stop the proper function of proxies.

  • by Tei (520358) on Thursday January 28, 2010 @02:20PM (#30937364) Journal

    The Internet already works without needing to propagate this information. Following the OS concept of 'least power': the less information about you that gets propagated, the fewer the problems.

    "By returning different addresses to requests coming from different places, DNS can be used to load balance traffic and send users to a nearby server. For example, if you look up www.google.com from a computer in New York, it may resolve to an IP address pointing to a server in New York City. If you look up www.google.com from the Netherlands, the result could be an IP address pointing to a server in the Netherlands. Sending you to a nearby server improves speed, latency, and network utilization."

    It seems this balancing is already possible without the need to propagate that data. Here I choose safety/privacy over a potential speed gain. Also, the risk is borne by everyone, but the gain goes only to a few (the people who have lots of servers and need a balancing solution)... hence, it's unfair. That's my view of it.
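
    To be concrete about the mechanism the quoted paragraph describes, here is a rough sketch (with a made-up prefix table and made-up server addresses, not Google's real mapping) of how an authoritative server could pick an answer from a forwarded client subnet:

      # Made-up prefixes and addresses, purely to illustrate geo-aware answers.
      GEO_TABLE = {
          "203.0.113.0/24": "192.0.2.10",   # pretend: a client network near New York
          "198.51.100.0/24": "192.0.2.20",  # pretend: a client network in the Netherlands
      }
      DEFAULT_ANSWER = "192.0.2.30"         # fallback datacenter

      def answer_for(client_subnet):
          """Return an A record chosen from the client's forwarded /24 prefix."""
          return GEO_TABLE.get(client_subnet, DEFAULT_ANSWER)

      print(answer_for("203.0.113.0/24"))   # 192.0.2.10, the "nearby" server
      print(answer_for("192.0.2.0/24"))     # 192.0.2.30, no entry so fall back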

    • by LordLimecat (1103839) on Thursday January 28, 2010 @03:11PM (#30938682)
      Then choose a DNS server that doesn't use these extensions, or choose one you trust.
  • What about IPv6 (Score:2, Interesting)

    by wadey (215252) on Thursday January 28, 2010 @02:23PM (#30937438)

    It seems IPv6 will be in use soon; so why tinker with DNS requests on IPv4?

    Also, does anybody know how geolocating an IP will be done on IPv6 (at least down to the country level)?

  • This is what anycast routing was invented for. The root servers use it; why not secondaries?

  • by nweaver (113078) on Thursday January 28, 2010 @02:26PM (#30937508) Homepage

    There are already many cases where the IP address of the resolver is used to determine service; basically every CDN, etc., uses this technique.

    This extension is needed if you want OpenDNS and the like to Not Suck when fetching Akamai-sourced content, YouTube videos, etc.

    And it's not like the owner of the DNS authority won't find out who you are anyway; after all, you then CONTACT THEM DIRECTLY WITH YOUR IP ADDRESS!!

  • by toejam13 (958243) on Thursday January 28, 2010 @02:30PM (#30937624)
    There are several products currently on the market that let you perform geographic load distribution via DNS. These products look at your LDNS server's address and either attempt to triangulate it (via a reverse DNS lookup on the LDNS server, or by calculating the number of hops and/or round-trip times to that LDNS from each of your sites), or they use static IP range tables broken down by region. The assumption is that a client is in reasonably close proximity to its LDNS server.

    The problem with these methods is that some very large ISPs may use only a couple of LDNS servers for an entire continent. In the case of third-party DNS services, it grows to a couple of LDNS servers for the entire planet. So there is no geographic affinity between client and LDNS server.

    This proposal helps a bit, but unless it includes a method whereby an LDNS server can be told that a DNS query's response is only good for that client's /24 subnet (or whatever mask length), you'll still end up with clients clobbering each other under these geographic load-distribution products unless you set the TTL to 1 second. That workaround has the nasty side effect of increasing your DNS load by a huge factor, which isn't good either.
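
    To make the proximity assumption concrete, here's a toy comparison (invented prefixes and regions; real products use large IP-to-region tables) of what such a box decides when it only sees the LDNS address versus when it is handed the client's /24:

      import ipaddress

      # Invented data: the client sits in Sydney, its ISP's only LDNS farm in the US.
      REGION_OF = {"203.0.113.0/24": "au-sydney", "198.51.100.0/24": "us-west"}
      NEAREST_SITE = {"au-sydney": "192.0.2.50", "us-west": "192.0.2.60"}

      def to_prefix(ip):
          return str(ipaddress.ip_network(ip + "/24", strict=False))

      def pick_site(source_ip):
          region = REGION_OF.get(to_prefix(source_ip), "us-west")
          return NEAREST_SITE[region]

      print(pick_site("198.51.100.7"))   # 192.0.2.60 -- decision keyed on the far-away LDNS
      print(pick_site("203.0.113.42"))   # 192.0.2.50 -- decision keyed on the client's own /24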
  • by TheSunborn (68004) <tiller@[ ]mi.au.dk ['dai' in gap]> on Thursday January 28, 2010 @02:31PM (#30937650)

    I can't see how this gives any more information to Google or anyone else.

    Example: if I do a lookup on www.slashdot.org, that query should never hit any DNS server controlled by Google.

    The only way a query would end up on a Google-controlled DNS server would be if the domain I looked up were owned by Google, and in that case I don't care, because I'm about to visit the site anyway, which means they will get my entire IP.

  • by markhahn (122033) on Thursday January 28, 2010 @02:35PM (#30937790)

    Look, you can already use whatever DNS server you want. If you're worried about your traffic being analyzed by someone else's DNS, just use your own (or a privacy-respecting) DNS server elsewhere.

    DNS is just the obvious way to ensure that clients use the best path to content.

  • by TheDarkener (198348) on Thursday January 28, 2010 @02:37PM (#30937820)

    ...don't fix it.

  • Ups and Downs (Score:5, Insightful)

    by LaminatorX (410794) <sabotage@NoSpAm.praecantator.com> on Thursday January 28, 2010 @02:39PM (#30937878) Homepage

    I like it. I don't know what the aggregate increase in efficiency across the net would be, but I'm betting if Google is suggesting it, it could be significant. While there are some potential abuses, they're really no different than what can already be done at the router/server level currently.

  • by mpapet (761907) on Thursday January 28, 2010 @02:42PM (#30937944) Homepage

    The use of the word 'marginal' needs to be disambiguated too. It means 'not of central importance.'

  • by ka9dgx (72702) on Thursday January 28, 2010 @02:53PM (#30938248) Homepage Journal

    The reason the Internet is so successful is that it has a core that doesn't try to think too much. Get packet, forward packet, etc.

    If load balancing is a concern, the client node should determine the best place to get content from, NOT some hack that makes DNS less reliable and noisier.

    Use digital fountains, give out multiple sources to get streams from, and let the end user's computer figure it out. It is in the best position to determine which is the more reliable stream of packets, not some aggregated, delayed measure after the fact.

    I don't like this idea. Round robin should be good enough.
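
    A client-side version of that idea isn't hard to sketch: given several candidate addresses from DNS, time a TCP connect to each and keep the fastest (a crude probe that ignores retries, warm-up, and ongoing path changes):

      import socket
      import time

      def fastest(addresses, port=80, timeout=2.0):
          """Return whichever candidate address completes a TCP handshake quickest."""
          best_addr, best_rtt = None, float("inf")
          for addr in addresses:
              start = time.monotonic()
              try:
                  with socket.create_connection((addr, port), timeout=timeout):
                      rtt = time.monotonic() - start
              except OSError:
                  continue                  # unreachable candidate, skip it
              if rtt < best_rtt:
                  best_addr, best_rtt = addr, rtt
          return best_addr

      # e.g. fastest(["192.0.2.10", "192.0.2.20", "192.0.2.30"])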

  • by gmuslera (3436) on Thursday January 28, 2010 @03:02PM (#30938464) Homepage Journal
    While this doesn't identify you, for a lot of reasons, there are some good points to it. Hitting local caches/distribution network nodes/etc. will actually make the Internet faster (a good percentage of total bandwidth comes from places where this applies, and going to somewhat-local resources unclogs international links). At least where I live, where around 200 ms is the average ping time to the rest of the world but 30 ms or lower to local hosts, fetching most static resources locally should make a difference.

    And probably more important, it doesn't stop you from keeping your privacy: old nameservers, or, if you want, your own nameserver, will not send that information, and you can still use those.
  • by stimpleton (732392) on Thursday January 28, 2010 @03:05PM (#30938538)
    " Or it would send a user from Iran or Libya to a 'domain name doesn't exist' server."

    Why limit it to those countries? How about Australia? Remember, this is a country that blocked Wikileaks through its state-sanctioned banlist. Politicians there are on board [stuff.co.nz].

    Even Linden Lab (makers of Second Life) has set up servers there (one of only 2-3 countries to host their servers outside the US). Critics theorize this has little to do with technical distributed-computing reasons and more to do with being ready to self-censor their content, as LL seems to have gotten the opinion from Aussie officials that Second Life in its current form would be "offensive", i.e. against the law, like child porn etc.

    Google needs the tools to "keep sweet" with local authorities. These DNS changes would help it avoid ending up in a situation like Linden Lab's.
  • This is bad (Score:2, Insightful)

    by BhaKi (1316335) on Thursday January 28, 2010 @03:09PM (#30938626)
    This is crap. You don't need the user's IP address for load balancing. The only motives behind this are propaganda and psyops. For instance, this move will allow the US to block traffic to certain sites from certain countries and then claim that the access failures are due to censorship imposed by that country's government.
  • by Sloppy (14984) on Thursday January 28, 2010 @03:12PM (#30938722) Homepage Journal

    The way things currently work really makes sense for most people. Your ISP's resolver is a single hop away, and you want the authoritative servers to talk to it (not you) so that it can cache the result. And it's OK to have that extra traffic between the recursive resolver and you, because it's not a long ride.

    But what Google is asking for also makes sense -- if you're using a far-away recursive resolver.

    And the very premise of that is stupid. Why the fuck would anyone want to use Google for DNS, instead of something closer (e.g. either their ISP or even a box on their very own LAN)?

  • by BhaKi (1316335) on Thursday January 28, 2010 @03:14PM (#30938794)

    Or it would send a user from Iran or Libya to a 'domain name doesn't exist' server.

    And who would be the victims? The same people whom Google is claiming to be fighting for.

  • by kindbud (90044) on Thursday January 28, 2010 @03:21PM (#30938948) Homepage

    So even if your resolving DNS server already has the answer cached, it's supposed to transmit the request again so the authoritative server can see the requesting client's IP network and possibly return a different answer. Is it supposed to cache that, or not? Is a resolver supposed to use this extension for all queries, or only load-balanced ones? The draft includes no mechanism for specifying whether a particular query should or should not use the extension. I assume, then, that a resolver patched with this extension would use it for all queries, which would completely negate the benefits of caching.

    So Google thinks obsoleting the DNS cache will help speed up web browsing? Really?
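
    The cache-fragmentation worry is easy to demonstrate with a toy resolver cache: once answers can legitimately differ per client subnet, the cache key has to include that subnet, so one popular name can occupy many entries instead of one (sketch only; how aggressively a real resolver could still share answers would depend on the draft's details):

      # Toy resolver cache. Without the extension the key is just the name;
      # with it, the key must also include the client's /24, multiplying entries.
      cache_old = {}
      cache_new = {}

      def lookup_old(qname):
          cache_old.setdefault(qname, "192.0.2.10")           # one entry serves everybody

      def lookup_new(qname, client_subnet):
          cache_new.setdefault((qname, client_subnet), "192.0.2.10")

      lookup_old("www.example.com")
      for i in range(50):                                     # 50 different client /24s
          lookup_new("www.example.com", "203.0.%d.0/24" % i)

      print(len(cache_old))   # 1
      print(len(cache_new))   # 50 -- the same name now occupies 50 cache slots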

  • by Rysc (136391) * <sorpigal@gmail.com> on Thursday January 28, 2010 @03:32PM (#30939182) Homepage Journal

    This all sounds totally crazy if you're Paul Vixie and have written a little article titled What DNS Is Not [acm.org] which specifically mentions that it shouldn't be used for this.

    How quickly we forget [slashdot.org].

  • by Aladrin (926209) on Thursday January 28, 2010 @03:42PM (#30939412)

    This will completely destroy IP rotation aka load balancing. I hope they aren't allowed to do it.

  • by AnotherBlackHat (265897) on Thursday January 28, 2010 @03:44PM (#30939466) Homepage

    Sounds like a terrible idea to me.

    If a caching DNS server serves multiple users in multiple countries, then suddenly it's not really caching anymore.
    If there are multiple possible IP addresses that I can be directed to, why not just send all of them to me, and let me (my DNS server) decide which one is best?
    What if I have more than one IP? Which one should I use?
    How often is it, really, that the route to the DNS server isn't the best route anyway? I.e. is the tiny benefit of a slightly better route for a handful of people really worth making a change to something as basic as the DNS protocol?

    I'd rather see a way to redirect the connection - cut out the DNS middleman.

  • The company I work for has a Class A IP network and is not based in the US.

    I'm physically located in Atlanta, but all of the existing geolocation services which I am aware of that use my exposed IP address seem to want to place me in the center of Europe somewhere.

    Will this be smart enough to do better?

  • by RabidMonkey (30447) <canadaboy&gmail,com> on Thursday January 28, 2010 @04:09PM (#30939874) Homepage

    We've been running into this wall for a while, and let me tell you, the workaround is the most disgusting mess imaginable. Trying to manage views/geolocation when everything is hidden behind a caching server is horrible. There is no car analogy.

    Sure, this might give Google more information about you, but frankly, they already have it if you're querying their servers (directly). Where this benefits them, and other content players, is when they aren't the default DNS server. This allows them to know that you're coming from, say, your city, as opposed to the city where your ISP's DNS server is. I would imagine that for huge ISPs in the States, the DNS infrastructure is, at best, regionalized (east, central, west?). This would allow Google/MS/anyone to get a much better idea of where you're actually coming from, and to provide you with better content. It also makes managing DNS much easier.

    Two thumbs up for this.

    Next up - a DNS management protocol (http://tools.ietf.org/html/draft-ietf-dnsop-name-server-management-reqs-03)...

"If I do not want others to quote me, I do not speak." -- Phil Wayne

Working...