The Internet

ICANN Considers Using '127.0.53.53' To Tackle DNS Namespace Collisions

angry tapir writes "As the number of top-level domains undergoes explosive growth, the Internet Corporation for Assigned Names and Numbers (ICANN) is studying ways to reduce the risk of traffic intended for internal network destinations ending up on the Internet via the Domain Name System. Proposals in a report produced on behalf of ICANN include preventing .mail, .home and .corp ever being Internet TLDs; allowing the forcible de-delegation of some second-level domains in emergencies; and returning 127.0.53.53 as an IP address in the hopes that sysadmins will flag and Google it."
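
A minimal sketch of the kind of audit the report's sentinel would enable (the internal-style names below and the checking logic are illustrative, not from the report): resolve each name and flag the proposed collision marker.

    import socket

    SENTINEL = "127.0.53.53"  # ICANN's proposed collision marker

    # Hypothetical internal-style names a sysadmin might audit.
    for name in ["intranet.corp", "smtp.mail", "nas.home"]:
        try:
            addr = socket.gethostbyname(name)
        except socket.gaierror:
            print(name, "-> NXDOMAIN (no collision)")
        else:
            if addr == SENTINEL:
                print(name, "-> 127.0.53.53: name collision, Google it / see ICANN guidance")
            else:
                print(name, "->", addr)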
This discussion has been archived. No new comments can be posted.

  • Re:hacky (Score:4, Interesting)

    by hcs_$reboot ( 1536101 ) on Thursday February 27, 2014 @04:38AM (#46355219)
    That solution is indeed hacky. But if the LAN is correctly set up, collisions should be minimal. E.g., a "home" workstation named something like "linux.home" identifies itself, and if other LAN members communicate with "linux.home", an entry should already be present in their "hosts" (or equivalent) files - and "hosts" file resolution usually takes precedence over DNS. For bigger deployments a DNS server or equivalent should be in place, forwarding unknown domains to external (Internet) DNS - and again, its local config should contain an entry for the ".home" zone, preventing external resolution.
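
    A minimal illustration of that precedence, assuming a (hypothetical) /etc/hosts entry "192.168.1.10 linux.home" on the querying machine:

        import socket

        # On a typical Linux box, NSS consults /etc/hosts before DNS,
        # so this returns the LAN address without any external query.
        print(socket.gethostbyname("linux.home"))  # -> 192.168.1.10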

    Is returning 127.0.53.53 instead of NOT FOUND a good idea? Not sure about that, since, for instance, a browser will say "Cannot connect to..." instead of "Domain not found" - which is actually the correct error message. The real problem is when the domain+subdomain also exists on the Internet: users will end up processing information from the wrong site instead of the intranet one.

    Of course, all IT teams will have to be DNS-competent - which is currently not (always) the case...
  • Re:hacky (Score:4, Interesting)

    by Anonymous Coward on Thursday February 27, 2014 @05:18AM (#46355369)

    It may not even say "Cannot connect to". 127.0.53.53 is in the 127/8 range, reserved for localhost. On some systems, only 127.0.0.1 works for localhost, but nothing prevents a system from using the entire range for localhost.

    So rather than getting an error, when server1.here lacks a hosts file entry for server2.here, you will be connecting to server1.here itself. From server1.here, "ping server2.here" will show that the network works. Browsing to "http://server2.here/" will show the start page of server1.here. If that's the default page, or the two servers are set up for some kind of load balancing - and thus have the same content - the resulting confusion can be very hard to figure out.
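
    A quick demonstration of the parent's point, assuming a Linux host (which routes the whole 127.0.0.0/8 block to loopback; port 8080 is an arbitrary choice here):

        import socket

        # Bind a listener to the sentinel address itself; on Linux this
        # works without configuring any extra interface.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.53.53", 8080))
        srv.listen(1)

        # A "remote" connection to 127.0.53.53 quietly lands on this
        # same machine - exactly the confusion described above.
        cli = socket.create_connection(("127.0.53.53", 8080))
        print("connected - to ourselves")
        cli.close()
        srv.close()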

  • Re:hacky (Score:5, Interesting)

    by DarkOx ( 621550 ) on Thursday February 27, 2014 @08:26AM (#46355921) Journal

    The problem really isn't so much not being able to reach some.home on the internal network, or even something.home on the Internet when you already have a local .home zone.

    The problem is all the uncounted config files out there with unqualified or partially qualified names in them. The RFCs are not entirely clear on what the correct behavior is, and worse, the web browser folks have in some cases decided to implement the behavior differently themselves rather than use the system NSS services/APIs.

    So imagine an environment where DHCP configures a list of DNS search suffixes, one of which is something like us.example.com. How the Windows boxes interpret a query for mobile.mail (note: no trailing dot) will possibly differ from the way the Linux machines do, and from what the OS X machines do, etc. - and what Chrome or Firefox decide to do might differ from what nslookup does even on the same machine!
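
    A rough model of glibc-style search-list expansion (resolv.conf "search" plus "ndots") to make the inconsistency concrete; other resolvers and browsers order or skip these candidates differently:

        def candidate_fqdns(name, search_list, ndots=1):
            # A trailing dot means fully qualified: no search applied.
            if name.endswith("."):
                return [name]
            suffixed = [name + "." + s for s in search_list]
            # glibc: a name with >= ndots dots is tried as-is first;
            # otherwise the search suffixes are tried first.
            if name.count(".") >= ndots:
                return [name] + suffixed
            return suffixed + [name]

        # "mobile.mail" has one dot, so with ndots=1 it is tried as a
        # bare lookup before any suffix is appended - one resolver's
        # choice among several.
        print(candidate_fqdns("mobile.mail", ["us.example.com", "example.com"]))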

    It's going to be nightmarish from a support and troubleshooting perspective, and let's face it: nobody on your PC tech team really understands DNS, your network admins probably have a good handle on it but some major blind spots, and your developers are accustomed to making what are now dangerous assumptions. I'm not sure I fully understand DNS on most days.

    This is going to be a support nightmare at least at some sites, even places whose ONLY sin was not using FQDNs everywhere all the time. That might even have been deliberate: perhaps not the best way to have gone about it, but knowing how search domains operate and being able to set them with DHCP, it's entirely possible someone architected mobile systems to reach a local resource by depending on exactly that behavior.

    There are all kinds of potential security problems too. The gTLD expansion is making the Internet both less reliable and less safe.

  • Re:hacky (Score:5, Interesting)

    by DarkOx ( 621550 ) on Thursday February 27, 2014 @08:42AM (#46355991) Journal

    Right, it's a good idea to expect every application developer everywhere to put a special-case test into their code to see if the value in the buffer after a call to gethostbyname is 127.0.53.53, rather than just checking the return code and using the value (or not) based on that. Doing this means a new branch in every new app, for no real reason; it means odd behavior in old or un-updated code that expects either to successfully resolve an address or not.
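
    The extra branch in question would look something like this (a sketch; the resolve_or_none() wrapper is hypothetical, standing in for whatever each app actually does):

        import socket

        SENTINEL = "127.0.53.53"  # ICANN's proposed collision marker

        def resolve_or_none(host):
            try:
                addr = socket.gethostbyname(host)
            except socket.gaierror:
                return None  # NXDOMAIN: the old, unambiguous signal
            # The new special case: a "successful" lookup that must
            # nevertheless be treated as a failure.
            return None if addr == SENTINEL else addr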

    Case in point: someone recently introduced a hostname into our DNS that caused a major application to break. It turned out there was a stale config entry for a hostname that no longer existed. As long as it had been getting back NXDOMAIN, things hummed along nicely; the app just tried the next host in its list from a config file. When someone added that name back, it started trying to connect to the new server (which did not run the application it was expecting and did not listen on that port), causing long timeouts on login while it tried and retried the other server. I grant this was a configuration error - someone should have cleaned up that old config file - but there are situations, like laptops, where this might not be the case. Inside your organization .mail might exist as a zone; take the machine home, and CustomAPP might work fine today by getting NXDOMAIN and switching to a local database or trying a different public hostname, but now it's going to get back 127.0.53.53 and quite likely not know what to do when the service isn't there.

    No, it's patently stupid for the name resolution system to return BAD data. If something like .mail is not allocated, or is de-allocated, then it does not exist, and NXDOMAIN is what a public DNS system should return. The meaning is clear.

  • by i.r.id10t ( 595143 ) on Thursday February 27, 2014 @11:05AM (#46357121)

    So... what you are saying is that ICANN/IANA should have done something for names similar to what was done from the beginning for the various "private", non-routable IP address pools (10.x.x.x, 192.168.x.x, etc.): there needed to be some TLD that would only ever work for local networks and queries.

    Of course, since that didn't happen all those years ago, admins and amateurs (and amateur admins, even) have been using a random mess of things, usually chosen by trying to ping or nslookup some hopefully imaginary TLD, and when the lookup fails (i.e., returns an NXDOMAIN error) they assume they can use it locally without repercussion.
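
    The folk test being described, sketched in Python (the candidate TLD "lan" is just an example; as noted above, NXDOMAIN today guarantees nothing about tomorrow):

        import socket

        def looks_unclaimed(tld, probe="some-random-label"):
            try:
                socket.gethostbyname(probe + "." + tld)
            except socket.gaierror:
                return True   # NXDOMAIN today...
            return False      # ...but it resolves, so stay away

        print(looks_unclaimed("lan"))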

    Which means there are tens, hundreds, maybe thousands (or more!) of "fake" TLDs in use out there, some hard-coded into applications that are no longer supported but still in use. Which means trying to fix it now would be pretty much impossible.
