BIND Still Susceptible To DNS Cache Poisoning

An anonymous reader writes "John Markoff of the NYTimes writes about a Russian hacker, Evgeniy Polyakov, who has successfully poisoned the latest, patched BIND with randomized ports. Port randomization was never supposed to completely solve the problem, only to make attacks harder; it was thought that with randomized ports it would take roughly a week to get a hit. Using his own exploit code, two desktop computers, and a GigE link, Polyakov reduced that time to 10 hours."
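A rough back-of-the-envelope check of that figure (every number below is an assumption for illustration, not something from the article): randomizing both the 16-bit transaction ID and a full 16-bit source-port range leaves a blind spoofer roughly 2^32 (ID, port) combinations, so on the order of 2^32 forged replies are needed on average.

    ids = 2 ** 16       # 16-bit DNS transaction IDs
    ports = 2 ** 16     # assumed usable source-port range after randomization
    expected = ids * ports          # ~4.3 billion forged replies on average

    rate = 120_000      # assumed effective forged packets/sec; the race
                        # windows between queries keep this far below the
                        # raw GigE packet rate
    print(expected / rate / 3600)   # ~9.9 hours under these assumptions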
  • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Saturday August 09, 2008 @09:31AM (#24536855) Homepage Journal

    This has nothing to do with BIND vulnerabilities. djbdns, or whatever you feel is more secure, has exactly the same problem. It is a protocol weakness. The article mentions BIND only because it is the reference implementation for DNS.

    The most interesting idea I've seen is to use IPv6 for DNS [slashdot.org]. The oldest idea is to start using DNSSEC.

  • by cortana ( 588495 ) <sam@[ ]ots.org.uk ['rob' in gap]> on Saturday August 09, 2008 @09:53AM (#24536953) Homepage

    $ apt-cache -n search power dns | wc -l
    0

  • by mlksys ( 93950 ) on Saturday August 09, 2008 @10:27AM (#24537113)

    I am not an expert on the problem.

    Is it possible that configuring the cache timeout on the DNS server to be significantly less than the nontrivial attack time might avoid the problem?

    I assume that once the cache is poisoned, the poison does eventually time out of the cache unless the attack is continuous?

  • by shallot ( 172865 ) on Saturday August 09, 2008 @10:29AM (#24537133)

    % apt-cache -n search pdns-recursor
    pdns-recursor - PowerDNS recursor

    Granted, it *is* actually missing on several architectures because of some unimplemented system calls, but that shouldn't bother too many people.

  • by Niten ( 201835 ) on Saturday August 09, 2008 @10:35AM (#24537177)

    Does anyone else think that maybe we are approaching this problem the wrong way?

    Yes, the wrong way being to tack on extra transaction ID space by means of fragile kludges such as random source port numbers and, possibly, random IPv6 addresses.

    It will require a lot more effort, but the right way to solve this problem is by improving the protocol itself. That may mean putting a much larger transaction ID field in the packets, where it cannot be mangled by NAT devices. Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records. But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.

  • by Anonymous Coward on Saturday August 09, 2008 @10:55AM (#24537273)

    The basis of the attack is to include "extra" information in a forged response to a query for a non-existent host. BIND trusts that extra information; other DNS servers only pay attention to it if it falls under certain strict rules.

    I ask for aaae3fcg.bankofamerica.com and also send 100,000 responses to that query to that same recursive DNS server, all saying something to the effect of "A record aaae3fcg.bankofamerica.com = blah; also look to 666.666.666.666 for anything else related to bankofamerica.com. Oh, and cache this until the sun goes dark."

    Nobody asked BIND to believe the part about THE REST OF THE WHOLE BLOODY DOMAIN in a response for a single record in the domain. Other servers don't cache that information.

    That BIND also used non-random ports made it a 5-minute attack over a fast link, instead of a 10-hour attack. That in the past BIND used bad random numbers for the transaction ID made it a 30-packet attack...
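    To make the shape of that forged reply concrete, here is a sketch using dnspython (an assumed illustration tool, not anything from the actual exploit) of a response carrying a poisoned NS record plus glue; 203.0.113.66 is a documentation address standing in for the joke "666.666.666.666":

        import dns.flags
        import dns.message
        import dns.rrset

        # The reply the attacker races to deliver: an answer to the
        # throwaway query, plus "extra" records covering the whole zone.
        query = dns.message.make_query("aaae3fcg.bankofamerica.com", "A")
        forged = dns.message.make_response(query)
        forged.flags |= dns.flags.AA

        # Authority section: claim a new nameserver for the entire domain...
        forged.authority.append(dns.rrset.from_text(
            "bankofamerica.com.", 2000000000, "IN", "NS", "ns.attacker.example."))
        # ...plus glue in the additional section pointing it at the attacker
        # ("cache this until the sun goes dark" = an absurdly large TTL).
        forged.additional.append(dns.rrset.from_text(
            "ns.attacker.example.", 2000000000, "IN", "A", "203.0.113.66"))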

    Who's the fanboy now?

  • by Anonymous Coward on Saturday August 09, 2008 @11:46AM (#24537527)

    It doesn't. It simply ignores answers to questions it didn't ask.

    That makes a successful forgery extremely unlikely (even in the face of larger and faster computers and network links), and it becomes less likely still if you simply increase your TTL.

  • DJB's take . . . (Score:5, Informative)

    by geniusj ( 140174 ) on Saturday August 09, 2008 @11:47AM (#24537529) Homepage

    For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.

    http://cr.yp.to/djbdns/forgery.html [cr.yp.to]

  • by Anonymous Coward on Saturday August 09, 2008 @11:56AM (#24537585)

    Not all dns servers cache the glue records beyond that transaction. Those that don't *cache* the glue are not vulnerable to this attack.

  • by bconway ( 63464 ) on Saturday August 09, 2008 @12:01PM (#24537625) Homepage

    Consider reading the links in the article. Obfuscation isn't a fix.

    The article says that DJBDNS does not suffer from this attack. It does. Everyone does. With some tweaks an attack can take longer than against BIND, but the underlying problem is still there.

  • by Paul Jakma ( 2677 ) on Saturday August 09, 2008 @12:55PM (#24537939) Homepage Journal

    Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records.

    Good post. Forgive me for focusing in on this one point and nitpicking it.. ;)

    0. Glue used to have a specific meaning: records configured in a parent to help delegate a zone. You (and many people reporting on the current flaws) seem additionally to use it to refer to "additional answers" in DNS replies. While such answers often are glue records, they're not quite the same thing. At least, it didn't use to be. Perhaps the meaning has moved on, but I'll use "glue" in the stricter sense.

    1. That's essentially already the case, for in-zone delegations (ie delegating example.com to a name in example.com. like ns.example.com.). The authoritative server must be configured with glue, and it must return that glue in the additional answer section for anyone to be able to query ns.example.com.

    (Interestingly, DJB has long had a page arguing that "out-of-bailiwick" delegations are bad [cr.yp.to], amongst other things).

    2. Resolvers mustn't treat glue from the delegator as authoritative anyway. Resolvers are only supposed to believe additional answers that are in-bailiwick (ie you'll only believe an additional record for/in example.com. if it came from a DNS server you know to be authoritative for example.com., e.g. because you just queried it for a name in example.com.; see the sketch after this list).

    3. The problem here is not inherent to glue/additional, but in knowing whether a reply is authoritative or not. The only good way to fix this is to secure the DNS chain of trust, either through near-ubiquitous deployment of IPSec+PKI for DNS servers (ha!) or a PKI inside DNS.
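    A minimal sketch of that in-bailiwick test (a hypothetical helper, written in Python purely for illustration):

        def in_bailiwick(record_name, server_zone):
            # True if record_name is at or below server_zone, i.e. the
            # server we queried is allowed to speak for it.
            rec = record_name.lower().rstrip(".").split(".")
            zone = server_zone.lower().rstrip(".").split(".")
            return len(rec) >= len(zone) and rec[-len(zone):] == zone

        print(in_bailiwick("ns.example.com", "example.com"))  # True: kept
        print(in_bailiwick("ns.victim.org", "example.com"))   # False: dropped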

  • by Antibozo ( 410516 ) on Saturday August 09, 2008 @03:41PM (#24539067) Homepage

    The third party hierarchical trust you disdain is one of the primary benefits of DNSSEC, because DNSSEC can eventually replace certificates for distribution of public keys. Currently, the only PKI we have is from a third-party non-hierarchical trust—the CAs—who are really not that trustworthy. DNS, however, is already hierarchical, and it makes a lot more sense to use a hierarchical system of trust—the same system in fact—to validate it. Do you really think having hundreds of trust anchors makes more sense than having a single trust chain?

    What hampers DNSSEC, more than anything, is all the FUD about it. It's really not difficult to implement, and there are tools for nameserver operators to use. If people would start actually practicing signing domains, even without the trust chain, a lot of their fears would be allayed. What is technically standing in the way is the failure to plan for it on the part of the major registrars, who need to provide a mechanism for signing the DS records. Of course, a lot of those registrars also happen to make a lot of money on the side as CAs, and one of them in particular also operates the .com and .net TLDs. You do the math—who benefits the most both from FUD about DNSSEC and from the insecurity of traditional DNS?

    People need to set aside their paranoia about DNSSEC and understand that the worst-case, most fantastical trust-violation scenarios for DNSSEC are still better than the status quo. And the best case is that everyone ends up with ubiquitous PKI with no per-unit cost. You could securely publish all your public keys, ssh host keys, personal public keys, etc. for the simple cost of getting a domain. If people can start seeing the possibilities for a single distributed, redundant, hierarchical, secure network database, there will be more community pressure on the registrars to get with the program. ICANN already plans to have a signed root by the end of the year and there are several ccTLDs that are already signing their zones. What excuse do we have for not even having a plan for .com? It's infuriating that we are facing the current situation, after all these years, and people are still questioning whether DNSSEC is a good idea.
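    As one concrete instance of the "publish your keys in DNS" idea, SSHFP records (RFC 4255) carry a SHA-1 fingerprint of an ssh host key; a sketch, with a truncated stand-in key blob:

        import base64
        import hashlib

        # Hypothetical, truncated host key line from ssh_host_rsa_key.pub
        key_line = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 example.com"
        blob = base64.b64decode(key_line.split()[1])

        # RFC 4255 rdata: algorithm 1 = RSA, fingerprint type 1 = SHA-1
        print("example.com. IN SSHFP 1 1 " + hashlib.sha1(blob).hexdigest())

    Published in a DNSSEC-signed zone, a record like that lets an ssh client check a host key against the DNS chain of trust instead of prompting the user to take a leap of faith.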

  • Re:DJB's take . . . (Score:3, Informative)

    by vic-traill ( 1038742 ) on Saturday August 09, 2008 @05:39PM (#24539937)

    For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.

    http://cr.yp.to/djbdns/forgery.html [cr.yp.to]

    I went and had a look at the thread (dated Jul 30 2001) referenced in the excerpt at djb's site (follow the posting link in the URL above). As far as I can tell, Jim Reid was pooh-poohing the usefulness of port randomization, the approach used as an emergency backstop against Kaminsky's attack just over seven years later. To be fair, Reid was doing so in the context of advocating for Secure DNS.

    djb drives people crazy (particularly the BIND folks), but he's someone to listen to - is it the case, as I understand from reading through these docs, that in 2001, djb's dnscache performed the port randomization that everyone's been scrambling to deploy over the past several weeks for other implementations, including BIND?

    Or am I mis-interpreting here?

  • Re:DJB's take . . . (Score:2, Informative)

    by Maniacal ( 12626 ) on Saturday August 09, 2008 @06:58PM (#24540509)

    Here's something DJB posted to his mailing list on Thursday. Don't know if I'm allowed to post this here but what the heck:

    http://cr.yp.to/djbdns/forgery.html [cr.yp.to] has, for several years, stated the results of exactly this attack:

          The dnscache program uses a cryptographic generator for the ID and
          query port to make them extremely difficult to predict. However,

          * an attacker who makes a few billion random guesses is likely to
              succeed at least once;
          * tens of millions of guesses are adequate with a colliding attack;

    etc. The same page also states bilateral and unilateral workarounds that would raise the number of guesses to "practically impossible"; but then focuses on the real problem, namely that "attackers with access to the network would still be able to forge DNS responses."

    I suppose I should be happy to see public awareness almost catching up to the nastiest DNS attacks I considered in 1999. However, people are deluding themselves if they think they're protected by the current series of patches. UIC is issuing a press release today on this topic; see below.

    ---D. J. Bernstein, Professor, Mathematics, Statistics, and Computer Science, University of Illinois at Chicago

    DNS still vulnerable, Bernstein says

    CHICAGO, Thursday 7 August 2008 - Do you bank over the Internet? If so,
    beware: recent Internet patches don't stop determined attackers.

    Network administrators have been rushing to deploy DNS source-port randomization patches in response to an attack announced by security researcher Dan Kaminsky last month. But the inventor of source-port randomization said today that new security solutions are needed to protect the Internet infrastructure.

    "Anyone who knows what he's doing can easily steal your email and insert fake web pages into your browser, even after you've patched," said cryptographer Daniel J. Bernstein, a professor in the Center for Research and Instruction in Technologies for Electronic Security (RITES) at the University of Illinois at Chicago.

    Bernstein's DJBDNS software introduced source-port randomization in
    1999 and is now estimated to have tens of millions of users. Bernstein released the DJBDNS copyright at the end of last year.

    Kaminsky said at the Black Hat conference yesterday that 120,000,000 Internet users were now protected by patches using Bernstein's randomization idea. But Bernstein criticized this idea, saying that it was "at best a speed bump for blind attackers" and "an extremely poor substitute for proper cryptographic protection."

    DNSSEC, a cryptographic version of DNS, has been in development since
    1993 but is still not operational. Bernstein said that DNSSEC offers "a surprisingly low level of security" while causing severe problems for DNS reliability and performance.

    "We need to stop wasting time on breakable patches," Bernstein said. He called for development of DNSSEC alternatives that quickly and securely reject every forged DNS packet.

    Press contact: Daniel J. Bernstein

  • by jamesh ( 87723 ) on Saturday August 09, 2008 @11:35PM (#24542457)

    DNS does already work over TCP, and is used where the response will be over a certain size, eg a zone transfer from primary to secondary DNS server.

    The problem is one of efficiency. TCP has much higher overheads: you need three packets just to get a connection started, and then you have to keep track of the connection and shut it down properly. Three packets doesn't sound like much, but over a high-latency link (eg 500ms) it makes for a huge increase in the time it takes to resolve a name.
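    For comparison, both transports are a one-liner with dnspython (assumed here purely for illustration; 192.0.2.53 is a placeholder resolver address):

        import dns.message
        import dns.query

        q = dns.message.make_query("example.com", "A")

        # UDP: one datagram out, one datagram back
        r_udp = dns.query.udp(q, "192.0.2.53", timeout=3)

        # TCP: a three-way handshake first, then the same message with a
        # 2-byte length prefix, then connection teardown
        r_tcp = dns.query.tcp(q, "192.0.2.53", timeout=3)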

  • by Znork ( 31774 ) on Sunday August 10, 2008 @07:28AM (#24544471)

    who are really not that trustworthy.

    I generally don't trust the CAs further than I can throw them. Who do you figure is trustworthy enough to handle it for DNS? Who could be regarded as trustworthy, no matter who in the world you ask? There seem to be some administrative problems with handing the keys to Mother Teresa.

    hundreds of trust anchors

    Having trust anchors at all is the problem. You need to verify against several independent sources, preferably sources you have some reason to trust, to avoid single points of corruption.

    Designing a system based on a single point not failing is simply bad engineering. You design with the expectation that individual points _will_ fail, while minimizing the consequences when any such point does.
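    A toy sketch of that verify-against-several-independent-sources idea (Python with dnspython, all assumed; the resolver IPs are placeholders):

        from collections import Counter

        import dns.resolver  # dnspython, assumed available

        def cross_check(name, resolver_ips, quorum=2):
            # Ask several independently operated resolvers and only trust
            # an answer that at least `quorum` of them agree on.
            votes = Counter()
            for ip in resolver_ips:
                r = dns.resolver.Resolver(configure=False)
                r.nameservers = [ip]
                try:
                    rrs = r.resolve(name, "A")
                except Exception:
                    continue  # a dead or lying resolver just loses its vote
                votes[tuple(sorted(rr.address for rr in rrs))] += 1
            if not votes:
                return None
            answer, count = votes.most_common(1)[0]
            return answer if count >= quorum else None

        # cross_check("example.com", ["192.0.2.1", "192.0.2.2", "192.0.2.3"])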

    secure network database

    It's spelled _in_secure.

    I certainly see the possibilities for what you want to be talking about. DNSSEC isn't it.

    and people are still questioning whether DNSSEC is a good idea.

    Ok, here's a hint. If you've failed to convince people that DNSSEC is a good idea after this many years, and with the current (and, many would argue, long-running) DNS situation, it should really, I mean, _REALLY_ tell you something. Maybe the problem is in the actual idea.

    It's not like it's hard to come up with options for designing a naming system to be redundant, fault-tolerant, intervention-tolerant and secure, yet there's an apparent fixation on going with a system that has obvious flaws. Numerous suggestions to fix the flaws that made the latest round of problems obvious have been made in the past, yet the push continues.

    Why not actually sit down and redesign it in a distributed way that would be acceptable to all parties? I mean, how bad does it have to get? DNSSEC gets bogged down in political and paranoid issues because DNSSEC has political issues designed into its core. It caters to the paranoid by ensuring there is plenty of opportunity to prove the paranoid right.

    When you reach insurmountable political issues, you design around them. Maybe they'll go away in some utopian future when we have utterly secure trust anchors in the hands of incorruptible angels, but until then we could do well with a system that doesn't require unobtainable dreams to work.

  • by Antibozo ( 410516 ) on Monday August 11, 2008 @05:07AM (#24553137) Homepage

    I generally don't trust the CA's further than I can throw them.

    Yet, they are the current standard for providing end-to-end security. So how mainstream do you think your level of doubt is?

    Mind you, I don't trust the CAs either, which is why I want DNSSEC, since it can provide a superior mechanism with far fewer vectors for subversion, which I can control for my own domains, and which also is not vulnerable to cache poisoning.

    Who do you figure is trustworthy enough to handle it for DNS? Who could be regarded as trustworthy, no matter who in the world you ask?

    I would trust ICANN for the root. Obviously, you can't please everyone in the world, nor do you have to. For the TLDs, the current gTLD registry operators also happen to be CAs, so they can subvert both DNS and SSL already. Have you ever heard of a case of a CA subverting SSL security by deliberately issuing a false cert?

    Having trust anchors at all is the problem. You need to verify against several independent sources, preferably sources you have some reason to trust, to avoid single points of corruption.

    Yes, those independent sources are called "trust anchors". From your somewhat vague statement I presume you mean you'd like to make your trust anchor some form of composite, perhaps a web of trust of some kind. So then it would require some form of conspiracy to subvert the trust anchor. And you think that's superior to having a single trust anchor, which would likewise require some form of conspiracy to subvert.

    And how do you propose that Joe User assess whether the DNS answer he gets is trustworthy in such a system, let alone come up with an initial set of trust anchors for his web of trust?

    Designing a system based on a single point not failing is simply bad engineering.

    Again you are vague; it's unclear what single point would fail in DNSSEC. Presumably the scenario you have in mind would be the root or TLD operators' subverting DNSSEC. Do you really think that's a plausible scenario? If so, why isn't it happening already with the CAs, any one of which could accomplish the same thing? Again, some CAs are also TLD operators, and many more of them can subvert DNS. Yet we don't have any evidence that this has happened.

    It's spelled _in_secure.

    That's your argument? Wordplay?

    Maybe the problem is in the actual idea.

    Or maybe the problem is in the FUD that people keep promulgating about the idea.

    It's not like it's hard to come up with a list of options to improve and design a naming system to be redundant, fault tolerant, intervention tolerant and secure, yet there's an apparent fixation of going with a system that has obvious flaws.

    Really? If it's so easy, let's hear your design.

    Of course, it's easy to claim that designs have obvious flaws without presenting actual examples. Would you care to describe these obvious flaws in DNSSEC? What vulnerabilities exist, what are the plausible threat models, and how do those threat models compare to those of the status quo, or the easily designed system you have in mind?

    DNSSEC gets bogged down in political and paranoid issues because DNSSEC has political issues designed into its core. It caters to the paranoid by ensuring there is plenty of opportunity to make paranoids right.

    If deploying SSL had required proving to all the tin-foil hats in the world that CAs wouldn't be able to subvert the security of the system (of course, they are), it never would have gotten off the ground. Would you have the designers cater to every marginal threat in order to design a system that would be unwieldy beyond measure, impossible for a regular user to understand, and which still fails to provide perfect security?

    When you reach insurmountable political issues, you design around them.
