Paul Vixie Responds To DNS Hole Skeptics 147

syncro writes "The recent massive, multi-vendor DNS patch advisory related to DNS cache poisoning vulnerability, discovered by Dan Kaminsky, has made headline news. However, the secretive preparation prior to the July 8th announcement and hype around a promised full disclosure of the flaw by Dan on August 7 at the Black Hat conference have generated a fair amount of backlash and skepticism among hackers and the security research community. In a post on CircleID, Paul Vixie offers his usual straightforward response to these allegations. The conclusion: 'Please do the following. First, take the advisory seriously — we're not just a bunch of n00b alarmists, if we tell you your DNS house is on fire, and we hand you a fire hose, take it. Second, take Secure DNS seriously, even though there are intractable problems in its business and governance model — deploy it locally and push on your vendors for the tools and services you need. Third, stop complaining, we've all got a lot of work to do by August 7 and it's a little silly to spend any time arguing when we need to be patching.'"

  • by niceone ( 992278 ) * on Tuesday July 15, 2008 @07:08AM (#24194091) Journal
    I just remember the IP addresses and type them in myself. How hard is that?
  • stability (Score:1, Interesting)

    by eneville ( 745111 )
    I've never had to change tinydns installs for security reasons.
  • by hal9000(jr) ( 316943 ) on Tuesday July 15, 2008 @07:17AM (#24194127)
    This [informationweek.com] article at InformationWeek said it best the day after the announcement.

    Geez, if you want responsible disclosure, you have to trust the experts when they say "it's new and it's bad"
    • Geez, if you want responsible disclosure, you have to trust the experts when they say "it's new and it's bad"

      And that's one reason why I keep saying to hell with 'responsible disclosure'. But the more I say that the more people look at me like I'm some kind of space alien who just landed in a flying saucer.

      Maybe then we wouldn't have software vendors taking weeks, months or years to patch remotely exploitable bugs (yes, I'm looking at YOU, Microsoft)

      • by tyler.willard ( 944724 ) on Tuesday July 15, 2008 @07:46AM (#24194329)
        Maybe then we wouldn't have software vendors taking weeks, months or years to patch remotely exploitable bugs (yes, I'm looking at YOU, Microsoft)

        Sure you would; and the blame for any damage would fall on whoever made the disclosure.

        There is nothing wrong with how this was/is being handled. Limited disclosure with a solid and "reasonable" deadline is a perfectly fine way to balance the myriad issues with security threats.
        • Re: (Score:3, Insightful)

          by pfleming ( 683342 )

          Maybe then we wouldn't have software vendors taking weeks, months or years to patch remotely exploitable bugs (yes, I'm looking at YOU, Microsoft)

          Sure you would; and the blame for any damage would fall on whoever made the disclosure.

          There is nothing wrong with how this was/is being handled. Limited disclosure with a solid and "reasonable" deadline is a perfectly fine way to balance the myriad issues with security threats.

          Except Microsoft doesn't handle things this way. If this had been only a Windows issue, we would never have heard about it. The fact that Open Source is vulnerable as well means that we will eventually know what the problems were and be able to verify that they were fixed in the Open Source versions.

          • Re: (Score:3, Insightful)

            This has nothing to do with any specific vendor or open source.

            This issue is about how and when independent researchers disclose a vulnerability they find.

          • The fact that Open Source is vulnerable as well means that we will eventually know what the problems were and be able to verify that they were fixed in the Open Source versions.

            Anyone with a serious interest in finding out the details of the flaw doesn't need the source code to figure it out, or to see what changes were made to address it. All they need are the binaries and a disassembler.

      • Maybe then we wouldn't have software vendors taking weeks, months or years to patch remotely exploitable bugs (yes, I'm looking at YOU, Microsoft)

        Somebody benefits from the status quo.

    • by Goaway ( 82658 ) on Tuesday July 15, 2008 @07:45AM (#24194325) Homepage

      So, you figure eighty vendors coordinated a simultaneous patch for some issue that is not really a big deal, probably just some guys vying for attention?

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Geez, if you want responsible disclosure, you have to trust the experts when they say "it's new and it's bad"

      I don't want "irresponsible disclosure". I don't want to be vulnerable, while major corporations get to do marketing damage control. They had a hole. Ok, everyone makes mistakes. They found the hole. Great, then we can do something about it. Or not, because they kept quiet about it while secretly writing the fixes. They kept quiet about it for long enough that even Microsoft had fixes ready.

      Meanwhile

    • Re: (Score:3, Interesting)

      by Lost Race ( 681080 )

      you have to trust the experts when they say "it's new and it's bad"

      OK... How bad? "Real bad" doesn't really help me at all. To make an informed decision I need to know four things:

      1. Cost of patching
      2. Cost of failure due to not patching
      3. Probability of failure due to not patching
      4. Probability of failure in spite of patching

      #1 is making my firewall basically wide open to UDP. #2 is cache poisoning. Without knowing more about Kaminsky's attack, I can't really make any useful guesses about #3 and #4.

      For now I've all

  • If he posted that here, it would get modded Flamebait or Troll.

  • by wild_quinine ( 998562 ) on Tuesday July 15, 2008 @07:35AM (#24194233)
    ... and IT admins make the worst end users.

    Knowing how to run a system is not purely technical knowledge, it's also a measure of professional ability. That means knowing when to take advice, and knowing who to take it from.

  • Security expert discovers flaw

    Tries to do the right thing and report it responsibly to vendors. Coordinates patch day responsibly.

    People ignore it/blow it off/procrastinate and then proceed to throw blame and lawsuits against those who tried to help them in the first place.

    They require a license to drive a car on a highway. Sure as hell would thin out the herd to require a license to drive a server on the information superhighway.

    • Re: (Score:2, Interesting)

      by mrsbrisby ( 60242 )

      Not exactly.

      This flaw was well known in 1990. Paul Vixie has been dragging his feet for almost twenty years with crack-potted shit like "additional credibility rules" and extra complexity.

      How to fix this bug trivially was well known over ten years ago [cr.yp.to], and still the ISC has refused to secure its users because they want to push DNSSEC, a security system based on a trust hierarchy that doesn't exist, whose implementation has never worked, and which will never work because Paul is a fucking idiot who cares mor

      • by danFL-NERaves ( 302440 ) on Tuesday July 15, 2008 @08:14AM (#24194687)

        Your mad ad hominem attack skills have convinced everyone that Paul Vixie is the know-nothing douchebag in this conversation. Kudos!

      • by theskov ( 556173 )
        I don't really see where the trivial fix is in the linked document. Are you talking about the fix that means renaming all domains to be too long to remember? How is that any better than using raw IP addresses?
      • by gregmark ( 750089 ) on Tuesday July 15, 2008 @08:53AM (#24195283)

        Randomizing UDP source ports does not solve the problem, it only makes it more difficult to impersonate the responding DNS server. Secure DNS makes this kind of impersonation impossible, or at least allows us to bask in the warm glow of impossible.

        The DJB vs BIND thing is an illusion. Whatever everyone agrees is the best implementation should win and I doubt that Paul Vixie or anyone else at ISC thinks differently.

        But it has become abundantly clear to me that DJB and his minions (of which I assume you are one) have failed to matter in most ways, not because of your ideas, but because of the brusque, immature manner in which those ideas are submitted for consideration, outside the standards committees which have served the Internet well for 30 years.

        • by Znork ( 31774 ) on Tuesday July 15, 2008 @11:50AM (#24198545)

          Secure DNS makes this kind of impersonation impossible

          Mmm, no. It makes this kind of impersonation possible by anyone who can coerce/corrupt/control some part of the chain of trust.

          outside the standards committees which have served the Internet well for 30 years.

          Actually, on the topic of security and cryptography, I'd say the standards committees have failed the internet pretty badly. The apparent fixation with providing Verisign with revenue streams has gotten in the way of designing acceptable trust systems.

          The only result that the fixation with certificates and authorities has gotten us is a situation wherein everyone is becoming their own authority and nobody cares about certificate warnings anymore.

          If one wanted to repair the systemic damage at this point, the best way would be to simply scrap the CAs from browsers and everywhere else, and just add a way to easily add a specific CA for each new domain/service provider one comes in contact with.

        • Disclaimer: I'm the author of LDAPDNS [nimh.org].

          Randomizing UDP source ports does not solve the problem, it only makes it more difficult to impersonate the responding DNS server.

          Tens of thousands of times harder.

          Secure DNS makes this kind of impersonation impossible, or at least allows us to bask in the warm glow of impossible

          Unfortunately, Secure DNS doesn't exist. Nobody's quite sure what it looks like or how it'll work, because the schemes cryptographers consider secure are the ones nobody wants to deploy, since they make for ugly domain names.

  • What is Secure DNS and where is the wikipedia article? Anybody?
    • by Anonymous Coward on Tuesday July 15, 2008 @07:51AM (#24194371)

      "The Domain Name System Security Extensions (DNSSEC) are a suite of IETF specifications for securing certain kinds of information provided by the Domain Name System (DNS) as used on Internet Protocol (IP) networks. It is a set of extensions to DNS which provide to DNS clients (resolvers):

              * Origin authentication of DNS data.
              * Data integrity.
              * Authenticated denial of existence."

      http://en.wikipedia.org/wiki/DNSSEC
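
      For illustration (any DNSSEC-signed zone works; the name below is only a placeholder), a resolver can ask for those signature records explicitly:

          dig +dnssec example.com A

      If the zone is signed, the answer section carries RRSIG records alongside the A records.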

      • by _Knots ( 165356 ) on Tuesday July 15, 2008 @02:33PM (#24201595)

        Unfortunately, as Vixie admits, DNSSEC has intractable problems and is... well, let's be generous and say "pushed too quickly through the standards process". (See http://cr.yp.to/djbdns/forgery.html [cr.yp.to]; in particular, Vixie's observation 'We are still doing basic research on what kind of data model will work for dns security. After three or four times of saying "NOW we've got it, THIS TIME for sure" there's finally some humility in the picture... "wonder if THIS'll work?" ...' [this was _after_ several DNSSEC RFCs that were approved and intended to be implemented were shown to be utter crap.])

        Encouraging people to use DNSSEC is just about as useful as encouraging people to use HOSTS.TXT. OK, I exaggerate a bit, but it's simply not going to solve the problem, is going to expose zones to arbitrary enumeration (remember, the Internet community discouraged answering AXFR requests from the Internet at large presumably for a reason), and is going to place much larger computational demands on DNS servers that implement it.

        (Here's a thought: most of this forgery comes from my ability to guess your DNS cache / resolver's query port and request ID. Come IPv6, we could surely add 48+ bits of entropy to the process by having DNS servers listen on a prefix of addresses. Much simpler, if gross.)
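
        As rough arithmetic, here is a minimal sketch of that entropy budget (the bit counts are assumptions taken from the idea above, not a worked protocol):

            # Hypothetical entropy budget against blind spoofing.
            txid_bits = 16   # DNS transaction ID
            port_bits = 16   # randomized UDP source port (slightly less in practice)
            addr_bits = 48   # resolver listening across an IPv6 address prefix

            print(f"an attacker must guess ~2^{txid_bits + port_bits + addr_bits} combinations")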

        • Unfortunately, as Vixie admits, DNSSEC has intractable problems [...]

          So... why is Vixie recommending it? It's about as secure as using PGP without the web of trust, or using HTTPS without the PKI.

          It would make sense if he didn't admit that there were intractable problems, and then told people to use it. Or if he said that there's really nothing you can do, and that the DNS vendors dropped the ball somethin' fierce by failing to secure the system in the decade since the problem was described.

          But what is he e

          • Re: (Score:3, Insightful)

            by _Knots ( 165356 )

            If I have to guess, it's because Vixie is associated with ISC, who makes BIND, and is hoping that ISC makes more money from the "ZOMG, run DNSSEC or you're all doomed!" pitch. Of course, Vixie has never shown any kind of restraint over DNSSEC, and has previously urged adoption of (prior) broken editions of the protocol that somehow passed muster at the IETF despite not living up to their claims.

            DJB may be a meanie, but he seems much more technically competent than Vixie. (I offer as evidence, again, the securit

            • Re: (Score:2, Informative)

              by mibh ( 920980 )
              ISC is a nonprofit, and our software is free. It's difficult to imagine how we'll cover more of our expenses if you all decide to start deploying Secure DNS. I have indeed urged adoption of prior editions of Secure DNS, which is how we learned that the protocol and/or implementations of same still needed work. Am I doing that again? You betcha -- because we'll be fine-tuning the Secure DNS protocol and its implementations, and the DNS protocol and its implementations, for the rest of the Internet's lifetime.
              • by _Knots ( 165356 )

                Yes, sorry, my brain had glitched. The ISC is a nonprofit but was (is?) closely associated (http://cr.yp.to/djbdns/blurb/bindmoney.html) with the company I should have named, namely Nominum, the for-profit branch of Paul Vixie's life. I apologize for the confusion.

                Unrelatedly, if you're still worried that your protocols need work, it seems premature to promise that you won't flag-day the protocol again. You might certainly hope that you won't have to, which I admit is an admirable wish. Or is that to sa

                • Re: (Score:2, Informative)

                  by mibh ( 920980 )

                  Nominum, the for-profit branch of Paul Vixie's life.

                  you can't be serious. i resigned as chairman of nominum's board back in 2002 or so. i have no position there. they stopped working on BIND9 at about that same time. let the record show that i am grateful to nominum's then-shareholders for their significant donation of code and effort to BIND9 9.0.0, which went well beyond what ISC paid them for.

                  Or that you'll leave DNSSEC running but go for a DNSSEC V2?

                  it's an educated bet. the restarts to the Secur

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Secure DNS is a protocol which uses cryptographic signatures on DNS records to prevent DNS spoofing and other unauthorized manipulations. It has some problems which are mostly a result of DNS being a UDP protocol, which, for performance reasons, can't have long handshakes or cryptographic calculations on the server side.

  • From Paul Vixie's response: "Tom Cross of ISS-XForce correctly pointed out that if your recursive nameserver is behind most forms of NAT/PAT device, the patch won't do you any good since your port numbers will be rewritten on the way out, often using some pretty nonrandom looking substitute port numbers."

    Does this mean that a typical small-business setup isn't protected by the patch?

    For example, a server which provides recursive DNS and which connects to the Internet by a "cable" modem.

    • Oops! I meant cable router, not cable modem.

      • It's a problem if your router uses NAT. Does your router use NAT? Is your DNS server behind the router?
        • by Rakarra ( 112805 )

          My home router uses NAT. And sometimes I use my own DNS server to serve the home network and give that network a private TLD. Are those sorts of setups impacted now?

    • It depends on how the NAT device assigns port numbers to outgoing queries. Apparently the fix for this flaw is to ensure the source port for lookups is truly random; some devices may use very predictable sequences (such as our Cisco ASA at work, mutter mutter).

      If you visit Dan Kaminsky's blog [doxpara.com], there's a DNS checker in the right hand panel which allegedly tells you if you need to worry or not. It just looks to see if all your queries for their test domain name came from the same port number.
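
      A minimal local sketch of the same idea, in Python (an approximation, not the real test: it only shows which source ports this host's OS hands out for successive UDP sockets, and a NAT in front of you can still rewrite them; 192.0.2.1 is a documentation placeholder, and no packets are actually sent):

          # Sample the UDP source ports the local OS picks for new sockets.
          import socket
          import statistics

          ports = []
          for _ in range(10):
              s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              s.connect(("192.0.2.1", 53))  # UDP connect() just binds a local port; nothing is sent
              ports.append(s.getsockname()[1])
              s.close()

          print("source ports chosen:", ports)
          print("std dev:", round(statistics.pstdev(ports)))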

    • Re: (Score:3, Interesting)

      by socsoc ( 1116769 )
      Just another reason to make your local DNS forwarder use OpenDNS, or, if you don't have one on your LAN, to direct your router/workstations to OpenDNS. If your small-business LAN is relying on your provider's DNS, hopefully they patched it. Most worth their salt have, but OpenDNS also provides many features that are useful to small businesses (in addition to not having been vulnerable to the flaw).
  • by SlappyBastard ( 961143 ) on Tuesday July 15, 2008 @08:00AM (#24194471) Homepage

    All paranoid theories about this issue sort of ignore the fact that unless you plan to install hundreds (or even thousands) of systems from your own compiled source for years and years to come, all of this discussion is at best academic.

    The new distros are going to have the patch.

    And really, considering the number of prehistoric vulnerabilities that should have been patched long ago, the fact that this one is finally getting patched is fine.

    Yeah, there's a bit of a "trust me" factor with this patch, but a lot of good people are putting their credibility on the line for it.

    This whole FOSS thing entails a lot of trust. I mean, you're really telling me that everyone on here whining about the need to see the source code has read every line of code in every OS they're using? There is a level at which we're all sort of hoping that the guys interested in each of the particular parts of the OS have done a thorough job in their separate efforts.

    • This whole FOSS thing entails a lot of trust. I mean, you're really telling me that everyone on here whining about the need to see the source code has read every line of code in every OS they're using?

      There's a specific phrase to describe it, but it escapes me at the moment.

      Basically, when you have a crowd of people standing around watching someone get beaten up, nobody steps in to help, because everyone assumes someone else will.

      Verifying source code is somewhat like that: someone else will do it. The great thing about the internet is that the crowd is so large that the few people who would jump in no matter what are always present.

      • by CrankyFool ( 680025 ) on Tuesday July 15, 2008 @12:04PM (#24198793)

        You're looking for Diffusion of Responsibility [wikipedia.org], made famous by the incident in which Kitty Genovese was murdered within earshot of a whole bunch of people, all of whom thought "damnit, someone should do something about this!"

      • by dave562 ( 969951 )
        Verifying source code is somewhat like that: someone else will do it. The great thing about the internet is that the crowd is so large that the few people who would jump in no matter what are always present.

        Of course they are. They stopped ignoring their mom shouting down into the basement that it was dinner time a LONG time ago. ;)

    • by jc42 ( 318812 )

      This whole FOSS thing entails a lot of trust. I mean, you're really telling me that everyone on here whining about the need to see the source code has read every line of code in every OS they're using?

      No, but given the "number of eyes", it's usually not necessary that every person has read every line of code. If a set of N people have read the relevant code and understand it, that's often sufficient to solve a problem.

      I ran across an example just a few days ago. The details aren't too important here

  • by BOFslime ( 178524 ) on Tuesday July 15, 2008 @09:00AM (#24195427) Homepage
    I'm having trouble with Paul Vixie's line:

    Q: "This is the same attack as described way back in ."
    A: No, it's not.

    when Dan Kaminsky states in his blog [doxpara.com]:

    "DJB was right. All those years ago, Dan J. Bernstein was right: Source Port Randomization should be standard on every name server in production use."
    and
    " 1) It's a bug in many platforms 2) It's the exact same bug in many platforms (design bugs, they are a pain) " How is this not the same flaw DJB described?
    • by hal9000(jr) ( 316943 ) on Tuesday July 15, 2008 @09:22AM (#24195799)
      "1) It's a bug in many platforms 2) It's the exact same bug in many platforms (design bugs, they are a pain)" How is this not the same flaw DJB described?

      You are looking at two separate issues. The flaw Kaminsky found is apparently a newly discovered design flaw that makes DNS forging easy even with today's unpatched DNS servers. In the advisory, they discussed previous problems with generating the transaction ID to explain the problem and to point out that what Dan found is not something already known (a lot of people missed that very obvious point).

      The second, separate issue is UDP source port randomization. That is what Kaminsky was referring to as DJB's solution: his assertion is that UDP source port randomization is a good development practice, which DJB incorporated into his DNS server.

      Bear in mind that UDP source port randomization doesn't solve the underlying problem; it simply makes blind DNS forging more difficult, because now an attacker has to guess both a pseudo-random transaction ID and a pseudo-random UDP source port number. A lot of DNS servers and OSes simply picked source port numbers incrementally or, in the case of a DNS server, re-used the same one over and over.

      I don't know how much more difficult DNS forging will be with randomized UDP source port numbers. The additional keyspace is (2^16 - 1024), and you can probably divide that in half again. But it's better than nothing, and it probably buys enough time (the time it would take an attacker to blindly guess the transaction ID and UDP source port number) for the actual response to reach the DNS server. In DNS, the first response wins.
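
      A back-of-the-envelope sketch of that keyspace (illustrative only; it assumes a 16-bit transaction ID and ephemeral ports 1024-65535, and ignores the "divide in half again" caveat above):

          # Guess space for blind DNS forgery, before and after port randomization.
          txids = 2 ** 16          # possible DNS transaction IDs
          ports = 2 ** 16 - 1024   # usable UDP source ports, 1024..65535

          print(f"fixed source port:  1 in {txids:,} per forged reply")
          print(f"random source port: 1 in {txids * ports:,} per forged reply")
          print(f"improvement factor: ~{ports:,}x")
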
    • by xrayspx ( 13127 ) on Tuesday July 15, 2008 @09:39AM (#24196105) Homepage
      DJB's source port randomization makes it much, much harder to exploit the main bug, which is apparently a fundamental flaw in the DNS. We'll know on the 7th what that flaw is, but until DNSSEC or something similar is implemented, source port randomization will mitigate the risk until the root cause is fixed.
    • Because a single flaw may be exploitable through multiple vectors of attack.

      Though in this case source port randomization isn't the fix; it's a stopgap that makes the flaw difficult to exploit in practice.

    • Just because two problems have the same mitigation doesn't mean that they're the same problem.

      You can take aspirin for a headache, or for a heart attack, but that doesn't mean that a headache is a heart attack or vice versa. In some cases (I don't know the history, I never followed djbdns), you can have someone state a solution without a known attack -- 'This doesn't feel right, I don't think we should do it, but I can't say exactly why' -- and so we can get into a situation where there was no flaw for it to

  • A simple test to run (Score:5, Informative)

    by GeorgeK ( 642310 ) on Tuesday July 15, 2008 @11:08AM (#24197675) Homepage

    In reply to a question I posted on the CircleID article, Paul Vixie offered a nice and simple test that people can run to see how vulnerable they are:

    dig porttest.dns-oarc.net in txt

    FAIR or GOOD means you're ok, but POOR (which is the result I got) means you should be worried.
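
    For reference, the TXT answer comes back as a one-line verdict, shaped roughly like this (illustrative values, not an exact transcript):

        porttest.dns-oarc.net. 60 IN TXT "1.2.3.4 is GOOD: 26 queries in 1.2 seconds from 26 ports with std dev 17685"

    The "std dev" figure is the spread of source ports the test server saw your queries arrive from; the higher, the better.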

    • Very confused here... I get "POOR" at home behind a NAT router; okay, I understand that. I run the same test on a remote server on the backbone, which is running the latest bind9, patched this week, and it gives exactly the same "POOR" result.

      What hoop are we supposed to jump through to get a "GOOD" result?

      • Install DJB's dnscache.

      • by jroysdon ( 201893 ) on Tuesday July 15, 2008 @07:20PM (#24206073)

        Many older named.conf configs have "query-source port 53;". While this is convenient for firewalls opening up DNS traffic (on the assumption that replies from source udp/53 to destination udp/53, your query port, are always safe), it makes the server easy to exploit.

        This must be removed and those nameservers restarted. I had to do this with all of my servers; otherwise all queries come from the same port (53). Complain to your ISP if you find this to be the case with their DNS servers, and in the meantime use some other DNS servers.
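
        A minimal named.conf sketch of the change just described (hypothetical fragment; only the query-source line matters here):

            options {
                // query-source port 53;   // legacy line: pins every query to port 53 -- remove it
                // with it gone, a patched BIND picks a random source port for each query
            };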

        My two local Comcast resolvers show "POOR", as the ports they use have a standard deviation of only 130-150, which I guess is considered far too predictable and easy to keep bombarding.

        Second, if you run a resolver at home, your NAT router may be de-randomizing your DNS queries, taking your random ports and putting them in order starting at 1024 (or something similar).

        My own local/internal DNS servers show a std dev of 9, since my PAT router is de-randomizing the external ports. I'll be replacing my PAT router.

        Third, your NAT router may be your DNS resolver and may not be using random source ports.

        • by epine ( 68316 )

          My stock OpenBSD 4.0 / BIND 9.3.2-P1 name server configuration passes with no modifications on my part. I got deviations from 16,000 to 20,000 across several hosts I tried within my network.

          It seems to come with the territory that to write secure code, you have to offend people.

          A) I disagree with you.
          B) I think you're wrong.
          C) That's broken.
          D) That's broken, get lost.

          If you chose A, you've probably never written a line of secure code.

    • I did the "dig" test on my patched DNS servers, and one of them failed.
      Reason: It was connected to an ADSL router by a 192.168.1.0/24 subnet which was translated by port S-NAT to a narrow range of source UDP ports.

      As a result, all of the fixing of the DNS servers was made useless.
      It was only the "dig porttest.dns-oarc.net in txt" test which exposed this.

      Note that you should really do:

      dig @your.dns.server porttest.dns-oarc.net in txt

      where your.dns.server is the local DNS server behind your ADSL router or firewall.

    • by illtud ( 115152 )

      dig porttest.dns-oarc.net in txt

      FAIR or GOOD means you're ok, but POOR (which is the result I got) means you should be worried.

      Thanks for that. I've patched our RHEL5 BIND, but I still failed that test. I discovered that our legacy named.conf had carried a 'query-source' directive from ghod-knows-when and that just patching isn't always enough.

      I found the issue from:

      http://www.centos.org/modules/newbb/viewtopic.php?form=1&topic_id=15132&forum=37&order=ASC&start=0 [centos.org]

  • by 93 Escort Wagon ( 326346 ) on Tuesday July 15, 2008 @12:40PM (#24199457)

    Third, stop complaining, we've all got a lot of work to do by August 7 and it's a little silly to spend any time arguing when we need to be patching.

    The patch is now in my crontab and set to run on the 6th.

"Marriage is low down, but you spend the rest of your life paying for it." -- Baskins

Working...