Ten Percent of DNS Servers Still Vulnerable

maotx writes "Even with the uproar caused by the recent DNS attacks, a recent study of 2.5 million DNS servers shows that roughly 10% are still vulnerable to DNS cache poisoning. To put that in perspective: of the servers in that 10%, 230,000 were identified as potentially vulnerable, 60,000 are very likely open to this specific type of attack, and 13,000 have caches that can definitely be poisoned." From the article: "The use of DNS cache poisoning to steal personal information from people by sending them to spoofed sites is a relatively new threat. Some security companies have called this technique pharming."
  • DJBDNS -- rocks (Score:4, Informative)

    by www.sorehands.com ( 142825 ) on Thursday August 04, 2005 @12:52PM (#13241329) Homepage
    I have been using djbdns with the DJBDNS Rocks [djbdnsrocks.org] installation under FC2. It makes djbdns very easy to install and manage.

    The same person also does Qmail Rocks [qmailrocks.org]. Of course, djbdns and qmail are much more secure than BIND and Sendmail.

  • Re:DJBDNS -- rocks (Score:3, Informative)

    by Feyr ( 449684 ) on Thursday August 04, 2005 @01:07PM (#13241554) Journal
    As I said many times when I installed djbdns a few months ago: djbdns is crap, but it's the best crap available.

    As for being more secure: it doesn't have nearly the same complexity or feature set as, say, BIND.
  • Re:DJBDNS -- rocks (Score:3, Informative)

    by geniusj ( 140174 ) on Thursday August 04, 2005 @01:13PM (#13241630) Homepage
    I use djbdns for the dynamic DNS/DNS hosting provider mentioned in my sig. It's worked out amazingly well, and it's been deployed that way for a few years now. There are a few reasons I really like it:

    1) The rsync method of replication is very well suited for keeping multiple DNS servers synced with the exact same records.

    2) I never have to worry about it or touch it

    3) The CPU and memory usage are much lower than when I was doing this with BIND. In fact, it's pretty much negligible with a few hundred queries per second.
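
    The rsync replication mentioned in point 1 boils down to copying tinydns's compiled data.cdb to each secondary only when its contents have changed. A minimal sketch of that idea (using local paths and a content hash as hypothetical stand-ins for real servers reached over rsync/ssh):

    ```python
    # Sketch of rsync-style replication for tinydns: push the compiled
    # data.cdb to a secondary only when its content differs from the
    # primary's copy. Paths here are illustrative, not a real deployment.
    import hashlib
    import shutil
    from pathlib import Path

    def file_digest(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, or '' if it is missing."""
        if not path.exists():
            return ""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def sync_zone_data(primary: Path, secondary: Path) -> bool:
        """Copy the primary's data.cdb to the secondary if contents differ.

        Returns True when a copy was performed, False when already in
        sync -- the same "only transfer what changed" property rsync
        gives you, so every server ends up with identical records.
        """
        if file_digest(primary) == file_digest(secondary):
            return False
        secondary.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(primary, secondary)
        return True
    ```

    Since data.cdb is a single constant database file, this push model avoids zone transfers entirely: every secondary serves the exact bytes the primary compiled.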
  • by sarlos ( 903082 ) on Thursday August 04, 2005 @01:15PM (#13241667)
    Someone's gotta speak up for the poor admins. Not all of them really are morons for not patching. There are cases where the patch breaks more than it fixes. In these cases, it's often more economical to just leave the vulnerability there (hey, at least you know about it) than to try to patch it. SQL Slammer caused some serious problems with IIS because the 'patch' for the bug it exploited was part of a large update that required a lot of man-hours to clean up after. Of course, there are plenty of moron admins out there too, I wouldn't want them to feel overlooked... >.>
  • Prior art (Score:4, Informative)

    by jfengel ( 409917 ) on Thursday August 04, 2005 @01:17PM (#13241686) Homepage Journal
    Especially since the pharmaceutical companies have a much better (and prior) claim to the name for using organisms to produce medicines [wikipedia.org].
  • by Bi()hazard ( 323405 ) on Thursday August 04, 2005 @01:26PM (#13241795) Homepage Journal
    The fix in question here is available. The BIND webpage [isc.org] has a scary warning box on the right with details. Everyone should be upgrading to the new version.

    But it's not surprising that there are still vulnerable servers out there. In fact, I'm surprised the total is so low. Aside from the few admins who just aren't doing their jobs, these kinds of things often run into bureaucracy. In many organizations, upgrades have to be thoroughly tested before release and there are standard schedules for patch cycles. An admin who wants to simply stick a new version of something on the production server may be told to wait until approval comes. That could take a while. And occasionally you'll have some crappy system that doesn't work well with the new software, and they're stuck rolling back until the problem is solved.

    I had a friend who worked at a small ISP that had some serious security issues. The guy who should have been patching things "resigned"; something to do with the smell of pot lingering in his office. Anyway, the position went vacant for a little while and the task fell to the two new interns, my friend and another girl. Coincidentally, they were both young women with no experience relevant to the job, proof of quality hiring practices. To make a long story short, the (not terribly large) customer database got hacked and the company was sued. The owner, who had been heavily in debt already, vanished completely. Naturally the whole thing went down in flames and my friend didn't even get a reference out of it.

    Most of you are probably sitting there thinking this story is too outlandish to be true. Haha, well, this is the internet so you never know what to trust, but you know there's places out there where things just aren't done the way they're supposed to be. It's shocking what goes on, and there will always be vulnerable servers around.

    Getting it down to the numbers in the article this quickly is actually pretty good. The real lesson here is that you need to insulate yourself from the fools who won't take responsibility. Always assume 10% of the internet is out to get you, because they probably are. Hey, I don't even want to think about what 10% of slashdotters would want to do to me.
  • by kylog ( 684524 ) on Thursday August 04, 2005 @02:26PM (#13242605)
    The news.com article is short on specifics about what the thousands of servers are actually doing, but there's better info at Dan Kaminsky's site: http://www.doxpara.com/ [doxpara.com]

    This PowerPoint presentation has some details: http://www.doxpara.com/Black_Ops_Of_TCPIP_2005.ppt [doxpara.com]
  • OT: Your Sig (Score:3, Informative)

    by sconeu ( 64226 ) on Thursday August 04, 2005 @02:42PM (#13242801) Homepage Journal
    Your sig is an urban legend. See snopes [snopes.com] for details.

  • by 3l1za ( 770108 ) on Thursday August 04, 2005 @09:03PM (#13246219)
    So he waits for his cached info to expire, and does it again... except this time, his reply packet includes extra information, "Oh, by the way, www.microsoft.com is on joes.evil.server.here."

    What the badguy actually does is:
    • gets queried for www.badguy.com by target.com
    • delegates authority for his domain to Yahoo's name servers, for example; so he says:
      • www.badguy.com NS ns1.yahoo.com
      • www.badguy.com NS ns2.yahoo.com
      • ...
    • ALSO includes fake mappings of the form:
      • ns1.yahoo.com A 1.2.3.4
      • ns2.yahoo.com A 1.2.3.4
      • ...
    • so target.com contacts "ns1.yahoo.com" at 1.2.3.4 and asks to resolve "www.badguy.com"
    • since ns1.yahoo.com is *actually* a name server under bad guy's control (bad guy controls 1.2.3.4), ns1.yahoo.com returns how to get to www.badguy.com
    • then in future queries for www.yahoo.com, the name server will ask 1.2.3.4 for the IP for www.yahoo.com and send that reply to the requestor
    Much better explained here [cr.yp.to]

    As DJB says, the workaround is not to accept authoritative mappings (e.g. ns1.yahoo.com A 1.2.3.4) from anyone but yahoo.com.
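
    That workaround is usually called a bailiwick check: when you query badguy.com's servers about badguy.com, you ignore any additional-section "glue" A records for names outside badguy.com, since that server has no authority over them. A minimal sketch of the idea, using plain tuples rather than a real DNS library (the record values are the illustrative ones from the steps above):

    ```python
    # Sketch of the bailiwick check: drop additional-section records whose
    # owner name falls outside the zone the queried server is authoritative
    # for. Records are (owner_name, rtype, rdata) tuples for illustration.

    def in_bailiwick(name: str, zone: str) -> bool:
        """True if `name` is `zone` itself or a subdomain of it."""
        name, zone = name.rstrip(".").lower(), zone.rstrip(".").lower()
        return name == zone or name.endswith("." + zone)

    def filter_additional(records, queried_zone):
        """Keep only additional-section records that are in bailiwick."""
        return [r for r in records if in_bailiwick(r[0], queried_zone)]

    # The poisoned response described above: forged glue pointing
    # Yahoo's name-server names at the attacker's IP.
    additional = [
        ("ns1.yahoo.com", "A", "1.2.3.4"),   # forged, out of bailiwick
        ("ns2.yahoo.com", "A", "1.2.3.4"),   # forged, out of bailiwick
        ("ns1.badguy.com", "A", "5.6.7.8"),  # plausible in-bailiwick glue
    ]
    safe = filter_additional(additional, "badguy.com")
    # Only the in-bailiwick glue survives; the fake yahoo.com
    # mappings never enter the cache.
    ```

    Note the leading dot in the suffix test: without it, a name like evilbadguy.com would wrongly count as inside badguy.com's bailiwick.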
