Ten Percent of DNS Servers Still Vulnerable
maotx writes "Even with the uproar caused by the recent DNS attacks, a recent study of 2.5 million DNS servers shows that roughly 10% are still vulnerable to DNS cache poisoning. To put that in perspective: of that 10%, 230,000 were identified as potentially vulnerable, 60,000 are very likely to be open to this specific type of attack, and 13,000 have a cache that can definitely be poisoned." From the article: "The use of DNS cache poisoning to steal personal information from people by sending them to spoofed sites is a relatively new threat. Some security companies have called this technique pharming."
DJBDNS -- rocks (Score:4, Informative)
The same person also does Qmail Rocks [qmailrocks.org]. Of course, djbdns and qmail are much more secure than BIND and sendmail.
Re:DJBDNS -- rocks (Score:3, Informative)
As for being more secure... it doesn't have nearly the same complexity and feature set as, say, BIND.
Re:DJBDNS -- rocks (Score:3, Informative)
1) The rsync method of replication is very well suited for keeping multiple DNS servers synced with the exact same records.
2) I never have to worry about it or touch it
3) The CPU and memory usage are much lower than when I was doing this with BIND. In fact, it's pretty much negligible with a few hundred queries per second.
In the Admins' Defense (Score:2, Informative)
Prior art (Score:4, Informative)
Re:Admins - Take some initiative! (Score:5, Informative)
But it's not surprising that there are still vulnerable servers out there. In fact, I'm surprised the total is so low. Aside from the few admins who just aren't doing their jobs, these kinds of things often run into bureaucracy. In many organizations, upgrades have to be thoroughly tested before release, and there are standard schedules for patch cycles. An admin who wants to simply stick a new version of something on the production server may be told to wait until approval comes. That could take a while. And occasionally you'll have some crappy system that doesn't work well with the new software, and they're stuck rolling back until the problem is solved.
I had a friend who worked at a small ISP that had some serious security issues. The guy who should have been patching things "resigned," something to do with the smell of pot lingering in his office. Anyway, the position went vacant for a little while and the task fell to the two new interns, my friend and another girl. Coincidentally, they were both young women with no experience relevant to the job; proof of quality hiring practices. To make a long story short, the (not terribly large) customer database got hacked and the company was sued. The owner, who had been heavily in debt already, vanished completely. Naturally the whole thing went down in flames, and my friend didn't even get a reference out of it.
Most of you are probably sitting there thinking this story is too outlandish to be true. Haha, well, this is the internet, so you never know what to trust, but you know there are places out there where things just aren't done the way they're supposed to be. It's shocking what goes on, and there will always be vulnerable servers around.
Getting it down to the numbers in the article this quickly is actually pretty good. The real lesson here is that you need to insulate yourself from the fools who won't take responsibility. Always assume 10% of the internet is out to get you, because they probably are. Hey, I don't even want to think about what 10% of slashdotters would want to do to me.
More info from the researcher's web site (Score:2, Informative)
This PowerPoint presentation has some details: http://www.doxpara.com/Black_Ops_Of_TCPIP_2005.pp
OT: Your Sig (Score:3, Informative)
one small detail about the attack (Score:4, Informative)
What the bad guy actually does is answer a lookup for a domain he controls, but slip extra "authoritative" records into the reply that map names in someone else's domain (like ns1.yahoo.com) to addresses he controls. A resolver that caches those extra records has had its cache poisoned.
As DJB says, the workaround is not to accept authoritative mappings (e.g. ns1.yahoo.com A 1.2.3.4) from anyone but yahoo.com's own servers.
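That rule is often called a bailiwick check. Here's a minimal sketch of the idea in Python; this is my own illustration of the principle, not djbdns source, and the function names and record tuples are made up for the example:

```python
# Bailiwick check sketch: only cache a record if its name falls inside
# the zone the responding server is actually responsible for.

def in_bailiwick(record_name: str, zone: str) -> bool:
    """True if record_name is the zone itself or a subdomain of it."""
    record_name = record_name.rstrip(".").lower()
    zone = zone.rstrip(".").lower()
    return record_name == zone or record_name.endswith("." + zone)

def accept_records(response_records, queried_zone):
    """Drop any record the responding server has no authority over."""
    return [r for r in response_records
            if in_bailiwick(r[0], queried_zone)]

# A poisoned reply for badguy.com tries to smuggle in a yahoo.com mapping:
reply = [
    ("www.badguy.com", "A", "203.0.113.5"),  # in bailiwick -> cached
    ("ns1.yahoo.com", "A", "1.2.3.4"),       # out of bailiwick -> rejected
]
print(accept_records(reply, "badguy.com"))
```

A resolver applying this filter caches the badguy.com answer but discards the ns1.yahoo.com record, so the attacker can't use his own zone's replies to overwrite cached data for domains he doesn't control.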