
ISC Offers Response Policy Zones For DNS

penciling_in writes "ISC has announced that it has developed a technology that will allow 'cooperating good guys' to provide and consume reputation information about domain names. The release of the technology, called Response Policy Zones (DNS RPZ), was announced at DEFCON. Paul Vixie explains: 'Every day lots of new names are added to the global DNS, and most of them belong to scammers, spammers, e-criminals, and speculators. The DNS industry has a lot of highly capable and competitive registrars and registries who have made it possible to reserve or create a new name in just seconds, and to create millions of them per day. ... If your recursive DNS server has a policy rule which forbids certain domain names from being resolvable, then they will not resolve. And it's possible either to create and maintain these rules locally or to import them from a reputation provider. ISC is not in the business of identifying good domains or bad domains. We will not be publishing any reputation data. But we do publish technical information about protocols and formats, and we do publish source code. So our role in DNS RPZ will be to define 'the spec' whereby cooperating producers and consumers can exchange reputation data, and to publish a version of BIND that can subscribe to such reputation data feeds. This means we will create a market for DNS reputation, but we will not participate directly in that market.'"
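As a rough sketch of what "subscribing" to such a feed might look like on an RPZ-capable build of BIND (the provider zone name, master address, and blocked domain below are placeholders, not a real feed):

    // named.conf excerpt: apply policy from a feed received via zone transfer
    options {
        response-policy { zone "rpz.example-provider.net"; };
    };

    zone "rpz.example-provider.net" {
        type slave;                       // mirror the provider's policy zone
        masters { 192.0.2.1; };           // placeholder provider address
        file "slaves/rpz.example-provider.net";
    };

Inside the policy zone itself, each record's owner name is the domain being policed, prefixed to the zone apex, and the record data encodes the action; a CNAME to the root forces an NXDOMAIN answer:

    ; hypothetical feed content: make badname.example.com unresolvable
    badname.example.com.rpz.example-provider.net.  CNAME  .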
  • oh, and follow-up (Score:4, Informative)

    by bsDaemon ( 87307 ) on Friday July 30, 2010 @05:56PM (#33090070)

    It looks like you can also define policy in the RPZ zone so that the domain you're trying to block can be pointed to a web server where you have a block message up, presumably describing the policy reason that the site is being listed (see the sketch after this comment).

    Additionally, there is no requirement that one must subscribe to a Spamhaus-style service; that's just a hypothetical option. Besides, if your recursive DNS servers are blocking stuff you want to get to anyway, you can choose different ones or set up your own. Setting up BIND as a recursive DNS server is ridiculously easy, and then you can ignore RPZ zones to your heart's content.
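    A sketch of the walled-garden redirect described above (all names here are placeholders): instead of a CNAME to the root, the policy record points the blocked name at a local web server that serves the block-notice page:

        ; hypothetical RPZ entry: redirect a blocked name to an explanatory page
        blockedsite.example.com.rpz.local.  CNAME  walledgarden.example.net.

    A client looking up blockedsite.example.com then receives walledgarden.example.net's address and lands on the notice page instead of getting a resolution failure.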

  • by Korin43 ( 881732 ) on Friday July 30, 2010 @05:57PM (#33090088) Homepage
    No. SSL certificates are useful for providing encryption and a better sense of security, but they're far too corporate. The certificate companies aren't going to spend much time verifying that people are who they say they are for a cheap certificate, because doing so costs them money. Not to mention that certificates aren't used on most of the internet (they're a waste of money on personal sites). This creates a way to come up with better security information for every site.
  • by mrbene ( 1380531 ) on Friday July 30, 2010 @06:48PM (#33090664)

    A whole lot depends on implementation. The initial intent seems to be to provide a mechanism for blocking domain names that have just been created and have a high probability of being used for phishing, spamming, or other nefarious purposes. Theoretically, DNS could be updated to include the age of the record to help clients make up their own minds about whether to connect, but then you'd start down a slippery slope of additional information about records.

    By building the protocol around a layer of abstraction, additional information can be considered: the actual IP a name resolves to, how rapidly that changes, how many other domain names are being created against the same netblock, and so on. Much richer information, which can in theory produce much more useful results.

    The implementation? It's going to be problematic for some, since the decision about what is trusted is being made by a third party. But this is already the case with many ISPs' DNS servers: if a name doesn't resolve, you end up at a search page instead of getting a DNS error. This won't affect the majority of users in a way they perceive. Is that a good thing? Most of the time...

    Overall, if the DNS server I used were smart enough to prevent successful lookups of records created recently (less than a day old), records associated with IPs that saw more than n records added per time period, and maybe one or two other basic things, I'd have a significantly reduced vulnerability to drive-by downloads, bots depending on fast-fluxing C&C servers, and other actively nefarious threats (a rough sketch of such heuristics follows).
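    A minimal Python sketch of the two heuristics described above. Note that record creation time and per-netblock registration rates are not exposed by DNS itself, so the inputs here are hypothetical stand-ins for the registry or passive-DNS feeds a reputation provider might use:

        from datetime import datetime, timedelta

        # Thresholds matching the heuristics described in the comment above.
        MAX_AGE = timedelta(days=1)       # distrust names registered in the last day
        MAX_NETBLOCK_RATE = 100           # distrust netblocks gaining many names per hour

        def should_block(created_at: datetime, names_added_to_netblock_last_hour: int) -> bool:
            """True if a domain trips the freshness or the fast-flux heuristic."""
            too_new = datetime.utcnow() - created_at < MAX_AGE
            too_busy = names_added_to_netblock_last_hour > MAX_NETBLOCK_RATE
            return too_new or too_busy

        # A name registered two hours ago, pointing into a very busy netblock:
        print(should_block(datetime.utcnow() - timedelta(hours=2), 250))  # prints True

    A reputation provider applying checks like these would then publish the flagged names as RPZ records, like the examples earlier on this page, rather than pushing the logic to every resolver.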

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...