ISC Offers Response Policy Zones For DNS

penciling_in writes "ISC has announced that it has developed a technology that will allow 'cooperating good guys' to provide and consume reputation information about domain names. The release of the technology, called Response Policy Zones (DNS RPZ), was announced at DEFCON. Paul Vixie explains: 'Every day lots of new names are added to the global DNS, and most of them belong to scammers, spammers, e-criminals, and speculators. The DNS industry has a lot of highly capable and competitive registrars and registries who have made it possible to reserve or create a new name in just seconds, and to create millions of them per day. ... If your recursive DNS server has a policy rule which forbids certain domain names from being resolvable, then they will not resolve. And it's possible to either create and maintain these rules locally, or import them from a reputation provider. ISC is not in the business of identifying good domains or bad domains. We will not be publishing any reputation data. But we do publish technical information about protocols and formats, and we do publish source code. So our role in DNS RPZ will be to define 'the spec' whereby cooperating producers and consumers can exchange reputation data, and to publish a version of BIND that can subscribe to such reputation data feeds. This means we will create a market for DNS reputation, but we will not participate directly in that market.'"
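For readers curious what subscribing to such a feed might look like, here is a rough sketch of a recursive BIND configuration, based on the response-policy syntax ISC shipped with RPZ support; the zone name and the provider's master address are hypothetical placeholders:

    options {
        // consult this locally-replicated policy zone before answering
        response-policy { zone "rpz.example.local"; };
    };

    // the reputation feed arrives as an ordinary DNS zone transfer
    zone "rpz.example.local" {
        type slave;
        masters { 192.0.2.1; };   // hypothetical reputation provider
        file "slaves/rpz.example.local.db";
    };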


  • I'd hate to see what governments or rival corporations would do with this technology. Who's to say that Comcast won't make Rural Town, USA's co-op appear to be a site with a negative reputation?

    • by N0Man74 ( 1620447 ) on Friday July 30, 2010 @05:02PM (#33090146)

      First of all, didn't they say that the reputation would be determined by "cooperating good guys"? Since when has Comcast ever been described as "cooperative", or "good"? ;-)

      But seriously, reputations aren't usually vetoes where one person can blackball a server, are they? I would imagine they'd realize that sabotage would be a waste of time, given that all of the other "good guys" would collectively carry too much weight for one entity to sabotage the system effectively.

      I also imagine that they'd realize that this would be a good way to lose credibility as a "good guy", and maybe have it revoked.

      Hopefully the same principle would apply on the other end if a "non-good guy" gets into the system in order to push bad sites.

      I seriously doubt it will be a magic bullet, but it might help.

    • by Sean ( 422 )

      I was just going to ask that question. What will define reputable?

  • It suddenly turned out that Wikileaks, The Pirate Bay, and anybody affiliated with Falun Gong or the Dalai Lama were all "spammers, e-criminals, and speculators"...

    Well, at least this is a standards-driven, OSS-supported alternative to the existing DNS filtering schemes that definitely have never been used for nefarious purposes so far.
  • Bad, bad idea (Score:5, Insightful)

    by 6031769 ( 829845 ) on Friday July 30, 2010 @04:33PM (#33089746) Homepage Journal

    I have a lot of time for Paul Vixie, but in this particular case he has come up with a bad idea. This should absolutely not be handled in DNS. There are plenty of reputation-based schemes already in operation for per-protocol black or white listing which work as well (and as badly) as any such scheme can do. There is no need to drag it down to the core, polluting DNS with yet more protocol shenanigans as we do so.

    DNS was always a simple protocol which did one job and did it well. Please stop trying to expand it to solve problems which have already been solved (by those who wish to do so) elsewhere.

    • Re: (Score:3, Insightful)

      by bersl2 ( 689221 )

      Well, at least you always have the option of querying the root servers directly. Surely they won't have this enabled.

    • by Shag ( 3737 )

      There are plenty of reputation-based schemes already in operation for per-protocol black or white listing which work as well (and as badly) as any such scheme can do. There is no need to drag it down to the core, polluting DNS with yet more protocol shenanigans as we do so.

      Given that connections via most protocols are preceded by DNS queries (unless you're using hardcoded IP addresses for everything), I think whether this is or isn't a good idea comes down to one question:

      Are there a lot of domains out there that deserve a bad reputation for things they do on certain ports or over certain protocols, but are otherwise fine and upstanding members of society?

      I think there are plenty of companies out there that are respectable outfits but make some poor choices vis-a-vis email.

      • by TheLink ( 130905 )
        I'm sure a combination of Google, Twitter, Facebook, discussion boards, etc. can help malware avoid the use of blacklisted DNS domains.

        Nobody is going to blacklist those.

        Is Vixie promoting yet another complicated (or even Rube Goldberg-ish) solution to these problems?
  • I predict it will take about 0.00000023s for anyone with an agenda and means to manipulate this to their will. Corporate America? The Media? Politics of all types? Government?

    Pick your poison and enjoy it before it kills you.

  • by Yaa 101 ( 664725 ) on Friday July 30, 2010 @04:35PM (#33089776) Journal

    Are we satisfied with that other reputation system, SSL certificates?

    • Re: (Score:3, Informative)

      by Korin43 ( 881732 )
      No. SSL certificates are useful for providing encryption and a better sense of security, but they're far too corporate. The certificate companies aren't going to spend much time checking that people are who they say they are for a cheap certificate, because doing so would cost them money. Not to mention that they aren't used on most of the internet (because they're a waste of money on personal sites). This creates a way to come up with better security information for every site.
      • Re: (Score:3, Interesting)

        by Dr. Evil ( 3501 )

      The main reason I'm not using them (and I'm sure most others aren't) is that SSL sucks for virtual hosts. Otherwise, I'd have a self-signed or CAcert cert on all my domains.

        • by Korin43 ( 881732 )

          Self-signed certificates are one of my biggest problems with SSL. They give you the same general level of security as SSH[1], but browsers are set up to make people trust sites with self-signed certificates less than sites with no certificate at all.

          [1] You can't be sure it's the right computer the first time you connect (unless you already have the certificate), but every time after that you can know it's the same computer and the connection is encrypted.
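          The trust-on-first-use model in that footnote is easy to sketch. A toy Python illustration; a real client would persist the pinned fingerprints somewhere like ~/.ssh/known_hosts rather than keep them in a dict, and note that no CA validation happens here, which is the whole point of TOFU:

              import hashlib, ssl

              def fingerprint(host, port=443):
                  # fetch the server's certificate and hash its DER encoding
                  pem = ssl.get_server_certificate((host, port))
                  der = ssl.PEM_cert_to_DER_cert(pem)
                  return hashlib.sha256(der).hexdigest()

              known = {}  # stand-in for a persistent known-hosts store

              def check(host):
                  fp = fingerprint(host)
                  if host not in known:
                      known[host] = fp  # first contact: pin the certificate
                      return "pinned on first use"
                  # later contacts: same fingerprint means same server key
                  return "match" if known[host] == fp else "MISMATCH: possible MITM"

              print(check("example.org"))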

        • by amorsen ( 7485 )

          because SSL sucks for virtual hosts

          This has been fixed with TLS. See SSL with Virtual Hosts Using SNI [apache.org]. It doesn't work with IE6, but then, what does?

          • by osgeek ( 239988 )

            According to Wikipedia, it doesn't work on any browser running on Windows XP. Ugh. It'll be 10 years before those things are significantly depleted from the population.

            • by amorsen ( 7485 )

              Wikipedia is wrong, then. Internet Explorer doesn't do SNI on Windows XP, but Firefox is fine. More specifically, the SChannel library is broken on XP, and therefore the browsers depending on SChannel have a problem. That includes Internet Explorer, and it at least used to include Chrome, although Google has been working on an alternative NSS implementation. It seems that Chrome M6 has the problem fixed.
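              For what it's worth, on the client side SNI is just the hostname sent in the TLS ClientHello so the server can pick the right virtual host's certificate. In Python it comes down to one argument (a minimal sketch against a placeholder host):

                  import socket, ssl

                  ctx = ssl.create_default_context()
                  with socket.create_connection(("example.org", 443)) as sock:
                      # server_hostname puts the name in the ClientHello (SNI),
                      # letting the server choose the matching vhost certificate
                      with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
                          print(tls.version())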

    • Re: (Score:3, Insightful)

      by _Sprocket_ ( 42527 )

      Aren't CAs establishing (at best) identity and not reputation?

    • SSL isn't reputation; it's security, if you can call it that. And it's opt-in, not forced.
  • Stopping the names from resolving leaves the user wondering whether they're experiencing a network failure. What is needed is a new response, and support for that response, not simply a resolution failure.

    • Re: (Score:3, Insightful)

      by bsDaemon ( 87307 )

      It doesn't just prevent the name from resolving, though. It will also return the fact that the query was blocked by RPZ via a status code. At that point, I think it should be up to the application causing the DNS query, such as the browser, to read the status code for the query and provide the appropriate message, such as "server not found" in response to a query with an NXDOMAIN status.

      I actually think this is pretty cool and am excited about it, although I suspect that I'm in the minority on this.
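      From the client's side, an RPZ block typically surfaces as a synthesized NXDOMAIN, so an application-level check is straightforward. A sketch assuming dnspython 2.x, with a placeholder name:

          import dns.resolver

          def lookup(name):
              try:
                  return [rr.address for rr in dns.resolver.resolve(name, "A")]
              except dns.resolver.NXDOMAIN:
                  # either a genuinely nonexistent name or an RPZ rewrite; the
                  # SOA in the authority section names the policy zone if so
                  return "blocked or nonexistent"

          print(lookup("some-blocked-name.example"))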

      • oh, and follow-up (Score:4, Informative)

        by bsDaemon ( 87307 ) on Friday July 30, 2010 @04:56PM (#33090070)

        It looks like you can also define policy in the RPZ zone so that the domain you're trying to block can be pointed to a web server where you have a block message up, presumably describing the policy reason that the site is being listed (a sketch of such an entry is below).

        Additionally, there is no requirement that says one must subscribe to a Spamhaus-style service; that's just a hypothetical option. Besides, if your recursive DNS servers are blocking stuff you want to get to anyway, you can choose different ones, or set up your own. Setting up BIND as a recursive DNS server is ridiculously easy, and you can ignore RPZ zones to your heart's content then.
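        Along those lines, policy zone entries might look roughly like this (hypothetical names, SOA/NS records omitted; per the RPZ spec, a CNAME to the root forces NXDOMAIN, while a CNAME to a real host redirects to a walled garden):

            $ORIGIN rpz.example.local.
            ; force NXDOMAIN for one domain and everything under it
            badsite.example.com      IN CNAME .
            *.badsite.example.com    IN CNAME .
            ; redirect another domain to a host serving a block-message page
            phish.example.net        IN CNAME walledgarden.example.local.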

        • by tepples ( 727027 )

          Besides, if your recursive DNS servers are blocking stuff you want to get to anyway, you can choose different ones, or set up your own.

          Unless, for example, your ISP has a transparent proxy that redirects all outgoing traffic to known public recursive servers (e.g. Google's 8.8.4.4) to your ISP's recursive server instead. Do any ISPs in the developed world do this?

      • It doesn't just prevent the name from resolving, though. It will also return the fact the query was blocked by RPZ via a STATUS code.

        It sounds like it could be a fantastic thing if my web browser does something intelligent with the response code. You can tell I'm too lazy to RTFA.

  • Is it April Fool's Day already?
    This strikes me as viscerally wrong on so many levels, but one is immediately articulable: This would be an attempt to solve a social issue via technical means, and such efforts are usually doomed to failure. But not before wasting a lot of money, effort, and billable hours...
  • But we are creating a framework through which names can be taken and through said metadata asses can be located and kicked.

  • Paul Vixie already has quite the reputation for high-handed wholesale blocking of sites deemed to be improper. MAPS RBL was his baby, and while the political fallout from that misadventure cost him much of his reputation, it looks like he's trying to keep at it while putting the blame on someone else this time.

    Regardless of that, this scheme will be afflicted with the same problems that MAPS had. When what the people can see or read depends upon the ratings applied by some special (and probably secret) group then they'll twist this power to serve themselves. Malware or spam? Blocked. Porn? Blocked. Negative opinions about the blocking? Blocked. Wrong political position? Blocked. Didn't pay protection or get approval from the government? Blocked.

    Paul Vixie is undeniably talented and knows a lot about networking. But his knowledge of human nature and how society works is woefully inadequate. Something that is always true: when you attempt to apply technological solutions to societal problems, it doesn't solve the problems and introduces new and usually worse problems. See RIAA / MPAA VS. Everyone for insight as to how blocking creates more problems than it solves.

  • And Then What We Really Need is a technology that will allow 'cooperating good guys' to provide and consume reputation information about reputation information providers.

  • by mrbene ( 1380531 ) on Friday July 30, 2010 @05:48PM (#33090664)

    A whole lot depends on implementation. The initial intent seems to be to provide a mechanism for blocking domain names that have just been created and have a high probability of being used for phishing, spamming, or whatever other nefarious purpose. Theoretically, DNS could be updated to include the age of the record to help clients make up their own minds about whether to connect, but then you'd start down a slippery slope of additional information about records.

    By building the protocol around a layer of abstraction, additional information can be considered: the actual IP that the name resolves to, how rapidly that's changing, how many different domain names are being created against the same netblock, and so on. That's much richer information, and it can theoretically provide much more useful results.

    The implementation? It's going to be problematic for some, since the decision about what is trusted is being made by a 3rd party. But this is already the case with many ISPs' DNS servers anyway: if a name doesn't resolve, you end up at a search page instead of getting a DNS error. This won't affect the majority of users in a way they perceive. Is that a good thing? Most of the time...

    Overall, if the DNS server I used was smart enough to prevent successful lookups of records created recently (less than a day old), of records associated with IPs that saw more than n records added per time period, and of maybe one or two other basic things, I'd probably be significantly less vulnerable to drive-by downloads, bots depending on fast-fluxing C&C servers, and other actively nefarious threats. (A toy sketch follows.)
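    Those heuristics are simple to state in code. A toy Python sketch; creation_date and new_records_in_netblock are hypothetical stand-ins for whatever WHOIS or passive-DNS data source a real resolver would consult:

        from datetime import datetime, timedelta

        MIN_AGE = timedelta(days=1)   # block names younger than this
        MAX_NEW_RECORDS = 100         # per netblock per day; arbitrary threshold

        def should_block(creation_date, new_records_in_netblock):
            # age heuristic: brand-new names are disproportionately malicious
            if datetime.utcnow() - creation_date < MIN_AGE:
                return True
            # churn heuristic: a netblock minting names en masse suggests fast flux
            return new_records_in_netblock > MAX_NEW_RECORDS

        # hypothetical inputs for illustration
        print(should_block(datetime.utcnow() - timedelta(hours=3), 5))  # True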

    • It seems there is no doubt that this will be used the wrong way. Just look at all the domains that don't resolve, which your ISP tries to "help" with by sending you to lots of lovely ad pages. What if Wikileaks gets on the blacklist? No matter what, this is goodbye to net neutrality.
  • Filtering the DNS should be at the bridge to the LAN, and/or the end-user's machine.

    Reputation belongs on reputation servers. We don't have many of those yet, and what we do have are implemented wrong, but that's where they belong.

    It would be good in many cases for the user's machine to be able to pop up a notice on first access to a domain:

    "This domain has a reputation for attacking visitors with malware."

    or that kind of thing. But that's not DNS, that's reputation, and people should be able/required to choose.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...