
When Is It Right To Go Public With Security Flaws?

nk497 writes "When it comes to security flaws, who should be warned first: users or software vendors? The debate has flared up again after Google researcher Tavis Ormandy published a flaw in Windows Help and Support. As previously noted on Slashdot, Google has since promised to back researchers who give vendors at least 60 days to sort out a solution to reported flaws, while Microsoft has responded by renaming responsible disclosure 'coordinated vulnerability disclosure.' Microsoft is set to announce something related to community-based defense at Black Hat, but it's not likely to be a bug bounty, as the firm has again said it won't pay for vulnerabilities. So what other methods for managing disclosures could the security industry develop that balance vendors' need for time to develop a solution and researchers' need to work together and publish?"
  • by Kalidor ( 94097 ) on Tuesday July 27, 2010 @09:27AM (#33044634) Homepage

    ... and posted them elsewhere. So here's a quick copy-paste of my thoughts.
    ======================
    Procedure :
    Step 1) Notify the manufacturer of the flaw.

    Step 2) Wait an appropriate time for a response. This depends on the product: an OS could take as much as months, depending on how deep the flaw is; web browsers, probably 2-3 weeks.
    Corollary 2a) If the manufacturer responds and says it's a will-not-fix, you have some decisions to make; see 3a.

    Step 3) If there's no response, announce that you'll do a proof-of-concept exhibition, with a very vague description. Keep answers to anyone asking for details as vague as possible. The company has already been contacted, so they know the issue or can contact you from the announcement. Schedule it with enough time for the company to release a fix.
    Corollary 3a) How critical is the flaw? If it's marked as will-not-fix and it's very detrimental, you might have to sit on it.

    Step 4) Do the exhibit. With luck the flaw has been fixed and the last slide is about how well the manufacturer did.

    Step 5) ...Profit!!!! (While this is the obligatory joke step, check out eEye Digital Security to see how it's happened before.)
    ===============
    WRT 3a: You'd be surprised how often this is done. There are two long-standing issues in a certain piece of software that, while they're uncommon and not-often-considered attack vectors, are less than trivial to exploit and give full access. The manufacturer has, in fact, responded with a "works as designed, will not fix." People in the information security industry have found the flaws so detrimental that they've imposed a self-embargo on openly discussing them. Without manufacturer buy-in, a fix just couldn't come in time if that particular information were released, and the effect would be significantly widespread. The only thing releasing the information would do is cause a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months. With no evidence that the exploit is being used in the wild, save for a handful of anecdotal reports, the issue has become a biannual prodding of the manufacturer.

    • by Anonymous Coward on Tuesday July 27, 2010 @09:46AM (#33044880)

      WRT WRT 3a: So the industry and the manufacturer are basically patting each other on the back, happy in the knowledge that if no one from the club talks about the problem, it's impossible for anyone else to discover it? It's going to be slightly icky to say "we told you so" when this is discovered independently and causes "a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months." (Note that I used "when this is discovered", not "if". As you may be aware, if something could be done, it's only a matter of time until somebody does it.)

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        I especially like how this ignores the human angle and assumes that all involved parties are even able to shut up for years (well, I don't know, maybe they receive... err... gratitude to shut up).

        • by TimSSG ( 1068536 )
          I sure hope they were NOT paid; it would make them part of a conspiracy to cover up flaws. And when someone uses that flaw, it would make them and the companies they work for possibly liable for a large amount of damages and possible jail time.

          IANAL, but I like to play one on the web.

          Tim S.
      • by mea37 ( 1201159 )

        It's probably worse than that. GP didn't give us much to go on about the nature of the attack, but generally a flaw described in such severe terms either (1) offers a foot in the door for the attacker to go after other systems on the network, or (2) exposes sensitive information. By contrast with flaws that allow DoS (for example), it isn't typically obvious when a flaw of that type is exploited.

        So the question isn't "how do you know someone won't discover the flaw? what will you do when you notice it bei

    • by Nadaka ( 224565 ) on Tuesday July 27, 2010 @09:50AM (#33044932)

      This is standard operating procedure and responsible disclosure as far as I can tell.

      The problem is that the company is likely to file an injunction to stop the presentation and possibly file blackmail charges against you.

      You need to amend the above procedure with anonymous notification and demonstration in order to protect the safety of those following responsible disclosure.

      • Companies are always a threat. Disclose anonymously to the company, wait, then disclose to the public without warning.

        There is no reason to offer your neck to your enemies. There is no reason to want recognition from them.

        Remember basic security, tell no one who you are, and don't go attention-whoring after you release.

        • by fishbowl ( 7759 )

          >Remember basic security, tell no one who you are, and don't go attention-whoring after you release.

          You've identified the real issue, but this is often ignored. The problem isn't the disclosure itself. The problem is that so many people with such disclosures to make seem to want credit/attention for their efforts, but also want to be free of the risks associated with seeking that attention. Anonymous channels exist. Release information via one of those, and then if somebody is upset about it, they c

          • by gorzek ( 647352 )

            This is very true. Whistleblowers have some protection but it is still dangerous to be one, and especially dangerous to be a very public one.

    • by hAckz0r ( 989977 ) on Tuesday July 27, 2010 @10:00AM (#33045104)
      You need to notify CERT, and then they have the ability to apply more pressure on the manufacturer, as they simultaneously publish a very vague notice to the community of a flaw being worked on. If CERT is involved you have a much higher probability of not being ignored or told "will-not-fix", because it is already public knowledge that there is an exploit that needs fixing. It's in the record. The official "report cards" for the vendors then have the clock start ticking the minute you report the flaw, and the vendor cannot deny that they were notified and/or aware of the problem. In other words, they can't sweep it under the rug very easily, and you have done the best you can do without causing mass pandemonium.
      • by Lehk228 ( 705449 )
        It works better if you just post details in an image on 4chan's /b/ and let nature take its course.
      • by Kalidor ( 94097 )

        What makes you think some of the people that know aren't at CERT...

        That said, you are correct; CERT should be notified at the same time as the manufacturer, IMHO.

      • by adtifyj ( 868717 )

        Does CERT publish all notices, eventually? I think they should have pre-determined release dates, which could be influenced by the vendor's past history in resolving incidents promptly.

    • So, your "people in the information security" are basically helping the vendor selling faulty software while withholding crucial information from users of said software at the same time? If the issues you mention are indeed "less than trivial" you help the vendor to cheat people into thinking that they are safe with the software.

      "People in the information security" have the job of making the IT environment safer. You must force the vendor to fix these holes even if it takes a vulnerability disclosure and a

      • by Kalidor ( 94097 )

        Never said it was an easy position. Not one of them likes the situation they are in, but no one has been able to come up with a good solution.

        To be fair, I think that, as widespread, detrimental, and unknown as these two problems are, it's very unlikely that there are more than a handful of such cases in software out there today, and it's only done in the most extreme of situations. At least, that's my sincere hope.

        As for it being used right now: to be honest, we don't know that it isn't. But generally, it's a widel

    • by cynyr ( 703126 )

      Let's see: time to patch any major portion of GNU/Linux is probably less than 2 weeks, and that's from bug report to update in the distros' repos. If OSS can do it with free labor in 2 weeks, paid devs should be able to do it in less, say half, 1 week. I know my Apple keyboard wasn't fully supported when I bought it; less than a week later there was a patch applied to mainline stable kernels that corrected the issue, and that was just for some of the Fn keys not working as advertised. So a month max sounds good. I w

    • by Hatta ( 162192 ) on Tuesday July 27, 2010 @11:12AM (#33046410) Journal

      Huh? If there's a severe vulnerability and the manufacturer refuses to fix it, you should release it immediately. Then at least those affected can mitigate their vulnerability. Otherwise, the black hats have free rein.

    • Re: (Score:2, Flamebait)

      by dissy ( 172727 )

      Quote: The only thing releasing the information would do is cause a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months.

      ---

      So you prefer the alternate option, where you sit on it and only the black hats have access to the zero day event that would harm consumers and leave them without services of the software for several months.

      I see the wide difference.

      You would prefer you kept your exploits open and vulnerable, so no one can protect t

  • by RJarett ( 114128 ) on Tuesday July 27, 2010 @09:27AM (#33044640)

    I discovered a large DoS within VMware 3.5-4.0 last March. I opened up a support case on it to at least find a workaround. The engineer closed the ticket after an hour or two as "unsupported OS".

    The DoS reboots ESX/ESXi out from under the VM when you power the VM on.

    This leads to serious issues, and they closed the ticket quickly, with no further investigation. This is a perfect example of where releasing details and source would force the company to fix the issue.

    • Re: (Score:3, Insightful)

      by gorzek ( 647352 )

      "Unsupported OS" means "unsupported OS." The vendor disavows any responsibility for bad things that happen when using their software on your unsupported platform.

      This is a common thing for software vendors to do to close out tickets quickly. If it's an unsupported scenario (hardware, software, use case, etc.) then they can close it and keep their average ticket lifetime down.

      A little shady, I guess, but if they never claimed to support your platform I don't see what you could really complain about.

    • Re: (Score:1, Redundant)

      by ImprovOmega ( 744717 )

      I don't see that as being a very easy attack vector to exploit. If the attacker is to the point where he can install a guest OS on your VMware server, you were already completely owned. If it's a disgruntled sysadmin then the solution is "fire him/her and change the passwords". So... yeah, unsupported OS.

      Now, if you find a way to exploit that from a *guest* OS that was supported and got owned (like from within Windows Server 2008 or something) and you can run something that blows up ESX, then that may be a

  • Never (Score:4, Funny)

    by SeriouslyNoClue ( 1842116 ) on Tuesday July 27, 2010 @09:30AM (#33044686)
    Time after time it's been proven that the safest security is the security that is shrouded in the most mystery. Why can't anyone hack Windows 7? Because it's new and no one knows how it works. People like Ormandy are a bane to the community because they steal code from Microsoft (there is no other way they could know about these flaws) and then, once they've stolen it, they release it for virus writers to hurt the common man. They are a public enemy and I'd suspect he has contacts inside Microsoft (if you're reading this, Steve Ballmer, I suggest you begin purging those who doubt you and those closest to you).

    I cannot believe Google would show support to someone who is most obviously a criminal aiding and abetting other criminals.

    Nobody wants their source code shown to malware writers for obvious reasons so let Microsoft have its privacy. Why do individuals get privacy rights but not Microsoft? Did you ever stop to think about that? No, you didn't, because you were too busy helping the bad guys.

    You should never reveal a security flaw. It's called common sense about safety and protecting everyone around you.
    • Re:Never (Score:4, Insightful)

      by Whalou ( 721698 ) on Tuesday July 27, 2010 @09:47AM (#33044886)
      You do your user name proud.
    • I can't tell if this is sarcasm or not. The US never revealed the security flaw for ENIGMA because they were using it against the Germans, while the Germans believed ENIGMA was secure and unhackable. We had them by the balls.
      • Re: (Score:3, Insightful)

        by jgtg32a ( 1173373 )

        I thought the Brits cracked all of that?

      • Re: (Score:3, Insightful)

        by John Hasler ( 414242 )

        Actually it was the Poles and the Brits who broke Enigma: the USA broke the Japanese codes. Irrelevant in any case though. The Germans had developed Enigma themselves and were using it only internally: there were no trusting "users" at risk.

        • Re: (Score:3, Insightful)

          The vulnerabilities are the same regardless of who is at risk. The argument is that only 'good guys' are able to find vulnerabilities, and that 'bad guys' don't find or can't keep hold of such information, or just can't use it. The GP assumes that keeping problems a secret will never result in secret underground cults developing a cohesive, structured approach to abusing those problems.
  • "Who should be warned first: users or software vendors?"

    Tell both. But if you announce something, please document how you did it and don't brush off the vendor. (Email from users and the press can get pretty thick after you announce something - if you're ethical and really want to fix the problem, all that noise should be low priority...)

    • Simultaneously you mean? That leaves the vendor no time to fix the flaw.

      The question basically boils down to: "I want to be an ethical person. How much time is appropriate to wait after reporting a flaw to the vendor?"

      • If you want to be an "ethical person" you will want to warn the users ASAP.

          • But by warning users you are also informing the people who would use it against users. That's the double-edged nature of the situation.

          • by cynyr ( 703126 )

            Yep, that's under the assumption, though, that the bad guys need the good guys to tell them about the holes. But even so, if Win7 can be killed by a packet on port X, it's simple for users to mitigate upstream by blocking port X at the firewall (you do have a separate one, right?).

            If there is no way to mitigate it outside of a vendor patch, let the vendor know first, and tell them they have, say, 2 weeks to be making progress...

            Also this all reinforces my belief that commercial software development needs to work like engi
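
            A minimal sketch (Python, purely illustrative) of how an admin might verify from outside that port X really is blocked upstream; the host and port below are placeholder assumptions, not anything specified in this thread:

                import socket

                def port_reachable(host, port, timeout=3.0):
                    """Return True if a TCP connection to host:port succeeds within the timeout."""
                    try:
                        with socket.create_connection((host, port), timeout=timeout):
                            return True
                    except OSError:
                        # Connection refused, timed out, or filtered by the firewall.
                        return False

                # Example: once the firewall rule is in place, this should print False.
                print(port_reachable("192.0.2.10", 445))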

      • Simultaneously you mean? That leaves the vendor no time to fix the flaw.

        Simultaneously you mean? That forces Microsoft to fix the flaw, instead of letting it stew for years or decades.

        Fixed that for ya! :-)

        Sarcasm (even if true) aside, the largest problem with any of these scenarios is the ill will Microsoft has caused in the security community. Regardless of whether that was caused by the complexity of the products, or the lack of willingness of Microsoft to fix issues, or a combination of both, the simple fact is, Microsoft has, in the

  • by Anonymous Coward
    I agree with MS on this; a deadline isn't always feasible. They have to test on many different levels before they can release the update. Google just used Ormandy to get some positive PR for themselves. Frankly, from my point of view, Google screwed this one up, and Ormandy or any other researcher cannot hold companies at gunpoint to release a fix ASAP. If he had given them a 60-day disclosure window and, even after that, MS had not provided any response, then releasing the bug details would make sense. The way Orman
    • A deadline's always feasible. It may not be possible to come up with a clean fix in a short timeframe, but you can always come up with either a workaround or something the users can do to mitigate the damage. This may not be ideal from the vendor's point of view, but it's not the vendor who's in danger of having their systems attacked so I'm not overly concerned about their public-relations heartburn.

      • A deadline's always feasible. It may not be possible to come up with a clean fix in a short timeframe, but you can always come up with either a workaround or something the users can do to mitigate the damage.

        So publish the workaround along with the vulnerability.

      • by Rockoon ( 1252108 ) on Tuesday July 27, 2010 @12:26PM (#33047700)

        This may not be ideal from the vendor's point of view, but it's not the vendor who's in danger of having their systems attacked so I'm not overly concerned about their public-relations heartburn.

        If you are not concerned about the vendor's public relations, then why release at all? It seems to me that the justification for release is precisely that the researchers ARE concerned about the vendor's public relations... intent on harming them.

        It's end users who don't follow security issues that are most at risk, and releasing exploits hurts them pretty much directly and immediately.

        If it's a critical bug in software that a typical grandma (and other non-geeks) uses, I claim that it is ALWAYS irresponsible to release the details of the exploit into the wild. Every single time, no matter how much time has passed waiting for a fix. This belief is formulated on the premise that the vendor's public relations don't mean shit either way; it's the end users that mean something.

  • It's not fair (Score:3, Interesting)

    by Anonymous Coward on Tuesday July 27, 2010 @09:37AM (#33044762)

    to threaten the guys who find vulnerabilities with jail time or fees. I uncovered a major security flaw in a piece of software (allowed an attacker to spawn a shell as root with extreme ease) and also found a way to circumvent the DRM and what happened.... I got stiffed. Instead of FIXING the problem (which is still intact to this day) the company attempted to sue for copyright infringement, among a few other "charges". Luckily, I had a great lawyer and I had documented EVERYTHING from 0 to 60. I was lucky.

    This makes me sick. One minute, corporations are talking about providing "rewards" for unearthing flaws/vulnerabilities and the next, they are trying to sue for every penny. If it wasn't for us, their systems wouldn't last a week without some script kiddie coming along and bringing the whole thing to its knees.

  • Who is at fault? (Score:2, Interesting)

    It's interesting that the talk centers on the responsibility of the researcher and the vendor, but often little attention is paid to the responsibility of the user. Are they as liable? For example, if a manufacturer sells a door lock with flaws but the user keeps the windows (ha) open and someone on the street shouts, "Dude, you're using a Schock Pax H23 and it can be opened with a loud scream!" who is responsible?

    As primarily a Linux user, I used to think that the tools just didn't exist on Windows t

  • by Rogerborg ( 306625 ) on Tuesday July 27, 2010 @09:38AM (#33044778) Homepage

    Never, ever a responsibility. You didn't write the bug, you didn't miss it in testing, you didn't release it. You owe the developer nothing.

    The only ethical consideration should be your sole judgement about the best method to get a fix in the hands of vulnerable users.

    You don't like that, Microsoft? Then do your own vulnerability testing and don't release software with vulnerabilities: the problem goes away overnight. Until then, sit down, shut up, grow up, and quit your bitching about being caught with your pants down.

    • You owe the developer nothing.

      The flaw in this thinking is that it's not the developer who is ultimately harmed by a disclosure... and I rather doubt that the x-million users of the software will appreciate that you released the information for their own ultimate good.

      • Re: (Score:2, Insightful)

        by mOdQuArK! ( 87332 )

        Technically speaking, you don't owe the other users anything either - it's still a matter of courtesy.

        • Re: (Score:1, Interesting)

          by Anonymous Coward

          That depends on your life philosophy.

          In my opinion you owe your fellow human beings a lot more than mere courtesy, but it appears I am quickly joining a minority.

          • That depends on your life philosophy.

            In my opinion you owe your fellow human beings a lot more than mere courtesy, but it appears I am quickly joining a minority.

            Nah, I pretty much draw the line at courtesy until you earn more than that ;)

        • by mea37 ( 1201159 )

          Not owing someone something doesn't mean you can act without regard to that person. I don't owe you anything, but I still have to stop at a crosswalk if you're walking through it.

          The question isn't "do I owe you anything?" as though disclosure were inaction and delaying disclosure were action I might undertake as a favor. Disclosure itself is an action, and the question is "if I do this, am I liable for resulting harm that may befall you?"

          I know you want to say "no, it's the fault of whoever wrote the so

          • > If a court were to find that a specific attack occurred because of your
            > disclosure and would not have occurred otherwise, you may be held partially
            > liable to that attack's victim even if your disclosure ultimately prevented
            > many more attacks.

            Not likely in the USA. Absent a contract you have no duty not to utter true statements.

            • by mea37 ( 1201159 )

              Interesting... are you talking about how things are, or how you want them to be?

              The reason I ask is, if such a blanket statement were a true description of civil liability, I don't think the EFF would spend so much time talking about how to limit your liability when you publish a vulnerability (i.e. utter true statements).

              For example... [eff.org]

              What I'd really like to see is a citation to some case history, since little else is meaningful in predicting how civil liability will play out; but I've been unable to find

              • I'm talking about disclosing vulnerabilities, not publishing exploit code. From your link:

                Publication of truthful information is protected by the First Amendment. Both source code and object code are also protected speech. Therefore truthful vulnerability information or proof of concept code are constitutionally protected. This protection, however, is not absolute. Rather, it means that legal restrictions on publishing vulnerability reports must be viewpoint-neutral and narrowly tailored. Practically spea

        • Technically speaking, you don't owe the other users anything either - it's still a matter of courtesy.

          Technically speaking, sociopaths need not apply.

      • by Weezul ( 52464 )

        A security researcher has no particular duty to users either, but some may assume one for themselves. If so, releasing depends upon whether you're suspicious that exploits exist in the wild.

        If bugs are actively being exploited, they are most likely being exploited by the worst people, so publicly enabling all the mostly harmless script kiddies will help matters by forcing the developer to issue faster fixes, possibly in multiple stages. If a bug isn't being exploited, fine, just tell the developer, and publis

          • Agreed - but my comment was made in the context of the original poster, who seems to think that there should be no responsible behavior exercised by the researcher...

          Never, ever a responsibility. You didn't write the bug, you didn't miss it in testing, you didn't release it. You owe the developer nothing.

      • by adtifyj ( 868717 )

        You owe the developer nothing.

        The flaw in this thinking is that it's not the developer who is ultimately harmed by a disclosure... and I rather doubt that the x-million users of the software will appreciate that you released the information for their own ultimate good.

        The current users may not appreciate it, but then they may also decide to find a better vendor if they are more acutely aware of the time that the vendor has had to fix the problem.

    • If you find a brand new vulnerability and go straight to IRC with it you are not just hurting Microsoft or sticking it to the man. You're hurting everyone that runs that software. You are also creating bigger botnets, which can then be further used in DDoS attacks and extortion attempts, etc... So in effect you are damaging the Internet and making it a bigger cesspool. There are ethical issues around vulnerability disclosure. You strike me as the type that collects bots and so probably doesn't care, but the rest o

      • There's also another ethical issue: keeping me (as an administrator of vulnerable systems) in the dark about the vulnerability puts my systems at risk and prevents me from protecting them. You are hurting me in a very direct way by not disclosing the problem to me. If I know the problem exists I can for instance shut down the vulnerable services (if they aren't necessary for my systems to operate), block access to those services at the firewall and/or replace the vulnerable software with equivalent s

        • I can for instance shut down the vulnerable services (if they aren't necessary for my systems to operate),

          Why are the services running if they aren't necessary?

          Someone should have presented a business case for every process running on the server. Some of these are trivial ("without a kernel, the server won't run"). But there shouldn't be any 'nice to have' or 'may come in handy one day' services running.
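
          A minimal sketch of that kind of audit, in Python with the third-party psutil package (my assumption here, not something mentioned above); it just lists what is listening and which process owns it, so each entry can be matched against a business case:

              import psutil  # third-party: pip install psutil

              def listening_services():
                  """Yield (local port, owning process name) for every listening TCP socket."""
                  for conn in psutil.net_connections(kind="tcp"):
                      if conn.status == psutil.CONN_LISTEN:
                          name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                          yield conn.laddr.port, name

              # Run with enough privileges to see other users' processes.
              for port, name in sorted(listening_services()):
                  print(f"{port:>5}  {name}")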

          • In a lot of cases they're convenient but not necessary. I'd prefer to run them, they make life simpler for everyone, but I can live without them if that's what's required to keep things secure. E.g., webmail or web access to a ticket-tracking system. They're nice to have, and there's a compelling business argument for having them available as an alternative to dedicated client software, but not such a compelling one that we'd be willing to sacrifice security to have them. So if they're secure we want them ru

        • Why are you running unnecessary services on a bastion host? You are not a very good administrator if you are doing that. Also, what if there is no workaround and you need the service to conduct business? You are basically screwed. At the very least the disclosure lets the whole world know that you are vulnerable until you get around to implementing the workaround, on a presumably high-profile production machine that's probably high-risk for in-line changes. Some favor.

          People who disclose security vulnerabilit

          • Problem here: you're assuming that the disclosure causes the problem. That's incorrect. I was just as vulnerable, and just as likely to have someone exploiting the vulnerability, before the disclosure as after. If the researcher found it, odds are the black hats found it long ago and have been actively using it. But now I know about the problem and know what to look for to see if we've been breached (or are in the process of being breached).

            The ostrich reaction, the "if I don't know about it it doesn't exis

            • I am not assuming anything; I know from real-world experience that irresponsible disclosure causes more problems than it ever solves.

              I am not making excuses for poorly engineered software that deserves its share of derision.

              However, releasing 0-day exploits to script kiddies is not in any way making anyone more secure. It's a selfish act of personal enrichment, pure and simple.

              You found a vulnerability? Great. Go through the process of responsible disclosure and give the vendor at least 60 days to respond t

      • "If you find a brand new vulnerability and go straight to IRC with it you are not just hurting Microsoft or sticking it to the man. Your hurting everyone that runs that software."

        Uh... no. The one hurting the user is the company that didn't put enough effort into its development and QA practices. The one that prevents other market rivals from offering a properly engineered work by going cheap against them.

        It's funny how big corporations are able to mutate public opinion in such weird ways and they even get s

  • Once you suspect a security flaw, post to a public mailing list with developers on it. Ask them for help tracking down the issue, until you as a group determine whether you've discovered a hole and get a proof of concept running, all in public discussion.
    • by AHuxley ( 892839 )
      The best and only way: the light of truth into dark places.
    • Yeah, help the malware writers by telling them where to look for issues.

      • by Zerth ( 26112 )

        You are assuming that malware writers don't already know.

        You only know that the public is ignorant of it and thus can't take measures to prevent it, such as uninstalling the broken software or not opening vulnerable file types.

        • A perfectly reasonable assumption. Take a look at the CVE list some time and understand that most of those vulnerabilities were not being exploited when they were discovered.

  • by Anonymous Coward

    No one is bright enough to find a security hole that couldn't have been discovered elsewhere before. So it's pretty likely the flaw is either known to the vendor, who might not have seen the need to fix it, or it is known to an attacker, who already uses the flaw and just hasn't appeared (yet) on the radar of any researcher or the vendor. And since it's possible that you yourself are being monitored by somebody else, your finding might be in the open that way. So it makes no sense to keep the public

  • Do not give bad guys the chance to learn about a flaw earlier than the users who are affected. If you don't publish the flaw, there is a certain possibility that it will be sold on black markets and kept secret so it can be used against customers. You can see that full-disclosure groups are targets of commercial crackers. Full disclosure is like destroying the business of criminals.

    A customer should always be aware of a flaw and know how to protect himself against it.

    There is no need for exploit code. You

  • Being allowed (Score:1, Interesting)

    by Anonymous Coward

    The problem with "responsible disclosure" is being allowed to do it. Reporting a bug to a vendor might get you a "fix" response (best case), might get you ignored (average case), or might get you hit with a gag-order lawsuit (worst case). Disclosing the bug after the worst case can get you arrested, and even if you manage to avoid jail, you have spent a lot of money defending yourself. This is the reason behind the full disclosure movement: to prevent vendors from gagging researchers who discovered bug

  • Whenever you damn well please unless you are contractually obligated to do otherwise.

    • by fishbowl ( 7759 )

      >Whenever you damn well please unless you are contractually obligated to do otherwise.

      And "contractually obligated" necessarily involves an exchange of valuable consideration (e.g., they give you money in return for your agreement to keep your mouth shut). In general, software EULAs are not contracts for exactly this reason.

  • Comment removed based on user account deletion
  • If I ever run across a vulnerability in any closed-source software I will submit that information anonymously, to prevent the authorities from treating me as if I were a criminal or terrorist. The only exception to that rule would be if I found a vulnerability in something licensed under the GNU GPL; then I will simply submit a bug report through the regular channels or email the author of the software directly.
  • As long as the vendors get a grace period (or, as in some cases, forever) as a timeframe, there won't be any incentive to fix the real issue.

    The discussion about full disclosure/responsible disclosure is a side issue to the real questions. Why don't the vendors do proper testing before releasing software? Why do they refrain from fixing bugs they fully know about? Why should researchers take any responsibility for the vendors' customers when it's obvious the vendors won't think twice about security or QA?

    It's no

  • Vendor, then users (Score:3, Interesting)

    by Todd Knarr ( 15451 ) on Tuesday July 27, 2010 @11:05AM (#33046306) Homepage

    In most cases you warn the vendor first, providing complete details including exploit code so they have no excuse for not being able to duplicate the problem. If the vendor won't acknowledge your report within a reasonable time (say 7 days), will not commit to a timeline for having either a fix, a workaround or a mitigation strategy for users within a reasonable time (say 14 days from acknowledgement, with the deadline being 30-90 days out depending on severity) or fails to meet the deadline, then you disclose to users including full details, exploit code (so the problem can be independently verified without having to rely on your word that it exists) and a recommended mitigation strategy. Demanding payment for the report is never appropriate unless the vendor has publicly committed to a "bug bounty" and your demand is what they've publicly committed to.

    There'd be occasional exceptions to the above. If for instance the vulnerability is theoretical and you can't create actual exploit code for it, demanding the vendor fix it is inappropriate (by the same token, though, it's far less of a problem to discuss the problem in public if it truly can't be feasibly exploited). The deadline should be more flexible for less severe vulnerabilities. If the vendor has a track record of responding inappropriately to reports (eg. by threatening legal action against the researcher), immediate anonymous disclosure may be a better approach.
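
    A minimal sketch (Python) of tracking the timeline described above. The day counts mirror the comment's suggestions (7 days to acknowledge, 30-90 days to fix depending on severity); the severity buckets themselves are an illustrative assumption, not an industry standard:

        from datetime import date, timedelta

        # Fix windows by severity, in days (illustrative values based on the comment above).
        FIX_WINDOW = {"critical": 30, "high": 60, "low": 90}

        def disclosure_deadlines(reported: date, severity: str = "high"):
            """Return (acknowledgement deadline, public-disclosure deadline) for a report."""
            ack_deadline = reported + timedelta(days=7)
            disclose_deadline = reported + timedelta(days=FIX_WINDOW[severity])
            return ack_deadline, disclose_deadline

        ack, disclose = disclosure_deadlines(date(2010, 7, 27), "critical")
        print(f"Vendor should acknowledge by {ack}; disclose publicly if unfixed by {disclose}")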

    • I found a ring 0 exploit in a popular operating system, whereby any unprivileged user-mode process could get ring 0 access. It's been about a month since I told the developer, and they haven't said when a fix would be coming.

      It's a ring 0 exploit, but actually turning it into a root exploit is annoyingly complex due to the design of this operating system. There is nothing computer-theoretic stopping it, just complexity regarding the way page tables work. The exploit puts ring 0 in your control very ea

      • If it's merely difficult, remember this: with computer code it takes just one single genius somewhere figuring it out to enable every 2-bit script-kiddie with a mouse and an ego to use it. Once there's a working method it's just a matter of packaging it up into an easy-to-use kit.

        The trick, however, is in figuring out the method. Now, if you've got ring 0 then by definition you've got more power on the system than root has. If you can't turn this into user-mode root then I have to be skeptical of your claim

  • The No Code Publish approach seems reasonable to me: publish the flaw to everyone, including CERT and vendors, but include no code or exploit in anything publicly readable. Give the vendor your exploit code and a deadline after which you will publish the exploit. If no fix appears by the deadline, then you publish.
  • If you're really a whitehat, tell the vendor first. This will keep the exploit away from blackhats while the vendor fixes the hole. Security through obscurity works, up until the time it doesn't. So if the vendor does not fix the hole quickly, and you suspect the blackhats are about to discover it, then you need to inform the people who are vulnerable to it. If possible, do so without broadcasting it to the blackhats and script kiddies. Yes, that's rarely possible, but if it's possible it's the right thing to

  • Giving the vendor an opportunity to apply a fix is all fine and dandy, but any researcher must remember this:

    Real blackhats don't wait around for a patch before they go on the prowl for systems to exploit. And they don't announce their discoveries in public.

    Vendors are racing not only the "egotistic researcher" looking to score points by pulling their pants down, but also the crackers looking to not only pull their pants down but rape them in the ass.

    No matter who is on what side of the security d

  • Let FOX News know and tell them that it will open up millions of computers to potential identity theft scams, destroy the integrity of national security, and could cause you to be impotent.

    Then see how fast it gets fixed.
  • Publish details about the bug as soon as you find it; publish an exploit as soon as possible. If every discoverer of security flaws did this, software devs would learn very quickly to have second thoughts about releasing unchecked code. I say that as a software dev.

    Seriously, you think you're smarter than everyone else? That you're the only one who discovered a flaw? Puh-lease. The Chinese government alone is probably throwing more manpower at finding flaws in US software than there are developers in t

  • Please explain to me what "right" is in the context of your assertion and I'll then try to answer your question. Otherwise it's really impossible.

  • Contact the vendor and a reputable third party, such as CERT, simultaneously.

    Give the vendor a -very- small window (at -most- a week or so) to respond, with (1) contact information, (2) an assigned issue identifier (at least one while triaging), and (3) a specific timeframe until a follow-up response, not to exceed N days (your choice; 14, 30, 60, 90, etc.). This response does not need to be a full triage and verification, just a real response of "we assigned this to John Doe to research as issue number 54321-unver

  • Short answer? When no one else is willing to buy it from you.
  • Let me just start with a Wikipedia entry on Kerckhoffs' principle: http://en.wikipedia.org/wiki/Kerckhoffs'_principle [wikipedia.org]

    If you get the point of Kerckhoffs' principle, you understand why *if* all things are equal *then* open source code is inherently better than closed source code, because public disclosure finds the flaws in the source code faster so they can be fixed faster.

    If you want to force a *proprietary* vendor to *immediately* fix a vulnerability, you have to disclose it to the public first, as the

  • What it sounds like to me is that MS thinks that you owe them. They're doing you a favor by fixing any problems that you find (and if it takes them 6 months, well, darn it, it's because they were busy patching another Windows activation exploit and that is certainly more important). Were I to find a security flaw in Windows, I would probably release it for all the world to see... without notifying MS. They have paid employees to find and fix these problems; guess they don't need my help.
