When Is It Right To Go Public With Security Flaws?

nk497 writes "When it comes to security flaws, who should be warned first: users or software vendors? The debate has flared up again after Google researcher Tavis Ormandy published a flaw in Windows Support. As previously noted on Slashdot, Google has since promised to back researchers who give vendors at least 60 days to sort out a solution to reported flaws, while Microsoft has responded by renaming responsible disclosure 'coordinated vulnerability disclosure.' Microsoft is set to announce something related to community-based defense at Black Hat, but it's not likely to be a bug bounty, as the firm has again said it won't pay for vulnerabilities. So what other methods for managing disclosures could the security industry develop that balance vendors' need for time to develop a solution against researchers' need to work together and publish?"

Comments Filter:
  • by Kalidor ( 94097 ) on Tuesday July 27, 2010 @10:27AM (#33044634) Homepage

I already wrote up my thoughts on this and posted them elsewhere. So here's a quick copy-and-paste, plus some further commentary.
    ======================
    Procedure:
    Step 1) Notify the manufacturer of the flaw.

    Step 2) Wait an appropriate time for a response. This depends on the product: for an OS it could be as much as months, depending on how deep the flaw is; for web browsers, probably 2-3 weeks.
    Corollary 2a) If the manufacturer responds and says it's a will-not-fix, you have some decisions to make; see 3a.

    Step 3) If there is no response, announce a proof-of-concept exhibition with a very vague description. If people ask for details, keep your answers as vague as possible. The company has already been contacted, so they know the issue or can contact you from the announcement. Schedule it with enough time for the company to release a fix.
    Corollary 3a) Consider how critical the flaw is. If it's marked will-not-fix and it's very detrimental, you might have to sit on it.

    Step 4) Do the exhibition. With luck the flaw has been fixed and the last slide is about how well the manufacturer did.

    Step 5) ...Profit!!!! (While this is the obligatory joke step, check out eEye security to see how it's happened before.)
    ===============
    WRT 3a: You'd be surprised how often this is done. There are two long-standing issues with a certain piece of software that, while involving uncommon and not-often-considered attack vectors, are less than trivial to exploit and gain full access. The manufacturer has, in fact, responded with a "works as designed, will not fix." People in the information security industry have found the flaws so detrimental that they've imposed a self-embargo on openly discussing them. Without manufacturer buy-in, a fix just couldn't come in time if that particular information were released, and the effect would be significantly widespread. The only thing releasing the information would do is cause a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months. With no evidence that the exploit is being used in the wild, save for a handful of anecdotal reports, the issue has become a bi-annual prodding of the manufacturer. (A rough sketch of this decision flow follows below.)
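
    A minimal sketch, in Python, of the decision flow in the procedure above. The waiting periods, step names, and branch conditions are taken from the comment itself; the function and enum names are made up for illustration, and this is not any formal disclosure policy.

    from enum import Enum, auto

    class Action(Enum):
        WAIT = auto()                    # Step 2: give the vendor time to respond
        ANNOUNCE_VAGUE_POC = auto()      # Step 3: vague announcement of an exhibition
        SIT_ON_IT = auto()               # Corollary 3a: will-not-fix + very detrimental
        DISCLOSE_AT_EXHIBITION = auto()  # Step 4: present, ideally after a fix ships

    def next_action(wait_period_elapsed: bool, vendor_responded: bool,
                    will_not_fix: bool, very_detrimental: bool) -> Action:
        if not wait_period_elapsed:
            return Action.WAIT
        if vendor_responded and will_not_fix and very_detrimental:
            return Action.SIT_ON_IT
        if not vendor_responded:
            return Action.ANNOUNCE_VAGUE_POC
        return Action.DISCLOSE_AT_EXHIBITION

    # Example: the waiting period has passed with no vendor response.
    print(next_action(True, False, False, False))  # Action.ANNOUNCE_VAGUE_POC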

  • by RJarett ( 114128 ) on Tuesday July 27, 2010 @10:27AM (#33044640)

    I discovered a large DoS in VMware 3.5-4.0 last March. I opened a support case on it, hoping to at least find a workaround. The engineer closed the ticket after an hour or two as "unsupported OS."

    The DoS reboots ESX/ESXi out from under the VM when you power the VM on.

    This leads to serious issues, and they closed the ticket quickly, with no further investigation. This is a perfect example of where releasing details and source code can force the company to fix the issue.

  • by Anonymous Coward on Tuesday July 27, 2010 @10:32AM (#33044710)
    I agree with MS on this; a deadline isn't always feasible. They have to test on many different levels before they can release an update. Google just used Ormandy to get some positive PR for itself. Frankly, from my point of view, Google screwed this one up; Ormandy or any other researcher cannot hold companies at gunpoint to release a fix ASAP. If he had given them a 60-day disclosure window and, even after that, MS had not provided any response, then releasing the bug details would make sense. The way Ormandy and Google acted on this was cheap.
  • It's not fair (Score:3, Interesting)

    by Anonymous Coward on Tuesday July 27, 2010 @10:37AM (#33044762)

    to threaten the guys who find vulnerabilities with jail time or fees. I uncovered a major security flaw in a piece of software (it allowed an attacker to spawn a shell as root with extreme ease) and also found a way to circumvent the DRM, and what happened? I got stiffed. Instead of FIXING the problem (which is still intact to this day), the company attempted to sue for copyright infringement, among a few other "charges". Luckily, I had a great lawyer and I had documented EVERYTHING from 0 to 60. I was lucky.

    This makes me sick. One minute, corporations are talking about providing "rewards" for unearthing flaws and vulnerabilities, and the next, they are trying to sue for every penny. If it wasn't for us, their systems wouldn't last a week without some script kiddie coming along and bringing the whole thing to its knees.

  • Who is at fault? (Score:2, Interesting)

    by digitalhermit ( 113459 ) on Tuesday July 27, 2010 @10:38AM (#33044776) Homepage

    It's interesting that the talks center around the responsibility of the researcher and the vendor, but often little attention is paid to the responsibility of the user. Are they as liable? For example, if a manufacturer sells a door lock with flaws but the user keeps the windows (ha) open and someone on the street shouts, "Dude, you're using a Schock Pax H23 and it can be opened with a loud scream!" who is responsible?

    As primarily a Linux user, I used to think that the tools just didn't exist on Windows to see what the system is doing. On my Linux box, I can do a "netstat -tlnw", an "iptables -L", or a "fuser -n tcp xxx" and get lots of information. Using that, I can disable services, lock them off with TCP Wrappers or iptables, or even sandbox them very easily.

    When it was necessary to use a Windows XP system in a relatively hostile network, I was worried. Then I started poking around. Netstat is available on Windows and does the same thing. There's a process listing. There's even a grep workalike ('find', of all things). With those tools it's possible to get a good picture of what's happening on the system (a short script after this comment wraps the equivalent checks).

    The gist of this post is that though I enjoy the expanding market share of Linux, I am worried that it brings hordes of users who do not make the effort to know their systems. Should they? I think so. It's similar to carrying a firearm. It's great to be able to do so, but you must be responsible about it when you do carry.
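
    A small sketch of the "know your system" idea above: the script below shells out to the platform's own netstat (assumed to be on the PATH on both Linux and Windows) and prints only the listening TCP sockets. The flag choices mirror common usage rather than the exact commands in the comment, and the helper name is made up.

    import platform
    import subprocess

    def listening_tcp_sockets():
        """Return netstat output lines describing listening TCP sockets."""
        if platform.system() == "Windows":
            cmd = ["netstat", "-ano"]   # -a all sockets, -n numeric, -o owning PID
            state = "LISTENING"
        else:
            cmd = ["netstat", "-tln"]   # -t TCP, -l listening only, -n numeric
            state = "LISTEN"
        output = subprocess.run(cmd, capture_output=True, text=True).stdout
        return [line for line in output.splitlines() if state in line]

    if __name__ == "__main__":
        for line in listening_tcp_sockets():
            print(line)

    From there you can decide which services to disable, firewall, or sandbox, exactly as the comment describes for both platforms.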

  • Re:when.. (Score:2, Interesting)

    by Anonymous Coward on Tuesday July 27, 2010 @10:47AM (#33044894)

    When it makes Microsoft look bad, so we can trash them on Slashdot?

    How about giving the vendor time to issue a patch if said vendor has earned the goodwill of the community, or at least not earned the ill will of the community? Abuse of monopoly as found in various courts of law? Immediately go public. Vendor lock-in practices? Immediately go public. Silly patent lawsuits over ideas that are not really original? Immediately go public. Public statements about how they now take security very seriously and it is a top priority for them, followed by no substantial improvement? Immediately go public. Using their power and influence to bribe standards committees? Immediately go public. Deceptive marketing practices? Immediately go public. Building strict DRM as an integral and non-removable component of the OS? Immediately go public. This list is not exhaustive and would apply to all vendors.

    Found a vendor that does not engage in these practices? Work with them. Give them time to develop a patch. Help them fix the flaw, if you are so inclined and have the skill. Note that there is no such urge to make them look bad when they don't use all the plotting and planning and manipulation and control, and decide to make their money by producing good products that people want to buy. Crazy concept, I know. For those companies, it would be wrong to immediately go public in order to make them look bad. Microsoft is not one of those companies.

    And if you say "but what about the users who suffer exploits," I have an easy answer. You mean the users who reward abusive companies with their money and continue to fund more of the same? You mean those users? Heaven forbid that doing business with abusive companies might not be entirely free of negative repercussions for them...

  • by Anonymous Coward on Tuesday July 27, 2010 @10:54AM (#33044982)

    I especially like how this ignores the human angle and assumes that all involved parties are even able to shut up for years (well, I don't know, maybe they receive... err... gratitude to shut up).

  • by hAckz0r ( 989977 ) on Tuesday July 27, 2010 @11:00AM (#33045104)
    You need to notify CERT, and then they have the ability to apply more pressure to the manufacturer, as they simultaneously publish a very vague notice to the community that a flaw is being worked on. If CERT is involved, you have a much higher probability of not being ignored or told "will-not-fix," because it is already public knowledge that there is an exploit that needs fixing. It's in the record. The official "report cards" for the vendors then have the clock start ticking the minute you report the flaw, and the vendor cannot deny that they were notified and/or aware of the problem. In other words, they can't sweep it under the rug very easily, and you have done the best you can do without causing mass pandemonium.
  • Being allowed (Score:1, Interesting)

    by Anonymous Coward on Tuesday July 27, 2010 @11:11AM (#33045310)

    The problem with "responsible disclosure" is being allowed to do it. Reporting a bug to a vendor might get you a "fix" response (best case), might get you ignored (average case), or might get you hit with a gag-order lawsuit (worst case). Disclosing the bug after the worst case can get you arrested, and even if you manage to avoid jail, you will have spent a lot of money defending yourself. This is the reason behind the full disclosure movement: to prevent vendors from gagging researchers who discover bugs (security by obscurity), and to allow administrators to work around bugs by disabling the service(s), firewalling, sandboxing, etc. It's been said many times, but is worth repeating, that just because the bug hasn't been posted to Bugtraq doesn't mean the black hats don't know about it. It's also worth noting that bugs that have existed for years were only addressed once they were disclosed to the community.

  • by Anonymous Coward on Tuesday July 27, 2010 @11:15AM (#33045366)

    That depends on your life philosophy.

    In my opinion you owe your fellow human beings a lot more than mere courtesy, but it appears I am quickly joining a minority.

  • Vendor, then users (Score:3, Interesting)

    by Todd Knarr ( 15451 ) on Tuesday July 27, 2010 @12:05PM (#33046306) Homepage

    In most cases you warn the vendor first, providing complete details including exploit code so they have no excuse for not being able to duplicate the problem. If the vendor won't acknowledge your report within a reasonable time (say 7 days), won't commit to a timeline for having either a fix, a workaround, or a mitigation strategy for users within a reasonable time (say 14 days from acknowledgement, with the deadline being 30-90 days out depending on severity), or fails to meet the deadline, then you disclose to users, including full details, exploit code (so the problem can be independently verified without having to rely on your word that it exists), and a recommended mitigation strategy. Demanding payment for the report is never appropriate unless the vendor has publicly committed to a "bug bounty" and your demand is what they've publicly committed to. (The sketch after this comment works through those dates for an example report.)

    There'd be occasional exceptions to the above. If, for instance, the vulnerability is theoretical and you can't create actual exploit code for it, demanding the vendor fix it is inappropriate (by the same token, though, it's far less of a problem to discuss it in public if it truly can't be feasibly exploited). The deadline should be more flexible for less severe vulnerabilities. If the vendor has a track record of responding inappropriately to reports (e.g., by threatening legal action against the researcher), immediate anonymous disclosure may be a better approach.
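
    A minimal sketch, in Python, of the timeline described above. The report date and the severity window are chosen by the researcher; the 7-day, 14-day, and 30-90-day figures come from the comment itself, not from any formal standard, the function name is made up, and anchoring the 30-90 day window to the report date is an assumption.

    from datetime import date, timedelta

    def disclosure_schedule(reported: date, severity_window_days: int = 60) -> dict:
        ack_due = reported + timedelta(days=7)     # vendor should acknowledge the report
        plan_due = ack_due + timedelta(days=14)    # vendor should commit to a fix/workaround timeline
        disclose = reported + timedelta(days=severity_window_days)  # 30-90 days by severity
        return {
            "acknowledgement due": ack_due,
            "commitment to a timeline due": plan_due,
            "public disclosure": disclose,
        }

    # Example: a report filed on the date of this story, medium severity.
    for milestone, when in disclosure_schedule(date(2010, 7, 27)).items():
        print(f"{milestone}: {when.isoformat()}")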

  • by Rockoon ( 1252108 ) on Tuesday July 27, 2010 @01:26PM (#33047700)

    This may not be ideal from the vendor's point of view, but it's not the vendor who's in danger of having their systems attacked so I'm not overly concerned about their public-relations heartburn.

    If you are not concerned about the vendor's public relations, then why release at all? It seems to me that the justification for release is precisely that the researchers ARE concerned about the vendor's public relations... intent on harming it.

    It's end users who don't follow security issues who are most at risk; releasing exploits hurts them pretty much directly and immediately.

    If it's a critical bug in software that a typical grandma (and other non-geeks) uses, I claim that it is ALWAYS irresponsible to release the details of the exploit into the wild. Every single time, no matter how much time has passed waiting for a fix. This belief is formulated on the premise that the vendor's public relations don't mean shit either way; it's the end users that mean something.
