Google Security Expert Finds, Publicly Discloses Windows Kernel Bug
hypnosec writes "Security expert Tavis Ormandy has discovered a vulnerability in the Windows kernel which, when exploited, would allow an ordinary user to obtain administrative privileges on the system. Google's security pro posted the details of the vulnerability back in May through the Full Disclosure mailing list rather than reporting it to Microsoft first. He has now gone ahead and published a working exploit. This is not the first time Ormandy has opted for full disclosure without first informing the vendor of the affected software."
Who cares. (Score:2, Insightful)
Re:Who cares. (Score:5, Informative)
http://xkcd.com/1200/ [xkcd.com]
Re: (Score:3)
http://xkcd.com/1200/ [xkcd.com]
I've always found it funny that the admin account in that comic looks strangely like a scrotum.
Re:Who cares. (Score:5, Insightful)
That is correct for home users.
But for corporate users, a system-level exploit allows things like installing sniffers and keyloggers so that more passwords can be collected, including the admin/root passwords.
Which can be used against the computers in the Accounting department to transfer money from the corporate accounts to "money mules".
Re: (Score:2)
Let's not forget multi-user systems too. If you're really paranoid, you can keep one account for the important stuff and one for general day-to-day crap.
Re:Who cares. (Score:4, Informative)
But for corporate users, a system-level exploit allows things like installing sniffers and keyloggers so that more passwords can be collected, including the admin/root passwords.
Absolutely. What takes it to the next level is that most (effectively all) Windows sysadmins will log into workstations using accounts that are members of the Domain Admins group. If a standard user is able to gain administrative access on their computer and then get a sysadmin to log in to "look at a problem" (very easy), they will likely gain full control over the local domain. This includes the ability to distribute a malicious binary over the network to every computer in the domain, allowing them to collect personal credentials and information from every other person in the company.
Even without getting a Domain Admin to log into their workstation, there is potential for other security problems. For example, the user might extract the hashed passwords stored in the Active Directory credential cache, which likely contains an entry for a domain-privileged user. They could then attempt a brute-force attack on this (salted and hashed) cached password. With modern GPU farms, such brute-force attacks aren't as crazy as they used to be, especially if the password is weak.
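To make the GPU-cracking point concrete, here is a minimal sketch of the offline dictionary-attack loop. It is illustrative only: the real cached-credential format (MSCACHE2) is a PBKDF2 construction salted with the username, replaced here by a stand-in FNV-1a hash so the example stays self-contained, and the salt, target password, and wordlist are all made up.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the real cached-credential hash: MSCACHE2 is (roughly)
   PBKDF2-HMAC-SHA1 salted with the username. FNV-1a is used here only to
   keep the sketch self-contained and runnable. */
static uint64_t toy_hash(const char *salt, const char *password) {
    uint64_t h = 14695981039346656037ULL;
    for (const char *p = salt; *p; p++)     { h ^= (uint8_t)*p; h *= 1099511628211ULL; }
    for (const char *p = password; *p; p++) { h ^= (uint8_t)*p; h *= 1099511628211ULL; }
    return h;
}

int main(void) {
    const char *salt = "administrator";            /* DCC salts with the username */
    uint64_t target = toy_hash(salt, "letmein1");  /* pretend this came from the cache */
    const char *wordlist[] = { "password", "123456", "letmein1", "hunter2" };

    /* The offline attack is just this loop, fanned out across GPU cores. */
    for (size_t i = 0; i < sizeof wordlist / sizeof *wordlist; i++) {
        if (toy_hash(salt, wordlist[i]) == target) {
            printf("cracked: %s\n", wordlist[i]);
            return 0;
        }
    }
    puts("not in wordlist");
    return 1;
}
```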
Re:Who cares. (Score:4, Informative)
No, user-level programs can't generally do that. Since Vista, user privileges don't give access to other apps' data or any system files. There is no easy way to steal credentials out of a browser or read email or anything like that.
That is why viruses often try to trick the user into granting them admin level permissions via a UAC warning prompt. In this case a way has been found to take those permissions without a prompt, giving the user a false sense of security and not alerting them to potentially dangerous behaviour.
As for drivers, even a kernel-level exploit usually won't be able to install them these days. Drivers need to be signed before Windows will allow them to be installed. On Windows 7 you can install unsigned code after the user gives permission, but Windows 8 flat out refuses to install unsigned binaries as drivers.
Re: (Score:3)
That is why viruses often try to trick the user into granting them admin level permissions via a UAC warning prompt. In this case a way has been found to take those permissions without a prompt, giving the user a false sense of security and not alerting them to potentially dangerous behaviour.
You described a trojan. Viruses exploit a vulnerability to install themselves and spread.
As for drivers, even a kernel-level exploit usually won't be able to install them these days. Drivers need to be signed before Windows will allow them to be installed. On Windows 7 you can install unsigned code after the user gives permission, but Windows 8 flat out refuses to install unsigned binaries as drivers.
I haven't written shellcode for Windows since XP (I work on the defensive side of security now), but I do suspect you are not correct here. If you can get your shellcode to execute in kernel space, it can do anything. You could read a driver file from the network, copy it into kernel space and execute it, completely bypassing the signature check. You could also disable the signed-driver requirement so that a rootkit
Re: (Score:2)
You described a trojan.
I meant it as a generic term for malware; apparently I should have been more specific.
If you can get your shellcode to execute in kernel space, it can do anything.
If you get in right at the very lowest level, you can theoretically do pretty much anything. Practically, though, there are two things stopping you.
Firstly, getting in at that level is hard. The kernel is not monolithic, and the different parts have different permissions. That's why you don't see many viruses that actually do that any more - all the attack vectors that are exposed are for stuff that runs outside the core kernel level we are talking about.
Re:Who cares. (Score:4, Informative)
Firstly, getting in at that level is hard. The kernel is not monolithic, and the different parts have different permissions. That's why you don't see many viruses that actually do that any more - all the attack vectors that are exposed are for stuff that runs outside the core kernel level we are talking about.
It is typically hard, but this exploit runs at ring-0.
Even if you can get in at that level it still isn't easy to just install your driver. The driver management code won't accept unsigned code even from the inner kernel. You would have to replicate those routines yourself and patch it directly into the driver system. Bypassing the driver loading system, as you say. Hardly trivial.
I don't think you understand what it means to "install your driver". I'm not talking about adding a .dll and .inf file, I'm talking about actually executing driver/shellcode in the kernel. This exploit executes code in ring-0 which gives full access to the kernel memory, hardware, OS, filesystem, registry... everything. There is no need to bypass anything. You've already "installed the driver" and anyone with the skill to exploit a kernel vulnerability will have no trouble overwriting the crypto check function in program space with a "return success" stub. Since this attack does not require the exe to be signed, it can permanently install itself by adding a startup entry in the registry. SecureBoot won't protect against that.
What SecureBoot does protect against is some malware permanently installing itself on the system /after/ the OS has been patched.
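To make the persistence step above concrete, here is a minimal user-mode sketch of the Run-key technique; the value name and payload path are hypothetical, and the point is simply that HKCU requires no elevation to write.

```c
#include <windows.h>
#include <string.h>
#pragma comment(lib, "advapi32.lib") /* MSVC auto-link */

/* Sketch: persist an already-dropped payload via the per-user Run key.
   HKCU is writable at plain user rights; no elevation involved.
   Value name and payload path are hypothetical. */
int main(void) {
    const char *payload = "C:\\Users\\Public\\payload.exe";
    HKEY key;
    if (RegOpenKeyExA(HKEY_CURRENT_USER,
                      "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                      0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
        return 1;
    RegSetValueExA(key, "Updater", 0, REG_SZ,
                   (const BYTE *)payload, (DWORD)strlen(payload) + 1);
    RegCloseKey(key);
    return 0;
}
```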
Re:Who cares. (Score:5, Informative)
No, user-level programs can't generally do that. Since Vista, user privileges don't give access to other apps' data
I'm sorry, but you are incorrect. Programs running under the same user's security context are all on equal footing and can inspect and interact with each other. Notepad could, for example, read the entire contents of Firefox's private memory, or create a remote thread in the Firefox process to do whatever it pleased. Vista did not change this.
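This is straightforward to demonstrate with documented APIs. Below is a minimal sketch of the memory-reading half of the claim; the PID and address are hypothetical command-line inputs, and CreateRemoteThread works through the same OpenProcess handle given PROCESS_CREATE_THREAD access.

```c
#include <windows.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: read another same-user process's memory. No admin rights are
   needed when the target runs in the same user's security context.
   Usage: readmem <pid> <hex-address>   (both arguments hypothetical) */
int main(int argc, char **argv) {
    if (argc != 3) { fprintf(stderr, "usage: readmem <pid> <hex-addr>\n"); return 1; }
    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
    LPCVOID addr = (LPCVOID)(uintptr_t)strtoull(argv[2], NULL, 16);

    HANDLE proc = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!proc) { fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError()); return 1; }

    unsigned char buf[64];
    SIZE_T got = 0;
    if (ReadProcessMemory(proc, addr, buf, sizeof buf, &got))
        printf("read %lu bytes from PID %lu\n", (unsigned long)got, (unsigned long)pid);
    else
        fprintf(stderr, "ReadProcessMemory failed: %lu\n", GetLastError());

    CloseHandle(proc);
    return 0;
}
```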
There is no easy way to steal credentials out of a browser or read email or anything like that.
This is also not true. Firefox clearly stores passwords using reversible encryption (how else could it send the plaintext passwords to websites?). Both the encrypted password and the decryption key are available to any program running under the user's context.
"Reading email" is a little vague, but if absolutely nothing else, a program could capture the text being displayed in the email application using any number of Win32 API / accessibility calls.
That is why viruses often try to trick the user into granting them admin level permissions via a UAC warning prompt
UAC does nothing to prevent a program from gaining administrative access (elevating). This has been reliably demonstrated many times by different people, and even Microsoft has said that UAC is not a security boundary. It was created (essentially) for one thing: to force software vendors to start writing programs that did not assume or require the user to have administrator rights. It had a positive side effect of making Microsoft look more focused on security.
As for drivers, even a kernel-level exploit usually won't be able to install them these days. Drivers need to be signed before Windows will allow them to be installed.
I'm sorry, but this is also incorrect. Keep in mind there are multiple meanings of a "driver", but once you are executing code inside kernelspace, all bets are off. As Raymond Chen likes to say, "it rather involved being on the other side of this airtight hatchway" [msdn.com].
Windows 8 flat out refuses to install unsigned binaries as drivers
That's unfortunate for independent/small software development shops and open-source software projects. I remember when I had control over what ran on my computer; those were good days. If, however, malicious code has found its way into the kernel your machine is still fully compromised.
Re: (Score:3)
The comic (as previously posted) was amusing and also wrong; a user-level exploit might be able to get you those things, if credentials aren't encrypted. A browser exploit can probably scrape your pages or similar, which is of course bad. However, a system-level exploit can do all this and more:
Re: (Score:2)
I'm not sure, but this may already be possible (for the current user) now, without root.
Even if it's not in general, you could still do something like install a browser extension for the user that does it while they're in the browser. (At least for Firefox; not sure if Chrome extensions are powerful enough to do that.)
Re: (Score:2)
Not to mention with access to a privileged account the malware becomes substantially harder to remove.
Re: (Score:3)
I think you're making some assumptions here about user capabilities and how encryption is used that are incorrect.
if credentials aren't encrypted
User credentials are never encrypted in such a way that the current user cannot access them. What would be the point? Secure storage exists to protect users from other users, and to some extent from nosy administrators (though you can't protect *anything* from a determined and nosy administrator). Bob needs to be able to read Bob's plaintext password or Bob cannot make use of it.
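Windows makes this model explicit in DPAPI, its user-scoped secret store (Chrome, for one, used it for saved passwords at the time; Firefox rolls its own NSS store, but the principle is identical). A minimal round-trip sketch:

```c
#include <windows.h>
#include <wincrypt.h>
#include <stdio.h>
#include <string.h>
#pragma comment(lib, "crypt32.lib") /* MSVC auto-link */

/* Sketch: DPAPI ties secrets to the *user*, not to the program that stored
   them. Any process in Bob's session can decrypt what Bob's apps protected. */
int main(void) {
    const char *secret = "hunter2";
    DATA_BLOB in = { (DWORD)strlen(secret) + 1, (BYTE *)secret };
    DATA_BLOB enc = { 0, NULL }, dec = { 0, NULL };

    if (!CryptProtectData(&in, L"demo", NULL, NULL, NULL, 0, &enc))
        return 1;  /* encrypted under the current user's master key */
    if (!CryptUnprotectData(&enc, NULL, NULL, NULL, NULL, 0, &dec))
        return 1;  /* any same-user process can undo it, no password needed */

    printf("recovered: %s\n", (char *)dec.pbData);
    LocalFree(enc.pbData);
    LocalFree(dec.pbData);
    return 0;
}
```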
A browser exploit can probably scrape your pages or similar
No exploit nee
Re: (Score:3)
So who cares? Me, and everyone even remotely versed in security.
Re: (Score:2)
Generally, user-land viruses will be picked up immediately by antivirus, while a kernel-level exploit can install undetectable keylogger drivers.
huge conflict of interest (Score:5, Insightful)
Re:huge conflict of interest (Score:5, Insightful)
You don't know his motivations, you're making an assumption.
Re:huge conflict of interest (Score:5, Insightful)
I'm curious if he also publicly discloses any Android/Chrome-related vulnerabilities he finds without first talking to his employer.
Re:huge conflict of interest (Score:5, Insightful)
Re:huge conflict of interest (Score:4, Interesting)
Why does it matter? Full disclosure is the only responsible choice. That doesn't change no matter who your employer is.
Re: (Score:3)
I also don't see him posting that he is doing this as a Google employee or really, that he is related to them in any way. It's an interesting fact, but not necessarily relevant.
Re: (Score:2)
IMHO, full disclosure after a reasonable period of private disclosure is the responsible choice. Such a policy should be applied uniformly to all vendors regardless of relationship; although I suppose you could argue that if there's a partnership then it's quasi-internal. You might even be bound to nondisclosure by the partnership agreement.
Anyway, I digress. By keeping it private for a fixed time and then disclosing, you give the subject time to fix it before an exploit gets produced and you give them a
Re: (Score:2)
IMHO, full disclosure after a reasonable period of private disclosure is the responsible choice.
Why give an attacker a window of time in which he can use his exploit freely? Inform the public immediately, and they can stop using the software, or decide if it's worth the risk.
you give the subject time to fix it before an exploit gets produced
Why do you assume an exploit does not already exist? If you can find it, an attacker can find it too. The prudent assumption is that any bug that can be exploited is b
Re: (Score:3, Insightful)
Absolutely. Immediate disclosure to the public means that they can immediately take measures to reduce their risk. If you tell me that there's a bug in a package I use, I can stop using the package. If you tell the vendor that there's a bug in a package I use, I can't do anything to protect myself.
Re: (Score:3)
If the package is something that can be trivially changed and the flaw is obvious enough that it's likely to be rediscovered quickly, I'd perhaps agree with you. But:
1) Risk of exploitation increases with the number of people aware of the flaw. Immediate public disclosure has ballooned this figure from a handful (most likely just 1) to hundreds of thousands.
2) Most people are not able to trivially switch operating systems. Changing from one OS to another without disrupting progress of essential work that
Re:huge conflict of interest (Score:5, Insightful)
Absolutely. Immediate disclosure to the public means that they can immediately take measures to reduce their risk. If you tell me that there's a bug in a package I use, I can stop using the package. If you tell the vendor that there's a bug in a package I use, I can't do anything to protect myself.
Absolutely not. Your fairy-world imagined utopia is unrealistic.
To use the inevitable car analogy, if a researcher discovers that all automobiles manufactured by GM, Ford, Chrysler, and Honda can be unlocked, started, and driven with the use of a paperclip and that researcher adopts your policy, what happens? Oh, no worries... we North Americans can just immediately take measures to reduce our risk. Like emptying our fuel tanks and buying a bicycle. Or taking our car to a wrecker and buying a nice new Tata import.
"I can stop using the package" is a mindless statement when that "package" is the best-selling OS on the planet. Just like replacing our vehicles so they don't vanish from our driveways, changing OS isn't something that can practically happen overnight. No, thanks to Mr. Full Disclosure we KNOW we're going to get digitally raped by an onslaught of blended-threat spyware-laden remote exploits that finally have a great way to install rootkits even on systems where users don't have admin rights.
Maybe immediate and full disclosure is the right policy for open-source hobbyist software like Linux. I mean, hey, just go compile your own kernel, right?
Clue: if he had waited until there WAS an exploit in the wild created by a black hat, MS might have patched in time. Because he didn't, MS definitely hasn't. Now he is the black hat.
Re: (Score:2)
Motives matter
If he had bad motives he wouldn't have disclosed it in the first place.
If Microsoft is too dumb to monitor popular outside forums where faults in their products are discussed, then they deserve a black eye; it doesn't matter who gives it to them.
Re: (Score:2)
MS does out-of-cycle updates for critical issues like this... Please be informed before shooting off your mouth...
Target Microsoft (Score:5, Interesting)
If it hadn't been Microsoft, Google might have been a bit more responsible about this, but since it makes their competitor look bad, it's time to forget about "do no evil".
Re:Target Microsoft (Score:5, Funny)
Re: Target Microsoft (Score:2)
Re: (Score:2)
You cannot be more responsible than full disclosure. The responsible thing to do when you find a bug is to inform those who are at risk from the bug. Any delay leaves those people at risk unnecessarily, and is irresponsible.
Re: (Score:2)
No, the responsible thing to do is to inform those who are at risk because of the bug. They are the party that needs to know first, because they will suffer the harm.
Re: (Score:3)
...forget about "do no evil".
Google is still better than AT&T, whose motto is "Now I am become Death, the destroyer of worlds." Executive bonus recovery fee tagged to your wireless bill: $0.96
only way to get it fixed (Score:3, Insightful)
Re: (Score:3)
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
I'm betting this is the only way to get MS to fix the problem in a timely fashion. If it's in the wild, they HAVE to fix it, and fast. Guys had to do this with Apple, as well, because they never fixed any bugs unless absolutely forced to.
So why not report it, wait two weeks, and then disclose it publicly?
This entire conversation assumes reporting it to the vendor and disclosing it publicly are mutually exclusive. Report to the vendor, and give them a deadline as to when you'll disclose it. If they don't patch by the deadline, it gets disclosed. Thus they have to patch it quickly.
Full disclosure and open/closed source (Score:5, Interesting)
The irony of the difference between closed source and open source is that while Ormandy has posted an exploit to this Windows bug, in the open-source world he potentially could have posted a fix too, considering he's the one who seems to understand the bug itself the best...
Just Desserts (Score:2, Insightful)
Been a long time coming, but we finally don't have Microsoft pushing us around.
Some of us with long memories see absolutely no issue with disclosing MS bugs on public forums.
aiding and abetting & Computer Fraud and Abuse Act (Score:5, Interesting)
Can Google and/or this guy be prosecuted for this? Releasing the working demo is basically aiding and abetting a criminal.
Re: (Score:2)
subject should be "1986 Computer Fraud and Abuse Act" - didn't proofread the subject - D'oh
Re: (Score:3)
If MS had done this to Google or Apple... (Score:3)
I guarantee every talking head on TV would be calling for the DoJ to look into it...
This is all about PR and image. Google and Apple are sexy; MS is big and boring, but arguably more critical to daily life (you have no idea how many devices and backend systems you use every day run on Windows).
Carriage return (Score:2)
Re: (Score:2)
works :)
for
me
Re: (Score:2)
Win 32bit only? Meh (Score:4, Interesting)
The code is clearly targeted at x86 only, not x64 (__declspec(naked)).
I don't have an x86 PC.
On Win7 x64 the code plainly crashes.
Unimpressed.
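For anyone wondering why __declspec(naked) marks the code as x86-only: MSVC's x64 compiler rejects both __declspec(naked) and inline __asm, so a payload built this way can't simply be recompiled for 64-bit. A minimal illustration (the asm body is a placeholder, not the exploit stub):

```c
/* MSVC x86 only: the x64 compiler rejects both __declspec(naked) and inline
   __asm, so a payload built around a stub like this can't simply be
   recompiled for 64-bit Windows. */
__declspec(naked) int stub(void) {
    __asm {
        mov eax, 1   ; naked = no compiler prologue/epilogue; we supply ret
        ret
    }
}

int main(void) {
    return stub() == 1 ? 0 : 1;
}
```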
I dislike M$ as much as the next guy.... (Score:4, Insightful)
...but not disclosing it to the vendor first and giving them a chance to release a fix is both unprofessional and irresponsible. The fact that this is coming from a Google employee makes it inexcusable, and reflects poorly on Google. If I were his manager he would certainly receive a reprimand.
Re:I dislike M$ as much as the next guy.... (Score:4)
Re: (Score:2)
Re: (Score:2, Insightful)
It's news that a Google employee is being a dick, since they do have a "do no evil" policy. I hate M$ as much as the next /. reader, but we do have to support Windows. We don't put our non-technical friends and family on Linux (still waiting for the year of the Linux desktop). Cut us sysadmins some slack already. @$$.
Re: (Score:2, Redundant)
He reported the bug back in May.
If I recall, the proper thing to do when there is neither a timeline nor a patch in a reasonable timeframe is to post the PoC to force the vendor to respond.
Re: (Score:2)
Why do you care what OS your family uses? Why do you need to "support" it?
Every computer is their owner's responsibility.
Re:Seriously, (Score:5, Insightful)
Some of us have empathy and like to live in a working society.
Not all of us can be narcissistic sociopaths.
Re:Seriously, (Score:5, Interesting)
It's news that a Google employee is being a dick, since they do have a "do no evil" policy.
No, they don't. They have a "do no evil" slogan. They have been just as actively evil as everyone else for years.
Re: (Score:2)
My Mom is using Linux. It's what she started with when she asked me to show her how to use that web/browser/www thing.
It's just not that hard. The parts that are 'hard' are the things a typical Windows user needs help with anyway.
Re: Seriously, (Score:3)
Re: (Score:2)
Ditto what AC said - speak for yourself - my missus is (finally!) using an iPad, which is about all she ever really needs for what she does online.
The only Windows machinery left in my house are all on VMs that I control personally (they're usually off).
Re:But not to give them a chance to correct it fir (Score:5, Insightful)
Yeah, OK. Troll better, please.
It's been 4 weeks. Clearly we should go after those who disclose vulnerabilities instead of those responsible for fixing them. /sarcasm
Re:But not to give them a chance to correct it fir (Score:5, Insightful)
That's bad. That's destructive and dangerous
No more dangerous than publishing the blueprints for a gun or the instructions to 3d print one. Someone could use that information to perpetrate a crime. Why do you throw freedom of speech out the window when it comes to software bugs?
The general tolerance of latent vulnerabilities and the expectation that whitehats should give companies time to patch them at the least expense is what's truly destructive and dangerous.
Re: (Score:3, Insightful)
Why do you throw freedom of speech out the window when it comes to software bugs?
Get on your soapbox much? Nobody is infringing on freedom of speech, since there is no law against this. There are issues of being reasonable and responsible, though, that have nothing to do with the law. Nor is anyone here suggesting that he shouldn't publish, just that he should inform Microsoft directly, instead of assuming that everyone on the planet reads that mailing list, and give them some reasonable time to fix it before publishing.
Re:But not to give them a chance to correct it fir (Score:5, Insightful)
Security through obscurity is no security at all.
A security hole is a security hole. A hole that is not widely known about is not in any credible sense "safer" than one with a demonstration exploit posted on mailing lists.
I would rather that news of exploitable security holes be widely published, so that mitigating secondary security blocks can help cover the hole, and reduce the attack surface as soon as the exploit is discovered. While you can't recompile the kernel on day-0, you CAN filter network traffic, isolate unprotected systems, and take other affirmative actions to safeguard company and private data from unauthorized persons, and prevent the silent execution of malicious software early.
The problem one runs into there is that most software out there today is not so much "secure" as analogous to a block of aged Swiss cheese: hardened in some places, and totally see-through in others. Managing many disparate suites of software packages means dealing with, and mitigating the risks of, a great, great many peepholes.
But again, a security hole is a security hole, and security through obscurity is no security at all. Wishful thinking that "if nobody says anything, then it's perfectly safe to let slide for now!" puts systems, data, and people at risk for the sake of convenience.
Look at the fallout of the near miss between that German drone aircraft and a small passenger plane that just came to light. Secrecy about the problem does not make the problem go away, and hiding the risks (for any reason) from the people who are at risk is beyond unconscionable.
Re: (Score:3)
Security through obscurity is no security at all.
That's not really relevant because the choice is between disclosing to the software makers and disclosing to the public, not leaving the hole in the product. Given the hole already exists, is it more secure to let the public (consisting of both good and bad actors) know or not?
The answer to that can change depending on the nature of the vulnerability (can the public protect themselves by changing a setting for example) and the way the software company can be expected to respond (will they sit on their hands
Re: (Score:2, Interesting)
PS3 encryption == security through obscurity. (That salt doesn't need to ACTUALLY be random -- each and every time -- does it? Cause that would be a pain to implement!)
PROPER key pair generation == impossible to realistically derive the secret key from the public key and the payload, due to the addition of true random salt. (Where "realistically" means within the attacker's lifetime.) There simply is not enough information to derive all the factors to refactor the secret key. This is by design, and is considerably di
Re:But not to give them a chance to correct it fir (Score:5, Insightful)
Except that he's right. The "security through obscurity is no security at all" mantra is the first thing that people who know nothing about security fall back on again and again. Asymmetric keys are merely *better* obscurity than most other means. You're still just counting on not being a sufficiently interesting target whose keys will be put to the test by somebody with access to a proper compute cluster (or maybe a quantum computer), or on them not bypassing that and exploiting you some other way.
You should know this already. Speaking generally, all security mechanisms can be broken, so you need to ensure the cost of exploiting is greater than the thing you get access to after exploiting.
Re:But not to give them a chance to correct it fir (Score:4, Interesting)
I never said I believed in "unbeatable protection". That's a strawman. I basically said that "out of sight, out of mind!" is not a proper risk-mitigation practice. That is most certainly NOT the same thing as professing a belief in perfect security.
Proper keypair generation attempts to make it more costly for the attacker to profit from the action of hacking, and actually demonstrates this fact for them, should they try anyway.
Shitty obscurity-based half-assery fakes being strong to deter attempts, but fails easily on inspection. Something like using a password to XOR a file and calling it "encrypted", or doing what Sony did and reusing the same salt over and over again, completely defeating the purpose of the salt in the process.
Relying on "don't tell anybody! We'l get to it eventually, and if you don't tell, nobody will find out!" Is bullshit, which is what typically happens with so called "responsible disclosure." I have heard of serious exploits hanging around for YEARS after being "responsibly disclosed."
I understand that you can't fix the hole instantly, and that the patch needs to be tested to make sure it doesn't poke another hole elsewhere. However, informing the people at the most risk (customers) that they need to take some mitigating actions to reduce the threat, and to watch for signs of exploit until the patch is ready, is the responsible thing for the software vendor to do. NOT hide the exploit and try to forget about it, while less scrupulous crackers silently use it in combination with other exploits to commit fraud, steal privileged company information, steal users' personal data, build botnets, and worse, while pretending that "it won't happen, because nobody squealed!"
Re:But not to give them a chance to correct it fir (Score:5, Informative)
Asymmetric keys are merely *better* obscurity than most other means.
You are using a false information model.
"Obscurity" in the context of IT security does not refer to private information of any kind.
"Security through Obscurity" refers to the false assumption that my ROT13 encryption algorithm is any better if I don't tell you that I'm using ROT13. The assumption being that it'll take you additional time to figure out what algorithm I'm using, making it more difficult to crack my code.
That assumption is false, because with any actual security measure, the amount of work required to figure out the algorithm is insignificant compared to the amount of work required to break it.
Asymmetric keys are not "better" obscurity. You can't break a good encryption algorithm with even a huge cluster. That's the whole point - that I don't need obscurity. I can tell you what algorithm I used, what size my key is, absolutely everything except the key itself - and it'd still take you a century with all the current computing power on the planet to break it.
Obscurity is usually a weak algorithm that can be broken in minutes once you've figured out the one "trick" they keep secret.
If you still don't see the difference, re-read Applied Cryptography.
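A concrete version of the ROT13 point, as a sketch:

```c
#include <stdio.h>

/* ROT13: the only "secret" is the algorithm itself. Once you know it,
   decryption is instant -- and applying ROT13 twice is the identity. */
static void rot13(char *s) {
    for (; *s; s++) {
        if (*s >= 'a' && *s <= 'z') *s = 'a' + (*s - 'a' + 13) % 26;
        else if (*s >= 'A' && *s <= 'Z') *s = 'A' + (*s - 'A' + 13) % 26;
    }
}

int main(void) {
    char msg[] = "Uryyb, jbeyq!"; /* the "encrypted" text */
    rot13(msg);
    printf("%s\n", msg); /* prints: Hello, world! */
    return 0;
}
```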
Re: But not to give them a chance to correct it fi (Score:5, Interesting)
The non sequitur there is in asserting that because the whitehat hasn't disclosed his findings, others haven't also independently found the hole and been more mum about it.
Which is more profitable for a person who makes their living by stealing company secrets, laundering money through wire fraud, or selling stolen identity information?
Using an exploit that has been publicly disclosed, and thus everyone is super paranoid about it and actively trying to plug it -- OR -- a nice little treasure trove of privately discovered exploits that aren't public knowledge, which you can quietly switch to once the hole you are currently using gets discovered?
"Saving face" for the company fascilitates the real blackhats by keeping admins and users ignorant of the threat.
All public disclosure does is make real blackhat attackers silently move to their next vector, and cause a spike in script kid activities. (And of course, make the software vendor look bad.)
Re: But not to give them a chance to correct it fi (Score:4, Insightful)
Re:But not to give them a chance to correct it fir (Score:5, Insightful)
I never said I believed in "unbeatable protection". That's a strawman. I basically said that "out of sight, out of mind!" is not a proper risk-mitigation practice. That is most certainly NOT the same thing as professing a belief in perfect security.
"out of sight, out of mind!" is a bigger strawman than anything I said. Responsible disclosure, so MS has at least a chance to respond -- that's all people are calling for. And the point wasn't about unbeatable protection -- the point was to dispel of this silly one-liner that only serves to hinder meaningful discussion of security issues.
Shitty obscurity-based half-assery fakes being strong to deter attempts, but fails easily on inspection. Something like using a password to XOR a file and calling it "encrypted", or doing what Sony did and reusing the same salt over and over again, completely defeating the purpose of the salt in the process.
*This* is a strawman. Don't point out stupid shit that other people did, and claim that it makes your point valid. Remember again the general recommendation -- the cost of breaking your scheme must be greater than the value of what you're protecting. If you're using the scheme above, you should be using it to protect minesweeper scores at best.
Relying on "don't tell anybody! We'l get to it eventually, and if you don't tell, nobody will find out!" Is bullshit, which is what typically happens with so called "responsible disclosure." I have heard of serious exploits hanging around for YEARS after being "responsibly disclosed."
This is a strawman again. Simply, disclose responsibly. The patch cycle is well documented. If one cycle goes by without a patch, you can remind them. If the second one goes by and there's no patch, disclose. How hard is that? Answer -- not hard at all. When you're not out to fuck people over, and don't have some agenda you're trying to further, it's really not that hard to be reasonable.
I understand that you can't fix the hole instantly, and that the patch needs to be tested to make sure it doesn't poke another hole elsewhere.
It's not just that. The patch needs to be tested to ensure that it actually works! That was an issue the last time Ormandy did this -- he provided a binary patch that did not fix the issue! In addition to that, it has to not cause other bugs (not necessarily exploits -- but bugs -- because those too can cause work stoppage etc.). When the hole is being exploited already, all this goes out the window -- exchange information openly and get that shit fixed ASAP. When it's not yet being exploited actively, you can spare users a lot of headache, and a lot of lost productivity by simply following responsible disclosure guidelines that are well documented and well-known to Ormandy himself.
However, informing the people at the most risk (customers) that they need to take some mitigating actions to reduce the threat, and to watch for signs of exploit until the patch is ready, is the responsible thing for the software vendor to do.
Dude, you can drop the veneer about caring about MS's customers. Ormandy can drop that too. There's a clear course of action by which Ormandy and MS could have done right by them together. Ormandy made sure that's no longer an option, and they are in greater danger now than was strictly necessary. And you are defending his actions out of glee that MS is looking like an idiot.
NOT hide the exploit and try to forget about it, while less scrupulous crackers silently use it in combination with other exploits to commit fraud, steal privileged company information, steal users' personal data, build botnets, and worse, while pretending that "it won't happen, because nobody squealed!"
Nobody is asking to HIDE anything! You complained about a strawman earlier??? Responsible disclosure does not imply infinite time. Ormandy works for Google right? He can
Re: (Score:2, Insightful)
Umm. Many do.
Do you know if the 3 to 5 guys who own that codebase in MS read that site?
Microsoft never gets off its ass and fixes stuff before it goes public.
Quite simply untrue.
So. Fuck it. Publish. Make em work.
So, no -- responsible disclosure first. Extreme measures after that. Don't be an asshole. Not being an asshole is generally not hard.
Re:But not to give them a chance to correct it fir (Score:5, Insightful)
Microsoft never gets off its ass and fixes stuff before it goes public.
Really? Every bug fix they ever made was from public disclosure? News to me, since I personally have seen them fix things disclosed only to them.
What you actually mean is that you, a home user with at best a handful of machines, think it's better to rush a patch out that could break shit than to do a proper fix-and-test cycle.
What this lets the rest of us know is that you have no fucking clue what it's like to deal with large-scale software maintenance. Any admin worth his salt knows that mitigating the problem away and waiting for a proper, thoroughly tested patch is about 10 billion times better than some random hack made by some guy at 3am this morning.
There are few exploits that cannot be mitigated in some way. This particular issue is easy to mitigate at most companies by simply firing any jackass caught exploiting it. It requires local access (RDP counts), so it's not like we're talking about an internet-facing, anyone-can-take-you-down kind of bug.
On top of that, any admin worth his salt is going to do proper testing, which means that even if they got a patch 10 seconds after the exploit was found, it's STILL GOING TO BE A WHILE BEFORE THE ADMIN DEPLOYS THE PATCH ... unless he is some ignorant, clueless douche like you who doesn't have any idea what he's doing.
All your post does is show your complete ignorance of the bigger picture.
Re: (Score:3, Insightful)
That's bad. That's destructive and dangerous
No more dangerous than publishing the blueprints for a gun or the instructions to 3d print one.
This is closer to posting a list of homes where firearms are registered: exposing the homeowners without guns, without letting them know that they're about to be greenlighted for burglary.
The general tolerance of latent vulnerabilities and the expectation that whitehats should give companies time to patch them at the least expense is what's truly destructive and dangerous.
Now everyone has to scramble as script kiddies within their organizations implement this (internal attackers are still the most dangerous). A balance must be struck. He's not looking to keep people secure; he's looking to make MS Windows operating systems a battlefield.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Either way, bad analogy... sorta.
Classified military material is created and protected to prevent both discovery of vulnerabilities and discovery of new advances or knowledge of technology, intelligence, and so forth. Revelation of such can have a very high probability of endangering lives and civilian security.
This Windows bug is, well, the result of deficiency, nothing more. The worst that can happen? Well, if someone were both a flaming dumbass and exposed a SCADA box unprotected to the Inter
Re: (Score:2)
Re: (Score:2, Informative)
History tells us that telling Microsoft privately puts it on their radar for three to five years out. Disclosing publicly actually gets a patch to users.
Re: (Score:2, Insightful)
Re:But not to give them a chance to correct it fir (Score:4, Insightful)
"Doesn't matter what history shows."
That's the refrain of the conquered and the unscientific.
Re: (Score:2)
From the paritynews article:
He also noted that another working exploit may already be circulating in the wild.
Whether this means before he posted or not is unclear.
Google is in competition with Microsoft ... (Score:5, Insightful)
The only reason not to do so would be if you knew someone was already taking advantage of the vulnerability in the wild.
Google is in competition with Microsoft. Google would prefer people to use chromebooks and android so raising anxiety about Microsoft based products furthers their corporate goals. It could easily be as simple as that.
Re: (Score:2)
Any real admin will simply mitigate the issue away until a patch can be tested and installed. Real sysadmins don't have retarded knee-jerk reactions to exploits.
Devil's Advocate: You can't mitigate what you don't know about. See also the (semi-)infamous WPF bug.
Re: (Score:2)
It's is short for it is or it has. This is a 100% rule. It cannot be used for anything else. If you cannot expand it's to it is or it has, then it is wrong.
Its
Its is like his and her.
Read more at http://www.grammar-monster.com/easily_confused/its_its.htm [grammar-monster.com]
Re: (Score:2)
History tells us that telling Microsoft privately puts it on their radar for three to five years out. Disclosing publicly actually gets a patch to users.
This guy gave them 4 weeks before publishing actual exploit code (not just vulnerability info), and did not report it to Microsoft before publishing the vulnerability. Producing and, most importantly, QA'ing a patch for the most-used OS environment in the world is not trivial and takes time. Even if you want to stick it to MS, this is a big middle finger from this Google guy to users all over the world.
Re: (Score:2)
Fuck it. Have them patch to Linux Mint 15.
Re: (Score:3)
Re: (Score:2)
The same thing happened last time if I remember correctly. It's a tricky situation ... his employer shouldn't be able to control his hobbies, but he shouldn't be making them look like dicks either. Does he advertise himself as a Google employee, or is this the usual anti-Google FUD campaigners throwing this information in where it's not warranted?
Re: (Score:2)
Comment removed (Score:5, Insightful)
Re:Seriously, (Score:5, Funny)
News? TFS is flamebait.
This Fucking Site?
Re:Seriously, (Score:5, Informative)
News? TFS is flamebait.
This Fucking Site?
The Friendly Summary.
Re: (Score:2)
Re: (Score:2)
This just in: Windows is even hackable by really really stupid morons!
Re: (Score:3)
A username/password is different.
This is a flaw in a system. It's the difference between "Joe Bloggs's car has code XXXX" and "All Fords let you in if you do XXXX". The personalisation of the information brings it under different laws - and preventing people from discussing flaws will also stop people from, for example, discussing faults in systems (e.g. cars that have faulty brakes etc.), which brings about whole new levels of capability for companies to forgo their responsibilities and claim they didn't k