Questioning Google's Disclosure Timeline Motivations
An anonymous reader writes "0-day vulnerability exploitation is often a real and considerable threat to the Internet — particularly when very popular consumer-level software is the target. Google's stance of a 60-day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality. As a web services company it is much easier for Google to develop and roll out fixes promptly — but for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic. Statements like these from Google clearly serve its business objectives: it is predominantly a web services company with many of the world's best software engineers and researchers on staff. One could argue that Google's applications and software should already be impervious to vulnerabilities (i.e. it should have discovered them itself through internal QA processes) rather than relying upon external researchers and bug hunters stumbling over them."
You suck (Score:5, Insightful)
If your company is producing security critical software and doesn't have a plan for quickly dealing with bugs you suck.
Re:You suck (Score:5, Insightful)
Also, even if they can't patch it quickly the point is to inform users so they can take appropriate precautions.
Then most of the world sucks (Score:1)
"Security critical software" is anything on the Internet.
Let's say I write a clever new web game, called "Cyber Tic Tac Toe." It's based on the classic idea of tic tac toe, but with the novel twist that it's on the Internet! As fun as my game is, and despite the fact that you are totally addicted to my awesome game and play it about 20 hours every week, questing to get your 3 Xs in a row, some people would say this is completely mundane and unimportant. (Killjoy bastards!) They would say my game, no matte
-1 Flamebait (Score:5, Insightful)
For article. Sadly you can't moderate an article.
Re: (Score:1)
What article? All I see is some idiot's comment that somehow got posted as if it were an article. Timothy must be drunk again.
Exactly... (Score:5, Informative)
Re: -1 Flamebait (Score:2, Insightful)
Agreed. The submitter spouted the industry line, but if the industry wanted different standards, it should try setting *some* standards instead of pretending that it's OK to ignore vulnerabilities.
Good catch (Score:2, Interesting)
Google has a decent point, but they're also trying to nudge the industry in a direction where more businesses will leave the driving to them [wikipedia.org], or to cloud competitors like Amazon.
Just Google? (Score:5, Insightful)
Why single out Google? Shouldn't traditional software vendors have also run programs through QA?
Re:Just Google? Not at all (Score:2)
Having worked in software development for 30 years, let me tell you a dark little secret.
QA programs are never enough.
It matters not how good your QA section is; things will get through. Why? Because the QA people have already looked at the code, because QA managers are fond of believing that filling out paperwork is the same thing as careful inspection, and because the QA people work for a company whose vested interest is in getting that chunk of software out the door.
In order to do a thorough job, one needs Diff
Re: (Score:3)
Oh, I'm completely with you: developers should run QA, but there will always be bugs that slip through. I was responding to the OP's insinuation that Google should be catching all their security problems in internal QA but that Microsoft should get a pass for some reason.
Re: (Score:2)
Having worked in software development for 30 years, let me tell you a dark little secret.
QA programs are never enough.
Dropping previous moderation to comment here:
Having worked in SQA for over 17 years, I can tell you PM NEVER listens to SQA unless the app would be DOA on more than 20% of the target customers' machines. On occasion an XXO would override PM and tell them no... you cannot RTM until you fix *THIS* issue... but security, data loss, and customer satisfaction were never significant factors in PM or XXO decisions.
And these issues are not deep... this is low hanging fruit, more often than not. PMs and DEVs hate SQA
Re:Just Google? (Score:5, Insightful)
Most other companies don't have well-funded FUD campaigns directed against them.
Re: (Score:3)
It's Google who is pushing a 7-day deadline for vulnerability disclosure. These companies are in effect saying, "That's not fair, we can't fix our software that quickly!" It would be a real PR foot-bullet if people understood what they're saying.
Re: (Score:2)
Or they have QA, but lack the time and resources.
Deceptive summary (Score:1)
Re: (Score:2)
"People that over promise and under deliver typically aren't around too long."
You are kidding, aren't you?
Perhaps Google does tell companies... (Score:5, Informative)
It's not until I publicly announce them on platforms like Twitter that you get their full attention.
Re: (Score:2, Insightful)
If you say "I'm telling you privately but going public in 7 days," it sounds like a threat, and the most likely response is a lawsuit or the police hammering on your door. Even if you don't say anything about going public, there is a fair chance of that happening. Your actions are pretty risky.
Unfortunately, unless the company has a history of dealing with security bug reports in a timely and proper manner, the most responsible thing to do is just post about it anonymously to one of the full disclosure mailing lists.
Critical vulnerabilities under active exploitation (Score:5, Informative)
Frankly, even 7 days is too long for active attacks. Publishing the vulnerability lets users apply a workaround, or shut down the service or app entirely, until a fix is released.
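For example, if the vulnerable component were a network service listening on a known port, disclosure lets an admin cut off exposure right away while waiting for the vendor. A minimal sketch of such a stopgap (the port and service are hypothetical, not any specific bug's details):

    # Block inbound traffic to the (hypothetical) vulnerable service
    # on TCP port 8080 until a patched build is deployed
    iptables -A INPUT -p tcp --dport 8080 -j DROP
    # Once patched, remove the rule
    iptables -D INPUT -p tcp --dport 8080 -j DROP

Crude, but it turns "wait helplessly for the patch" into "decide your own exposure."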
Re: (Score:2)
You misread.
What Google is saying is, "You have seven days before I tell people how your customers are being screwed."
Once you have an active malware attack exploiting a vulnerability, what is the harm of full disclosure? The only thing I can see this buying is time for the PR flacks to get their story together.
You're right, 7 days isn't good enough (Score:3, Insightful)
A vulnerability that is already being exploited needs to be fixed right away. It's called 0-day for a reason, not 7-day. It should be disclosed immediately to force the vendor to do something about it.
Re: (Score:2)
What?! (Score:5, Insightful)
a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality. As a web services company it is much easier for Google to develop and roll out fixes promptly — but for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic
Hello there, mr/ms/mrs anonymous COWARD, what are you saying there? It COSTS TOO MUCH to promptly (as in, within a week) fix ACTIVELY EXPLOITED vulnerabilities? When you get the actual problem handed to you on a silver platter? What company do you work for?
Re: (Score:2)
It also seems to rely on the (severely dubious) assumption that Google's disclosure will have much of an effect on anything, aside from how slow the vendor looks, for 'actively exploited unpatched vulnerabilities'.
This isn't a 'Hey, we found a bug that nobody else knows about yet, we are going to release it just for giggles!' situation. Clock. Is. Already. Ticking. Google's 'deadline' may be the one at which you start looking increasingly incompetent in public; but the deadline at which everyone running your
Slower? He's Saying Slower?!? (Score:4, Insightful)
a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality.
I read that and I was thinking, "Well, yeah, sure - I shoot for one hour and can't recall the last time it took more than a day to get a critical bug patch out, but that's not really reasonable for everyone. The team I work on is pretty focused on keeping the tracks polished so we can get high priority things through. I think 7 days is OK. It could be better, but it's OK. And Google isn't even saying it will take 7 days, they're saying 7 days is the max. But, whatever, I guess -- ultimately agitating for faster patches is something I support."
for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic.
What?!? You mean it's not realistic to get the patch available within 7 days? I mean, obviously you can't expect users to have their systems patched immediately, and sometimes a third party (like a walled garden approval path) can lock you out. But is the writer saying 95% of companies can't even have a patch pushed for release in 7 days?
If that is true, we, as a society, need to drop what we're doing and focus on security, build management, QA workflow, whatever it is that is making that a reality. Seven days is an acceptable deadline. 95% of companies can't hit it? First, that is not true in my experience. But if it is, that is not acceptable. There really are bad people out there trying to root our electronics. Seven days to get a patch out for a vulnerability being actively exploited in the wild is enough. Work the problem. Figure out why you can't hit that number, and fix it.
Re: (Score:2)
I think the disconnect here is not in writing the patch, but rather in certifying that it doesn't break anything else. Any time you touch code, you risk breaking something else. Our QA department requires a regular 20-day lead time; you can escalate emergencies for production fixes and get them into QA almost immediately, but it still often takes 2-3 days to get a minimal once-over from QA. They do not evaluate all of the code line by line. They test all functionality, ensure that it works as documented, a
Re: (Score:1)
Where do Muppets come into this? (Score:1)
So the OP is bitching Google is like the Empire offering rewards that people like Boba Fett go after? I don't even
I smell a flack (Score:3)
It's no wonder this article was posted anonymously. The whole tone and writing style is exactly what one would expect in a position statement cranked out by a corporate PR flack. I wonder whose flack it is.
Naive and devoid of reality? (Score:4, Insightful)
I think what you're saying, is that if someone is going around stabbing people in the heart, and if a doctor says these victims all need immediate medical attention (even the victims which are in isolated areas far from hospitals), then that doctor is being naive and devoid of medical reality.
I personally think you should quit blaming the doctor for the unfairness and horror inherent in the situation. Declaring the urgency of a problem isn't "naive", even if addressing the problem is incredibly hard, or even effectively impossible.
If the doctor truly believed the victims all really would get "immediate medical attention", then he'd be naive. But advising it isn't naive. Yelling at people "get that victim to the ER as fast as you can!" isn't naive. Telling people that heart stab wounds are very serious isn't naive.
And the analogy with Google here, is that you just got stabbed in the heart, they're advising and hoping you get immediate medical attention, and 7 days from now, if your wife asks Google if they've seen you lately, they're going to tell your wife, "I heard he got stabbed in the heart last week. You took him to the hospital, right? If not, you better get on that, right now." You're concerned Google is going to scare your wife?! Be concerned that you're not at the hospital yet!
You think Google is being naive with unreasonably high expectations, but the need for those high expectations isn't their fault!
Nice double standard (Score:3)
Google's standards are too hard to be realistic for the rest of the industry, but the standards for Google itself are 0 security defects ever. How does that work again?
(disclosure: I work for Google. I don't speak for them.)
Re: (Score:3)
Re: (Score:1)
dumb (Score:2)
What a dumb opinion piece.
The main difference between a client-installed application and a web app these days is that a patch to a web application is available as soon as you deploy it, while a patch to a client application needs to be downloaded and installed, which is mostly done automatically.
So, in terms of time, the difference is on the order of minutes, hours at most.
Is it more difficult to create and/or test updates for clients or for browsers? Hard to say, but the difference isn't fundamental.
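As a rough illustration of why the client side adds only minutes to hours: most installed apps just poll an update endpoint and fetch new builds automatically. A minimal sketch of such a check in Python; the endpoint, field names, and versions here are hypothetical, not any real product's API:

    import json
    import urllib.request

    CURRENT_VERSION = "1.4.2"
    UPDATE_URL = "https://updates.example.com/latest.json"  # hypothetical

    def check_for_update():
        """Return the download URL of a newer build, or None."""
        with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
            latest = json.load(resp)
        if latest["version"] != CURRENT_VERSION:
            # A real updater would verify a signature on the payload
            # before downloading and installing it.
            return latest["download_url"]
        return None

Once the vendor publishes the fixed build, every client running a loop like this picks it up on its next poll, which is exactly why "we ship thick clients" is a weak excuse.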
irrelevant industry (Score:2)
Industry is irrelevant. If your customers are being actively exploited, you are, or ought to be, liable if you don't fix it as fast as possible.
It's scary the OP thought anyone would agree. (Score:1)
The debate has always been about whether to give any advance notification at all. That's why full-disclosure@lists.grok.org.uk is called "full disclosure". That's the spectrum of behaviour that's allowed, anything down to 0 days' notice ("notify vendor and public simultaneously"):
https://en.wikipedia.org/wiki/Full_disclosure#Arguments_against_Coordinated_Disclosure
Even if not for that, the complaining seems to rest on a vague suggestion that Google's acting somewhat in their own interest instead of always doing t
Vendor's processes not relevant (Score:5, Insightful)
As a user, I don't care about the vendor's ability to fix it quickly. Really, I don't. That's their problem. My problem is that my systems are vulnerable to compromise and I have to do something about it. I need to know what the vulnerability is, in enough detail to understand it myself, and I need to know the possible workarounds (not just the vendor's recommended one(s), which is another reason I need to know what the vulnerability actually is, so I can understand all the other possible ways of dealing with it). I need to evaluate my options and take whatever steps I need to protect my systems. If the vendor needs a month to get the fix through their change-control process, I still need to protect my systems today.
The vendor's advice will be based on their most-likely scenario. Problem is, my situation may be radically different from the vendor's most-likely one. There are definitely going to be local considerations even if my situation is one the vendor's workaround covers. I need to understand the vulnerability to be able to evaluate it intelligently. It may not even be relevant to my setup. If it is, I may have less-intrusive workarounds (e.g. for the SSH OTP authentication bug: if we've got a purely internal network that isn't accessible to the outside world or to the Windows desktop portion of the network, it may be less intrusive to just monitor for attempted exploits and defer doing anything until I see someone having gotten past the air gap, rather than changing an authentication method that a lot of people depend on and that can't easily be exploited without being physically in the building). And if I need to take drastic steps like disabling the vulnerable SSH authentication method, I may have clients who insist they must be able to use it (maybe because their systems are based on it and they need my systems to integrate with their authentication because I'm providing services to them), and I need to be able to intelligently discuss exactly what's wrong, why it's simply not possible to use that method without being vulnerable, and why we've got to change to a different method despite the disruption. I can't do that unless I understand the vulnerability.
Notice that in all the above I haven't mentioned the vendor at all. Like I said, the vendor isn't relevant at all. It's my systems that're vulnerable and me that has to do something about it. If the vendor already has a fix then well and good, but if they don't it doesn't change my situation. When vendors say they need more time, they're asking me to leave my systems vulnerable without telling me they're vulnerable. Sorry, but no. Not, that is, unless they're willing to shoulder 100% of all the costs resulting from that vulnerability being exploited. Not just direct costs, things like the costs of lost business and clean-up if the vulnerability is exploited and liabilities I may incur because of the compromise. If a vendor isn't willing to take on that liability, then they don't get to tell me I shouldn't have the information I need to protect myself from that liability. If they don't like it... this is the sound of the world's smallest violin, playing the world's saddest song just for them.
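To make the SSH example above concrete: if the hypothetical vulnerable OTP method were wired in via keyboard-interactive/PAM, the drastic workaround is a couple of lines in sshd_config. This is a sketch under that assumption only, not the actual fix for any specific bug:

    # /etc/ssh/sshd_config -- temporary mitigation, assuming the
    # vulnerable OTP method is provided via keyboard-interactive/PAM
    KbdInteractiveAuthentication no
    ChallengeResponseAuthentication no
    # Keep key-based logins working for unaffected users
    PubkeyAuthentication yes

Reload sshd afterwards, and be ready to explain to the clients who depended on OTP exactly why it had to go. Which is the point: you can only have that conversation if you know the details.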
Re: (Score:2)
When vendors say they need more time, they're asking me to leave my systems vulnerable without telling me they're vulnerable. Sorry, but no. Not, that is, unless they're willing to shoulder 100% of all the costs resulting from that vulnerability being exploited.
The bit that you're ignoring is that by telling you about the vulnerability they're also telling all the black hats about it. So while your systems are vulnerable either way, the choice is between you and all the hackers knowing or you and most of the hackers not knowing. Whether this increases or decreases your actual exposure depends on who is interested in attacking you and whether or not they already have this exploit.
While you may be capable of implementing countermeasures to limit your vulnerability
Re: (Score:2)
Except that the black hats already know about it, and are actively using it to compromise systems. Telling me about it doesn't give the black hats any information they didn't already have. The only way keeping the information confidential helps me is if it's something the black hats can't possibly figure out without being told, and the track record says there's no such thing. I have y
Re: (Score:2)
First problem is that sure, not every black hat will have the details, but any black hat may. In this age of automated toolsets for attackers, once one black hat figures it out and includes it in a toolset it's only one update away from being in the arsenal of any attacker using that toolset. In that environment you can't wait for them to prove they have the information, by the time that happens you're compromised and cleaning up and suffering all the financial, legal and PR liabilities that come with being
Re: (Score:2)
Except that the black hats already know about it
Some do. Which is a distinction I clearly drew in the post you responded to, apparently without reading all of it.
Re: (Score:2)
No, because it's a distinction you can't draw. There's a line from a Dr. Who episode: "Not every shadow, just any shadow." The same applies to attackers. Not every attacker will know the details, but any attacker may. And since by the time you know whether any particular attacker knows the details or not it's too late to defend, you have to assume that any attacker knows and defend yourself bef
Somebody call them a waaaaaambulance... (Score:1)
Pitiful cries of "*SNIFFFF* Butttt, butttt, we've been using paying customers as guinea pigs for decades, we'd actually have to pay someone to test our shit code now!!"
"That's just not fair, we charge more because we have to pay stockholders and multi-million dollar bonuses to the dickheads who make our fucktarded decisions!"
"Without paying testers, how else can we rake in more cash, more quickly, especially when our EULAs make it so they cannot sue us when our security software kills their PC!"
Waaaah! My business model sucks! (Score:1)
Let's cut through several layers of meta-issue here and get to the heart of the matter, and then work our way back outwards to the issues this article is so wrong about:
1) Software development is arguably one of the most difficult, complex, and daunting tasks humankind can undertake. That is, assuming you care about factors like quality, efficiency, security, and correctness. What makes it so difficult is that you're molding things out of the most flexible clay ever dreamed of. Physics is pretty much t
Because all of Google's products are web sites... (Score:3)
I guess Chrome, ChromeOS, Android, numerous Android apps, Google Earth, Google Drive, Picasa and Google's many other traditional installable software products don't count.