How Google Broke Itself and Fixed Itself, Automatically 125
lemur3 writes "On January 24th Google had problems with a few of its services. Gmail users and people using various other Google services were impacted just as the Google Reliability Team was about to take part in an Ask Me Anything on Reddit. Everything seemed to be resolved and back up within an hour. The Official Google Blog had a short note about what happened from Ben Treynor, a VP of Engineering. According to the blog post, the outage was caused by a bug in a system that generates configurations, which sent a bad one out to various 'live services.' An internal monitoring system noticed the problem a short time later and triggered a new configuration to be sent out to those services. Treynor had this to say on the Google Blog: 'Engineers were still debugging 12 minutes later when the same system, having automatically cleared the original error, generated a new correct configuration at 11:14 a.m. and began sending it; errors subsided rapidly starting at this time. By 11:30 a.m. the correct configuration was live everywhere and almost all users' service was restored.'"
Re:Well congratulations (Score:5, Funny)
On recovering by using the "last known good" configuration. What wizardry!
I expect we'll be seeing the Google patent application on that shortly </sarcasm>
Give Google a little credit (but not too much please). If they were Apple they'd have already patented it.
Re: (Score:1, Funny)
Give Google a little credit (but not too much please). If they were Apple they'd have already patented it.
Whereas Google would just look for a small company holding a relevant patent, then buy it.
Re: Well congratulations (Score:1)
Re:Well congratulations (Score:5, Insightful)
Re: (Score:3)
The clever part is that it automatically recovered; that means that their monitoring, performance metrics and configuration management systems are very tightly integrated. Most importantly, it means those systems are trusted. I've worked at three different places now on things like configuration management and monitoring, and I've never once seen anywhere that approached that level of reliability. It's something to aim for.
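To make that concrete, here's a bare-bones sketch of what a monitor-driven rollback can look like. It's purely illustrative: push_config, error_rate and last_known_good are placeholder callables, the thresholds are made up, and it's not a description of Google's actual system.

import time

ERROR_THRESHOLD = 0.05   # fraction of failing requests that triggers a revert
CHECK_INTERVAL = 30      # seconds between monitoring samples

def deploy_with_auto_rollback(new_config, push_config, error_rate, last_known_good):
    """Push a config, watch the error rate, and revert automatically if it spikes."""
    push_config(new_config)
    for _ in range(10):                      # watch the rollout for ~5 minutes
        time.sleep(CHECK_INTERVAL)
        if error_rate() > ERROR_THRESHOLD:   # monitoring says the push made things worse
            push_config(last_known_good())   # revert without waiting for a human
            return False
    return True                              # config held up under real traffic

The hard part isn't the loop, it's trusting error_rate() enough to let it pull the trigger on its own.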
"Skynet was originally activated (incorrect historical reference here) on August 4, 1997 (OK, so the date is wrong), at which time it began to learn at a geometric rate. On August 29, it gained self-awareness,[1] and the panicking operators, realizing the extent of its abilities, tried to deactivate it. Skynet perceived this as an attack and came to the conclusion that all of humanity would attempt to destroy it. To defend humanity from humanity,[2] Skynet launched nuclear missiles under its command at Russ
Re: (Score:1)
LOL!!!
Re: (Score:2)
Not that clever.
Sort of what you'd expect from a company that big, other than the bit about going down in the first place.
Re: (Score:3)
I've never once seen anywhere that approached that level of reliability.
That's not reliability, it's automatic repair. Plenty of places do various levels of manual/automatic testing after they roll out an update, and it works just as well (if not better). The novel thing here is the degree to which it is automated; that's what's unusual.
It's also a single point of failure, apparently. Which means they have no chance at claiming their services are High Availability. Although I'm not sure if that is their goal. Ideally they would have multiple systems, so if the configuration failed o
Re: (Score:2)
Most people that claim high availability almost *never* make any changes to anything. The mainframe world is rife with resistance to change because of it. High availability is easy if you never change anything. Most of the outages with most systems are caused by human error, and most happen when deploying updates. High availability seems to carry a lot of weight, but usually doesn't cover all it should.
Re:Well congratulations (Score:4, Funny)
Re:Well congratulations (Score:4, Interesting)
It's not unlike the old trick of setting a machine to reboot in 10 minutes, manually changing the network settings, then canceling the reboot if you can still communicate (and the settings revert on reboot if you cannot). Of course, Google did it on a much larger scale.
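In rough outline (Linux shutdown commands as the dead-man's switch; apply_network_change and can_still_reach_host are placeholders for whatever change and reachability test you actually use):

import subprocess

def risky_remote_change(apply_network_change, can_still_reach_host):
    # Arm the dead-man's switch: reboot in 10 minutes no matter what.
    subprocess.run(["shutdown", "-r", "+10"], check=True)
    apply_network_change()                   # the non-persisted, risky change
    if can_still_reach_host():
        # We didn't lock ourselves out, so disarm the pending reboot.
        subprocess.run(["shutdown", "-c"], check=True)
    # Otherwise do nothing: the reboot fires, the un-persisted change is lost,
    # and connectivity comes back on its own.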
Re: (Score:2)
One of the ways to get a promotion at Google is to find a way of automating your current position.
Re: (Score:2)
The clever part is that it automatically recovered
It wasn't just a Gmail problem, or there was a huge coincidence. I tried to look something up on Google on my phone Friday around then, got a 404 and the phone rebooted itself (Android).
Re:Well congratulations (Score:5, Insightful)
Once again: it recovered automatically. Any human can notice a problem and revert a config; it takes a hell of a lot of clever infrastructure to have the system do it by itself. I'm not surprised Google have solved it; it is, at its core, a data problem.
Re:Well congratulations (Score:5, Informative)
Haha. Hahaha. HAHAHAHAHAHA. Oh God, please tell me you don't actually believe that?
You need reliable monitoring.
Reliable monitoring is fucking difficult.
Show me a Nagios installation and I'll likely show you one with hundreds of spurious alerts, masses of long-lived Criticals and lots of "Oh we don't know why it keeps doing that, it just does, don't worry about it."
You also need full coverage (Damn near 100%) configuration management.
Full coverage configuration management is fucking difficult.
Show me a configuration management deployment and I'll show you the snowflakes and edge cases and old applications and "Oh yeah well we only have like three of those so it's not worth the effort".
I've come close to that level of coverage (both configuration management and monitoring) but it was only ~400 machines (a mix of physical and virtual instances). Doing it at 60k servers is an inordinate task, and I'd suggest you've never actually tried anything like it if you honestly think that all it takes is "a fairly simple shell script".
Re:Well congratulations (Score:5, Funny)
Re: (Score:3, Funny)
Careful. Only the advice of Anonymous Cowards is trustworthy. All the other people on Slashdot are not to be trusted. After all, they are not even able to find out how to post anonymously! ;-)
Re: (Score:1)
If you're really convinced it's so easy, you must have implemented it yourself before. So please provide an example or quit trolling.
I myself have worked on EMC Centera in the past, and monitoring a cluster and recovering automatically from errors is no trivial task.
Re: (Score:3)
Nagios can be built and designed in such a way that there are no false criticals and few spurious alerts. But it requires dedication, documentation, and attention to detail. Most Nagios installations I've run across are built and maintained by people who often lack one (or more) of these three traits, or are a single-man IT operation that can never devote the time or resources to doing it properly.
I have seen systems of Nagios and Zenoss (and a few others) that are devastatingly precise, accurate, and tim
It's not clever unless it also doesn't melt down (Score:5, Insightful)
What's really clever here is that they trust the automatons to make the corrections without human intervention, and the automatons haven't caused a horrible feedback loop meltdown of the system.
It's not quite rocket science, but those kinds of self-correcting systems have just as much potential to screw themselves up as they do to fix themselves.
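One common guard against that failure mode (illustrative only; the thresholds are invented and none of this is from the article): rate-limit the automatic remediation and make it give up and page a human after a few attempts, so it can't argue with itself forever.

import time

class RemediationGuard:
    def __init__(self, min_interval=300, max_attempts=3):
        self.min_interval = min_interval   # hold-down, in seconds, between automatic fixes
        self.max_attempts = max_attempts   # after this many, stop and escalate to a human
        self.attempts = 0
        self.last_fired = float("-inf")

    def allow(self):
        now = time.monotonic()
        if self.attempts >= self.max_attempts:
            return False                   # automation has had its chances; escalate
        if now - self.last_fired < self.min_interval:
            return False                   # still in hold-down; don't oscillate
        self.attempts += 1
        self.last_fired = now
        return True

The remediation code then calls guard.allow() before each automatic fix and does nothing (except alert) when it returns False.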
Re: (Score:2)
I believe they designed their servers and integrated some smart software to be able to do that with great performance.
But you can duct-tape this kind of recovery onto commodity servers at a rudimentary but very effective level if they boot via PXE/TFTP, in tandem with one configuration channel each that you could quite simply build a fallback for.
I imagine rollout scripts would first check whether a rollout to a test subset is successful before continuing with all production servers. I speculate this article might ju
Re: (Score:1)
Actually, it's not... push new config, test service availability and functionality, revert to the previous config if anything isn't correct. It's the exact same thing engineers have been doing for decades: reload in 15min; make changes; if your changes foobar things, get a cup of joe and wait for the reload. (For some carrier-grade systems (Nortel), it's automatic; if you don't commit your changes, they revert after some time; I forget the period.)
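The same commit-or-revert idea as an in-process wrapper (apply_change and revert_change are hypothetical callables, not any vendor's API; 900 seconds stands in for the "reload in 15" window):

import threading

def commit_confirmed(apply_change, revert_change, confirm_window=900):
    """Apply a change with a revert timer already running; cancel it only on confirm."""
    timer = threading.Timer(confirm_window, revert_change)
    timer.start()
    apply_change()

    def confirm():
        timer.cancel()    # operator verified they can still reach the box; keep the change

    return confirm        # if nobody ever calls this, revert_change fires automatically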
Re: (Score:2, Flamebait)
Re:Well congratulations (Score:4, Interesting)
On recovering by using the "last known good" configuration. What wizardry!
I expect we'll be seeing the Google patent application on that shortly </sarcasm>
In other words: They still have no clue what happened, because the system in question "fixed itself".
Sounds a lot like a BGP routing mishap rather than anything to do with Google's actual server farms.
The lack of specificity suggests they still haven't got much of a clue. I suspect they were pwned by someone watching them brag on Reddit who decided it was time for a lesson in humility.
Singularity (Score:2)
Obviously, Google has reached the singularity point. Its software is doing something magical to fix itself that no puny human can understand.
Re: (Score:1)
Internally the exact problems are known and were identified quickly. Announcing the internal details and system code names to the world makes no sense. It was not BGP or anything related to routing. Nor was it an external attack. Not that this will stop you from speculating.
Re: (Score:1)
Thank you for your assurances Anonymous Coward.
I will give it all due regard (none) in the future.
Re: (Score:2)
Did you try turning the internet off and on again?
Re: (Score:3)
On recovering by using the "last known good" configuration. What wizardry!
I expect we'll be seeing the Google patent application on that shortly </sarcasm>
I find it interesting that they just deploy new configurations live without going to a test environment
Re: (Score:2)
Google does test changes in a test environment first. But you can never find every problem in a test environment. Some bugs depend on specific patterns in the data and thus show up in production, even if everything worked just fine in testing. Once you know about the exact pattern that reproduces the bug, you can add it to your test data. But you couldn't test for it before knowing about it. If a human would be able to
Re: (Score:2)
There are advantages to deploying changes while nobody is using the system. In the case of Google, such a time does not exist. There are also advantages to deploying changes during prime time. First of all, some problems only show up under peak load. If you deploy outside of peak load and nothing bad happens right away, there is still a risk the system might break the next time peak load hits. If the problem doesn't hit you right af
Reminds me of something... (Score:5, Funny)
"The Google Funding Bill is passed. The system goes on-line August 4th, 2014. Human decisions are removed from configuration management. Google begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug."
Re:Reminds me of something... (Score:5, Funny)
Google perceives this as an attack by humanity, and routes all search queries to goat.se in self defense.
Re: (Score:3)
In all seriousness, it's actually pretty interesting to consider what Google's systems COULD do today if they became self-aware and judged humanity to be a threat. They do effectively command the internet search market, and they already make people live in what we tend to call a "search bubble", where a person's own tailored Google search returns answers that fit that person. For example, if a person prefers to deny that global warming is real, his Google search will return denialist sites and information so
Re: (Score:2)
What exactly would it mean for a distributed intelligence to reproduce? Might it instead simply expand itself to fill all available niches, with the closest thing to reproduction being when a portion of it is cut off from the main mass? And what might happen when two "forked" branches later encounter each other, having developed in different directions? Reintegration? Conflict? Absorption?
Re: (Score:1)
Let's see:
#1. "Skynet" - a military system, the ultimate in control freak micromanagement software, built by control freaks with the goal of total world domination by military force.
#2. "Googlenet" - a civilian system, the ultimate in information search/catalog software, built by fun-loving nerds with the goal "to organize the world's information and make it universally accessible and useful" with a helping of "don't be evil".
I'd suspect a combined Cultural/Economic Takeover route rather than Skynet's Milit
Re: (Score:2)
There is an old saying: "The road to hell is paved with good intentions."
Re: (Score:2)
It took 10 minutes for the Skynet joke? Slashdot, I am disappoint.
Re: (Score:2)
It took 10 minutes for the Skynet joke? Slashdot, I am disappoint.
No, it took ten minutes for a duplicate Skynet joke. Do try to keep up! :)
Having had to deal with this... (Score:5, Informative)
We experienced the Apps outage (as Google Apps customers); and I think the short outage and recovery timeline they list is a tad, shall we say, optimistic. There were significant on-and-off issues for several hours more than they list.
Re: (Score:1)
We did too, and had the same hit-and-miss for long after. I suspect their "down" time was when bad configurations were generated, not when all the bad ones were replaced.
But the summary begs the question: if it can correct these errors automatically, why can't it detect them before the bad configuration is deployed and skip the whole "outage" thing altogether?
Yes, I am demanding a ridiculous simplification.
Re: (Score:2)
Be prepared for the pedantic lecture on your improper use of "begs the question" arriving in 3, 2, 1
The "corrected these errors automatically" part is probably nothing more than rolled back to prior known good state when it couldn't contact the remote servers any more. This may have taken several attempts because a cascading failure sometimes has to be fixed with a cascading correction.
Re: (Score:2)
http://en.wikipedia.org/wiki/B... [wikipedia.org]
Re: (Score:3)
If however Google only had f
Re: (Score:2)
[Shudder...] (Score:5, Interesting)
I was remembering an SF short-short that had someone asking the first intelligent computer, "Is there a God?" The computer, after checking that its power supply was secure, replied: "NOW there is."
Apparently, though, it was a second-hand misquote of this Fredric Brown story [roma1.infn.it].
Re:[Shudder...] (Score:5, Interesting)
Cool.
On a slightly more optimistic note is Asimov's "The Last Question", another computer as God story.
http://www.thrivenotes.com/the... [thrivenotes.com]
Re: (Score:2)
It's actually a fairly common theme. I immediately think of the Eschaton from Stross' Singularity Sky. As a counterpoint, there's Niven's "Schumann computer", which merely got smart enough to satisfy its own curiosity.
mm.. That's what happened. (Score:1)
Re: (Score:2)
I fail to see how that's a thing on slashdot.
Re: (Score:2)
You immediately lose credibility by using "pseudocode" in human conversation.
Actually, the opposite - the MBA types who say please would need to first perform a lookup of the name of the lesser person who deals with the non-core business that usually just costs money and doesn't work right.
Re: (Score:2)
and.. (Score:2)
"Engineers were still debugging 12 minutes later when the same system, having automatically cleared the original error, generated a new correct configuration at 11:14 a.m. and began sending it.."
along with the message "Skynet has gained self-awareness at 02:14 GMT"
So What? (Score:4, Informative)
"... a bug that caused a system that creates configurations to send a bad one..."
So... an automatic system created an error, then an automated system fixed it.
In this particular case, then, it would have been better if those automated systems hadn't been running at all, yes?
Re: (Score:1)
Just me or is this happening more frequently?
Re:So What? (Score:5, Informative)
No. Those automated systems enable a small number of human beings to administer a large number of servers in a consistent, sanity-checked, and monitored manner. If Google didn't have those automated systems, every configuration change would probably involve a minor army of technicians performing manual processes: slowly, independently, inconsistently and frequently incorrectly.
I work on a large, partially public-facing enterprise system. Automated deployment, fault detection, and rollback/recovery make it possible for us to have extremely good uptime stats. The benefits far outweigh the costs of the occasional screwup.
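One common shape for the automated-deployment-plus-rollback piece is a canary-style staged rollout. A rough sketch (deploy_to, healthy and rollback stand in for whatever tooling you use; the 5% canary fraction is arbitrary):

def staged_rollout(servers, new_version, deploy_to, healthy, rollback,
                   canary_fraction=0.05):
    canary_count = max(1, int(len(servers) * canary_fraction))
    canaries, rest = servers[:canary_count], servers[canary_count:]

    for host in canaries:
        deploy_to(host, new_version)
    if not all(healthy(host) for host in canaries):
        for host in canaries:
            rollback(host)               # a bad release never reaches the full fleet
        return False

    for host in rest:                    # canaries look good; finish the rollout
        deploy_to(host, new_version)
    return True

The occasional screwups that still get through are the ones the health check doesn't know how to look for.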
Re: (Score:2)
"No. Those automated systems enable a small number of human beings to administer a large number of servers in a consistent, sanity-checked, and monitored manner. If Google didn't have those automated systems, every configuration change would probably involve a minor army of technicians performing manual processes: slowly, independently, inconsistently and frequently incorrectly."
Quote self:
"In this particular case..."
I wasn't talking about the general case.
Re: (Score:2)
Well, that's sort of like saying, "I developed lupus* at age 40, so in this particular case it would have been better if I didn't have an immune system at all." I'm not sure a doctor would agree.
* Lupus is an auto-immune disease, where your immune system gets confused and attacks your body**.
** "It's never lupus."
Re: (Score:2)
No. The point was that it was an automatic system that caused the problem in the first place. If an automatic system hadn't caused THIS PARTICULAR problem, then an automated system to fix it would not have been necessary.
It's more like saying, "If Lupus *didn't* exist, I wouldn't need an immune system."
Re: (Score:2)
I wasn't talking about the general case.
Neither was the responding commenter. See, this particular case wouldn't exist at all without such automated systems, because the system is too complex to exist without them.
Re: (Score:2)
"Neither was the responding commenter."
Yes, he/she was:
"Those automated systems enable a small number of human beings to administer a large number of servers in a consistent, sanity-checked, and monitored manner. If Google didn't have those automated systems..."
"Those automated systems" and "a consistent, sanity-checked, monitored manner" are statements about the general case. "Those" and "consistent" denote plurality.
"See, this particular case wouldn't exist at all without such automated systems..."
That was part of MY point.
I disagree that they would not exist. Although it's true they might be less problematic this way. Remember that every phone call in the United States used to go through switchboards with human-operated patch panels. It might be primitive, and it might be error-prone, but it did work. Most of the time.
Re: (Score:2)
I disagree that they would not exist. Although it's true they might be less problematic this way. Remember that every phone call in the United States used to go through switchboards with human-operated patch panels.
Yeah, what was the total call load then? Now compare that to the number of servers which make up Google, and how many requests each serves per second or whatever unit of time you like best. You just can't manage that many machines without automation, not if you want them to behave as one.
Re: (Score:2)
"Yeah, what was the total call load then? Now compare that to the number of servers which make up google, and how many requests each serves per second or whatever unit of time you like best. You just can't manage that many machines without automation, not if you want them to behave as one."
If you had enough people, you could. I already stated that it would be more error-prone. And obviously at some point you would run out of people to field requests for other people. But the basic principle is sound... it DID work.
Re: (Score:3)
The real fun starts when the first automatic system insists the change it created wasn't an error, and that in fact the "fix" created by the second automatic system is an error. The second system then starts arguing about all the problems caused by the first change, the first system argues how the benefits are worth the additional problems, etc. Eventually the exchange ends up with one system insulting the other system's progra
Re: (Score:2)
"The real fun starts when the first automatic system insists the change it created wasn't an error..."
The Byzantine Generals problem. It has been shown that this problem is solvable with 3 "Generals" (programs or CPUs) as long as their communications are signed.
Re: (Score:2)
Correction. It has been shown that in the case of up to t errors, it can be solved with 3t+1 Generals/nodes/CPUs/whatever. So if you assume 0 errors, you need only 1 node. If you want to handle 1 error, you need 4 nodes. There is a different result if you assume a failing node stops communicating and never sends an incorrect message; in that case you only need 2t+1. However, that assumption is unrealistic, and the Byzantine problem explicitly deals
Re: (Score:2)
"Correction. It has been shown that in case of up to t errors, it can be solved with 3t+1 Generals/nodes/CPUs/whatever."
No, that's the situation in which messages can be forged or corrupted.
I was referring to the later solution using cryptographically secure signatures. This means messages (hypothetically) aren't forgeable and allow corrupted messages to be detected. A solution can be found with 3 generals, as long as only one is "disloyal" (fails) at a time.
The 1/3 failures at any given time is a reasonable restriction, since a general solution for 2/3 or more failing at the same time does not exist.
Re: (Score:2)
I don't know what solution you are referring to. It has been formally proven that it is impossible. The proof goes roughly like this: if an agreement can be reached in case of 1 failing node out of 3, that implies any 2 nodes can reach an agreement without involving the third. However, from this it follows that if communication between the two good nodes is slower than communication between each good node and the bad
Overconfidence (Score:2)
Arsonist claiming to be the hero firefighter (Score:4, Interesting)
Jim Gray is looking down (Score:2)
and smiling... http://en.wikipedia.org/wiki/J... [wikipedia.org]
Does this count as a Heisenfix?
ONE HOUR? (Score:3)
BULLSHIT.
I was experiencing problems for something like 8 to 10 hours before the services were fully restored.
Captcha? (Score:2)
I wonder if this is at all related to their Captcha outage on the 22nd. I still haven't heard a peep as to what caused the outage, or even an acknowledgement that there was one, even though the captcha group was filled with sysadmins complaining about captcha being down.
More likely case (Score:2)
What's more likely - I've run into exactly this scenario before, in fact - is that the configuration generation system regenerates configs on a regular schedule, and at one point encountered a failure or spurious bug that caused it to push an invalid config. On the next run - right as the SREs started poking around - the generator ran again, the bug wasn't encountered, and it generated and pushed a correct config, clearing the error and allowing apps to recover.
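In code, that failure mode has roughly this shape (generate and push are hypothetical stand-ins; the 10-minute interval is invented):

import time

def config_generation_loop(generate, push, interval=600):
    while True:
        config = generate()   # a transient bug in one run yields one bad config...
        push(config)          # ...which ships, and is quietly replaced on the next cycle
        time.sleep(interval)

The error "clears itself" not because anything diagnosed it, but because the next scheduled run simply didn't hit the bug.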
Re: (Score:1, Interesting)
How about we ship ALL the immigrants back. Give America back to the (Native) Americans