Why Is Less Than 99.9% Uptime Acceptable?
Ian Lamont writes "Telcos, ISPs, mobile phone companies and other communication service providers are known for their complex pricing plans and creative attempts to give less for more. But Larry Borsato asks why we as customers are willing to put up with anything less than 99.999% uptime? That's the gold standard, and one that we are used to thanks to regulated telephone service. When it comes to mobile phone service, cable TV, Internet access, service interruptions are the norm — and everyone seems willing to grin and bear it: 'We're so used cable and satellite television reception problems that we don't even notice them anymore. We know that many of our emails never reach their destination. Mobile phone companies compare who has the fewest dropped calls (after decades of mobile phones, why do we even still have dropped calls?) And the ubiquitous BlackBerry, which is a mission-critical device for millions, has experienced mass outages several times this month. All of these services are unregulated, which means there are no demands on reliability, other than what the marketplace demands.' So here's the question for you: Why does the marketplace demand so little when it comes to these services?"
because they've been conditioned (Score:5, Insightful)
The marketplace has been duped into believing that this is the best technology can provide. People don't have time to know, understand, or research history and find that technology really can be reliable.
I'll get modded troll, but I lay much of this at Microsoft's feet. I laughed them off when I first heard of them and their goal of taking over the industry. After all, I'd been working on systems that ran 24x7 with five-9 reliability for years, and DOS/Windows couldn't touch that.
One time I had an opportunity to visit Microsoft and have lunch with a friend there. I figured while there I'd take the opportunity. I asked them in hushed tones, "Just how do you configure Windows so that you don't have to reboot it all of the time?" They looked at me like I was crazy.
Technology can provide reliability. The general public is no longer even aware that it's possible.
Re:because they've been conditioned (Score:5, Insightful)
Partly correct (Score:3, Interesting)
I don't think they measured squat. Just did their best. Only thing was that there was nobody who could properly design an O/S, and complexity, instead of simplicity, ruled the day.
What we are seeing is the very best they as a group are able to produce.
They have never been great at mark
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
Introducing the EULA (Score:5, Informative)
After all, liability plays a large part in defining QA policies. If software companies were held to the same liability standards most product manufacturers face, I'd bet software development would be more of the engineering practice it should be.
To quote part of Microsoft's EULA for Windows XP:
http://www.microsoft.com/windowsxp/home/eula.mspx [microsoft.com]
ALSO, THERE IS NO WARRANTY OR CONDITION OF TITLE, QUIET ENJOYMENT, QUIET POSSESSION, CORRESPONDENCE TO DESCRIPTION OR NON-INFRINGEMENT WITH REGARD TO THE SOFTWARE.
Re:because they've been conditioned (Score:5, Insightful)
This I agree with whole-heartedly. It's a fundamental basis of a market-driven economy. Spending effort on things that are too good for the market wastes resources that could be spent elsewhere on items that the market (i.e. people) does want. Capitalism does not - and must not - build the best, merely the just barely good enough.
Most people don't give a crap about quality, and if they do then somebody else should pay for it. It's all about the latest and greatest bling and appearing to be better than your neighbours.
So everything we have in our lives - every product, service, and system - is just good enough to work for most of the people most of the time and no more. Our transport largely gets people from A to B (eventually), our health system keeps most people alive a few years longer with not much discomfort, our communications work most of the time for most people in most places, and our politicians mostly look after us OK.
Oh, and most of us do most of our work most of the time when we have to. And no more!
Re:because they've been conditioned (Score:5, Informative)
Re:because they've been conditioned (Score:5, Insightful)
It's not just that the returns are diminishing, they're -NEGATIVE-. It's not just that countries that spend 30-40% less on healthcare than the USA have similar health and life expectancy; several of them actually have significantly BETTER results for LESS money.
The reason is basically what you state: Giving EXTREME healthcare to those who already have GOOD healthcare provides little if any benefit, but providing the BASICS to those who are lacking them is cheap and efficient.
So, the USA has very, very high spending for those who are "in", but falls quite far down the rankings because you fail to provide GOOD healthcare to everyone living in the USA. That's why you're not in the top 40 for any of the most-used healthcare indicators despite being the undisputed number one in spending.
Norway, for example, has healthcare similar to the USA's, not quite as extreme at the top, mainly due to less panic about courts, but still comes out way ahead, because healthcare is truly universal.
Costs less, gives more health. What's not to like?
Re:because they've been conditioned (Score:5, Insightful)
Why, in general terms, do we redistribute wealth forcibly?
The short answer is: because we live in a democracy and the majority of politicians vote in favor of doing that.
The longer answer is: because living in a stable, healthy population with a safety net has benefits, even if you're not among the direct recipients of the welfare.
In South Africa, earning $100,000/year means living in a castle surrounded by 10-foot concrete walls topped with broken glass and barbed wire, surveilled by video cameras, in a "gated community", driving your kids wherever they need to go for fear of kidnapping, and *still* accepting that your odds of being killed by someone desiring your wealth are non-negligible.
In Norway, earning $100,000/year means living wherever the hell you want, surrounded by a garden with strawberries in it, never even having the thought "kidnapping" cross your mind in relation to your children, possessing no security camera, and indeed, unless you live in a major city, probably not bothering to lock the door. Still, even without the precautions, your odds of being killed by someone desiring your wealth are essentially zero (more than 2 orders of magnitude lower).
I don't know what that's worth. But it's worth -something-.
I'm much more skeptical of all the corporate welfare, truth be told. If I could directly change what my tax dollars are used for, my vote would be to cut drastically the subsidies to uncompetitive dinosaur industries (it's insane that *tobacco* farmers and coal miners are the two groups receiving the most subsidies in the EU) and to *UP* support for those people who need it the most. Primarily EDUCATION -- I'm of the opinion that that is the most sensible support you can give a weak group. It's the only help that can, with time, help them become independent.
Re: (Score:3, Insightful)
Re:because they've been conditioned (Score:4, Insightful)
Truly, your courage is an inspiration to us all!
In fact, though, I can tell you that in the pre-Windows days, electricity had outages, television had outages, telephone service had outages, gas service had outages... For the same reason we have them today -- people aren't willing to accept the economic and aesthetic costs of providing those services at the level of reliability you and the author are demanding.
Incidentally, is it most people's experience that "We're so used [sic] cable and satellite television reception problems that we don't even notice them anymore"? There were some glitches in a broadcast of Zoolander on TBS last weekend, which I'll admit is cause for complaint. (Especially since one wiped out "I feel like I'm taking crazy pills!") But on the whole, I can't say I've seen substantial problems when there wasn't a blizzard or hurricane, and if I'm forced to stop watching TV for an hour or two, it's not the end of the world.
Reality Check (Score:5, Interesting)
I was born in 1964. I have no recollection of POTS telephone service ever being unavailable.
Electricity was expected to drop out a few times every summer, and until someone figures out how to tell lightning where to go, I expect it will continue to happen. In my part of Canada, however, power is continuously available from October to April no matter what. Even if you don't pay your bill. The only winter power outage of note I can think of offhand was the great Ice Storm of 1998 [wikipedia.org], one of the most spectacular cases of force majeure I've witnessed in my life.
In my part of the world, at least, power and telephone were life-and-death services and legislation mandated their reliability.
Re:Reality Check (Score:4, Funny)
Your neighbors evidently didn't own a backhoe. ;)
Re:Reality Check (Score:5, Interesting)
First responders are police, paramedics, firefighters, etc. There was an incident about a year ago where two cops were being assaulted (and losing the fight) in a basement. Their radios were not working, so they couldn't call for backup.
Luckily for them, a bystander called 911 on their cell phone.
Lucky for me, too, since I got called on the carpet for calling the reliability of the system into question. I probably would have been fired, but the above-mentioned incident was in the paper the morning of my "meeting".
The new radios are controlled by internet-connected computers. As the Farkism goes, "this should end well."
Re: (Score:3, Interesting)
Unfortunately nobody seems to realize just how much money goes into one radio tower.
In my county we had decent coverage even though we were the second largest county in the state, but had the fewest number of people.
We had three towers serving the entire area. Each one cost around $1
Re: (Score:3, Interesting)
I have always had several telephone service failures per year, every year, for the last several decades, where I live here in Northern Arizona. First of all, when it rains, the telephone lines sometimes become wet and I lose my dial tone for a day or so. Then, when I call the telephone company, they usually say, if your telephone lines have not dried out and started working within 48 hours, we will send someone out then. Can't they figure out how to waterproof the phone lines and boxes and other
Re:because they've been conditioned (Score:5, Insightful)
Eventually, with all of these little points of failure, you're going to get a good-sized chunk of fail. Add in things like the inherent instability of wireless technologies and our nation-wide problem with an aging electrical infrastructure, and you have the sorts of occasional, mildly inconveniencing issues that you see today.
Right now it seems like the things users want to optimize most for are A: speed and B: cost. One day every other month where our home internet is down doesn't seem like the end of the world, especially with the cost of the alternative.
Re: (Score:3, Informative)
Keeping internet services online suffers from the problem of black swans. Nassim Taleb, who invented the term, defines it thus: "A black swan is an outlier, an event that lies beyond the realm of normal expectations." Almost all internet outages are unexpected unexpecteds: extremely low-probability outlying surprises. They're the kind of things that happen so rarely it doesn't even make sense to use normal statistical methods like "mean time between failure." What's the "mean time between catastrophic floods in New Orleans?"
http://www.joelonsoftware.com/items/2008/01/22.html [joelonsoftware.com]
once in a while (Score:3, Funny)
Re: (Score:3, Interesting)
I have to agree.
I've stated before 'Every 9 of reliability increases the cost 10 fold'. Now, this is only the vaguest estimate, with vast numbers of variables, unseen incidents, competency, etc...
Take a car that's 90% reliable. It'd be used, of course, and probably cost you only $100-500. You can get a car that's 99% reliable for $1-5k. 99.9% reliabili
Re: (Score:3, Interesting)
and I can tell you that in the post-windows days... well, people have this concept of rebooting when things don't work. "it will auto-magically fix itself" (tm). cell-phones, managed switches, home routers... you name it, the first thing tech-support will do is ask you to "turn it off and on again". so much so that that is a standard gag in "the IT crowd".
i had this incident in our data center where this nincompoop kept futzing around with a managed switch.
Re: (Score:3, Insightful)
I do not believe that this is the cause.
As is correctly noted above, there are only market pressures involved. When that's the case, customers rarely factor 7 or 8 different metrics (eg. price, quality, reliability etc.) into their decision making. Rather, they identify what they want, then find the cheapest supplier, and provided that there is no compelling reason to avoid the supplier, do the deal.
This means that
Re:because they've been conditioned (Score:5, Informative)
I work for a cable company, by the way. I design a lot of the building-out of our system, so I know the actual costs associated with creating that kind of reliability. Whenever someone needs that kind of reliability, I actually recommend getting a second ISP as a low-speed backup solution. It is the only smart way to go to get complete reliability, as pretty much any company advertising 99.999% reliability in this area is outright lying to the customer. (I know this from experience. I have switched customers over to our ISP from a week-long (or longer) outage of every ISP here, and there are quite a few.) Besides, a good router will split bandwidth between the ISPs so you're not paying for something you're not using. (called "bonding")
I'm still amazed when people yell at me for being offline for a few hours after maybe 3, 4, 5 years of uptime. They say they are losing thousands of dollars for every day they are offline. Yet they don't want to pay for a $40 roll-over backup. THESE are the vast majority of customers who complain so much about 99.999% uptime.
On another note, I think any claim of 99.999% on POTS is anecdotal. Growing up, I had my power cut out at least twice a year, and the phone system was hardly 99.999%. Trees fall on lines, and people cut buried lines for all sorts of accidental reasons. Just as you insure anything of sufficient value, and just as you back up data in multiple locations, you need a fallback plan for your ISP going out if it means that much to you.
Re: (Score:3, Insightful)
Thousands of dollars per day? That's all? I work for a web hosting company. When one of our customers' servers goes down for more than 10 minutes, they immediately claim to be losing
Because it's not worth the enormous cost! (Score:3, Insightful)
99.999% uptime is orders of magnitude more expensive than 99.99%, which in turn is orders of magnitude more expensive than 99.9% uptime, and so on.
The added cost is simply not worth it, in any sense of the word, to the ge
Re: (Score:3, Interesting)
[1] If so, let me forward
At what price? (Score:5, Insightful)
No. They believe it is the best the technology can provide at a given price. Why do people "put up" with cars that only give them X amount of protection in a car crash even though there is technology out there that would make them safer? Because they aren't willing to pay the marginal cost for the extra protection. Arguing about what is possible with technology is pointless. What matters is what a piece of technology can do at a given price.
Everything is a trade-off. The sooner Slashdot learns this, the fewer of these stupid "Why don't consumers use the latest, greatest, most expensive technology? We need to force them somehow!" articles we will have.
Statutory liability for software defects (Score:3, Insightful)
They believe it is the best the technology can provide at a given price. Why do people "put up" with cars that only give them X amount of protection in a car crash even though there is technology out there that would make them safer? Because they aren't willing to pay the marginal cost for the extra protection.
This reminds me of why Bruce Schneier's dream of legislating liability for software defects is misguided. Sure, statutory liability would make software more reliable, but it would mean that the many who don't need the additional reliability (and currently aren't willing to pay for it) would be forced to subsidize the handful who do. It would also likely claim volunteer-developed software as a casualty.
Re: (Score:3, Insightful)
My company offers up to gigabit fiber optic in the city. As you get more into the country areas, you're outside our service coverage, and no ISP will offer that withou
Re: (Score:3, Insightful)
Re:because they've been conditioned (Score:5, Interesting)
Re:because they've been conditioned (Score:5, Insightful)
One time I had an opportunity to visit Microsoft and have lunch with a friend there. I figured while there I'd take the opportunity. I asked them in hushed tones, "Just how do you configure Windows so that you don't have to reboot it all of the time?" They looked at me like I was crazy.
In a certain sense.. you were crazy, at least at Microsoft.
The origins of an OS really show through a lot of the time. Windows started out as a single user OS, so rebooting was OK because the only person you messed up was the guy sitting in front of the screen. It eventually evolved into a multi-user OS, but the "just reboot!" mentality persists to this day.
Linux/Unix on the other hand started out life as a multi-user OS. Rebooting was a big no-no, because you'd affect countless people logged in, and you'd get yelled at for ruining someone's work.
It's funny the attitude that comes from the users of each OS. Windows administrators categorically will try rebooting the damn thing first to fix any problem (and it usually works). Linux administrators will only try this as a last resort (and it almost never works).
Anyway, at Microsoft the idea that you can somehow tweak Windows just right so rebooting isn't necessary is crazy. They designed the damn thing so "just reboot!" will fix any problem. This of course is an unacceptable solution to a lot of people out there, but for a lot of people it's obviously reality.
Re: (Score:3, Informative)
I think that about 80% of the time I will know beforehand if rebooting a Linux system is going to solve a particular problem. But even if I'm convinced that a reboot would solve the problem, I usually spend some time looking for a solution that does not involve rebooting. There are multiple reasons why I look for other solutions. Sometimes a reboot is inconvenient because of all the programs that have to be shut down and st
Re:because they've been conditioned (Score:5, Interesting)
The origins of an OS really show through a lot of the time. Windows started out as a single user OS, so rebooting was OK because the only person you messed up was the guy sitting in front of the screen. It eventually evolved into a multi-user OS, but the "just reboot!" mentality persists to this day.
Windows NT (ie: contemporary Windows) has been a multiuser OS since its first release.
The reason the "just reboot" mentality persists is simply because 99% of the time it *is* used as a single-user OS, and no-one else is impacted. This has _zero_ to do with the architecture and everything to do with the user. Linux would be (and is) treated in the same way in similar situations.
Linux/Unix on the other hand started out life as a multi-user OS. Rebooting was a big no-no, because you'd affect countless people logged in, and you'd get yelled at for ruining someone's work.
UNIX actually started out as a single-user OS and the multiuser aspect was bolted on later. Linux didn't, of course, because by the time Linus banged together his UNIX rip-off, UNIX had been multiuser for quite a while.
However, again, the attitudes of their respective users towards servers and workstations have about 10% to do with their architectures and 90% to do with their knowledge. DOS and OS/2 were single user, yet frequently had BBSes and similar running off them. You can be assured the people running those BBSes were far less likely to have the "just reboot" mentality.
Further, the other reason most people have that attitude is because to them a computer is just another appliance. When other appliances act up, pretty much the first thing _everybody_ does is turn it off and back on again. Why on Earth would you expect them to treat a computer any differently ?
Windows administrators categorically will try rebooting the damn thing first to fix any problem (and it usually works). Linux administrators will only try this as a last resort (and it almost never works).
No. Inexperienced admins will try rebooting first, regardless of platform. Experienced admins will not. Incidentally, there are numerous classes of problems on Linux (and UNIX in general) which are more quickly and easily "fixed" with a reboot.
Anyway, at Microsoft the idea that you can somehow tweak windows just right so rebooting isn't necessary is crazy.
I can't even remember the last time I had to reboot any of my Windows machines without a good reason (eg: patching).
Finally, there's nothing wrong with rebooting _anyway_. If your service uptime requirements are affected by a single machine rebooting, your architecture is broken. All the reboot does is demonstrate that it's broken without a real problem actually occurring.
Sysadmins comparing machine uptimes is like ricers comparing spoilers.
Re: (Score:3, Informative)
This has _zero_ to do with the architecture and everything to do with the user. Linux would be (and is) treated in the same way in similar situations.
This is simply not true. Anyone that's ever installed software, or run "windows update" knows that rebooting is a very likely part of this process. The dependencies and non-modular approach of Windows are quite apparent. Software vendors say "just reboot" because of all the complexities and dependencies within windows.
The same simply isn't true for Linux.
Re:because they've been conditioned (Score:5, Insightful)
This is simply not true.
Yes, it is. People who use Windows, when using Linux, are going to respond exactly the same way to problems - by rebooting.
Anyone that's ever installed software, or run "windows update" knows that rebooting is a very likely part of this process. The dependencies and non-modular approach of Windows are quite apparent. Software vendors say "just reboot" because of all the complexities and dependencies within windows.
No, they do it because it's a simple step for the ignorant end user to understand.
The same simply isn't true for Linux. Replace a critical shared library? No problem, running programs still have a hook to the old version. Any new process that starts will get the new version of the library. Why reload the whole damn OS when restarting a process will do the same thing?
Because for people who don't know that, it's easier to say reboot.
You are conflating knowledgable end users with typical end users. This is at best naive and at worst deliberately deceptive.
You're trying to tell me with a straight face that the BBS market influenced Microsoft? (Which flies in the face of what we've all experienced with Windows).
No, I'm telling you that a random individual's attitude towards rebooting is going to be vastly more influenced by their skill level and what they're using their computer for than by the OS it runs.
No, the reason people have this attitude is because it freaking works.
Exactly. Now, again, why do you think they're going to treat computers any differently ?
I've been administrating Linux machines for 13+ years. I can count on one hand the number of times a reboot solved any problem. The only class of problem this solved is a kernel bug, or the kernel crashing (usually from a hardware problem).
Not done much work with NFS then, I take it ? Or services that have long timeout periods and don't die nicely ?
I struggle to believe anyone has been using Linux for "13+ years" and can only "count on one hand the number of times a reboot solved any problem". Either you've not used Linux anything close to "13+ years" or you've not used it in a very wide range of situations.
Why would anyone reboot without a "good reason"?
The fact that you even need to ask disqualifies you from any useful input to this discussion. Fucking hell. People hit the reset button on their PCs because the monitor power-saving kicked in, and for dozens of other reasons that aren't even that good.
The point is that Linux simply has fewer "good reasons", and requires fewer reboots. Linux requires FAR fewer reboots for "patching".
Linux also makes a lot more assumptions about its users (and "users" in this sense reaches from Grandma to software developers).
Wow. Now I know you've really drunk the Microsoft kool-aid. Not everyone can afford multiple-machine redundancy just to fix the endemic problems of Microsoft, who advocate "Just reboot!" to fix so many problems. There's really no reason why I need to reboot just to update what's essentially some new versions of DLLs. The Microsoft architecture is essentially broken if you have to buy another damn machine for the SOLE purpose of maintaining high availability.
Yeah, like I thought. "13+ years" and 12 of those were probably using it on your home PC.
The only meaningful difference between a "reboot" and a hardware failure is the amount of warning. I'll say it again. If your business continuity is vulnerable to individual machine outages (be they from reboots or motherboards going up in smoke), then it's broken. Period. If you can't afford "multiple machine redundancy" then you don't need 24/7 uptime. If you don't need 24/7 uptime, then either scheduled machine reboots (eg: for patching) are irrelevant, or brief outages are acceptable.
Any sysadmin who thinks he can run a high-availability operation without multiple machine redundancy is incompetent. Any sysadmin who is purporting to do so, is grossly negligent. The fact that there's a hell of a lot of people whose Linux (and UNIX in general) bias puts them into these categories, does not make them any less incompetent or negligent.
Re: (Score:3, Insightful)
Amen. Hoping for a long, stable uptime on a Linux machine that does very intensive and sustained NFS I/O proved to be a pipe dream for me. Things did get much better after applying the plethora of nfs.org patches, but you still get some awesome kernel failures.
But I don't care, because I have many machines accessing the NFS mount (mailboxes, btw). I lose one, and keep on ticking. If I lose o
Re: (Score:3, Interesting)
It's funny the attitude that comes from the users of each OS. Windows administrators categorically will try rebooting the damn thing first to fix any problem (and it usually works). Linux administrators will only try this as a last resort (and it almost never works).
It's even less than a last resort. I have, once or twice, had true problems that required a reboot of a Linux machine to fix. The one in most recent memory, it took three weeks before realizing that a reboot was (or at least, could be) the s
Re: (Score:3, Interesting)
No, it should be viewed as fumigating your house. You all move out, wait a few days, then move back in. When you reboot you don't lose the computer, you don't lose the archived data, and all the users can return in a short amount of time.
Burning down your house loses all the contents and ensures you'll never return...
Re:because they've been conditioned (Score:5, Insightful)
That's three weeks of hard core debugging, tweaking, and hair pulling.
The fact that you were able to wait *three weeks* demonstrates that the problem was, at most, insignificant.
When thousands of dollars (or more) are being lost every minute that a service is unavailable[0], you don't fuck around with idiotic philosophising about how "it's UNIX, I shouldn't need to reboot for anything"[1], you just DO IT.
[0] We shall ignore here for a minute the false economy of not just investing in a properly redundant architecture where individual machine outages do not impact availability.
[1] I've been there myself and had arguments with my (at the time) boss about it. It is the difference between how geeks think and how businesspeople think. The geek is interested in figuring out wtf is wrong. The businessman is interested in whether or not his business is still operating.
Re: (Score:3, Funny)
I'm not saying Windows is better, but the above means you don't have to work a lot with NFS clients on Linux...
Very true.
I consider NFS to be the devil. If given the choice, I'll choose a different protocol every time.
Re: (Score:3, Interesting)
Frequent reboots haven't been required since win2k.
(snicker)
I've been running Windows for years, and this statement is just very funny to me. You must be running some entirely different, magical version of Windows than I've ever seen, but reboots are EXTREMELY common on 2000, XP, and Vista. The "just reboot" instinct I've seen from multiple different Windows guys is common, and DOES work. I was looking forward to Vista, which claimed it didn't require rebooting as often. That didn't really turn out to be
Re:because they've been conditioned (Score:4, Funny)
Re:because they've been conditioned (Score:5, Insightful)
No way... (Score:5, Insightful)
I've been in the technology services business for a long time, and with few exceptions, 80%+ of customers want their services delivered as cheaply as possible. Most hospital systems don't even have a 99.999% availability requirement. The 20% that want varying levels of higher-than-normal availability usually have a government regulation, SLA or other mandate requiring that they do so.
Re: (Score:3, Interesting)
Oh Zonk (Score:4, Funny)
The way it has always been (Score:3, Insightful)
Re:The way it has always been (Score:5, Interesting)
In truth, most consumers won't complain when they should, so there is no marketplace pressure on those businesses to aim for five nines uptime.
Re: (Score:3, Informative)
Having been on the other end of these types of calls, this sort of thing can be *very* annoying. People do call all of the time with the expectation that because they do five or six thousand dollars worth of business in a day, the ISP is somehow responsible for those thousands of dollars when some idiot Verizon contractor accidentally cuts our ca
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re:The way it has always been (Score:4, Interesting)
Back in the day, I was an IRC Operator for a large Undernet server, and there came a time when the new thing for troublemakers was to use open proxies on cable connections to flood channels/servers. One cable provider had a particularly large number of clients whose setups were used to attack the network and generally cause trouble.
At first, being in the area of that provider, I called tech support and escalated the issue as much as I could. My point was that they were ultimately responsible for the abuse coming from their network. Long story short, for months I got nothing but "we'll look into it".
After a particularly nasty week, and after consulting with the server admins, we decided to ban the whole IP range of that provider from using our server (they could still use the rest of Undernet, but our server was popular with them). The ban kicked > 1000 clients from the server with a message like: Your provider does not respond to abuse complaints. Contact your provider's technical support to have this issue resolved.
10 minutes later, there was a 30-minute wait at the provider's tech line. On a Sunday afternoon. One hour later, I got an email saying they were blocking inbound port 1080 at their router to protect their clients' machines from being abused.
I guess the point is, when something generates enough backlash, preferably with a nice surprise effect, things can change. The hard thing is to organize people enough to harass the company about it.
The cost (Score:5, Interesting)
Re:The cost (Score:5, Informative)
Re: (Score:3, Insightful)
Simply put, because the equipment isn't/hasn't been able to support it; the only way to build 5 9's or better has been to add more equipment, which increases operations costs, capital costs, etc. across the board in an almost linear fashion.
The market has for the most part established the level of service available by establishing the price point the customer is willing to pay for said service.
People love
Bingo (Score:5, Insightful)
Re:The cost (Score:5, Interesting)
We always want to compare service levels for newer tech with POTS and complain when they don't approach the same levels, but I'd expect that if we were to be still using the same equipment for ISP/Cellular service in a hundred years, it would be as stable and robust as the current (ok, previous generation) iteration of POTS. Problem is, we are constantly demanding better, faster, and cheaper: this has to be traded off for reliability, and for the most part people are happy with that tradeoff. Just like we're happy to buy crappy consumer goods from China at Wal*Mart because they're cheaper than domestic products.
Re: (Score:3, Interesting)
I would have settled for only 99% from Comcast. The fact that the cable modem was only ~70% reliable is just embarrassing; to this day I cringe when I hear that people are relying upon Comcast for emergency calls. It would be out for hours every day, and we did ditch them for DSL.
YMMV, but I use my residential Comcast connection as a backup monitor for a server I administer. Every 10 minutes my home machine (which is running 24/7) pings the server and waits for a response - it's a cheap way of tracking the server's availability, although of course it's really checking the availability of both the server and my home internet connection. I just checked the records for all of February, and it only recorded one failed attempt for the entire month, which translates to a success ra
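(For the curious, here is a minimal sketch of that kind of cheap availability monitor in Python. The host name, port, log path and check interval are made up for illustration, and a TCP connect stands in for the ping; adapt to your own setup.)

```python
# Minimal sketch of a cheap availability monitor, as described above.
# HOST, PORT, LOG and INTERVAL are hypothetical placeholders.
import socket
import time

HOST, PORT = "server.example.com", 80   # hypothetical server to watch
LOG = "uptime.log"
INTERVAL = 600                           # seconds between checks (10 minutes)

def check_once(host, port, timeout=10):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    ok = check_once(HOST, PORT)
    # Append a timestamped up/down record; tally the log later for a success rate.
    with open(LOG, "a") as f:
        f.write("%d %s\n" % (time.time(), "up" if ok else "DOWN"))
    time.sleep(INTERVAL)
```

Of course, as the parent notes, a failed check only tells you that the server *or* your own connection was down, not which one.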
the simple answer - we have more options... (Score:3, Insightful)
Basically, we don't rely so much on any single system, so a brief outage can be tolerated because there are alternatives to choose from.
This is also the basis of Clayton Christensen's theories on disruptive innovation - that a consumer of something (technology, etc.) is willing to trade off some of these aspects, like reliability, for cost or performance benefits (however you wish to define those benefits...).
true for blackberry too (Score:3, Insightful)
Costs increase geometrically (Score:2)
Re:Costs increase geometrically (Score:5, Informative)
This
Uptime (%)   Downtime (per year)
90%          876 hours (36.5 days)
95%          438 hours (18.25 days)
99%          87.6 hours (3.65 days)
99.9%        8.76 hours
99.99%       52.56 minutes
99.999%      5.256 minutes
99.9999%     31.536 seconds
I work for a software shop where we can do high availability, but more often than not, folks choose to lower the uptime expectation rather than pony up for the stupid money it takes to have the hardware / software / infrastructure to get there. Most companies know the customer will not pay the extra cash for the uptime, thus... you get what you pay for.
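(The figures in the table are just arithmetic on the fraction of a year you're allowed to be down. A throwaway Python check that reproduces them:)

```python
# Allowed downtime per year for a given availability target;
# reproduces the figures in the table above.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

for availability in (0.90, 0.95, 0.99, 0.999, 0.9999, 0.99999, 0.999999):
    downtime_s = (1 - availability) * SECONDS_PER_YEAR
    print("%9.4f%%  ->  %12.3f seconds  (%8.2f hours) of downtime per year"
          % (availability * 100, downtime_s, downtime_s / 3600))
```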
Even Simpler... (Score:4, Insightful)
99.999% over a year is about 5 minutes and 15 seconds of downtime.
No matter how good your staff, no matter how many people you have on site, no matter how robust your systems, no matter how many failsafes you have standing by, ready to be plugged in...
IF something does go down, even the fastest tech on earth is unlikely to identify, pull out, replace and have fired back up whatever the faulty item is inside that five-minute window.
99.999% uptime is essentially fictional. It's simply an impressive sounding number that says, "We'll do everything realistically possible to keep you up 100% of the time. In a typical year, you won't see anything bring you down. You can now tell your investors/clients this and make them feel warm and fuzzy."
It ignores the second part, "But, honestly, if it does go down, we won't have it back within five minutes, 100% of the time. Sorry, but welcome to reality. But, for what it's worth, our board's happy to pay you outage fees because it's a small enough risk and the amounts are capped enough, that we're happy to take the risk and costs in exchange for advertising a service we know no one can deliver."
Let's look at regulated phone service, the example in the original post. Can anyone point to a major carrier that hasn't had a major outage at some point? Be it an idiot in a switch room, a power outage affecting a whole side of the country, an anchor ripping up an undersea cable? And how many of them have actually been back within that five-minute budget?
It doesn't happen. That two-hour outage is going to take more than two decades of absolutely no further faults to earn back at five minutes a year. With luck, it only hit one in 250 customers, so you can pretend you're well within your 99.999% uptime, but that 1 in 250 isn't really going to agree they got 99.999% after they were down for nearly two hours more than their contract said they would be.
So, no, 99.999% doesn't exist. It's just a really cool story we tell ourselves whilst being willing to pay whatever the penalties are for missing it, on rare occasions, in exchange for great advertising.
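(The "earn back" arithmetic is easy to check. A back-of-the-envelope sketch, using the two-hour outage from this comment as the example:)

```python
# How many years of *zero* further downtime it takes to "earn back"
# a single outage under a given uptime target.
def years_to_earn_back(outage_seconds, availability=0.99999):
    budget_per_year = (1 - availability) * 365 * 24 * 3600  # allowed downtime, seconds/year
    return outage_seconds / budget_per_year

print(years_to_earn_back(2 * 3600))             # two-hour outage at five nines: ~22.8 years
print(years_to_earn_back(2 * 3600, 0.999999))   # same outage at six nines: ~228 years
```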
No, it does exist (Score:3, Informative)
The problem is, of course, going for that can be really expensive. Not only does the system itself have to have a bunch of redundancy, but so does everything supporting it. For example in the case of a web server you'd not only
Re: (Score:3, Insightful)
Re:Costs increase geometrically (Score:5, Funny)
Here's an easy one. (Score:5, Insightful)
Re:Here's an easy one. (Score:5, Interesting)
Re: (Score:3, Insightful)
Re: (Score:3, Funny)
Low price or high-quality? (Score:5, Insightful)
We're not talking about software, we're talking about hardware and man-hours. Those will never be free.
because it's ridiculous (Score:3)
Whenever people talk about 99.999% uptime for a service delivered over the internet, I laugh in their faces.
Re: (Score:3, Informative)
Clearly, a ridiculous number.
Re: (Score:3, Insightful)
On the other hand, if google was unavailable for 9.863 seconds per day, every day (which is the equivalent of 1 hour per year), who would care? Just resubmit your query.
What's important about reliability is often not the total downtime but the duration of downtime.
More physics in action. (Score:4, Funny)
I can tell you why... (Score:2)
It's a market-wide problem. (Score:2, Informative)
As consumers, we're made to feel helpless. The worst we can do (without litigation) to a company is complain or refuse to use their services, but what harm can that do to a giant conglomerate? And in situations in which one company has a monopoly in a certain area of the country, for example, consumers may not have the ability to switch or do without.
As a personal example, Comcast owes me a refund check for Internet services I canceled six months ago. If I, as a consumer, had allowed my debt to go unpaid
Re: (Score:3, Insightful)
It varies by state, but usually it costs 15 dollars to take a company to court, and no lawyers are required. It is generally quick and painless, and people at your local courthouse can fill you in on the details and help you through the proces
Because we are patient (Score:2)
Really so common? (Score:5, Interesting)
Re: (Score:3, Insightful)
Hint: Just because you live somewhere without such problems does not mean they don't exist. Ditto for lost e-mail.
Re: (Score:3, Interesting)
because 'misson critical' is a myth (Score:5, Insightful)
Because it's not necessary? (Score:4, Informative)
The telephone as we know it was the first genuinely instantaneous, worldwide communications medium that anyone could use; it was seen as a necessary component for national security during the Cold War, and was built out as such. We've had over a century to perfect it, and vast amounts of money were spent doing so. Despite its origins at DARPA, the Internet as we know it today, although more useful, is by and large less of a basic need, is far more complex, and large portions of it are still built on top of the telephone infrastructure, besides.
I can't help but think that most people understand this sort of thing, and understand that bringing such modern conveniences up to five nines of reliability is difficult and expensive, and people have evidently decided that a certain tradeoff to make such things affordable isn't out of line.
The shorter, more pessimistic version of this is probably, "It's cheaper to suck."
You don't have to take it anymore (Score:5, Insightful)
When Microsoft decided that I didn't own the rights to my own media and stopped me from being able to copy my own DVDs, I decided to drop them for my media development system and I switched to Linux and Apple. Microsoft doesn't want my business so I went with the people who do. No problem.
When my Long Distance company decided to charge over $1.00 per minute for International calls, I switched to AT&T and their 17 cents a minute program. No problem.
When my Frigidaire washer charged extra for the warm water cycle but only gave me 5 seconds of hot water, and thus never any, it was no problem to return the unit and buy a different brand. Sure, the salesman wasn't happy, but that is now his problem and not mine. I bought a different brand that did give me what they advertised and promised. No problem.
The list is endless and across all businesses and domains.
The point is that there are alternatives, but many (or most) people are either too lazy to do anything about it or, as this article suggests, too apathetic to do anything about it.
The choice is up to the consumer, and if the consumer would take action, the industry would have to adapt because the market demands it. So far, the market is willing to accept this, and thus the industry sees no reason to change. The less the consumer will accept for their dollar, the less they will receive. That is the problem.
It's simple confusion (Score:4, Funny)
O RLY? (Score:5, Insightful)
Just one point ... (Score:4, Informative)
Of course, when you don't have transmitters with overlapping coverage, this doesn't work.
Here's your citation about email (Score:3, Interesting)
[citation needed] I call bullshit on that one.
And I call BS on your BS. Clearly you're not familiar with the state-of-the-art as far as email goes. You've certainly not had to set up and run a private email server.
Here's one good reference [realfreewebsites.com]. It mostly mirrors my experience, except that it's been going on longer than the writer has observed.
The basic problem is that Yahoo, Hotmail, ATT and other large email providers, or ISPs, simply r
Gas Prices? (Score:3, Interesting)
WTF would I do with 99.999% uptime? (Score:5, Interesting)
My consumer grade equipment isn't 99.999% uptime (with luck, maybe I guess but there's no ECC, redundant power etc).
My software isn't 99.999% uptime (ok, so the kernel is stable. When X crashes, so does everything of importance on a desktop)
If there's something urgent, you CALL me anyway.
I'd rather take a line with 99.5% uptime (that's two days without internet per year) that's 10x faster and costs 10x less. Which doesn't include that I have Internet at work, or via my cellphone, or via a webcafe or any number of other easily available sources. The only real killer I can think of is if you only telecommute and can't go to work, but even then I figure the nearest Starbucks will let you occupy a corner with some purchases.
Reality check (Score:3, Informative)
Let's look at the numbers: 99.9% uptime translates to about 9 hours of unscheduled downtime a year. That can be one 9-hour block once a year, about a minute and a half per day, 3.6 seconds per hour, 60 milliseconds per minute, or one dropped packet per thousand. Sure, it's easy to spot a 9-hour blackout, but as the slices of downtime get thinner, they get harder to notice at all, or to identify as USD specifically.
99.999% uptime translates to about 5 minutes of USD per year, and is of questionable value. You can't identify a network outage, call in a complaint, and get the issue resolved in the given timeframe. 99.9999%? It is to laugh. You can't even look up the tech support phone number without blowing your downtime budget for the year. Get hit by a rolling blackout for an hour? Kiss your downtime budget goodbye for the next 120 years.
Getting back to 99.9% uptime, let's move on to standard utilization patterns. USD really only becomes an issue if people notice it
If we have 2 seconds of usage and 2 seconds of downtime per minute, the odds of a collision are around 15:1 with an average overlap of 1 second when a collision does happen. Simply interleaving usage and downtime that way increases the perceived uptime by an order of magnitude since 90% of the outages happen when no one is actually using the network. And larger blocks of downtime get lost in larger blocks of non-utilization exactly the same way.
Granted, if you have higher utilization you'll have a better chance of hitting a chunk of downtime, but you'll also have higher chances of queuing latency within your own use patterns. If you're already using 99% of your bandwidth, you can't just plunk in one more job and expect it to run immediately. It has to wait for that 1% of space no one else is currently using. And when you get to that point, it's really time to consider buying a bigger pipe anyway.
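(Those collision odds are easy to sanity-check with a quick simulation, under the simplifying assumption that the 2-second usage burst and the 2-second outage each land uniformly at random within the minute:)

```python
# Monte Carlo check of the collision odds discussed above:
# one 2-second usage burst and one 2-second outage, placed uniformly
# at random within a 60-second minute. Purely illustrative assumptions.
import random

TRIALS, MINUTE, SLICE = 200_000, 60.0, 2.0
hits, total_overlap = 0, 0.0

for _ in range(TRIALS):
    use = random.uniform(0, MINUTE - SLICE)   # start of the usage burst
    out = random.uniform(0, MINUTE - SLICE)   # start of the outage
    overlap = min(use + SLICE, out + SLICE) - max(use, out)
    if overlap > 0:
        hits += 1
        total_overlap += overlap

p = hits / TRIALS
print("collision odds: about %.1f : 1 against" % ((1 - p) / p))
print("average overlap when they do collide: %.2f s" % (total_overlap / hits))
```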
And that brings us to the main point: People don't buy network connectivity in absolute terms. They buy capacity, and the capacity they buy is scaled to what they think of as acceptable peak usage. "Acceptable peak usage" is a subjective thing, and nobody makes subjective judgements with 99.999% precision.
Because they're cheap and unobtrusive (Score:3, Insightful)
Here's Why America Puts Up with It (Score:3, Interesting)
I'm at home (and coherent) about 20% of the time. My landline is up 99.999%, meaning my phone is available to me when I need it 19.998% of the time.
I'm out and about (and coherent) 40% of the time.
My cell phone works 90% of the time meaning it is available to me when I need it 36% of the time.
Clear winner, cell phone.
Sometimes we lose sight of reality while studying statistics.
On redundancy (Score:4, Informative)
In the entire history of electromechanical switching in the Bell System, no central office was ever out of action for more than thirty minutes for any reason other than a natural disaster. On the other hand, step-by-step (Strowger) switches failed to connect about 1% of calls correctly, and crossbar reduced that to about 0.1%. With electronic switching, the failure rate is higher but the error rate is much lower.
This reflects the fact that, in the electromechanical era, the hardware reliability was low enough that the system had to be designed to have a higher reliability than any of its individual units. In the computer era, the component reliability is so high that good error rates can be achieved without redundancy. This is why computer-based networks tend to have common mode failures.
If you're involved in designing highly reliable systems, it's worth understanding how Number 5 Crossbar worked. Here's an oversimplified version.
The biggest components of Number 5 Crossbar were the crossbar switches themselves. Think of them as 10x10 matrices of contacts which could be X/Y addressed and set or cleared. Failure of one crossbar switch could take down only a few lines, and they usually failed one row or column at a time, taking down at most one line.
The crossbars had no smarts of their own; they were told what to do by "markers", the smart part of the central office. Each marker could set up or tear down a call in about 100ms. Markers were duplicated, with half of the marker checking the other half. If the halves disagreed, the transaction aborted. Each central office had multiple markers (not that many, maybe ten in an office with 10,000 lines), and markers were assigned randomly to process calls.
When a phone went off hook, a marker was notified, and set up a "call" to some free "originating register", the unit that understood dial pulses and provided dial tone. The marker was then released, while the user dialed. The originating register received the input dial info, and when its logic detected a complete number, it requested a random marker, and sent the number. The marker set up the call, set and locked in the correct contacts in the crossbars, and was released to do other work.
If the marker failed to set up the call successfully (there was a timeout around 500ms), the originating register got back a fail, and retried, once. One retry is a huge win; if there's a 1% fail rate on the first try, there's a 0.01% fail rate with two tries. This little trick alone made crossbar systems appear very reliable. There's much to be said for doing one retry on anything which might fail transiently. If the second try also fails, unit-level retry as a strategy probably isn't working and the problem needs to be kicked up a level.
The pattern of requesting resources from a pool at random was continued throughout the system. Trunks (to other central offices), senders (for sending call data to the next switch), translators (for converting phone numbers into routes), billing punches (for logging call data), and trouble punches (for logging faults) were all assigned on a random, or in some cases a cyclic rotation basis. Units that were busy, faulted, or physically removed for maintenance were just skipped.
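(A toy Python sketch of that pattern: random assignment from a pool, skipping busy or faulted units, with exactly one retry before escalating. The names and the 1% failure rate are illustrative, not taken from the actual No. 5 Crossbar.)

```python
# Toy sketch of the resource pattern described above: pick a unit at random
# from a pool, skip anything marked faulted, retry a failed transaction
# exactly once, and escalate if the retry also fails.
# All names and failure rates here are illustrative assumptions.
import random

class Unit:
    def __init__(self, name, fail_rate=0.01):
        self.name, self.fail_rate, self.faulted = name, fail_rate, False

    def try_setup(self):
        """Attempt one call setup; True on success, False on a transient failure."""
        return random.random() > self.fail_rate

def setup_call(pool):
    candidates = [u for u in pool if not u.faulted]   # faulted/removed units are just skipped
    for attempt in range(2):                          # first try plus one retry
        marker = random.choice(candidates)            # random assignment, not "switch to backup"
        if marker.try_setup():
            return marker.name
    raise RuntimeError("escalate: retry failed, kick the problem up a level")

pool = [Unit("marker-%d" % i) for i in range(10)]
pool[3].faulted = True          # a unit pulled for maintenance costs capacity, not the system
print(setup_call(pool))
```

With a 1% per-try failure rate, the single retry drops the effective failure rate to roughly 0.01%, which is the point made above.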
That's how the Bell System achieved such good reliability with devices that had moving parts.
Note that this isn't a "switch to backup" strategy. The distribution of work amongst units is part of normal operation, constantly being exercised. So handling a failure doesn't involve special cases. Failures cost you some system capacity, but don't take the whole system down.
We need more of that in the Internet. Some (not all) load balancers for web sites work like this. Some (but not all) packet switches work like this. Think about how you can use that pattern in your own work. It worked for more than half a century for the Bell System.
Not So Simple (Score:5, Insightful)
Re: (Score:3, Informative)
It doesn't mean that each and every individual phone will be up 99.999 percent of the time, it means that the system as a whole will be up 99.999% of the time.
It's quite possible for an entire town to be down for an entire year and still meet this criterion.
Yet modern cell operators STILL can not come close.
Re: (Score:3, Insightful)
That being said, there is a market for 99.999. Upper-middle class and higher would pay for it.
Um, no. The thing that got the upper class to where they are is either a) dumb luck or b) an ability to distinguish which costs are unnecessary and avoid them. A savvy spender doesn't give a damn whether the cell will not get a signal for 50 minutes during the year instead of five minutes, if the costs he will incur are double. A savvy spender determines what he needs and then finds the most cost-effective solution that will fit his needs.