Communications The Internet

Why Is Less Than 99.9% Uptime Acceptable? 528

Ian Lamont writes "Telcos, ISPs, mobile phone companies and other communication service providers are known for their complex pricing plans and creative attempts to give less for more. But Larry Borsato asks why we as customers are willing to put up with anything less than 99.999% uptime. That's the gold standard, and one that we are used to thanks to regulated telephone service. When it comes to mobile phone service, cable TV, and Internet access, service interruptions are the norm — and everyone seems willing to grin and bear it: 'We're so used cable and satellite television reception problems that we don't even notice them anymore. We know that many of our emails never reach their destination. Mobile phone companies compare who has the fewest dropped calls (after decades of mobile phones, why do we even still have dropped calls?) And the ubiquitous BlackBerry, which is a mission-critical device for millions, has experienced mass outages several times this month. All of these services are unregulated, which means there are no demands on reliability, other than what the marketplace demands.' So here's the question for you: Why does the marketplace demand so little when it comes to these services?"
  • by yagu ( 721525 ) * <yayagu@[ ]il.com ['gma' in gap]> on Sunday March 02, 2008 @04:22PM (#22617178) Journal

    Why does the marketplace demand so little when it comes to these services?

    The marketplace has been duped into believing that this is the best technology can provide. People don't have time to know, understand, or research history and find that technology really can be reliable.

    I'll get modded troll, but I lay much of this at Microsoft's feet. I laughed them off when I first heard of them and their goal of taking over the industry. After all, I'd been working on systems that ran 24x7 with five-9 reliability for years, and DOS/Windows couldn't touch that.

    One time I had an opportunity to visit Microsoft and have lunch with a friend there. I figured while there I'd take the opportunity. I asked them in hushed tones, "Just how do you configure Windows so that you don't have to reboot it all of the time?" They looked at me like I was crazy.

    Technology can provide reliability. The general public is no longer even aware that it's possible.

    • by The Ancients ( 626689 ) on Sunday March 02, 2008 @04:27PM (#22617228) Homepage
      The reasons why Microsoft were so successful (in a business sense) are manifold, but one is not that their products were great, but that they were good enough. They accurately measured what people would put up with at different price points, and serviced the market accordingly. I think ISPs, telcos, etc have done likewise.
      • Partly correct (Score:3, Interesting)

        by unixfan ( 571579 )
Partly correct. What they did was to mass-introduce the GUI. 1.0 was a joke as far as usability went. At the same time the 386 was out, and talk of multiprocessing was promising new and exciting computing in the near future.

I don't think they measured squat. Just did their best. The only thing was that there was nobody who could properly design an O/S, and complexity, instead of simplicity, ruled the day.

What we are seeing is the very best they as a group are able to produce.

        They have never been great at mark
        • Re: (Score:3, Interesting)

> they didn't measure squat, they just did their best.

I don't agree with that, and there's an obvious process that demonstrates my point: beta. Microsoft could have released each product a year earlier, or a year later. Do it a year later and you have a more polished product, but users clamoring for an update in the meantime. Do it earlier and you have crap. At many times Microsoft had products in long betas and people were asking for a release, but Microsoft knew that releasing at that point would dama
      • Introducing the EULA (Score:5, Informative)

        by Mr Pippin ( 659094 ) on Sunday March 02, 2008 @06:57PM (#22618358)
Also, because the EULA came into existence, product warranties effectively vanished, as did the actions the consumer could take in court via product liability claims.

        After all, liability plays a large part in defining QA policies. If software companies were held to the same liability standards most product manufacturers face, I'd bet software development would be more of the engineering practice it should be.

To quote part of Microsoft's EULA for Windows XP:

        http://www.microsoft.com/windowsxp/home/eula.mspx [microsoft.com]
        ALSO, THERE IS NO WARRANTY OR CONDITION OF TITLE, QUIET ENJOYMENT, QUIET POSSESSION, CORRESPONDENCE TO DESCRIPTION OR NON-INFRINGEMENT WITH REGARD TO THE SOFTWARE.
      • by rmerry72 ( 934528 ) on Sunday March 02, 2008 @10:08PM (#22619480) Homepage

        The reasons why Microsoft were so successful (in a business sense) are manifold, but one is not that their products were great, but that they were good enough.

This I agree with whole-heartedly. It's a fundamental basis of a market-driven economy. Spending effort on things that are too good for the market wastes resources that could be spent elsewhere on items that the market (i.e. people) does want. Capitalism does not - and must not - build the best, merely the just barely good enough.

Most people don't give a crap about quality, and if they do then somebody else should pay for it. It's all about the latest and greatest bling and appearing to be better than your neighbours.

        So everything we have in our lives - every product, service, and system - is just good enough to work for most of the people most of the time and no more. Our transport largely gets people from A to B (eventually), our health system keeps most people alive a few years longer with not much discomfort, our communications work most of the time for most people in most places, and our politicians mostly look after us OK.

        Oh, and most of us do most of our work most of the time when we have to. And no more!

        • by tronbradia ( 961235 ) on Monday March 03, 2008 @04:23AM (#22621426)
Actually our health system has completely ballooning costs relative to other countries and is really more of an example of the opposite phenomenon, where insurance must pay for all possible treatment or be sued. Our system without a doubt provides the most care of any system in the world, even though it's pretty obvious that returns diminish dramatically after about 10% of GDP (we are at 15% of GDP; the runner-up is Switzerland at 11 or 12%). Returns diminish because, essentially, more care doesn't actually make people healthier past a certain point. 99% of people just need a GP (cheap), immunizations (dirt cheap), antibiotics when they get a bacterial infection (dirt cheap), and surgeons to sew them up when they get in a car crash (expensive-ish but hopefully uncommon and only rarely protracted). The problem is whenever anybody gets anything terminal, there's the potential for basically infinite spending, and the more successful treatment is, the more money goes in because treatment is prolonged. In this case our system is not "barely good enough"; it's more way too good, or at least, way too generous.
          • by Eivind ( 15695 ) <eivindorama@gmail.com> on Monday March 03, 2008 @05:14AM (#22621582) Homepage
            Everything you say is true, but it's actually even -worse- than that.

            It's not just that the returns are diminishing, they're -NEGATIVE-. It's not just that countries that spend 30-40% less on healthcare compared to USA have similar health and life-expectancy, several of them actually have significantly BETTER results for LESS money.

            The reason is basically what you state: Giving EXTREME healthcare to those who already have GOOD healthcare provides little if any benefit, but providing the BASICS to those who are lacking them is cheap and efficient.

So, the USA has very, very high spending for those who are "in", but falls quite far down the rankings because you fail to provide GOOD healthcare to everyone living in the USA. That's why you're not in the top 40 for any of the most-used healthcare indicators despite being the undisputed number one in spending.

Norway, for example, has healthcare similar to the USA's, not quite as extreme at the top mainly due to less panic about courts, but still comes out way ahead, because healthcare is truly universal.

Costs less, gives more health. What is not to like?
    • by Otter ( 3800 ) on Sunday March 02, 2008 @04:38PM (#22617330) Journal
      I'll get modded troll, but I lay much of this at Microsoft's feet.

      Truly, your courage is an inspiration to us all!

      In fact, though, I can tell you that in the pre-Windows days, electricity had outages, television had outages, telephone service had outages, gas service had outages... For the same reason we have them today -- people aren't willing to accept the economic and aesthetic costs of providing those services at the level of reliability you and the author are demanding.

Incidentally, is it most people's experience that "We're so used [sic] cable and satellite television reception problems that we don't even notice them anymore"? There were some glitches in a broadcast of Zoolander on TBS last weekend, which I'll admit is cause for complaint. (Especially since one wiped out "I feel like I'm taking crazy pills!") But on the whole, I can't say I've seen substantial problems when there wasn't a blizzard or hurricane, and if I'm forced to stop watching TV for an hour or two, it's not the end of the world.

      • Reality Check (Score:5, Interesting)

        by grcumb ( 781340 ) on Sunday March 02, 2008 @05:11PM (#22617608) Homepage Journal

        In fact, though, I can tell you that in the pre-Windows days, electricity had outages, television had outages, telephone service had outages, gas service had outages...

        I was born in 1964. I have no recollection of POTS telephone service ever being unavailable.

        Electricity was expected to drop out a few times every summer, and until someone figures out how to tell lightning where to go, I expect it will continue to happen. In my part of Canada, however, power is continuously available from October to April no matter what. Even if you don't pay your bill. The only winter power outage of note I can think of offhand was the great Ice Storm of 1998 [wikipedia.org], one of the most spectacular cases of force majeure I've witnessed in my life.

        In my part of the world, at least, power and telephone were life-and-death services and legislation mandated their reliability.

        • by PlusFiveTroll ( 754249 ) on Sunday March 02, 2008 @05:34PM (#22617756) Homepage
          I have no recollection of POTS telephone service ever being unavailable.

          Your neighbors evidently didn't own a backhoe. ;)

        • Re:Reality Check (Score:5, Interesting)

          by Beardo the Bearded ( 321478 ) on Sunday March 02, 2008 @07:21PM (#22618490)
The first responder radios they have in my city are being upgraded... to 97% uptime.

          First responders are police, paramedics, firefighters, etc. There was an incident about a year ago where two cops were being assaulted (and losing the fight) in a basement. Their radios were not working, so they couldn't call for backup.

          Luckily for them, a bystander called 911 on their cell phone.

Lucky for me, too, since I got called on the carpet for calling the reliability of the system into question. I probably would have been fired, but the above-mentioned incident was in the paper the morning of my "meeting".

          The new radios are controlled by internet-connected computers. As the Farkism goes, "this should end well."
          • Re: (Score:3, Interesting)

            by darkpixel2k ( 623900 )
Lucky for me, too, since I got called on the carpet for calling the reliability of the system into question. I probably would have been fired, but the above-mentioned incident was in the paper the morning of my "meeting".

            Unfortunately nobody seems to realize just how much money goes into one radio tower.
In my county we had decent coverage even though we were the second-largest county in the state and had the fewest people.

            We had three towers serving the entire area. Each one cost around $1
        • Re: (Score:3, Interesting)

          by Rick17JJ ( 744063 )

I have always had several telephone service failures per year, every year, for the last several decades, where I live here in Northern Arizona. First of all, when it rains, the telephone lines sometimes become wet and I lose my dial tone for a day or so. Then, when I call the telephone company, they usually say, if your telephone lines have not dried out and started working within 48 hours, we will send someone out then. Can't they figure out how to waterproof the phone lines and boxes and other

      • by cgenman ( 325138 ) on Sunday March 02, 2008 @05:53PM (#22617878) Homepage
        The server is up 99.99% of the time. The server's T1 is up 99.99% of the time. T1's ISP is up 99.99% of the time. The backbone provider is up 99.99% of the time. The cellular ISP is up 99.99% of the time. The cell-to-tower linkage is up 99.99% of the time...

Eventually, with all of these little points of failure, you're going to get a good-sized chunk of fail. Add in things like the inherent instability of wireless technologies and our nationwide problem with an aging electrical infrastructure, and you have the sorts of occasionally mildly inconveniencing issues that you see today.

        Right now it seems like the things users want to optimize most for are A: speed and B: cost. One day every other month where our home internet is down doesn't seem like the end of the world, especially with the cost of the alternative.
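A rough sketch of the parent's compounding math (Python; the component availabilities are invented for illustration, not measured):

```python
# Availability of a serial chain is the product of its links' availabilities.
components = {
    "server": 0.9999,
    "server's T1": 0.9999,
    "T1's ISP": 0.9999,
    "backbone provider": 0.9999,
    "cellular ISP": 0.9999,
    "cell-to-tower link": 0.9999,
}

total = 1.0
for availability in components.values():
    total *= availability

print(f"end-to-end availability: {total:.6f}")                # ~0.999400
print(f"expected downtime: {(1 - total) * 8760:.1f} h/year")  # ~5.3 hours
```

Six links at four nines each already land around 99.94% end to end, more than five hours of downtime a year, before anything unusual happens.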
        • Re: (Score:3, Informative)

          by Jurily ( 900488 )

          Keeping internet services online suffers from the problem of black swans. Nassim Taleb, who invented the term, defines it thus: "A black swan is an outlier, an event that lies beyond the realm of normal expectations." Almost all internet outages are unexpected unexpecteds: extremely low-probability outlying surprises. They're the kind of things that happen so rarely it doesn't even make sense to use normal statistical methods like "mean time between failure." What's the "mean time between catastrophic floods in New Orleans?"

          http://www.joelonsoftware.com/items/2008/01/22.html [joelonsoftware.com]
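For reference, here is the conventional arithmetic the parent says breaks down for black swans: availability is usually computed from mean time between failures (MTBF) and mean time to repair (MTTR). A minimal sketch with invented numbers:

```python
# Conventional model: availability = MTBF / (MTBF + MTTR).
# Fine for routine faults; a single rare catastrophe dominates it.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(f"{availability(10_000, 4):.5%}")  # ~99.96% for routine faults

decade = 10 * 365 * 24                   # hours in a decade
one_black_swan = 72                      # one 3-day outage in ten years
print(f"{(decade - one_black_swan) / decade:.5%}")  # ~99.92% from one event
```

One three-day outage per decade wipes out whatever the routine-fault model promised, which is the parent's point.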

        • One day every other month where our home internet is down doesn't seem like the end of the world
          Hell, it's a relief! We wander outside, blinking and squinting at the surprising brightness, experiencing strange yet nostalgic smells and sounds.
      • Re: (Score:3, Interesting)

        by Firethorn ( 177587 )
        people aren't willing to accept the economic and aesthetic costs of providing those services at the level of reliability you and the author are demanding.

        I have to agree.

I've stated before 'Every 9 of reliability increases the cost 10-fold'. Now, this is only the vaguest estimate, with vast numbers of variables, unseen incidents, competency, etc...

        Take a car that's 90% reliable. It'd be used, of course, and probably cost you only $100-500. You can get a car that's 99% reliable for $1-5k. 99.9% reliabili
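The parent's rule of thumb as a toy calculation (Python; the baseline cost is invented, following the used-car example above):

```python
# Rule of thumb from the parent: each extra nine multiplies cost ~10x.
def rough_cost(nines: int, base_cost: float = 100.0) -> float:
    """Ballpark cost for a given count of nines (90% = 1 nine)."""
    return base_cost * 10 ** (nines - 1)

for nines in range(1, 6):
    uptime = 100 * (1 - 10 ** -nines)
    print(f"{uptime:.3f}% reliable -> ~${rough_cost(nines):,.0f}")
```

With a $100 baseline this reproduces the ladder above: $100 at 90%, $1,000 at 99%, and so on up to $1,000,000 at five nines.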
      • Re: (Score:3, Interesting)

        by Hooya ( 518216 )
        > I can tell you that in the pre-Windows days...

and I can tell you that in the post-Windows days... well, people have this concept of rebooting when things don't work. "It will auto-magically fix itself" (tm). Cell phones, managed switches, home routers... you name it, the first thing tech support will do is ask you to "turn it off and on again". So much so that that is a standard gag in "The IT Crowd".

I had this incident in our data center where this nincompoop kept futzing around with a managed switch.
    • Re: (Score:3, Insightful)

      The marketplace has been duped into believing that this is the best technology can provide.

      I do not believe that this is the cause.

As is correctly noted above, there are only market pressures involved. When that's the case, customers rarely factor 7 or 8 different metrics (e.g. price, quality, reliability, etc.) into their decision making. Rather, they identify what they want, then find the cheapest supplier, and provided that there is no compelling reason to avoid the supplier, do the deal.

      This means that

      • by NevermindPhreak ( 568683 ) on Sunday March 02, 2008 @07:22PM (#22618492)
        I believe you are correct. The market isn't "conditioned" into thinking that anything less than five 9s is acceptable. They just don't want to pay the cost associated with it. The price/reliability ratio right now is the one that will satisfy the most customers. 99.999% reliability is harder to sell than 99.9% reliability at half the cost.

I work for a cable company, by the way. I design a lot of the building-out of our system, so I know the actual costs associated with creating that kind of reliability. Whenever someone needs that kind of reliability, I actually recommend getting a second ISP as a low-speed backup solution. It is the only smart way to get complete reliability, as pretty much any company advertising 99.999% reliability in this area is outright lying to the customer. (I know this from experience. I have switched customers over to our ISP from a week-long (or longer) outage of every ISP here, and there are quite a few.) Besides, a good router will split bandwidth between the ISPs so you're not paying for something you're not using (called "bonding").

        I still get amazed when people yell at me for being offline for a few hours after maybe 3, 4, 5 years of uptime. They say that they are losing thousands of dollars per day they are offline. Yet, they don't want to pay for a $40 roll-over backup. THESE are the vast majority of customers who complain so much about 99.999% uptime.

On another note, I think any claim of 99.999% on POTS is anecdotal. Growing up, I had my power cut out at least twice a year, and the phone system was hardly 99.999%. Trees fall on lines, and people cut buried lines for all sorts of accidental reasons. Just like you insure anything of enough value, just like you back up data in multiple locations, you need a fallback plan if your ISP goes out, if it means that much to you.
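The arithmetic behind the second-ISP recommendation (Python; the two availabilities are made up, and the calculation assumes the links fail independently, which shared last-mile infrastructure can violate):

```python
# Probability that at least one of several independent links is up.
def combined_availability(*links: float) -> float:
    p_all_down = 1.0
    for a in links:
        p_all_down *= 1 - a
    return 1 - p_all_down

primary, backup = 0.999, 0.99         # decent cable link + cheap DSL fallback
both = combined_availability(primary, backup)
print(f"combined: {both:.5%}")        # 99.99900% -- five nines from two
minutes = (1 - both) * 365 * 24 * 60  # ordinary links
print(f"~{minutes:.1f} min/year with both down")
```

Two unremarkable links reach five nines on paper, which is exactly why a $40 rollover backup beats paying for one "guaranteed" circuit.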
        • Re: (Score:3, Insightful)

          by Da Web Guru ( 215458 )
          I still get amazed when people yell at me for being offline for a few hours after maybe 3, 4, 5 years of uptime. They say that they are losing thousands of dollars per day they are offline. Yet, they don't want to pay for a $40 roll-over backup. THESE are the vast majority of customers who complain so much about 99.999% uptime.

          Thousands of dollars per day? That's all? I work for a web hosting company. When one of our customers' servers goes down for more than 10 minutes, they immediately claim to be losing
        • Thank you for bringing some sanity into this argument. Before you showed up it was dominated by idiotic hippies ranting about our mindless consumer-driven existence, the destruction of the environment, Microsoft, and just about everything else that has nothing to do with the issue at hand.

          99.999% uptime is orders of magnitude more expensive than 99.99%, which in turn is orders of magnitude more expensive than 99.9% uptime, and so on.

          The added cost is simply not worth it, in any sense of the word, to the ge
      • Re: (Score:3, Interesting)

        by griffjon ( 14945 )
I'd say it's an even deeper problem -- it's not really a marketplace. The competition is few and far between, and they're oligopolistic, and probably price-fixing. I mean, what's your alternative to a BlackBerry? So what if the service sucks -- is your employer going to ... buy you an iPhone? [1] If Verizon pisses me off, I can switch to... AT&T, or some of the others if I don't mind roaming? People would vote with their wallets if there were candidates worth switching to.

        [1] If so, let me forward
    • At what price? (Score:5, Insightful)

      by NEOtaku17 ( 679902 ) on Sunday March 02, 2008 @04:39PM (#22617338) Homepage
      "The marketplace has been duped into believing that this is the best technology can provide. People don't have time to know, understand, or research history and find that technology really can be reliable."

      No. They believe it is the best the technology can provide at a given price. Why do people "put up" with cars that only give them X amount of protection in a car crash even though there is technology out there that would make them safer? Because they aren't willing to pay the marginal cost for the extra protection. Arguing about what is possible with technology is pointless. What matters is what a piece of technology can do at a given price.

      Everything is a trade-off. The sooner Slashdot learns this the less we will have these stupid "Why don't consumers use the latest, greatest, most expensive technology? We need to force them somehow!" articles.
      • They believe it is the best the technology can provide at a given price. Why do people "put up" with cars that only give them X amount of protection in a car crash even though there is technology out there that would make them safer? Because they aren't willing to pay the marginal cost for the extra protection.

        This reminds me of why Bruce Schneier's dream of legislating liability for software defects is misguided. Sure, statutory liability would make software more reliable, but it would mean that the many who don't need the additional reliability (and currently aren't willing to pay for it) would be forced to subsidize the handful who do. It would also likely claim volunteer-developed software as a casualty.

    • by SailorSpork ( 1080153 ) on Sunday March 02, 2008 @04:47PM (#22617404) Homepage
I think the term you're looking for is "managing expectations." Here's a little article about it from the IT side. [ittoolbox.com] It's something that Microsoft and telcos have become so good at. If you keep expectations low and give them a little better, they'll be more than happy. If you give the same, but you promised the world, you get a bunch of unsatisfied customers.
    • by Vellmont ( 569020 ) on Sunday March 02, 2008 @04:54PM (#22617472) Homepage

      One time I had an opportunity to visit Microsoft and have lunch with a friend there. I figured while there I'd take the opportunity. I asked them in hushed tones, "Just how do you configure Windows so that you don't have to reboot it all of the time?" They looked at me like I was crazy.

      In a certain sense.. you were crazy, at least at Microsoft.

      The origins of an OS really show through a lot of the time. Windows started out as a single user OS, so rebooting was OK because the only person you messed up was the guy sitting in front of the screen. It eventually evolved into a multi-user OS, but the "just reboot!" mentality persists to this day.

Linux/Unix on the other hand started out life as a multi-user OS. Rebooting was a big no-no, because you'd affect countless people logged in, and you'd get yelled at for ruining someone's work.

      It's funny the attitude that comes from the users of each OS. Windows administrators categorically will try rebooting the damn thing first to fix any problem (and it usually works). Linux administrators will only try this as a last resort (and it almost never works).

Anyway, at Microsoft the idea that you can somehow tweak Windows just right so rebooting isn't necessary is crazy. They designed the damn thing so "just reboot!" will fix any problem. This of course is an unacceptable solution to a lot of people out there, but for a lot of people it's obviously reality.
      • Re: (Score:3, Informative)

        by kasperd ( 592156 )

        Linux administrators will only try this as a last resort (and it almost never works).

        I think that about 80% of the time I will know beforehand if rebooting a Linux system is going to solve a particular problem. But even if I'm convinced that a reboot would solve the problem, I usually spend some time looking for a solution that does not involve rebooting. There are multiple reasons why I look for other solutions. Sometimes a reboot is inconvenient because of all the programs that have to be shut down and st

      • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Sunday March 02, 2008 @06:36PM (#22618218)

        The origins of an OS really show through a lot of the time. Windows started out as a single user OS, so rebooting was OK because the only person you messed up was the guy sitting in front of the screen. It eventually evolved into a multi-user OS, but the "just reboot!" mentality persists to this day.

Windows NT (ie: contemporary Windows) has been a multiuser OS since its first release.

The reason the "just reboot" mentality persists is simply because 99% of the time it *is* used as a single-user OS, and no-one else is impacted. This has _zero_ to do with the architecture and everything to do with the user. Linux would be (and is) treated in the same way in similar situations.

Linux/Unix on the other hand started out life as a multi-user OS. Rebooting was a big no-no, because you'd affect countless people logged in, and you'd get yelled at for ruining someone's work.

        UNIX actually started out as a single-user OS and the multiuser aspect was bolted on later. Linux didn't, of course, because by the time Linus banged together his UNIX rip-off, UNIX had been multiuser for quite a while.

However, again, the attitudes of their respective users towards servers and workstations have about 10% to do with their architectures and 90% to do with their knowledge. DOS and OS/2 were single user, yet frequently had BBSes and similar running off them. You can be assured the people running those BBSes were far less likely to have the "just reboot" mentality.

        Further, the other reason most people have that attitude is because to them a computer is just another appliance. When other appliances act up, pretty much the first thing _everybody_ does is turn it off and back on again. Why on Earth would you expect them to treat a computer any differently ?

        Windows administrators categorically will try rebooting the damn thing first to fix any problem (and it usually works). Linux administrators will only try this as a last resort (and it almost never works).

        No. Inexperienced admins will try rebooting first, regardless of platform. Experienced admins will not. Incidentally, there are numerous classes of problems on Linux (and UNIX in general) which are more quickly and easily "fixed" with a reboot.

Anyway, at Microsoft the idea that you can somehow tweak Windows just right so rebooting isn't necessary is crazy.

        I can't even remember the last time I had to reboot any of my Windows machines without a good reason (eg: patching).

        Finally, there's nothing wrong with rebooting _anyway_. If your service uptime requirements are affected by a single machine rebooting, your architecture is broken. All the reboot does is demonstrate that it's broken without a real problem actually occurring.

        Sysadmins comparing machine uptimes is like ricers comparing spoilers.

        • Re: (Score:3, Informative)

          by Vellmont ( 569020 )

          This has _zero_ to do with the architecture and everything to do with the user. Linux would be (and is) treated in the same way in similar situations.


          This is simply not true. Anyone that's ever installed software, or run "windows update" knows that rebooting is a very likely part of this process. The dependencies and non-modular approach of Windows are quite apparent. Software vendors say "just reboot" because of all the complexities and dependencies within windows.

          The same simply isn't true for Linux.
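A minimal, runnable illustration of the Unix file semantics that make library replacement reboot-free (Python; the paths are throwaway temp files and the "library" is just a stand-in text file, so this sketches the mechanism rather than any real package manager):

```python
# On Unix, replacing a file out from under a running process is safe:
# the old inode survives until the last open handle closes. This is
# why a shared library can be upgraded without rebooting.
import os
import tempfile

d = tempfile.mkdtemp()
lib = os.path.join(d, "libdemo.so")

with open(lib, "w") as f:
    f.write("old version")

running_program = open(lib)   # a process holding the old lib open

new = os.path.join(d, "libdemo.so.new")
with open(new, "w") as f:
    f.write("new version")
os.replace(new, lib)          # atomic rename over the old path

print(running_program.read()) # "old version" -- existing user unaffected
print(open(lib).read())       # "new version" -- new opens see the update
running_program.close()
```

The existing process keeps its hook to the old version; anything started afterwards picks up the new one, with no reboot required.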
          • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Sunday March 02, 2008 @08:24PM (#22618924)

            This is simply not true.

            Yes, it is. People who use Windows, when using Linux, are going to respond exactly the same way to problems - by rebooting.

            Anyone that's ever installed software, or run "windows update" knows that rebooting is a very likely part of this process. The dependencies and non-modular approach of Windows are quite apparent. Software vendors say "just reboot" because of all the complexities and dependencies within windows.

            No, they do it because it's a simple step for the ignorant end user to understand.

            The same simply isn't true for Linux. Replace a critical shared library? No problem, running programs still have a hook to the old version. Any new process that starts will get the new version of the library. Why reload the whole damn OS when restarting a process will do the same thing?

            Because for people who don't know that, it's easier to say reboot.

You are conflating knowledgeable end users with typical end users. This is at best naive and at worst deliberately deceptive.

            You're trying to tell me with a straight face that the BBS market influenced Microsoft? (Which flies in the face of what we've all experienced with Windows).

No, I'm telling you that a random individual's attitude towards rebooting is going to be vastly more influenced by their skill level and what they're using their computer for than the OS it runs.

            No, the reason people have this attitude is because it freaking works.

            Exactly. Now, again, why do you think they're going to treat computers any differently ?

            I've been administrating Linux machines for 13+ years. I can count on one hand the number of times a reboot solved any problem. The only class of problem this solved is a kernel bug, or the kernel crashing (usually from a hardware problem).

            Not done much work with NFS then, I take it ? Or services that have long timeout periods and don't die nicely ?

            I struggle to believe anyone has been using Linux for "13+ years" and can only "count on one hand the number of times a reboot solved any problem". Either you've not used Linux anything close to "13+ years" or you've not used it in a very wide range of situations.

            Why would anyone reboot without a "good reason"?

The fact that you even need to ask disqualifies you from any useful input to this discussion. Fucking hell. People hit the reset button on their PCs because the monitor power-saving kicked in, and for dozens of other reasons that aren't even that good.

The point is that Linux simply has fewer "good reasons", and requires fewer reboots. Linux requires FAR fewer reboots for "patching".

            Linux also makes a lot more assumptions about its users (and "users" in this sense reaches from Grandma to software developers).

            Wow. Now I know you've really drank the Microsoft kool-aid. Not everyone can afford multiple machine redundancy just to fix the endemic problems of Microsoft who advocate "Just reboot!" to fix so many problems. There's really no reason why I need to reboot just to update what's essentially some new versions of DLLs. The Microsoft architecture is essentially broken if you have to buy another damn machine for the SOLE purpose of maintaining high availability.

            Yeah, like I thought. "13+ years" and 12 of those were probably using it on your home PC.

            The only meaningful difference between a "reboot" and a hardware failure is the amount of warning. I'll say it again. If your business continuity is vulnerable to individual machine outages (be they from reboots or motherboards going up in smoke), then it's broken. Period. If you can't afford "multiple machine redundancy" then you don't need 24/7 uptime. If you don't need 24/7 uptime, then either scheduled machine reboots (eg: for patching) are irrelevant, or brief outages are acceptable.

            Any sysadmin who thinks he can run a high-availability operation without multiple machine redundancy is incompetent. Any sysadmin who is purporting to do so, is grossly negligent. The fact that there's a hell of a lot of people whose Linux (and UNIX in general) bias puts them into these categories, does not make them any less incompetent or negligent.

            • Re: (Score:3, Insightful)

              Not done much work with NFS then, I take it ? Or services that have long timeout periods and don't die nicely ?

Amen. Hoping for a long, stable uptime on a Linux machine that does very intensive and sustained NFS I/O proved to be a pipe dream for me. Things did get much better after applying the plethora of nfs.org patches, but you still get some awesome kernel failures.

              But I don't care, because I have many machines accessing the NFS mount (mailboxes, btw). I lose one, and keep on ticking. If I lose o

      • Re: (Score:3, Interesting)

        by pclminion ( 145572 )

        It's funny the attitude that comes from the users of each OS. Windows administrators categorically will try rebooting the damn thing first to fix any problem (and it usually works). Linux administrators will only try this as a last resort (and it almost never works).

        It's even less than a last resort. I have, once or twice, had true problems that required a reboot of a Linux machine to fix. The one in most recent memory, it took three weeks before realizing that a reboot was (or at least, could be) the s

        • Re: (Score:3, Interesting)

          Rebooting a computer to fix a problem should be viewed with the same suspicion as burning down your house to eradicate an infestation of insects.

          No, it should be viewed as fumigating your house. You all move out, wait a few days, then move back in. When you reboot you don't lose the computer, you don't lose the archived data, and all the users can return in a short amount of time.

          Burning down your house loses all the contents and ensures you'll never return...

        • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Sunday March 02, 2008 @08:39PM (#22619032)

          That's three weeks of hard core debugging, tweaking, and hair pulling.

          The fact that you were able to wait *three weeks* demonstrates that the problem was, at most, insignificant.

          When thousands of dollars (or more) are being lost every minute that a service is unavailable[0], you don't fuck around with idiotic philosophising about how "its UNIX, I shouldn't need to reboot for anything"[1], you just DO IT.

          [0] We shall ignore here for a minute the false economy of not just investing in a properly redundant architecture where individual machine outages do not impact availability.

          [1] I've been there myself and had arguments with my (at the time) boss about it. It is the difference between how geeks think and how businesspeople think. The geek is interested in figuring out wtf is wrong. The businessman is interested in whether or not his business is still operating.

    • by tverbeek ( 457094 ) on Sunday March 02, 2008 @05:09PM (#22617588) Homepage
Conditioning certainly has to be a big part of it. People put up with crappy wireless phone service because they don't remember (or are too young to know) what an old-fashioned fully-wired telephone conversation sounds like. After a couple of decades of cordless and wireless phones, the level of service has gone from "you can hear a pin drop" to "can you hear me now?"
    • No way... (Score:5, Insightful)

      by duffbeer703 ( 177751 ) * on Sunday March 02, 2008 @05:36PM (#22617772)
This has everything to do with cost and nothing to do with Microsoft. Consider VoIP... people are deliberately choosing telephony services that are less reliable and lower quality than POTS, because VoIP is cheaper. If you want 99.999% uptime, that's fine -- but you're going to pay for it. High-availability services require better equipment, redundant equipment that doesn't come cheap, and more (and higher-quality) staff to operate it. So it costs more.

I've been in the technology services business for a long time, and with few exceptions, 80%+ of customers want their services delivered as cheaply as possible. Most hospital systems don't even have a 99.999% availability requirement. The 20% that want varying levels of higher-than-normal availability usually have a government regulation, SLA or other mandate requiring that they do so.
  • Oh Zonk (Score:4, Funny)

    by opec ( 755488 ) * on Sunday March 02, 2008 @04:23PM (#22617186) Homepage
    Oh Zonk, I'm marking your story as "flamebait". :(
  • by Corpuscavernosa ( 996139 ) on Sunday March 02, 2008 @04:23PM (#22617196)
    Complacent consumerism. "Hey, it's always been this way so they [service providers] must not be able to have 99.9% uptime. If they had the capability, they sure would provide it to us, their customers."
    • by zappepcs ( 820751 ) on Sunday March 02, 2008 @04:39PM (#22617346) Journal
While you deserve the mod points, it should also be noted that consumer expectation is strangled into submission within 20 minutes on the first support call they make to ask about better service quality. I know a guy who is locally famous because he will spend 4, 5, 6 or more hours on the phone with customer service, supervisors, managers and anyone on the board of directors that he can find a phone number for. What is he fighting for? Discounted service or reparations for lost service(s). That's right, it takes hours on the phone to get one of those companies to either own up to, and pay for, losses accrued by their customers through loss of service.

      In truth, most consumers won't complain when they should, so there is no marketplace pressure on those businesses to aim for five nines uptime.
      • Re: (Score:3, Informative)

        by cgenman ( 325138 )
That's right, it takes hours on the phone to get one of those companies to either own up to, and pay for, losses accrued by their customers through loss of service.

        Having been on the other end of these types of calls, this sort of thing can be *very* annoying. People do call all of the time with the expectation that because they do five or six thousand dollars worth of business in a day, the ISP is somehow responsible for those thousands of dollars when some idiot Verizon contractor accidentally cuts our ca
        • Re: (Score:3, Insightful)

          by zappepcs ( 820751 )
While I understand what you are saying, it would go a very long way if, when I called customer service, the interactive system could take my zip code while I was on hold and tell me whether there are any known problems or outages in my area. That would alleviate many of the complaints about technical problems that are out of your hands. I've had trouble getting anyone to tell me they are having problems of any kind, never mind that the problem happened 2 blocks from my house
  • The cost (Score:5, Interesting)

    by Introspective ( 71476 ) on Sunday March 02, 2008 @04:24PM (#22617200) Homepage
    Probably because of the cost. I do network design for a fairly large telco, and let me tell you the cost goes up exponentially with the number of "9"s that the business asks for.

    • Re:The cost (Score:5, Informative)

      by HairyCanary ( 688865 ) on Sunday March 02, 2008 @04:29PM (#22617244)
      Exactly what I was thinking. I work for a CLEC, and I have a rough idea how much things cost -- compare what a Lucent 5E costs with what a top of the line Cisco router costs, and you have the answer why voice service achieves five-nines while data service typically does not.
      • Re: (Score:3, Insightful)

        by freebase ( 83667 )
        Don't forget that the support costs on a 5E dwarf even the cost of most, if not all, Smartnet contracts.

        Simply said, because the equipment isn't/hasn't been able to support it, the only way to build 5 9's or better has been to add more equipment, which increases operations costs, capital costs, etc across the board in an almost linear fashion.

        The market has for the most part established the level of service available by establishing the price point the customer is willing to pay for said service.

        People love
    • Bingo (Score:5, Insightful)

      by dreamchaser ( 49529 ) on Sunday March 02, 2008 @04:43PM (#22617372) Homepage Journal
      It's all about cost vs. the cost of downtime. You'll find in business lines such as the financial sector, customers are willing to pay for extremely high availability because time is indeed money. Business lines that have lower costs for downtime have to weigh availability vs. ROI.
    • Re:The cost (Score:5, Interesting)

      by who's got my nicknam ( 841366 ) on Sunday March 02, 2008 @05:32PM (#22617736) Homepage
      Also, you need to bear in mind that POTS is incredibly simple technology compared to Internet/Cellular/Data services. I haven't had cable TV since the early '90s, but I don't ever remember it going out, either- that was long before we had digital cable/cable Internet in my market area. POTS never goes down because the equipment is extremely robust, even (especially) the older stuff. My local telco could continue to provide POTS for more than 4 days during power outages simply because of lower power requirements (after 4 days, they had to fire up their generators, and started dropping remote COs due to extreme cold).

      We always want to compare service levels for newer tech with POTS and complain when they don't approach the same levels, but I'd expect that if we were to be still using the same equipment for ISP/Cellular service in a hundred years, it would be as stable and robust as the current (ok, previous generation) iteration of POTS. Problem is, we are constantly demanding better, faster, and cheaper: this has to be traded off for reliability, and for the most part people are happy with that tradeoff. Just like we're happy to buy crappy consumer goods from China at Wal*Mart because they're cheaper than domestic products. /rant
  • by studpuppy ( 624228 ) on Sunday March 02, 2008 @04:25PM (#22617210)
    So the simple answer is that I have more options. When my cell phone doesn't work, I have my desktop phone (or vice versa). or IM. Or email. Or fax.

Basically, we don't rely so much on any single system, so a brief outage can be tolerated because there are alternatives to choose from.

    This is also the basis of Clayton Christensen's theories on disruptive innovation - that a consumer of something (technology, etc.) is willing to trade off some of these aspects, like reliability, for cost or performance benefits (however you wish to define those benefits...).

    • by CdBee ( 742846 )
When my employer's BlackBerrys failed earlier this month, they fell back to laptops with a Bluetooth-tethered phone and Outlook/Exchange. Redundancy is built into the mindset. No messages were lost.
  • Because every nine will cause a geometric increase in costs.
    • Because every nine will cause a geometric increase in costs.

      This

Uptime (%)    Downtime per year
90%           876 hours (36.5 days)
95%           438 hours (18.25 days)
99%           87.6 hours (3.65 days)
99.9%         8.76 hours
99.99%        52.56 minutes
99.999%       5.256 minutes
99.9999%      31.536 seconds

I work for a software shop where we can do high availability, but more often than not, folks choose to lower the uptime expectation rather than pony up the stupid money it takes to have the hardware / software / infrastructure to get there. Most companies know the customer will not pay the extra cash for the uptime, thus... you get what you pay for.
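A quick check of the table above (Python; pure arithmetic on a 365-day year):

```python
# Downtime budget per year for each uptime level.
MIN_PER_YEAR = 365 * 24 * 60

for uptime in (0.90, 0.95, 0.99, 0.999, 0.9999, 0.99999, 0.999999):
    down = (1 - uptime) * MIN_PER_YEAR
    if down >= 60:
        print(f"{uptime:.4%}: {down / 60:,.2f} hours")
    elif down >= 1:
        print(f"{uptime:.4%}: {down:.2f} minutes")
    else:
        print(f"{uptime:.4%}: {down * 60:.3f} seconds")
```

Each added nine shrinks the budget by 10x while, per the comments above, multiplying the cost by roughly the same factor.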
      • Even Simpler... (Score:4, Insightful)

        by nick_davison ( 217681 ) on Sunday March 02, 2008 @08:42PM (#22619052)
        As Joel Spolsky pointed out on his blog JoelOnSoftware [joelonsoftware.com], 99.999% is pretty much fictional.

99.999% over a year allows just 5.256 minutes of downtime (99.9999% would be a mere 31.536 seconds).

        No matter how good your staff, no matter how many people you have on site, no matter how robust your systems, no matter how many failsafes you have standing by, ready to be plugged in...

IF something does go down, even the fastest tech on earth is unlikely to identify, pull out, replace and have fired back up whatever the faulty item is inside that five-minute annual budget.

        99.999% uptime is essentially fictional. It's simply an impressive sounding number that says, "We'll do everything realistically possible to keep you up 100% of the time. In a typical year, you won't see anything bring you down. You can now tell your investors/clients this and make them feel warm and fuzzy."

It ignores the second part, "But, honestly, if it does go down, we won't always have it back inside that budget, 100% of the time. Sorry, but welcome to reality. But, for what it's worth, our board's happy to pay you outage fees because it's a small enough risk and the amounts are capped enough, that we're happy to take the risk and costs in exchange for advertising a service we know no one can deliver."

Let's look at regulated phone service, the example in the original post. Can anyone point to a major carrier that hasn't had a major outage at some point? Be it an idiot in a switch room, a power outage affecting a whole side of the country, an anchor ripping up an undersea cable? And how many of them have actually stayed within that five-minute budget?

It doesn't happen. That two-hour outage is going to take nearly a quarter century of absolutely no further faults to earn back at five minutes/year. With luck, it only hit one in 250 customers, so you can pretend you're well within your 99.999% uptime, but that 1 in 250 isn't really going to agree they got 99.999% after being down for two hours against a five-minute annual allowance.

        So, no, 99.999% doesn't exist. It's just a really cool story we tell ourselves whilst being willing to pay whatever the penalties are for missing it, on rare occasions, in exchange for great advertising.
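The grandparent's earn-back arithmetic, parameterized (Python; the outage length and SLA levels are hypothetical inputs):

```python
# Years of flawless operation needed to average back to an SLA after
# one outage of the given length.
def years_to_earn_back(outage_minutes: float, uptime_sla: float) -> float:
    budget_per_year = (1 - uptime_sla) * 365 * 24 * 60  # minutes/year
    return outage_minutes / budget_per_year

print(f"{years_to_earn_back(120, 0.99999):.1f} years")   # 2 h vs five nines: ~22.8
print(f"{years_to_earn_back(120, 0.999999):.0f} years")  # 2 h vs six nines: ~228
```

One ordinary two-hour incident puts a five-nines promise underwater for two decades, which is why the number works better as marketing than as engineering.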
        • No, it does exist (Score:3, Informative)

          by Sycraft-fu ( 314770 )
But only with redundant systems. What happens is that when something goes down, techs aren't racing to get it back up within the budget; rather, it is instantaneously failing over to another system. You have enough redundancy, you can keep operating even in the face of multiple simultaneous failures.

          The problem is, of course, going for that can be really expensive. Not only does the system itself have to have a bunch of redundancy, but so does everything supporting it. For example in the case of a web server you'd not only
      • Re: (Score:3, Insightful)

        by LordKronos ( 470910 )
        Thanks for actually listing out the figures. It really puts things in perspective, and it made me realize something. My internet service probably gets somewhere between 99.9% and 99.99% uptime. My cell phone is probably in a similar range. My cable is better than 99.999% (maybe even 99.9999%).
    • by setagllib ( 753300 ) on Sunday March 02, 2008 @05:31PM (#22617734)
      My concept of 5 9s is much easier: 9.9999%. Or for Vista servers, .99999%.
  • Quoting the summary:

    ... after decades of mobile phones, why do we even still have dropped calls?
    It's a little thing called physics. When you're traveling while using your phone, you may transit into dead zones [wikipedia.org]. We could solve this by cutting down all the trees and flattening the landscape, but that might make some people angry...
  • by schnikies79 ( 788746 ) on Sunday March 02, 2008 @04:26PM (#22617220)
    You can have one or the other.

    We're not talking about software, we're talking about hardware and man-hours. Those will never be free.
  • by myowntrueself ( 607117 ) on Sunday March 02, 2008 @04:27PM (#22617230)
'Five nines' of uptime is a ridiculous and exaggerated expectation for pretty much anything technological that is not life-critical.

Whenever people talk about 99.999% uptime for a service delivered over the internet, I laugh in their faces.
    • Re: (Score:3, Informative)

      by X0563511 ( 793323 )
I just did the math. 99.999% uptime is "less than 5 minutes per year" or "less than half a minute per year", depending on whether I stuck an extra 0 in there...

      Clearly, a ridiculous number.
  • mass outages several times this month
    Was it converted to energy?
  • It's not that customers will put up with it, it's that there are exactly zero providers willing to offer such a service. When every single vendor will simply tell the customer to go screw, what option is there? Let's say for the sake of argument that every single customer moves to another service when their existing provider has an outage. Since they're all having outages at the same rate, all it does is swirl the market about, accomplishing nothing. All the vendors will continue to have customers regar
  • As consumers, we're made to feel helpless. The worst we can do (without litigation) to a company is complain or refuse to use their services, but what harm can that do to a giant conglomerate? And in situations in which one company has a monopoly in a certain area of the country, for example, consumers may not have the ability to switch or do without.

    As a personal example, Comcast owes me a refund check for Internet services I canceled six months ago. If I, as a consumer, had allowed my debt to go unpaid

    • Re: (Score:3, Insightful)

      by cgenman ( 325138 )
      As a consumer, you're more than entitled to take Comcast to small claims court [peopleslawyer.net], which is most likely the mechanism that Comcast would use to extract unpaid bills from you. That Comcast is more likely to enact this mechanism than you are is not a fault of politicians.

      It varies by state, but usually it costs 15 dollars to take a company to court, and no lawyers are required. It is generally quick and painless, and people at your local courthouse can fill you in on the details and help you through the proces
  • Not everybody is a member of the "I WANT IT NOW!" generation. Most of us are still not particularly bothered if we can't get to some particular piece of information right this second. Some of us still remember how to go to the library. And some of us actually have interests that do not include being online. I know, it's hard to understand, but I don't think I could talk any faster. I'm conditioned that way.
  • Really so common? (Score:5, Interesting)

    by Moridineas ( 213502 ) on Sunday March 02, 2008 @04:31PM (#22617264) Journal
Are these kinds of outages really so common? Mobile phones I absolutely agree with. On the other hand, I literally cannot remember the last time I lost cable or my internet. I've literally lost power more frequently than either of them (maybe 4 times in the past year) and lost water once. Emails not making it to their destination--again, does this really happen? In the decade-plus I've been using internet email, I can't off the top of my head ever think of any "lost" email unless it was sent to a wrong address or something.
    • Re: (Score:3, Insightful)

      by TubeSteak ( 669689 )

      ON the other hand, I literally cannot remember the last time I lost cable or my internet.
Hey! I've got an anecdote too! I spent a few years in a town where heavy rain would kill most of the town's cable TV & internet.

      Hint: Just because you live somewhere without such problems does not mean they don't exist. Ditto for lost e-mail.
    • Re: (Score:3, Interesting)

      by CorSci81 ( 1007499 )
      I'm in exactly the same situation. I'm on Time Warner's fiber network for internet, most of the time my wireless router is the source of any internet troubles, or it's exterior to my connection to TW. I've lost power more times in the last 2 months than I have my cable in the past 2 years. Even then my cable outages are generally under an hour and it usually involves calling the local office and having them reset my box remotely. Takes maybe 10 min to fix. And as far as cell phones, I generally know wh
  • by spasm ( 79260 ) on Sunday March 02, 2008 @04:36PM (#22617312) Homepage
    Because 90% of stuff labeled 'mission critical' actually isn't. Think about it - for most of us, being able to receive or send cellphone calls or emails at any time seems super important, but the number of hours in any given month where it really *was* super important (the grant application was due in two hours; your mother was sick; your partner was about to go into labor; whatever) is generally pretty low - our real tolerance for occasional downtime is therefore quite high.
  • by Srass ( 42349 ) * on Sunday March 02, 2008 @04:39PM (#22617336)
    Well, my guess would be that many (but not all) people understand that being able to call an ambulance because Aunt Betty has fainted is a necessity, but being able to chat with Aunt Betty for an hour from your car isn't. Missing a rerun of Laverne and Shirley isn't critical, and neither is having to wait to post those vacation pictures to Flickr. Your coworkers will, in all probability, somehow muddle through if you can't send them email from your blackberry.

The telephone as we know it was the first genuinely instantaneous, worldwide communications medium that anyone could use; it was seen as a necessary component for national security during the cold war, and was built out as such. We've had over a century to perfect it, and vast amounts of money were spent doing so. Despite its origins at DARPA, the Internet as we know it today, although more useful, is by and large less of a basic need, is far more complex, and large portions of it are still built on top of the telephone infrastructure, besides.

    I can't help but think that most people understand this sort of thing, and understand that bringing such modern conveniences up to five nines of reliability is difficult and expensive, and people have evidently decided that a certain tradeoff to make such things affordable isn't out of line.

    The shorter, more pessimistic version of this is probably, "It's cheaper to suck."

  • by BanjoBob ( 686644 ) on Sunday March 02, 2008 @04:39PM (#22617344) Homepage Journal
    When Comtrash Internet dropped my speed from 6 Mbps to 1 Mbps but kept the rate at 6 times DSL, I dropped Comtrash and went with the 1.5 Mbps DSL from my local telco. I got 50% more than Comtrash was delivering at 1/6th the cost. No problem.

    When Microsoft decided that I didn't own the rights to my own media and stopped me from being able to copy my own DVDs, I decided to drop them for my media development system and I switched to Linux and Apple. Microsoft doesn't want my business so I went with the people who do. No problem.

    When my Long Distance company decided to charge over $1.00 per minute for International calls, I switched to AT&T and their 17 cents a minute program. No problem.

When Frigidaire washers charged extra for the warm water cycle but only gave you 5 seconds of hot water, and thus never any, it was no problem to return the unit and buy a different brand. Sure, the salesman wasn't happy but, that is now his problem and not mine. I bought a different brand that did give me what they advertised and promised. No problem.

    The list is endless and across all businesses and domains.

The point is that there are alternatives, but many (or most) people are either too lazy or, as this article suggests, too apathetic to do anything about it.

The choice is up to the consumer and, if consumers would take action, the industry would have to adapt because the market demands it. So far, the market is willing to accept this and thus the industry sees no reason to change. The less the consumer will accept for their dollar, the less they will receive. That is the problem.
  • by Chairboy ( 88841 ) on Sunday March 02, 2008 @04:44PM (#22617376) Homepage
    Be careful to pick a provider that advertises "seven nines of reliability" instead of the more common "nine sevens of reliability".
  • O RLY? (Score:5, Insightful)

    by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Sunday March 02, 2008 @04:48PM (#22617414) Journal

    When it comes to mobile phone service, cable TV, Internet access, service interruptions are the norm -- and everyone seems willing to grin and bear it: 'We're so used cable and satellite television reception problems that we don't even notice them anymore.
    And television is mission critical? Besides, I bet most people don't experience significant cable TV interruptions. Satellite depends on the strength of the signal. Tap into Arecibo and you'll likely get 100% reception.

    We know that many of our emails never reach their destination.
    [citation needed] I call bullshit on that one.

    Mobile phone companies compare who has the fewest dropped calls (after decades of mobile phones, why do we even still have dropped calls?)
    Because it's a benefit to have a phone that doesn't draw so much power that your brain heats up just from using the device. Also, a dropped call usually means you're in an area with no cell towers, or that you've hopped from one tower to the next and the next tower has its connections maxed out.

    And the ubiquitous BlackBerry, which is a mission-critical device for millions, has experienced mass outages several times this month.
    Blackberry is not a mission critical service. The people who use it as such are naive. If there truly is a market for five nines uptime for Blackberry, RIM would develop such a service and charge an order of magnitude more for it.

    All of these services are unregulated, which means there are no demands on reliability, other than what the marketplace demands.' So here's the question for you: Why does the marketplace demand so little when it comes to these services?
    Because ultimately it's really not a big deal. So your satellite TV goes down for a bit... get a life. You drop a cell phone call... redial. Your Blackberry isn't receiving emails... get a life.
     
    • Just one point ... (Score:4, Informative)

      by tomhudson ( 43916 ) <barbara.hudson@b ... m ['son' in gap]> on Sunday March 02, 2008 @05:28PM (#22617700) Journal
      In a properly designed cell phone system, if the tower you were going to be handed off to can't take the connection, either the tower you're with will keep the connection, or another (though still sub-optimal) will take the connection.

      Of course, when you don't have transmitters with overlapping coverage, this doesn't work.
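
      To make that fallback order concrete, here's a minimal Python sketch (Tower and its fields are my invention for illustration, not any real base-station API):

        class Tower:
            def __init__(self, name, capacity):
                self.name = name
                self.capacity = capacity
                self.calls = set()

            def has_capacity(self):
                return len(self.calls) < self.capacity

        def hand_off(call, current, preferred, overlapping):
            """Prefer the target tower; failing that, keep the call where
            it is; failing that, shunt it to any other overlapping tower."""
            if preferred.has_capacity():
                current.calls.discard(call)
                preferred.calls.add(call)
                return preferred
            if call in current.calls:        # stay on the current tower
                return current
            for tower in overlapping:        # sub-optimal, but not a drop
                if tower.has_capacity():
                    current.calls.discard(call)
                    tower.calls.add(call)
                    return tower
            return None                      # no overlapping coverage: dropped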

    • "We know that many of our emails never reach their destination.

      [citation needed] I call bullshit on that one."

      And I call BS on your BS. Clearly you're not familiar with the state of the art as far as email goes. You've certainly not had to set up and run a private email server.

      Here's one good reference [realfreewebsites.com]. It mostly mirrors my experience, except that it's been going on longer than the writer has observed.

      The basic problem is that Yahoo, Hotmail, AT&T and other large email providers, or ISPs, simply refuse or silently drop mail from small, private servers.

  • Gas Prices? (Score:3, Interesting)

    by careysb ( 566113 ) on Sunday March 02, 2008 @04:53PM (#22617460)
    I'm still waiting for people to scream about the rising gas prices and the record oil company profits. Seems like this would have a greater impact on the general populace than reliable cell phone service.
  • by Kjella ( 173770 ) on Sunday March 02, 2008 @06:01PM (#22617934) Homepage
    My electricity isn't 99.999% uptime (that's about five minutes of downtime a year), which would require me to get a UPS.
    My consumer grade equipment isn't 99.999% uptime (with luck, maybe, but there's no ECC, redundant power, etc.).
    My software isn't 99.999% uptime (OK, so the kernel is stable; when X crashes, so does everything of importance on a desktop).
    If there's something urgent, you CALL me anyway.

    I'd rather take a line with 99.5% uptime (that's two days without internet per year) that's 10x faster and costs 10x less. Which doesn't include that I have Internet at work, or via my cellphone, or via a webcafe or any number of other easily available sources. The only real killer I can think of is if you only telecommute and can't go to work, but even then I figure the nearest Starbucks will let you occupy a corner with some purchases.
  • Reality check (Score:3, Informative)

    by mstone ( 8523 ) on Sunday March 02, 2008 @06:24PM (#22618130)
    The N-nines model is a fast and easy way to compare order-of-magnitude differences between existing networks, but it says almost nothing meaningful about actual usage or the perception of uptime from a user's perspective.

    Let's look at the numbers: 99.9% uptime translates to about 9 hours of unscheduled downtime (USD) a year. That can be one 9-hour block once a year, about 90 seconds per day, 3.6 seconds per hour, 60 milliseconds per minute, or one dropped packet per thousand. Sure, it's easy to spot a 9-hour blackout, but as the slices of downtime get thinner, they get harder to notice at all, or to identify as USD specifically.

    99.999% uptime translates to about 5 minutes of USD per year, and is of questionable value. You can't identify a network outage, call in a complaint, and get the issue resolved in the given timeframe. 99.9999%? It is to laugh. You can't even look up the tech support phone number without blowing your downtime budget for the year. Get hit by a rolling blackout for an hour? Kiss your downtime budget goodbye for the next 120 years.
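
    (If you want to re-derive those budgets, the arithmetic fits in a throwaway Python loop; nothing here is vendor-specific:

      SECONDS_PER_YEAR = 365 * 24 * 3600

      for nines in range(3, 7):
          downtime = SECONDS_PER_YEAR * 10 ** -nines    # unavailable fraction
          print(f"{nines} nines: {downtime / 60:8.2f} minutes of USD per year")

    Three nines works out to ~526 minutes a year, five nines to ~5.3 minutes, and six nines to ~32 seconds.)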

    Getting back to 99.9% uptime, let's move on to standard utilization patterns. USD really only becomes an issue if people notice it... nobody cares if an incoming piece of email got delayed by 30 seconds at the MTA, but they do get testy if they can't load their webpages. But web surfing only uses 1-2 seconds of bandwidth per minute anyway.

    If we have 2 seconds of usage and 2 seconds of downtime per minute, the odds of a collision are around 1 in 15, with an average overlap of 1 second when a collision does happen. Simply interleaving usage and downtime that way increases the perceived uptime by an order of magnitude, since roughly 90% of the outages happen when no one is actually using the network. And larger blocks of downtime get lost in larger blocks of non-utilization exactly the same way... who cares about a half hour of downtime from 0300 to 0330 when no one in your company is actually in the building and using the network?
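
    (Those odds are easy to sanity-check with a quick Monte Carlo; the uniform placement of both windows within each minute is my assumption:

      import random

      TRIALS, MINUTE, USE, DOWN = 1_000_000, 60.0, 2.0, 2.0
      hits, overlap_sum = 0, 0.0
      for _ in range(TRIALS):
          u = random.uniform(0, MINUTE - USE)     # start of the usage window
          d = random.uniform(0, MINUTE - DOWN)    # start of the outage window
          overlap = min(u + USE, d + DOWN) - max(u, d)
          if overlap > 0:
              hits += 1
              overlap_sum += overlap
      print(f"collision in 1 of every {TRIALS / hits:.1f} minutes")       # ~15
      print(f"mean overlap when they collide: {overlap_sum / hits:.2f} s")  # ~1

    which lands right on the 1-in-15 odds and one-second average overlap above.)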

    Granted, if you have higher utilization you'll have a better chance of hitting a chunk of downtime, but you'll also have higher chances of queuing latency within your own use patterns. If you're already using 99% of your bandwidth, you can't just plunk in one more job and expect it to run immediately. It has to wait for that 1% of space no one else is currently using. And when you get to that point, it's really time to consider buying a bigger pipe anyway.

    And that brings us to the main point: People don't buy network connectivity in absolute terms. They buy capacity, and the capacity they buy is scaled to what they think of as acceptable peak usage. "Acceptable peak usage" is a subjective thing, and nobody makes subjective judgements with 99.999% precision.

  • by supabeast! ( 84658 ) on Sunday March 02, 2008 @07:13PM (#22618440)
    If we wanted better uptime we could have it. We would just have to pay more for, and look at, a whole lot of redundant systems. Personally, I'm happier to keep paying less and only have one power line coming into my house, with the nearest plant many miles away. The same goes for cable and telephone service. And my cellular service does work about 99.9% of the time.
  • by Hercules Peanut ( 540188 ) on Sunday March 02, 2008 @08:05PM (#22618782)
    I'm at home (and awake) 20% of the time.
    My landline is up 99.999% of the time, meaning my phone is available to me when I need it 19.9998% of the time.

    I'm out and about (and coherent) 40% of the time.
    My cell phone works 90% of the time, meaning it is available to me when I need it 36% of the time.

    Clear winner, cell phone.

    Sometimes we lose sight of reality while studying statistics.
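
    (The figures above are just products of two availabilities, numbers lifted straight from the comment:

      landline = 0.20 * 0.99999   # at home and awake x five-nines landline
      cell     = 0.40 * 0.90      # out and about x 90% cell coverage
      print(f"landline {landline:.4%} vs cell {cell:.2%}")  # 19.9998% vs 36.00%

    The less reliable service wins because you can reach it during twice as much of your day.)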
  • On redundancy (Score:4, Informative)

    by Animats ( 122034 ) on Monday March 03, 2008 @01:15AM (#22620642) Homepage

    In the entire history of electromechanical switching in the Bell System, no central office was ever out of action for more than thirty minutes for any reason other than a natural disaster. On the other hand, step-by-step (Strowger) switches failed to connect about 1% of calls correctly, and crossbar reduced that to about 0.1%. With electronic switching, the failure rate is higher but the error rate is much lower.

    This reflects the fact that, in the electromechanical era, the hardware reliability was low enough that the system had to be designed to have a higher reliability than any of its individual units. In the computer era, the component reliability is so high that good error rates can be achieved without redundancy. This is why computer-based networks tend to have common mode failures.

    If you're involved in designing highly reliable systems, it's worth understanding how Number 5 Crossbar worked. Here's an oversimplified version.

    The biggest components of Number 5 Crossbar were the crossbar switches themselves. Think of them as 10x10 matrices of contacts that could be X/Y addressed and set or cleared. Failure of one crossbar switch could take down only a few lines, and they usually failed one row or column at a time, taking down at most one line.

    The crossbars had no smarts of their own; they were told what to do by "markers", the smart part of the central office. Each marker could set up or tear down a call in about 100ms. Markers were duplicated, with half of the marker checking the other half. If the halves disagreed, the transaction aborted. Each central office had multiple markers (not that many, maybe ten in an office with 10,000 lines), and markers were assigned randomly to process calls.
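
    That half-checks-half arrangement is the ancestor of today's self-checking pairs. A crude sketch of the idea (the names are mine, not Bell System terminology):

      class TransactionAborted(Exception):
          pass

      def duplicated_marker(request, half_a, half_b):
          """Run the same transaction through both halves of the marker;
          any disagreement aborts rather than risking a wrong connection."""
          result_a = half_a(request)
          result_b = half_b(request)
          if result_a != result_b:
              raise TransactionAborted("marker halves disagree")
          return result_a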

    When a phone went off hook, a marker was notified, and set up a "call" to some free "originating register", the unit that understood dial pulses and provided dial tone. The marker was then released, while the user dialed. The originating register received the input dial info, and when its logic detected a complete number, it requested a random marker, and sent the number. The marker set up the call, set and locked in the correct contacts in the crossbars, and was released to do other work.

    If the marker failed to set up the call successfully (there was a timeout of around 500ms), the originating register got back a fail and retried, once. One retry is a huge win; if there's a 1% fail rate on the first try, there's a 0.01% fail rate across two tries. This little trick alone made crossbar systems appear very reliable. There's much to be said for doing one retry on anything that might fail transiently. If the retry also fails, unit-level retry as a strategy probably isn't working and the problem needs to be kicked up a level.
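
    The retry-once trick transplants directly into modern code. A minimal sketch of the pattern (again my naming, not Bell's):

      class TransientError(Exception):
          """Stands in for the ~500ms setup timeout described above."""

      def with_one_retry(operation):
          """Run operation; on a transient failure, retry exactly once.
          A 1% per-try failure rate becomes ~0.01% across two tries."""
          try:
              return operation()
          except TransientError:
              pass                # first failure: assume transient, go again
          return operation()      # a second failure propagates to the caller

    If the second attempt fails too, the exception escapes to the caller, which is exactly the "kick it up a level" behavior described above.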

    The pattern of requesting resources from a pool at random was continued throughout the system. Trunks (to other central offices), senders (for sending call data to the next switch), translators (for converting phone numbers into routes), billing punches (for logging call data), and trouble punches (for logging faults) were all assigned on a random basis or, in some cases, in cyclic rotation. Units that were busy, faulted, or physically removed for maintenance were just skipped.
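
    That allocation discipline is easy to imitate. A toy version of the random-pool pattern (Unit and Pool are illustrative names, not Bell System part numbers):

      import random

      class Unit:
          def __init__(self, name):
              self.name = name
              self.healthy = True    # cleared when the unit faults
              self.busy = False      # set while the unit is working

      class Pool:
          """Hand each job to a random healthy, idle unit. Busy, faulted,
          or removed units are simply skipped, so a failure costs some
          capacity instead of taking the whole system down."""
          def __init__(self, units):
              self.units = list(units)

          def assign(self):
              candidates = [u for u in self.units if u.healthy and not u.busy]
              if not candidates:
                  raise RuntimeError("pool exhausted")   # capacity, not crash
              unit = random.choice(candidates)
              unit.busy = True
              return unit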

    That's how the Bell System achieved such good reliability with devices that had moving parts.

    Note that this isn't a "switch to backup" strategy. The distribution of work amongst units is part of normal operation, constantly being exercised. So handling a failure doesn't involve special cases. Failures cost you some system capacity, but don't take the whole system down.

    We need more of that in the Internet. Some (not all) load balancers for web sites work like this. Some (but not all) packet switches work like this. Think about how you can use that pattern in your own work. It worked for more than half a century for the Bell System.
