
Intel Confirms Decline of Server Giants

An anonymous reader writes "A Wired article discusses the relative decline of Dell, HP, and IBM in the server market over the past few years. Whereas those three companies once provided 75% of Intel's server chip revenue, those revenues are now split between the big three and five other companies as well. Google is fifth on the list. 'It's the big web players that are moving away from the HPs and the Dells, and most of these same companies offer large "cloud" services that let other businesses run their operations without purchasing servers in the first place. To be sure, as the market shifts, HP, Dell, and IBM are working to reinvent themselves. Dell, for instance, launched a new business unit dedicated to building custom gear for the big web players — Dell Data Center Services — and all these outfits are now offering their own cloud services. But the tide is against them.'"
  • by denis-The-menace ( 471988 ) on Wednesday September 12, 2012 @04:25PM (#41316525)

    If Google sold servers, HP and Dell would die overnight.

    Just the "12volt-only" power supplies with built-in batteries with "12volt-only" motherboards makes them more reliable than anything out there.

    HP and Dell either can't or won't license this from Google.

    • Re: (Score:2, Flamebait)

      by postbigbang ( 761081 )

      Oh? Inside your desktop or 1U/etc. server is a 12V power supply, and 5VDC too. License? This isn't about licensing; it's about density and uniformity.

      You can put a 12v battery into your machine, too. It's allowed.

    • by Jake73 ( 306340 )

      License what? The ability to run from 12v power?

      I'm pretty sure my old Atari 400 and Atari 800XL both ran from DC power supplied from a brick. What's new about that? Nearly every laptop runs from DC power and has a built-in battery.

    • by fm6 ( 162816 ) on Wednesday September 12, 2012 @05:22PM (#41317187) Homepage Journal

      Sorry, you're wrong. Wish you were right.

      I've always been appalled by the way PCs rely on big, hot, wasteful, noisy internal power supplies. When IBM entered the workstation market 30 years ago (oh, Lord, that makes me feel old), I worked for a company that made a pre-PC x86 system [computinghistory.org.uk] that relied entirely on external, passively cooled power supplies. To me, this was clearly the way of the future, but once IBM entered the market, everything had to be IBM-compatible, even the way the power system worked, because if you couldn't use IBM-compatible power supplies, your system cost too much to build. (I once had to throw out a perfectly good Zenith PC with a blown PS; although it was mostly IBM-compatible, its power supply was proprietary and cost too much to replace.)

      So, Google can't go into the hardware business, because their machines would cost too much and would rely too much on proprietary infrastructure. Easier to justify using your own technology regardless of cost when you're gigantic and profitable.

      HP and Dell's nightmare isn't Google. It's cloud computing in general. The cloud providers (which includes Google, if you ignore the fact that they only provide high-level cloud services, unlike Amazon) mostly build their own hardware. Those that don't build their own buy cheap no-name hardware.

      • by DragonWriter ( 970822 ) on Wednesday September 12, 2012 @06:11PM (#41317713)

        The cloud providers (which includes Google, if you ignore the fact that they only provide high-level cloud services, unlike Amazon) mostly build their own hardware.

        Google provides low-level cloud services (IaaS in the form of Google Compute Engine, PaaS in the form of Google App Engine, RDBMS-in-the-cloud in the form of Google Cloud SQL, bucket-style storage in Google Cloud Storage) as well as higher-level services (all of Google's various apps are built on their cloud infrastructure).

        So the Google-Amazon distinction drawn in the parenthetical is inaccurate.

      • I've always been appalled by the way PCs rely on big, hot, wasteful, noisy internal power supplies.

        I don't follow your complaint. I'm sure you don't have a MORE EFFICIENT, SMALLER and QUIETER, PASSIVE, EXTERNAL power supply. Obviously, you threw in at least one trait that isn't possible in combination with the rest. In particular, it's incredible just how much air and heat a tiny little 12v fan can move, even the almost completely silent ones (see: SWiF2-1200 or 800).

        And while I've long lamented the ineff

        • by fm6 ( 162816 )

          Internal power supplies that don't make a lot of noise are becoming increasingly common now. But for most of the PC's 30-year history, PC PSUs have been noisy power hogs. It was only when people started worrying about energy waste that anything was done about it.

          I'm probably guilty of overstating the potential of passively-cooled PSUs. I just noticed that they seemed to work well on some pre-PC systems I worked with (you dislike cables, but I dislike noise, and everybody dislikes wasting energy). They only d

      • Comment removed based on user account deletion
      • by inKubus ( 199753 )

        Cloud computing is a fad. The reason why is BGP. BGP means that there's nothing but statistical luck that your connection to your data will go through. The biggest companies in the world (and the largest purchasers of IT equipment) will not ever use it. It will always be relegated to the consumer and the small business, who don't have much to lose if they can't access the data.

        At some point, some genius will invent a new internet protocol that will enable the data to be stored local to the owner but can

        • by fm6 ( 162816 )

          Your statement about BGP makes no sense to me. How does BGP interfere with cloud-type connections and not others?

          You seem to be claiming that cloud computing is simply impossible. And yet Google, Facebook, Amazon, Salesforce, and Microsoft all operate huge data centers that run only cloud technology. Not only are big companies using it, but they're selling their excess cloud capacity. That's how the two biggest cloud services got started: Amazon and Salesforce developed cloud technology because they ne

          • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Thursday September 13, 2012 @05:47AM (#41321533)

            Your statement about BGP makes no sense to me. How does BGP interfere with cloud-type connections and not others?

            He is rehashing - in a rather pained and circuitous fashion - the "if you lose your internet connectivity you can't do any work" argument.

            This point is not entirely without merit, but it generally fails to recognise that a) most companies these days can't do a lot of work without an internet connection anyway, and b) internet connectivity is usually a lot easier and cheaper to make highly available and redundant than server infrastructure.

            • by fm6 ( 162816 )

              He's not talking about flaky ISPs or NAPs. He's talking about a routing protocol that he says prevents clouds from working. At all. Further explanation required.

    • by afidel ( 530433 ) on Wednesday September 12, 2012 @07:39PM (#41318593)

      Not in the least. Google designs their servers to optimize power usage and absolute lowest cost per compute cycle, and those are not the same goals for every server buyer. For instance, single-threaded performance is a large factor for me, because we run a lot of interactive workloads that are single-threaded or weakly threaded; Google doesn't really care about single-threaded performance because they're optimizing at the datacenter level. I also care a lot more about the reliability of any given unit, because my jobs are mostly traditional single-server jobs, with only my most critical workloads being clustered, so the loss of any given node has a significant impact on my overall reliability. Google, by contrast, can lose dozens of servers a day per datacenter with no impact on their overall operations. Another example is storage: Google uses COTS SATA drives with horrible MTBF stats, and does so without RAID protection. The only application where that might remotely have a chance of working for me is Exchange 2010, because there I have four copies of each database online and the client is seamlessly pointed to a working copy.

    • by gagol ( 583737 )
      Google's server architecture is custom made for their datacenters and built around their application. What they could offer is a turn-key datacenter that requires a similar workload to theirs... and it is not their business to do so.
    • If Google sold servers, HP and Dell would die overnight.

      No. You're wrong on so many levels, it's hard to believe.

      Google's solution is cheap, UNRELIABLE servers. I liked the idea of a built-in battery for about 5 seconds, until I realized that the PSU isn't going to have any way to do a weekly self-test of the battery, or allow hot-swapping it... the features that separate decent UPSes from low-end consumer crap. I liked the idea of motherboards stripped of unnecessary components, until I saw it only h

      • "bonded/trunked NICs"

        Why does that matter? The only justification for bonding with 10g these days is "redundancy" and I've seen many more outages (at a variety of sites) from people failing at bonding than I have from switch failure.

        If a machine is that critical the service it runs shouldn't live on a single machine.

        Even at my last job where we had a design based on multiple SPOFs we lost machines to PSU or drive/RAID failure several times, but never network, except for the one site that did "redundant" NIC

        • Even at my last job where we had a design based on multiple SPOFs we lost machines to PSU or drive/RAID failure several times, but never network, except for the one site that did "redundant" NICs.

          I've never seen anyone "failing at bonding", and any such misconfiguration would be picked up by the monitoring system before a given server went live, so your trained monkeys appear to be highly defective, and you clearly need to get them traded in for better ones.

          At my last job, where we were a nicely clustered
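
          For what it's worth, bonding failures are straightforward to monitor. A minimal sketch in Python, reading the kernel's /proc/net/bonding status files; the bond name is illustrative, and a real check would live in the monitoring system:

          ```python
          # Sketch: flag slaves of a Linux bonded NIC that report "MII Status: down".
          # Bond name is illustrative; a real check belongs in the monitoring system.
          from pathlib import Path

          def bond_slaves_down(bond="bond0"):
              text = Path(f"/proc/net/bonding/{bond}").read_text()
              down, slave = [], None
              for line in text.splitlines():
                  if line.startswith("Slave Interface:"):
                      slave = line.split(":", 1)[1].strip()
                  elif slave and line.strip() == "MII Status: down":
                      down.append(slave)
              return down

          print(bond_slaves_down() or "all slaves up")
          ```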

  • by Compaqt ( 1758360 ) on Wednesday September 12, 2012 @04:27PM (#41316543) Homepage

    Back in the day (say, 2008 as in the article), if you wanted to buy a server, you'd buy one from the big three.

    These days, especially with FB and Google leading the way on commodity hardware, it's a different story.

    So what should you get for your first server? Say you're a small company. You've got a couple of laptops. You're outgrowing mutual Samba.

    You maybe want a fileserver. Maybe it'll have a few NICs, and a virtual machine on it (Xen?) will do double duty as an external webserver.

    So, Core i3, i5, Xeon? Number of processor cores? Forget fast drives, and just buy a lot of memory? Rack? Or tower?

    Lockable front (so people can't just come by and reset it)? Hotplug harddrives? (You don't get this if you go the Google build-your-own route.) Redundant hard drives and ECC memory? Or a couple different commodity-style servers + sharding/rsync?

    Is a big 3 server worth it? Or search for your own server case + server power supply, etc.?

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Search for your own. I priced one from HP/Dell and it would have cost $6,000 plus; built it with the same specs for $3,000. That right there is why their server sales are dwindling.

      • by ard ( 115977 ) on Wednesday September 12, 2012 @04:40PM (#41316683)

        With the same specs? With hot-plug drives, true hardware raid, iLO/iDRAC lights-out management, secondary bios if flashing fails?

        Get a refurbished HP gen 5 or 6 server instead of building your own. Performance will be sufficient, don't worry. It's well below $3000, and you get enterprise quality hardware.

        • With the same specs? With hot-plug drives, true hardware raid, iLO/iDRAC lights-out management, secondary bios if flashing fails?

          Use software RAID and buy from SuperMicro. Yes, $3k will get you a reliable server (perhaps with dual power supplies also).

          • I'm having a hard time putting "software RAID" and "reliable" in the same sentence.

            • by h4rr4r ( 612664 ) on Wednesday September 12, 2012 @05:17PM (#41317123)

              Linux software RAID is great.
              Proprietary software RAID is garbage.

              I base this on what I have seen. Linux software RAID beats cheap hardware controllers in both reliability and speed.
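
              As an illustration, a degraded md array is trivial to detect from /proc/mdstat; this is a sketch of the idea, not a full monitoring tool:

              ```python
              # Sketch: flag degraded Linux md (software RAID) arrays via /proc/mdstat.
              # In the member-status string, e.g. [UU], an underscore marks a
              # failed or missing member, e.g. [U_] on a degraded two-disk mirror.
              import re

              def degraded_arrays(path="/proc/mdstat"):
                  text = open(path).read()
                  bad = []
                  # each array stanza starts with a line like "md0 : active raid1 ..."
                  for name, stanza in re.findall(r"^(md\d+) : (.*?)(?=^md\d+ : |\Z)",
                                                 text, re.M | re.S):
                      status = re.search(r"\[[U_]+\]", stanza)
                      if status and "_" in status.group(0):
                          bad.append(name)
                  return bad

              print(degraded_arrays() or "all arrays healthy")
              ```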

              • It is great until you have a drive failure; then the system turns to mush trying to rebuild, unless you are still using 250GB drives...

                • by h4rr4r ( 612664 )

                  You can say the same thing about most hardware controllers. Rebuild times these days are just too damn long. This is what will kill spinning discs in the enterprise.

            • Software RAID has its advantages. If your controller card blows up, you don't need to procure an identical card. It does have other drawbacks though.
              • by afidel ( 530433 )

                You don't need an identical card, just one of the same generation, at least for servers from the big 3.

        • Well, sure, you can do all that with Newegg available parts.

          Would I?

          Depends. If I can scale horizontally, sure. Downsize the spec and build 4 or 5, in case one fails and I have to wait days for a replacement part.

          If I have a vertical architecture, then I want a box I can get someone onsite in 4hrs or less.

          And that ain't Newegg; that is a Dell- or HP-sized company.

          • If I have a vertical architecture, then I want a box I can get someone onsite in 4hrs or less.

            And that ain't Newegg; that is a Dell- or HP-sized company.

            Management turned down my plan to have a second server. It was to be the identical model, but without all the disks and redundancy. They figured HP's 4-hour response time would be better than a hot spare server.

            Then the crash came.

            A nice fellow showed up within 4 hours, with the "most likely" part. It wasn't.
            The next day, more parts. Nope.
            The next day, two nice fellows showed up and replaced every part but the case. That solved it.

            The cost of downtime was so far beyond the cost of the spare server that

      • by MightyMartian ( 840721 ) on Wednesday September 12, 2012 @04:43PM (#41316723) Journal

        As much as anything, I think virtualization is murdering the market. I bought a $3000 server that hosts six VM guests: two Windows installs (one a DC, one an Exchange server) and four Linux. A couple of years ago, I would have needed at least three servers to do it (one for each Windows install, plus one for Linux). Admittedly they wouldn't have to have the balls that the new server has, but still, I think we'd be talking about $4000 to $6000 in hardware. Even better, these are all just basically images sitting on hard drives, so they can essentially be perpetual. In two or three years, when the current server dies or I decide I need more juice, I just move the VM images over and away I go. And with hardware prices the way they are, I doubt the next generation server will cost any more than the one I have now, maybe even less.

        Factor in the cloud, VPS hosting and so on, and the demand for servers will inevitably drop.

        • What VM do you use? I am quite ignorant when it comes to virtualization; I left IT (I am now back in academia) before virtualization became big.

          • by SuperQ ( 431 ) *

            I run a co-op VM cluster on Ganeti. We bought 3 SuperMicro 1U single-socket machines (12-core AMD, 64GB of RAM) for about $7,000. We have about 60% of our capacity rented out. The nice part is we allocate based on 1GB RAM slices, so you get a pretty powerful minimum server.

          • Currently I'm using Linux KVM with the libvirt libraries. There are limitations -- for example, there's no simple way to move images between servers -- but all in all it works well.
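
            For reference, a minimal sketch of talking to such a KVM host with the libvirt Python bindings; the qemu:///system URI assumes a local hypervisor:

            ```python
            # Sketch: enumerate KVM guests via the libvirt Python bindings
            # (pip install libvirt-python); assumes a local qemu/KVM hypervisor.
            import libvirt

            conn = libvirt.open("qemu:///system")
            try:
                for dom in conn.listAllDomains():
                    state, _reason = dom.state()
                    up = state == libvirt.VIR_DOMAIN_RUNNING
                    print(dom.name(), "running" if up else "not running")
            finally:
                conn.close()
            ```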

        • by rtb61 ( 674572 )

          Makes you think what the cloud is doing to the OS server market. It seems only the M$-managed parts of the cloud make M$ any real money, and the rest of the cloud is running OSs that keep revenue within those parts of the cloud controlled by those operators. That's taking the point of view that the cloud is a whole, and not really as separated as it is made to appear, because you can tie your services to more than just one operator -- especially considering the risks of the cloud.

      • by hawguy ( 1600213 )

        Search for your own. I priced one from HP/Dell and it would have cost $6,000 plus; built it with the same specs for $3,000. That right there is why their server sales are dwindling.

        The difference is not always so dramatic.

        My local whitebox builder can put together hardware equivalent to a Dell R720: dual E5-2620 CPUs, 32GB RAM, dual 1TB disks with onboard RAID (i.e. fake RAID) for $2800 with a one year carry-in warranty. Dell charges $3566 for the equivalent server, but includes a 3 year next-business-day on-site warranty.

        So the Dell costs $766 more, or think of it as about $20/month for on-site service.

        If you're a large shop (or a very small shop) and don't mind taking care of motherboard
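
        As a back-of-the-envelope check on those numbers (a quick sketch using the prices quoted above and the 3-year warranty term):

        ```python
        # Sketch: amortize the Dell premium over its 3-year on-site warranty.
        whitebox, dell = 2800, 3566
        premium = dell - whitebox        # $766
        months = 3 * 12                  # next-business-day coverage period
        print(f"${premium} premium = about ${premium / months:.0f}/month")  # ~$21
        ```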

      • by hjf ( 703092 )

        I don't know what you got. I got an IBM x3200 M3, quad-core Xeon, IMM (integrated management module), SAS controller, hot-plug bays, 2 gigabit NICs and 2GB RAM for $1000.

        If I had gone the route of IBM hard drives and RAM it would have doubled the price, but I just got Kingston ECC memory ($60 for 8GB) and some SATA HDDs (I don't need SAS).

        The killer feature for me? The IMM is connected to the serial port, so I can SSH into the IMM and get a Linux console (and also get to the BIOS -UEFI actually- over Ether

    • Comment removed based on user account deletion
      • by h4rr4r ( 612664 )

        Buy from the Big Three but get it refurb.
        You can get them with the original 3-year, 4-hour warranty still in place. Extend it if you need that, or better yet buy another one, and there are your spare parts.

      • The problem is that you have to support all of that equipment you just threw together all piecemeal-like. Do you have spare parts available? If no, how much does it cost to have them shipped overnight? Are they still available via retail channels or do you have to dredge through eBay? How much does it cost to purchase and store spare inventory? Do you have the equipment to test for failed components without the possibility of frying other equipment?

        Those "Big Three" server companies charge more because of service and support so you don't have to worry as much about those things. RMA and forget. And yeah, I'm saying that with a straight face.

        There are times when a company is small enough that your tech has enough idle time to deal with a white box server. Other times, your techs are better utilized doing other work.

        The Big 3 have the same problems. I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore. On the other hand, installing a third-party part voids the warranty anyway.

        • by drsmithy ( 35869 )

          I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore.

          You'd have to be talking about a machine at least 5 years old.

        • by cdrudge ( 68377 )

          The Big 3 have the same problems. I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore. On the other hand, installing a third-party part voids the warranty anyway.

          Under 15 USC 2302(c), they cannot require original equipment be used. If the 3rd party component (in this example a battery) can be shown to have caused damage, then they may have grounds to deny a warran

        • by afidel ( 530433 )

          That particular problem was solved a few years ago when they introduced flash-backed write cache. Basically it's a supercap or bank of regular caps that will power the controller long enough to push RAM contents into a flash module. I won't buy anything else, and in fact HP stopped offering battery-backed units with the Gen8 servers.

      • by heypete ( 60671 )

        I'm only really familiar with SuperMicro products, but they offer a pretty standard warranty [supermicro.com] for their servers. Since they use pretty standard components, rather than vendor-specific stuff or firmware-locked drives (see my other post), spare parts are pretty easy to come by. They had all the standard features like IPMI ("Lights Out"), redundant power supplies, etc.

        RMAing broken hard disks to Sun was an exercise in frustration and delays. It literally took weeks to get a hard disk replaced under warranty.

        Del

      • To some extent virtualization has done away with even this. Frankly, I doubt I will ever run a server that isn't a guest, unless I'm looking at something like a dedicated backup server (which I have right now) or some very high capacity database server (for my business's needs, I can't see that happening any time in the near future). So for most of my needs, I'd be buying something with good RAID, fast drives, lots of RAM and CPU that I can install VMWare or Debian with KVM or Xen support on (running KVM right n

    • ...looks a lot like the one from 2008. Big three = hardware warranty and support: drive dies, Dell guy's there in less than 4 hours. That covers the entire lifecycle of the server (3-5 years) while it's in production and playing a mission critical role. Virtualization/consolidation/cloud are whittling away at the server market, but it's never going to go away. Right now I'm dealing with an EC2 instance that won't start, and I can't detach the volume to try to snapshot it or mount it to another new instanc
    • Maybe something like a QNAP?

    • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday September 12, 2012 @05:10PM (#41317025)

      Is a big 3 server worth it?

      Almost certainly. The problem is most techies - especially young ones - only look at a handful of specifications (CPU, RAM, # disks) and the sticker price, because they think their time is free.

      • Or we think that our time costs, but it costs less than business downtime does. If you depend on the vendor and their support contract, you're impacted for however long it takes them to come out. They won't typically let you keep spares, so when a part breaks, that box is impaired or off-line for whatever your contract response time is, and there's nothing you can do about it. But if it's a white-box server that can be worked on in-house, you can typically keep spares on the shelf. It may cost more in admin/t

        • by drsmithy ( 35869 )

          Or we think that our time costs, but it costs less than business downtime does. If you depend on the vendor and their support contract, you're impacted for however long it takes them to come out. They won't typically let you keep spares, so when a part breaks, that box is impaired or off-line for whatever your contract response time is, and there's nothing you can do about it. But if it's a white-box server that can be worked on in-house, you can typically keep spares on the shelf. It may cost more in admin/t

    • Back in the day (say, 2008 as in the article), if you wanted to buy a server, you'd buy one from the big three.

      If you wanted a piece of shit (and let's be fair; there are plenty of times that a piece of shit is exactly what a situation requires), then yes; a server from the big three was the way to go. If, however, you wanted something "better" than that (the quotes are due to the admittedly subjective use of the word), you ordered a Supermicro or Intel serverboard, server case, high quality power supplies, etc, etc... and you never looked back (not if you belonged anywhere near a server, anyway!).

      The servers from

    • by mjwx ( 966435 )

      So what should you get for your first server? Say you're a small company. You've got a couple of laptops. You're outgrowing mutual Samba.

      You maybe want a fileserver. Maybe it'll have a few NICs, and a virtual machine on it (Xen?) will do double duty as an external webserver.

      Erm, if you're a small sub-10-man outfit (say engineering, for example) and need storage in this day and age, you just buy a $300-400 QNAP NAS and four $100 2TB disks. You've got to be pretty out of it to deploy a file server over a NAS box.

      This can be expanded with a cheap server running SBS or Linux. Businesses this small have been using non-brand-name Intel Xeon white boxen for over a decade; this is nothing new. Because a QNAP supports iSCSI and LDAP, you don't need excessive storage in a server to have Windows/AD

    • Hell, for most small companies, two single-drive NAS units that have automated failover and synchronization are all you need. Throw in external monitoring and plug-and-play backup redundancy for off-site, and you are golden.

      The MyBookLive units work pretty well in this respect, but I haven't bothered to do automated failover. We just use them for off-site backups with an rsync script that runs on the server.

      Add in a nicer router like a Cisco ASA 5500, and you are fine until you need an accounting server...
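
      A minimal sketch of that kind of off-site rsync push; the paths, host, and options here are hypothetical, not the poster's actual script:

      ```python
      # Sketch: push a server directory to an off-site NAS with rsync.
      # SOURCE/DEST are placeholders; assumes ssh keys are already set up.
      import subprocess
      import sys

      SOURCE = "/srv/data/"                       # trailing slash: sync contents
      DEST = "backup@offsite-nas:/backups/data/"  # hypothetical off-site target

      result = subprocess.run(["rsync", "-az", "--delete", SOURCE, DEST],
                              capture_output=True, text=True)
      if result.returncode != 0:
          sys.exit(f"rsync failed: {result.stderr.strip()}")
      print("off-site backup complete")
      ```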

    • by inKubus ( 199753 )

      Well, assuming you're just doing file stuff, one of the commonly available NAS solutions with a box full of disks and multiple file protocols would work great. If you're tiny, your external webserver will be at dreamhost or something (I might have said GoDaddy here in 2008), because you're not going to have a real network connection. More likely your network will be on par with your server equipment and it'll be a cable modem or DSL. Personally, and this has been my business niche a LONG time, so I hate

  • No surprise. (Score:5, Interesting)

    by heypete ( 60671 ) <pete@heypete.com> on Wednesday September 12, 2012 @04:33PM (#41316611) Homepage

    Why bother with branded parts made by an ODM when you can buy directly from the ODM?

    My old workplace had (has, probably) a fairly beefy Sun server with a whole bunch of disks. They used it as a RAID-based storage server for a bunch of lab data. As they do on occasion, a hard disk would crap out. The server wouldn't take ordinary disks, though: it would only accept Western Digital disks with some Sun ID code baked into the firmware -- rather than simply being able to buy a few WD RAID-friendly disks ahead of time, we had to jump through Sun's hoops to get disks replaced under warranty. This usually was a multi-week process, during which the array with the failed disk was running on a hot spare -- hardly ideal. That was the last time we bought Sun systems.

    At some other point, we were planning on setting up a few more storage servers for backup data. Dell's price for a storage system, including firmware-locked drives, was about triple the cost of doing it ourselves with SuperMicro servers, MD-based software RAID, and RAID-friendly disks. We ended up buying two of the SuperMicro-based systems and putting them in different buildings for semi-offsite backup (the concern was if the server room caught fire, not if a meteor affected the whole city). The only extra step during the setup was putting the disks in their caddies: the Dell systems came with the disks pre-installed. That took about 5 minutes per server. Whoop-dee-doo.

    The Dell servers restricted our options (with firmware-locked disks) and cost substantially more than doing it in-house. We'd be stupid to go with their products, as we'd be locked to that vendor for the life of the servers.

    Sure, we had Dell Optiplex systems as the desktop workstations for researchers, since they were inexpensive, reliable in the lab, and essentially identical (useful for restoring system images from one computer to another), but their server stuff is stupidly overpriced.

    The SuperMicro servers were much more "open" in that they used pretty bog-standard parts and didn't have stupid anti-features like firmware locking.

  • At the beginning of August I got a quote from Dell for 2 R710 servers and 4 R610 servers. Three weeks later I placed the order. The response? Sorry, we're not selling those any more. You have to buy the R720s instead, and they're more expensive.

    So, sorry Dell. I won't be considering you for the upgrades to the other 200 servers I manage after all. Pity, because HP just pissed me off with the DL380p Gen8, which can hold 16 drives but has no RAID card that can address more than 8.

  • FIFTH? (Score:3, Interesting)

    by Anonymous Coward on Wednesday September 12, 2012 @04:37PM (#41316651)

    Let me get this right. Google, who builds all of their servers in-house, exclusively for their own use (not for resale), is the fifth largest buyer of Intel server chips in the world?

    That sure paints a picture about the sheer size of Google's data center operations.

    • It also paints a picture of just how much pr0n, lolcats, and pointless facebook updates actually exist on Earth.

      Pretty depressing, isn't it?

  • by King_TJ ( 85913 ) on Wednesday September 12, 2012 @04:38PM (#41316669) Journal

    While yes, right now, the tide may be against the server manufacturers -- the cloud still requires them in large quantities to host those services. If it negatively impacts sales, it's only to the extent that efficiency is improved. (E.g., Joe Businessman, who once bought a server for his office of 10 employees, skips it in favor of cloud computing solutions. But it turns out his needs are small enough that he can share the load with 1-2 other small businesses like his, all on a single server in the cloud.)

    In my opinion, Dell has the right idea -- rethinking who the customer is for their server products. Beyond that, what's really news here?

    Going out on a bit more of a limb though? I'm really of the opinion that cloud services are over-hyped as the "in" thing for every business. Once companies migrate heavily to cloud hosted solutions and use them for a while, a fair number will conclude it's not really beneficial. Then you'll see a return to the business model of running in-house servers again. (Granted, those servers might be smaller, with lower power consumption than in the past. Little "microservers" can handle much of the basic file and print sharing work companies used to relegate to full-size rack mounted systems.)

    But my own experience with cloud migrations tells me that it's not so great, 9 times out of 10. For example, my boss has been using the Neat document management software for a while now to scan in all of his personal receipts and documents at home. Neat now offers "NeatCloud" so you can upload your whole database and then access your docs via an iPhone or iPad client, or even scan something new in by simply taking a picture of it. Sounds great, but in reality, he had nothing but problems with it.

    The initial upload tied up his PC for the better part of his weekend, only to report that some documents couldn't be converted or uploaded properly. He had close to 100 random pages of existing documents thrown in a new folder the software generated, to hold the problem ones. The only "fix" for this was to click to open a trouble ticket for EACH individual document that failed, so someone at Neat could examine it manually and correct whatever issue prevented their system from properly OCRing and uploading it. Clearly, that wasn't much of a solution!

    He tried, repeatedly, to get someone to remote control into his PC to do some sort of batch repair for him -- but after a couple promises to call back "the next day" to look at it, nobody ever did. Now, all Neat can tell him is they have another update patch coming out for the software in the next week, and to disable cloud uploads until that time.

    Or take the recent migration a small office did from GoDaddy pop3/smtp email with Outlook to Google hosted mail. I usually help these guys with their computer issues but they thought they could tackle this migration on their own. Turns out, they wound up with a big mess of missing sub-folders of mail in Outlook on the owner's machine. After a lot of poking around, I discovered part of the problem was due to characters in the folder names that Google Apps didn't consider valid. When it hit one of those during the mail migration, it just skipped the whole mail folder upload with an error. (Did Google's migration wizard utility even warn about this in advance or offer to help rename the problem folders before continuing? Heck no!)
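
    A pre-flight check along these lines could have flagged the bad folder names before the migration started. A sketch using Python's standard imaplib; the host, credentials, and the set of suspect characters are assumptions, not Google's documented rules:

    ```python
    # Sketch: list IMAP folders and flag names a Google Apps migration might
    # reject. SUSPECT is a guess at problem characters, not an official list.
    import imaplib

    SUSPECT = set('\\^')  # assumed examples; check Google's current rules

    conn = imaplib.IMAP4_SSL("mail.example.com")      # placeholder host
    conn.login("owner@example.com", "app-password")   # placeholder credentials
    status, boxes = conn.list()
    for raw in boxes or []:
        # naive parse; assumes the server reports "/" as its hierarchy delimiter
        name = raw.decode().split(' "/" ')[-1].strip('"')
        bad = SUSPECT & set(name)
        if bad or name != name.strip():
            print(f"rename before migrating: {name!r}")
    conn.logout()
    ```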

    For that matter, take what you'd think is pretty basic functionality: cloud-based data backup. I've run into multiple situations now where people used services like MozyPro for their backups, only to discover a full restore (when a drive crashed) was incredibly slow and kept aborting in the middle of the process, making the data restore essentially impossible. Mozy's solution? They're willing to burn a copy of the data onto optical disc and physically mail it back to you. So much for the whole cloud thing, huh?

    • While yes, right now, the tide may be against the server manufacturers -- the cloud still requires them in large quantities to host those services.

      Google's position on the list of Intel server-chip buyers makes it clear that the problem isn't for server manufacturers (which Google very much is); it's for server vendors. Sure, the cloud requires servers. But if the people selling cloud services are also building their own servers, that doesn't create a market for server vendors.

    • It may also depend on what kind of servers companies like Google want. Dell, HP and the like produce expensive servers with high-cost maintenance contracts, which look great to conventional business-executive types. Google, OTOH, is probably taking the techie approach of generic white-box servers with no support. They're installing their own OS image on it, and it's not going to be Windows or a commercial Unix, and with all Google's custom software they probably find vendor support all but useless. Ditto ha

  • Further proof that tablets and the Cloud(tm) are the paradigm shift into the new memesphere. Nobody needs big, bulky Iron from folks like IBM, HP, EMC, etc.

    We'll do it all now on clustered iPads! With Retina Displays! Surfing the web is dead, now we're Hangliding in The Cloud(tm)!!!!

    • "The Cloud" is only good as secondary backup if you don't care that it becomes public.

      Encrypt it all you want. Access to your data is the hardest hurdle, and by using the cloud you give it away.

      • "The Cloud" is only good as secondary backup if you don't care that it becomes public.

        Encrypt it all you want. Access to your data is the hardest hurdle, and by using the cloud you give it away.

        But.. but.. but... smartphones and virtualization and...and...and...free community wireless internet over dark fiber!!!

        (Yes, I'm just being silly. Having a slow day at work and the free coffee sucks)

      • "The Cloud" is only good as secondary backup if you don't care that it becomes public.

        Encrypt it all you want. Access to your data is the hardest hurdle, and by using the cloud you give it away.

        I'm thinking that people who want to "be in the cloud" don't think about stuff like encrypting. "What, me--worry? I'm using the cloud!" En/Decrypting is work, and the whole idea of the cloud is to avoid work. If any crypto is being done, it's probably a service operated by your friendly (non-local) cloud provider, which means it provides no real security at all.

        This willingness of businesses to surrender their family jewels—their data—to complete strangers has puzzled me since this type of serv

        • The benefit of the "cloud" is reduced costs, and using it certainly doesn't mean your data is insecure.

          Tarsnap (a backup service), for example, is very much a cloud service (runs on EC2 and stores the user data on S3), yet it encrypts each archive you upload with a random AES256 key that is then itself encrypted with an RSA key that never leaves your machine, and the whole thing has multiple levels of signatures (to prevent tampering).

          It's also designed and run by the FreeBSD Security Officer, which isn't a position given e
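
          The hybrid pattern described above (a fresh AES-256 key per archive, wrapped with a long-lived RSA key) is easy to illustrate. A sketch with the Python cryptography package -- the general technique, not Tarsnap's actual code:

          ```python
          # Sketch of hybrid encryption: each archive gets a fresh AES-256 key;
          # only the RSA-wrapped key travels with the ciphertext. Illustrative
          # of the general pattern -- not Tarsnap's implementation.
          import os
          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import rsa, padding
          from cryptography.hazmat.primitives.ciphers.aead import AESGCM

          rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

          def encrypt_archive(data: bytes):
              aes_key = AESGCM.generate_key(bit_length=256)  # fresh key per archive
              nonce = os.urandom(12)
              ciphertext = AESGCM(aes_key).encrypt(nonce, data, None)  # authenticated
              wrapped = rsa_key.public_key().encrypt(
                  aes_key,
                  padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                               algorithm=hashes.SHA256(), label=None))
              return wrapped, nonce, ciphertext

          wrapped, nonce, ct = encrypt_archive(b"archive bytes")
          print(len(wrapped), len(ct))
          ```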

  • by ickleberry ( 864871 ) <web@pineapple.vg> on Wednesday September 12, 2012 @04:56PM (#41316869) Homepage
    Do they confirm it? Nothing's actually dying until Netcraft says so.
  • by Thud457 ( 234763 ) on Wednesday September 12, 2012 @05:08PM (#41317013) Homepage Journal

    see, I told you that electronic data processing was a fad

    -- Spencer Tracy, "The Desk Set", 1957

  • by netwarerip ( 2221204 ) on Thursday September 13, 2012 @08:37AM (#41322329)
    And everything to do with VMware. No one is buying servers because they have no need to. When I can replace 400 physical boxes with a couple dozen ESX hosts, why wouldn't I?

    I guess another way to look at it is that Intel has innovated themselves out of a market. Multi-core procs have enabled the virtualization boom, but Intel didn't charge enough for them. At least the auto industry was smart about it - new cars last twice as long as cars from 15-20 years ago, and prices have gone up accordingly.
  • ... where is all this going to sit when you build your new computer on your printer?
