Demand For Custom Datacenter Servers Rising

With his first posted submission, SpaceCracker writes "According to this Bloomberg article, hardware giants like HP and Dell are losing out to Intel and others who've adapted more quickly to the trend of shifting from traditional, off-the-shelf servers to custom-tailored machines. 'Buyers say custom servers provide a cheaper, more efficient way of meeting the boom in demand for personal data shared via the Web. A lot of that demand can be met by less expensive machines shorn of the components, upgrades and backup services that server makers traditionally offer to large corporations.'"
  • Not custom, just not bloated. Bloat in hardware, operating systems, and "standard desktops" has become so bad that people are willing to pay a premium to remove some of it. The high-end Intel server I got from System76 had nothing on it I did not need, and was cheaper than others, even while being "custom." The Linux drivers were a plus as well.
    • Yup.

      The sheer amount of shit that the major manufacturers put on PCs is staggering. HP is awful for that with all of their "assistants" and "wizards" and crap like that, and everybody wants to give you a big steaming pile of trial-ware.

      Getting a machine stripped of all of this junk is pretty damned hard.

      • That is just software. How about hardware? Why do you need a chipset with high-end RAID and 12 SATA ports when you are going to install an LSI card? Look under the hood and you see the same crap, but worse.
        • by h4rr4r ( 612664 )

          That is because they are using this to improve margins. Much like a car dealer will lie and say those alloy rims only come with a higher trim package (they don't; how else would he sell replacements?), hardware vendors want to bundle crap too.

        • If they think 98% of the market is going to go with the onboard, it's cheaper for them to have a single part with an unused component 2% of the time than to maintain two motherboards with different chips populated with independent replacement stock (or to make it a field pluggable module). If they relax warranty promises or have a handful of customers driving tens of thousands of servers for one run, they can (and do, as pointed out in the article) make exceptions.

        • by LWATCDR ( 28044 )

          Chipsets don't have high-end RAID on them; they have software RAID and, if you are lucky, a little bit of help with XOR.
          The simple answer is that this is what is on the chipset, and a custom chipset with less would cost more.
          A lot of this stripping down is using desktop CPUs and chipsets instead of the server parts because they are cheaper. LSI cards? Nope, they want two Gig-E ports. They don't use local disks, they use SANs.
          I am sure at some point someone will come up with a server that just has an SD slot in the front for you to put the boot image on.

          • ...
            A lot of this stripping down is using desktop CPUs and chipsets instead of the server parts because they are cheaper. LSI cards? Nope, they want two Gig-E ports. They don't use local disks, they use SANs.
            I am sure at some point someone will come up with a server that just has an SD slot in the front for you to put the boot image on.

            No doubt. But if this is the case, why not just a processor, RAM and some Giga-E ports?

            Won't the boards need some facility to set up the BIOS and/or perform diagnostics? It would be great to do that via LAN, but I'm not aware of any systems that support that (though that might say more about my lack of knowledge than actual capabilities). Perhaps that could be handled offline using a plug-in card that supports video and USB. Is it cheaper to add a PCI slot, or USB and simple video?

            • by LWATCDR ( 28044 )

              You can have KVM over IP or serial over LAN for a server. You could even use a good old-fashioned UART, which would be as convenient as a USB port and a video port. Truth is that the average server should have as much use for a video system as a submarine has for a screen door.

              As to why not just a processor, RAM and Gig-E ports? Simple: computers are not that simple anymore. A modern x86 CPU needs a chipset to interface with that RAM and network adapter, and the rest is economics.
              If you are going to make a chip

            • Modern servers have a BMC built in for OoBM that speaks IPMIv2. If the server has power, the BMC is running and getting an IP address via DHCP (or a static address if configured).

              Besides remote power on/off, querying sensors, etc., IPMI offers serial-over-LAN with BIOS redirection. You get a console over Ethernet at the hardware level. ipmitool is great.
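
              As a rough illustration (a minimal sketch only; the BMC address and credentials below are placeholders, and it assumes ipmitool is installed and the BMC is reachable over the LAN), the usual commands can be driven from a script:

                import subprocess

                # Placeholder BMC address and credentials -- substitute your own.
                BMC = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "secret"]

                def ipmi(*args):
                    # Run one ipmitool subcommand against the BMC and return its output.
                    out = subprocess.run(["ipmitool", *BMC, *args],
                                         capture_output=True, text=True, check=True)
                    return out.stdout

                print(ipmi("chassis", "power", "status"))  # remote power state
                print(ipmi("sdr", "list"))                 # sensor readings (temps, fans, voltages)
                # For the console itself, run interactively: ipmitool -I lanplus ... sol activate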

              • by LWATCDR ( 28044 )

                Is it built in at the board level, or is it still an add-in card? When you think about it, such a device would be painfully simple: just take any microcontroller on the market and have it emulate a UART, VGA, and PS/2 ports, then add a network port. If you set it up to use PoE, the BMC would run even if the server didn't have power and could at least sound an alarm about a power fault for that machine.

                • Dell has a BMC built in to every server (AFAIK). The add-in card (DRAC) is only needed if you want an extra ethernet port, or the web interface and (GUI) KVM features.

                  HP has their iLOs built in to every server as well (AFAIK), which will do the IPMI thing once the proper software is loaded.

                  Supermicro, I recall require(d) an add-in card, but that could be old news.

                  Just about any plain-vanilla Intel server board has IPMI built in as well (they wrote the spec, after all). Even old P4/Xeon boards have had IP

          • I am sure at some point someone will come up with a server that just has an SD slot in the front for you to put the boot image on.

            Nope... PXE and/or an iSCSI target LUN, NOT an SD slot. You can do this right now with most servers. TOE and the like. Throw in IPMI + SOL and you don't need the serial ports, k&m, VGA, etc. You don't really even need the power button. If we could pull more power over CAT-6, servers wouldn't even need power cables; it could be 100% Ethernet... Plug two CAT-6

            • by Fjandr ( 66656 )

              Nope, they need to be laser-powered over fiber. Freakin' laser beams!

            • by LWATCDR ( 28044 )

              Good point about PXE. I don't think you are going to see a good-sized x86 server that will run over PoE anytime soon, but it would be nice.
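
              Some rough numbers on why (back-of-the-envelope only; the 250 W figure is an assumed draw for a modest 1U box, not a measurement):

                # 802.3af PoE delivers ~12.95 W to the powered device; 802.3at (PoE+) delivers ~25.5 W.
                poe_plus_watts = 25.5
                assumed_1u_draw_watts = 250.0   # hypothetical loaded draw for a modest dual-socket 1U
                print(assumed_1u_draw_watts / poe_plus_watts)   # roughly ten PoE+ ports' worth of power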

              • Booting to an iSCSI target is built in to most servers' firmware at this point. The TOE takes care of the performance issues, and the OS doesn't even know the disk isn't a local block device.

                PXE boot (with NFS mounts or whatnot) was the old way. It still works, and even the lowest-end PCs can do it easily, but an iSCSI SAN over GigE is a much better solution these days.
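
                For the OS-level equivalent of what the firmware does at boot, something like the following sketch would attach the same SAN LUN with open-iscsi (the portal address and IQN are placeholders, and it assumes iscsiadm is installed):

                  import subprocess

                  PORTAL = "192.168.10.20"                     # placeholder iSCSI portal (SAN) address
                  TARGET = "iqn.2011-09.example:storage.lun0"  # placeholder target IQN

                  # Ask the portal which targets it advertises (open-iscsi's iscsiadm must be installed).
                  subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

                  # Log in; the LUN then shows up as an ordinary local block device (e.g. /dev/sdX).
                  subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"], check=True)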

                • by LWATCDR ( 28044 )

                  Good enough. I do not deal with big-time servers much, so thanks for the education.
                  Now you are making me want to add some remote management to my home servers.
                  Maybe when this comes out http://www.raspberrypi.org/ [raspberrypi.org] I can use it to do a little bit of home-brew server management. With SPI, I2C, a UART, and a little GPIO I could tie in the reset switch, serial port, power switch and some sensors. Unless the BIOS supports redirection over serial it will be limited, but for home use it should be a fun project a
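
                  Something like this sketch would cover the reset-switch part (assumptions: the RPi.GPIO library, a hypothetical BCM pin 17, and the Pi wired to the motherboard's reset header through an optocoupler or transistor rather than driving the header directly):

                    import time
                    import RPi.GPIO as GPIO

                    RESET_PIN = 17   # hypothetical BCM pin, wired through an optocoupler to the reset header

                    GPIO.setmode(GPIO.BCM)
                    GPIO.setup(RESET_PIN, GPIO.OUT, initial=GPIO.LOW)

                    def reset_server():
                        # "Press" the reset switch by pulling the header line for half a second.
                        GPIO.output(RESET_PIN, GPIO.HIGH)
                        time.sleep(0.5)
                        GPIO.output(RESET_PIN, GPIO.LOW)

                    reset_server()
                    GPIO.cleanup()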

                • by Junta ( 36770 )

                  PXE provides a distinct capability from iSCSI boot. PXE can be (and is) used in some cases to start iSCSI (e.g. by chaining iPXE). PXE can also be used to run a RAM-hosted OS for reduced steady-state network utilization.

      • With Compaq SmartStart.

        What the hell! Then they required it for initial boot! And every other manufacturer saw this as worthy of emulation.

        They were awfully captive to a Windows-oriented market, and driven by MS.

      • "The sheer amount of shit that the major manufacturers put on PCs. HP is awful for that with all of their "assistants" and "wizards" and crap like that, and everybody wants to give you a bit steaming pile of trial-ware.

        " ... for desktops you do know about PCDecrapifier and CCleaner right?

        Who puts these on a server? I highly doubt HP would be that dumb to include this on a server as I doubt any Lan Admin would want Windows Messenger and to play the latest spyware included games.

        Any company worth there salt d

        • ... a fresh install of Windows Server ...

          ... When handling $500,000 worth of data ...

          ???

        • I'm assuming that people buying 10,000+ servers do care if the servers cost $40 more...
        • I read the first two comments and immediately recognized that they were made by people who have never unboxed a server before. When I get in servers from Dell, Oracle (Sun), Supermicro, whoever, they don't even have the RAID set up. Why would I want the monkey who assembles servers for Dell to set up the RAID? Inevitably they don't set it up how I want it for the server I am building.

      • Not to sound snarky, but don't you just reformat and reinstall the OS? Yeah, it's a big step for your average Joe, but this is Slashdot.
        • Since the article is about datacenters, and that would mean business, who even uses the OEM copy of Windows? What company that uses Windows doesn't have Open Licensing?

          What large company doesn't have a company image they blow onto new desktops/servers?

    • We've slowly been migrating back to white box servers as the cost of great hardware declines and the ability to tailor the project directly to the needs of the specific instance increases dramatically. We have some boxes with 12 SSDs in RAID 10 with redundant power, and other boxes with 4 SATA 1TB drives unstriped and 12 gigabit NICs. Custom builds don't take long to set up once you have a handle on how to match your components, rack, power, etc.

      The major hardware providers have to mark up every step of the way
    • It's not bloat, it is just common features that they don't need. Is your car bloated because it comes with seat belts, air conditioning, and windshield wipers? SATA controllers and PCI-X slots are not "bloat", but they are features some companies don't need.
      • You are right, cars are not bloated for the reasons you mention.

        But if you replaced the driver's airbag in a car with a shotgun, I bet everybody would all of a sudden drive like a responsible person ;-)

        • I walked into work today shaking my head, it baffles me why people drive so badly around here.

      • Agreed: engineering time, component testing, multiple warranty options, global distribution channels for repairs/parts, etc. are not bloat. However, to each their own.
    • Unfortunately, System76.com lists ONLY Intel servers.
      • Then buy Dell, HP, whoever. Servers don't come with the bloat; generally, servers come blank.

        I look at system76.com, click servers, click configure, go to hard drives, and wonder who the hell needs a 500GB system drive? They don't even offer 160GB drives for the system; how is that custom? I thought maybe they just aren't sold anymore, then checked Newegg, and look, you can buy tiny drives if you want! So yeah... not a very good custom house when you can't split the system and data drives on a server...

        • You can with partitions... And if you are running full RAID, it makes sense to have that failsafe in your system drive as well. But if you want a smaller drive, call them. I got drives for mine that were not on the web site. (Although they were options about a week later)
  • by h4rr4r ( 612664 ) on Monday September 12, 2011 @03:26PM (#37380462)

    If you want 12-core Opterons you are automatically required to get a 2U machine from Dell. It does not matter that the 1U machines could use those parts; they do this to improve their margins.

    Lots of stuff is like that: feature X is only enabled on platform Y. Intentional crippling of hardware leads to big buyers sidestepping these vendors.

    • I don't know about Dell or anything precisely along the lines of what you describe, but there *is* more to server design than 'does it fit in the socket'. If the cooling chosen for a 1U was designed to a certain TDP and the Magny-Cours exceeds that, then they may not have enough room around the socket to accommodate a heatsink that can dissipate the heat given the flow rate.

      In terms of 'custom', it's unclear to what extent they are talking about board components (which have been increasingly sparse in t

    • Yeah, all these 'steps' are very expensive. If you want A, you're forced to buy B. I do OK building custom 'whitebox' machines for clients. If they worry about depot service, I point out that they can buy two of mine for less than one of theirs, configured in a hot/hot or hot/cold setup. My boxes do better on energy consumption too.

      • by Anonymous Coward

        I'm sure that your white boxes are fine, but let's be serious for a moment.

        Do your white boxes support two or four sockets? Hot-pluggable?
        How much memory will they hold, 64GB or more? Hot-pluggable?
        What about the power supplies, do yours have N+1? Hot-pluggable?
        How about drives, 15K RPM SAS? Hot-pluggable?
        What about battery-backed RAID controllers to support those drives, do yours provide the most possible bandwidth?
        Quad gigabit NICs onboard with TCP offloading?
        What about lights-out remote management, you hav

        • The answer to all of that is yes. IPMI is found on boards from most whitebox server manufacturers and is actually standardized to the point where a Tyan motherboard and a Supermicro can be managed in the same ways. Compare that to the fact that iLO2 is different on the 100-series versus the 300-series ProLiants, and you develop a lot of frustration fast. It's nice to see I'm not the only shop that's been going whitebox lately.
          • The only thing I've not been keen on about the Supermicro IPMI is that it's way more quirky and unpolished compared to iLO2/3 (I've only used the 300 series servers). I also have issues with the former locking up and failing to respond to IPMI commands after a while which requires logging in to the web interface and clicking the reset button. Of course, the price point is impossible to beat with Supermicro, and the expensive HP iLO advanced licensing is usually a deal killer.

        • The Mac Pro in server use fails at a lot of that.

    • by drsmithy ( 35869 )

      If you want 12-core Opterons you are automatically required to get a 2U machine from Dell. It does not matter that the 1U machines could use those parts; they do this to improve their margins.

      Identically configured 1U and 2U machines cost essentially the same anyway (or at least they did the last time I priced them out - been nothing but blades for a while now).

      • Does that include the rack space rent?

        • by drsmithy ( 35869 )

          Does that include the rack space rent?

          Generally speaking, you run out of power (/cooling) long before you run out of rack space.

          Very few places will let you put 42 1U servers in a single rack.

    • You want better tires, transmission and no speed limiter? You have to buy the package with the huge wasteful engine too. You want the sports suspension, low-profile tires and higher-tuned engine? You need the sports package with the automatic transmission (I kid you not).

      • sport package with automatic transmission? What idiotic car company are you speaking of?

        Not that it is in any way a sports car... but I had to buy the Camry SE (Sport Edition) to even be able to get a clutch. A clutch nowadays is a sporty feature, and rather hard to get (in the US at least... maybe you are in the UK?)

        • by Quila ( 201335 )

          sport package with automatic transmission? What idiotic car company are you speaking of?

          Mid-'90s Dodge Avenger: the V6 had to be ordered with the automatic transmission. The sports package, with the bigger rims, performance tires and aerodynamics, dictated the V6 engine, and thus the automatic transmission. Needless to say, I passed on that car.

    • by Trogre ( 513942 )

      ...or just build your own like I did. There's no shortage of decent G34 motherboards.

      2-CPU (24-core) systems can be put together pretty easily, provided you can afford the CPUs in the first place.

  • I tried these guys [supermicro.com] (just noticed they switched from .com to .nl, interesting, wtf?).

    They are not as refined as the big ones, but they provide bang for the buck.

    • by Lennie ( 16154 )

      The site does not redirect to the .nl site in the US, just in the Netherlands (where I am), or the whole of Europe I guess.

      It is probably using a GeoIP or whatever database.

  • by zbobet2012 ( 1025836 ) on Monday September 12, 2011 @03:54PM (#37380762)

    HP and Dell are also killing themselves on SSD prices as datacenters move to these for their increased reliability and performance. HP and Dell both charge anywhere from 10 to 20x the prices of other parties.

    Also, TFA is about datacenter servers, which always get re-imaged by large customers to whatever their operational image is. Almost none of the people listed here would deploy the OS that "ships" with the box, so all the bloatware complaints are idiotic.

    • Yep, we just had to re-image a brand new server back to 2003 because the client's ERM software wouldn't run on 2008. Surprised us, but that's what legacy software will get you.
    • You should be careful using a term like idiotic, especially when you don't know what bloatware is being referred to. In the case of HP it's things like iLO2, which varies greatly from model to model, while Supermicro and Tyan both use IPMI, giving me a standardized remote KVM solution that actually works in multiple browsers. HP's System Management tools are a complete joke as well; no big deal, as I use Nagios to monitor my servers and SCCM for patch management.

      Another benefit of going whitebox was my ability

      • by bored ( 40072 )

        I buy an HP; sure, it says it supports RAID 5 or 6, but that's Windows-only software RAID, prone to problems.

        I guess it must be the MBA's, but the arrogance at HP really started to piss me off a few years ago.

        First it was the removal of 3.5" bays from their machines, because after all none of their customers want to put high capacity low cost drives in the machines. Instead we all want to buy 10x as many 2.5" drives at 20x the cost.

        Then it was their refusal to even send out system diagrams of the CPU-

        • I've found with their Smart Arrays you just have to be careful which ones you buy. Of course, after being burned a few times I left HP altogether; I haven't bought an HP in two years because even the ProLiant networking crap started requiring extra licensing. Unfortunately the 3Com crap is screwing with their stellar switch business. It sucks: I ordered an HP switch and received a 3Com with a different OS and set of capabilities that doesn't even work with my existing HP tech.

          I'd say HP is trying to commit

  • IBM, HP, Dell, etc. quit innovating LONG ago. Now they are busy either shifting operations offshore to make up for horrible management, or simply selling the unit (again, to make up for horrible management, marketing, sales, etc.).
    • The best servers, unfortunately, are the Oracle ones. Now they are waaayyyy too expensive, with Oracle contracts for Oracle Database whether you need it or not. When Sun owned them you could upgrade the RAM WHILE THE MACHINE WAS ON. Totally redundant and hot-swappable. PC servers just do not compare.

      Since blades are being used now no one cares anymore I guess

      • by guruevi ( 827432 )

        For the $15,000 it costs for one of those machines (they are nice and well worth the price if you need those features), you can easily buy 5-10 white box servers and a couple of VMware licenses to shift the VMs around while they are running. The techniques used in the Sun (Oracle) machines were/are nice if you need big iron-style stuff like the financial sector does, but for most other applications they are outdone these days by smarter software.

        • by kenh ( 9056 )

          You may want to re-price those VMware licenses - they just revised the terms. I don't think you can implement VMware on 5 boxes for $10K, ignoring the cost of the underlying hardware...

      • High-end Dell servers can hot-swap memory, daughterboards, hard drives, power supplies, and fans. I don't think anyone can hot-swap CPUs, though, but I could be wrong. Dell even offers RAID 1 (mirrored) memory, though I can't imagine why you would do that.

    • And another ex-Dell employee heard from.
    • Interesting. I'm not seeing what you are. There have been some enormous innovations from Dell and HP lately, especially in bringing down the price of storage. Have you seen the Dell MD3xxx? You can hook up to four hosts to up to 96 hard drives using redundant SAS. For a small price that thing packs an awful lot of performance that used to cost a heck of a lot of money. For smaller implementations it packs quite a punch.
  • The next generation of Windows Server running on small, cheap, low-power ARM system-on-chip servers. Cheap, commodity stuff. In parallel with MS starting to use this to supply their hosted Azure services, I can see private cloud in the enterprise running on much the same sort of thing. Stateless machines, much like Azure. It won't suit everybody, but I bet this is the way it's going to go. Why burn power on Intel machines when you can save heat, power and space with SoC stuff?
    This is a guess only, but I think it
    • by kenh ( 9056 )

      The magic of extremely low-cost, low-power CPUs is starting to wane. A couple of years ago most servers were underutilized, and there were two approaches to 'correcting' that problem: a) lower-cost, lower-power servers (Atom, ARM, other) and b) virtualization (VMware, Hyper-V, Xen, KVM, etc). I would argue that at the corporate level virtualization is winning out, and low-powered servers are finding use in one-off installations (home servers, workgroup appliances, and the like).

      SGI/Rackable Atom-based desksid

      • Where we run into the most challenges (across many different applications/workloads) is in disk I/O. I can't wait for SSDs to get larger/cheaper, as that will be a game changer.
  • Where do these vendors source their hardware from? Do they go straight to Taiwan with specs in hand and have a production run of boards done? What happens when they need spares? Do they just buy a whole slew of spares and then when those run out, move onto the next design?

    What happens with industry-wide problems, like what Intel experienced with the Sandy Bridge CPUs? I ran into some weird issues where a couple of my Dell blades were not seeing all of the RAM. They had a firmware fix available for

    • http://opencompute.org

    • Where do these vendors source their hardware from? Do they go straight to Taiwan with specs in hand and have a production run of boards done?

      Yep. Google goes to Gigabyte with specs and they start up the assembly line. They're constantly buying more, so "spares" aren't an issue. And when you have a "cloud" environment (we used to just call them clusters) you don't need identical replacements... Use the same SAS/SATA chipset, otherwise completely update the design (every few months), and drop it in to

  • Back in my day, if you wanted a special server at a datacenter you built your own. I still have a dual 550 Xeon sitting around here somewhere in a 2U case that I built many moons ago. It's not an unheard-of concept.
  • I just got back from the datacenter, building out new racks of servers, spec'ing out more servers for future expansion, etc. If anyone should be able to understand the story, I should... but I don't. The reporter clearly doesn't understand the topic, so the story becomes completely demented. They're using completely unrelated issues, and even opposing reasoning, to paint a picture of an emerging trend.

    Let's shed some light on this murky discussion:

    "People want to be able to

    • For the likes of Dell/HP/IBM, these scenarios present a problem. These datacenters architect their solution so that manageability and service are no big deal. A system fails and it's going to be 3 weeks before you can get a replacement in? Fine. Can't get a replacement anymore because that model is done? Upgrade it to something 'close enough'. Much of Dell/HP/IBM's cost compared to, say, Supermicro is in maintaining stockpiles of replacement parts, keeping them distributed across the globe, paying for

    • You're forgetting something: both Google and Facebook accept that a certain percentage of that hardware will break, and leave it broken until the next maintenance window. They make up for it in numbers and handle the redundancy/high availability in software/the OS. They also accept that common hardware is "good enough" and achieve performance through higher volumes. They are also big enough to have a custom server built (design PCB, test, build, etc.). Most companies aren't big enough to justify a complete custom desi
      • It is easy to handle a certain percentage of your servers failing when you can buy 3 servers for the price of 1 fully redundant one. At least when you have smart people administrating them who won't cut short-term costs at the expense of long-term ones.

        The only important variable here is how you store and service the extra servers. If those are expensive enough, you'd be better off acquiring some mainframes...

      • both Google and Facebook accept that a certain percentage of that hardware will break, and leave it broken until the next maintenance window. They make up for it in numbers and handle the redundancy/high availability in software/the OS.

        I didn't forget that at all... That is the very x86 business model that Dell and HP are serving to begin with. If you need high availability, you don't depend on a single system being up all the time. If that's your business model, then you want a mainframe, NonStop server, or simil

  • I'd love to convert my x86-based farm into servers that don't have any expansion slots, but do have a pair of 10GbE LOM (LAN On Motherboard) ports. I don't need internal disk, just CPU and RAM. I currently boot from SAN, but if I can get rid of the PCI slots, then I can boot from FCoE instead. All my disk is concentrated in the disk arrays, so I don't have to deal with disk in the servers and I can get rid of the power-inefficient HBAs. Now if my LAN and storage admins could stop fighting over "Who ge

    • by Junta ( 36770 )

      Because you want 10GbE, but maybe you want QLogic Ethernet chips, another guy wants Broadcom, another guy wants Emulex, and yet another guy wants Intel; and maybe half the people don't even want 10GbE, and 10GbE chips are still *expensive*.
