
Sun To Include SSDs On Server Motherboards 79

Posted by timothy
from the smaller-canyon-shorter-echo dept.
snydeq writes "Sun has announced plans to integrate solid-state drives onto server motherboards to provide faster data access for I/O intensive applications. For now, the company is offering SSDs that customers can slide into their storage bays, but long term, Sun will locate SSDs closer to the server CPUs to cut the bottleneck that occurs when powerful, multicore CPUs have to wait for data to be delivered from hard drives, according to the company. The move could mark a change in how Sun servers are designed going forward, including the possibility of servers that have no hard drive, relying entirely on SSDs."
  • Sun's hardware is already prohibitively expensive, so how much will options like this add to the price? When I can order up a pair of 4U boxen from any competitor that each have the same hardware specifications as a single box from Sun, what does this buy me besides simplified wiring/management, and the ability to run Solaris?
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      This could allow for even higher blade density in HPC solutions. I don't see it being such a big deal for 4U.

    • Re: (Score:2, Interesting)

      by josmar52789 (1152461)
      Um, I think we just read "what this buys you": reduced bottlenecking, faster read/write... I'd like to see this on cheaper hardware...
    • Re: (Score:2, Funny)

      by MrEricSir (398214)

      You're helping keep Sun from going bankrupt. Think of it as a charitable donation.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Oh, the joys of supporting integrated peripherals when they inevitably fail.

      Take the Sun-supplied crowbar, now just pry the SSDs off the motherboard...

    • by smallfries (601545) on Wednesday March 11, 2009 @04:08PM (#27156305) Homepage

      Given that Sun designs their boxes around their own custom hardware (Niagara, SPARC, etc.), who exactly are you buying the same specification from?

      • Re: (Score:3, Informative)

        by H0p313ss (811249)

        Given that Sun designs their boxes around their own custom hardware (Niagara, SPARC, etc.), who exactly are you buying the same specification from?

        You are correct, but incomplete. Sun also sells servers based on Intel [sun.com] and AMD [sun.com], as well as Intel-based workstations [sun.com].

    • Re:But at what cost? (Score:5, Informative)

      by 0racle (667029) on Wednesday March 11, 2009 @04:24PM (#27156511)
      We've only ever found Sun to be a few hundred more than IBM or HP, when it was more expensive at all. The benefit being that a Sun reseller actually returned our calls; HP didn't, and IBM gave us the runaround.
      • by Orlando (12257)

        Interesting. We avoid Sun largely because the after-sales support is so bad, at least in our part of the world.

        By the time we get someone from Sun to start working on an issue, the equivalent problem with an HP, Dell or IBM box has already been fixed. For me this wins the deal every time.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Sun's hardware is already prohibitively expensive, so how much will options like this add to the price? When I can order up a pair of 4U boxen from any competitor that each have the same hardware specifications as a single box from Sun, what does this buy me besides simplified wiring/management, and the ability to run Solaris?

      Firstly, you employed the term "boxen", which pretty much denotes that you're a basement-dwelling fanboy poseur.

      Secondly, prohibitively expensive? Sun support in my neck of the wo

    • Re: (Score:3, Interesting)

      by ishobo (160209)

      The last bidding process I was involved in (for x86 hardware), 2.5 years ago, Sun came in lower than Dell and HP, and significantly lower than IBM. Options always add to the price with any vendor.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Yes, there is a price tag. However, because Solaris and the SPARC hardware are both made by the same company, you can call and get 24/7/365 support and not get bounced endlessly between a software vendor and a hardware vendor. This matters greatly with server clusters supporting 99.99% or higher uptime, when someone has to troubleshoot a kernel panic at 3 AM. Sometimes a Sun tech may be sent out because the hardware reports a glitch that means it's about to fail, but hasn't yet.

      There

  • Static Content (Score:1, Interesting)

    by Anonymous Coward

    I know that websites serving just static content are increasingly rare, but sometimes a separate server is created for static content. If the volume of this content is pretty small, a small SSD on the motherboard would allow the OS plus the content to be served very efficiently.

    • Re: (Score:2, Insightful)

      by jschen (1249578)
      Then why not just get a bit more RAM and load the whole site into RAM during boot-up? It's faster and more cost-effective than getting an SSD if you're only going to use a few GB (if that) of the drive.
      • by Skal Tura (595728)

        Hard drive cache. Do I need to say more?

        All modern OSes cache recently accessed files in available RAM, weighted by how often they're used; when more RAM is needed, the least-used files are evicted first.

        So if you've got PLENTY of RAM, you don't even need to put the files on /dev/shm, though I'd recommend that.

        Even 16GB is rather cheap nowadays, even for servers, and it makes a huge difference, especially if you deal with large data sets.
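As a rough sketch of the /dev/shm-style setup described above (the mount point, size, and paths are assumptions for illustration, not anything Sun ships):

```shell
# Mount a 2 GB RAM-backed tmpfs for the static content
mount -t tmpfs -o size=2g,mode=0755 tmpfs /srv/static-ram

# Copy the site into it at boot; subsequent reads are served from RAM,
# with no disk (or SSD) in the read path at all
cp -a /srv/static/. /srv/static-ram/
```

With plenty of RAM, the OS page cache achieves much the same effect automatically; the explicit tmpfs just guarantees the content stays resident.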

  • by Onaga (1369777) on Wednesday March 11, 2009 @03:59PM (#27156161)

    No [storagesearch.com].

    Before anyone complains about ssd wearing out quickly, please read here [storagesearch.com].

    • by Emb3rz (1210286)
      +1 informative if I had the mod points. Thanks for the link.
    • Re: (Score:3, Insightful)

      by Jurily (900488)

      Before anyone complains about ssd wearing out quickly, please read here.

      That is the single most fucked-up page layout I've ever seen. It managed to trigger my ad-blindness for both columns. I gave up after three seconds of trying to read it.

      Page loads: article nowhere. Just a bunch of incoherent links and some cute drawings. Ok, page down... a bunch of incoherent sentences? Where does the article start? What's the content? Why is the page divided into two columns which have no visible connection? Where the fuck am I supposed to start reading?!

      5 page downs later I realize the article

  • by pla (258480) on Wednesday March 11, 2009 @04:08PM (#27156299) Journal
    long term, Sun will locate SSDs closer to the server CPUs to cut the bottleneck that occurs when powerful, multicore CPUs have to wait for data to be delivered from hard drives

    So close, and yet...

    SSDs allow us to stop thinking about attached "storage" devices and instead treat them as what they were originally intended to be: slow memory. For decades they've run so much slower than the CPU that we couldn't treat them as a form of memory without paying a huge performance hit (try running XP with 64MB of RAM and a 2GB pagefile on the fastest HDD out there, and experience the suck); but finally, with SSDs, we may soon have the ability to treat them as a system's primary memory, with what we currently consider RAM acting as an L3/L4 cache. Not to say SSDs have come anywhere *near* DRAM for speed, but the lack of a seek-time penalty starts putting them in the right ballpark.

    I also don't know that I'd consider building them right onto the motherboard a good idea... Much like the path DRAM took, in the end the limitations (no easy upgradeability) far outweigh the convenience of it being "just there".

    But one small step at a time, I guess, so kudos to Sun for taking even a baby-step in the right direction.
    • Re: (Score:3, Interesting)

      by icebike (68054)

      I'm not so impressed.

      The reason they are on the motherboard is because they have exceeded peripheral bus speed. Of course, so have many hard drives.

      Keeping them as hard drive replacements will force new bus technology, which in the long run will be more useful than SSD on the mobo, which will be obsolete the moment it reaches the end of the assembly line.

      • by negRo_slim (636783) <mils_oRgen@hotmail.com> on Wednesday March 11, 2009 @05:03PM (#27157099)
        I wasn't aware of _many_ hard drives that can saturate current bus standards.

        Today's mechanical hard disk drives transfer data at a maximum of about 118 MB/s, within the capabilities of even the older PATA/133 specification. However, high-performance flash drives transfer data at 250 MB/s.

        For mechanical hard drives, SATA/300's transfer rate is expected to satisfy drive throughput requirements for some time, as the fastest mechanical drives barely saturate a SATA/150 link. A SATA data cable rated for 1.5 Gbit/s will handle current mechanical drives without any loss of sustained and burst data transfer performance. However, high-performance flash drives are approaching SATA/300's transfer rate.

        http://en.wikipedia.org/wiki/SATA [wikipedia.org]
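To put the quoted numbers side by side, a quick back-of-the-envelope calculation (the rates are taken from the Wikipedia excerpt above, not measured):

```python
# Seconds to stream 1 GB (1024 MB) at the sustained rates quoted above
rates_mb_s = {
    "mechanical HDD": 118,   # ~max sustained for a 2009-era disk
    "fast SSD": 250,         # high-performance flash drive
    "SATA/300 link": 300,    # interface ceiling
}
for name, rate in rates_mb_s.items():
    print(f"{name}: {1024 / rate:.1f} s per GB")
# mechanical HDD: 8.7 s per GB
# fast SSD: 4.1 s per GB
# SATA/300 link: 3.4 s per GB
```

The point of the comparison: a fast flash drive sits within ~20% of the SATA/300 ceiling, while a mechanical drive uses less than half of it, which is why flash, not spinning disks, is what pressures the bus.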

    • Re: (Score:3, Informative)

      by Cadallin (863437)
      That's completely unworkable. For one, SSDs are at least an order of magnitude too slow; for two, while the number of read/write cycles for DRAM is effectively unlimited, the number of read/write cycles for even SLC flash is not.

      Wear leveling is currently just barely able to keep a flash drive functional when it's used as swap space; use flash as main memory and there is no hope. You'll constantly be killing cells.

    • by PCM2 (4486)

      SSDs allow us to stop thinking about attached "storage" devices, and instead think of them as their originally-intended purpose - Slow memory.

      And what, pray tell, is the use for slow memory in today's world? Back when "storage" (emphasis yours) was invented, RAM was very expensive. That's not true today, so what's the point of finding expensive ways to replace RAM with something slower?

    • by BikeHelmet (1437881) on Wednesday March 11, 2009 @05:43PM (#27157729) Journal

      I'm waiting for FusionIO ioDrives to become affordable.

      They run through PCIe 4x slots directly to the CPU, so you can skip a limiting SATA controller. I've seen benchmarks approaching 2GB/sec by RAIDing several of them. That's almost 1/10th the speed of DDR3.

      All I have to say is... bring it! I want it!

    • Re: (Score:3, Interesting)

      by Spit (23158)

      In the not-too-distant future, non-volatile memory will be as fast as RAM.

      http://en.wikipedia.org/wiki/Memristor [wikipedia.org]

  • So the old new thing resurfaces...
    Persistent RAM drives have arrived. Should we be dredging up our old DOS disks again?

    Why put this on the MoBo?
    Why BUY this on the MoBo?

    Have we not been through enough new-product cycles to learn NEVER NEVER NEVER to buy an integrated version of new technology?

    How many modems lurking on motherboards were abandoned in the race from 300 baud to 56k? How many on-board video chip-sets are doing nothing at all, having been replaced by generation after generation of add-on video?

    In 8

    • Sun hardware is for servers, son.

      • Re: (Score:1, Troll)

        by Bearhouse (1034238)

        Sun hardware is for servers, for people with more money than sense, son.

        Fixed that for thee, m'Lud

      • Re: (Score:3, Insightful)

        by icebike (68054)

        With my Slashdot ID half of yours, I'd be careful about calling anyone "son".

        Being a server is even MORE reason this is an inappropriate use of SSDs. Servers should be adequately sized and powered such that they can cache their workload and never have to reboot.

        • Re: (Score:2, Funny)

          by maxume (22995)

          Given your attitude, I bet you are some sort of curmudgeon. I'm not, and my id is half again lower than yours. It's almost as if it is a meaningless number.

        • The fact that you think upgradability is an important feature for a server, and that you are very bad at math (68k != 103k/2), makes me again question your judgment, sport.

      • Your post reminded me of a Dilbert cartoon:

        Wally: You're one of those condescending unix computer users!
        Bearded Guy: Here's a nickel kid. Get yourself a better computer.

    • Re: (Score:3, Interesting)

      by spacey (741)

      Sun's using hardware that amounts to pluggable disks on a range of hardware. The same module they're putting into other devices will go into this motherboard, so it's sort of a commodity. A huge benefit of this tech is that if you can put your OS on it, you get faster swap, faster access to data on these devices, and much less electricity per rack. If they wanted to they could probably produce blades that were teeny tiny but still had on-board storage. RLX could have used this.

      -Peter

      • by icebike (68054)

        That makes sense, except the swap part.

          Throw an equivalent amount of money at REAL RAM, such that your machine never swaps, and everything will run much better.

        • Re: (Score:3, Interesting)

          by poot_rootbeer (188613)

            Throw an equivalent amount of money at REAL RAM, such that your machine never swaps, and everything will run much better.

          This approach works, but only up to a point.

            Sure, a system with a 64-bit address bus is theoretically capable of addressing 16 exabytes of RAM, but how many motherboards do you know of that have more than six or eight DIMM slots? I don't think they make 2-million-terabyte DDR3 sticks, yet...
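The arithmetic behind that limit, for anyone checking the numbers:

```python
# What a flat 64-bit address space covers
addr_bytes = 2 ** 64                 # bytes addressable with 64-bit addresses
print(addr_bytes // 2 ** 60)         # 16  -> "16 exabytes" (exbibytes)
print(addr_bytes // 2 ** 40)         # 16777216 -> ~16.8 million TB in total
print(addr_bytes // 2 ** 40 // 8)    # 2097152  -> ~2 million TB per DIMM across 8 slots
```

So the "2-million-terabyte stick" figure is exactly what an 8-slot board would need to fill the full 64-bit space.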

    • by MightyYar (622222)

      How many modems lurking on motherboards were abandoned in the race from 300 baud to 56k?

      So what? You'd rather have abandoned a more-expensive 300 baud card? I really don't understand... when the integrated modem/video/SSD is obsolete you can simply stop using it. In the meantime you have a free slot and you've spent less money.

  • While SSDs can have decent latency, their typical bandwidth is _horrible_ (~5-10x slower than spinning disks).

    Is Sun going to try $ome expen$ive proprietary approach (parallel flash) to overcome this?

    • Re: (Score:1, Informative)

      by Anonymous Coward

      http://www.newegg.com/Product/Product.aspx?Item=N82E16820167013

      sequential read: 250MB/s
      sequential write: 170MB/s

  • Integrated components are never good. When they break you can't fix them unless you're an electrical engineer with some serious soldering experience.

    These days most people don't know how to solder anymore, and desoldering components is unheard of, even in logical cases like replacing/upgrading the graphics card on a laptop, or upgrading/replacing the CPU on a laptop with a soldered-on CPU, etc.

    Now with advancements in manufacturing and mass production individual components aren't replaced. You send a fa

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Thanks sun but no thanks. We don't want to have to replace a $700+ motherboard every couple of years just to upgrade the SSD.

      Look at the picture below at:
      http://www.enterprisestorageforum.com/technology/news/article.php/3809601

      Does this look like an integrated component?

      Looks like a Mini-DIMM to me.

  • I don't get it.

    With fiber channel and infiniband becoming more common, servers are moving away from direct attach storage. It simply doesn't need to be there.

    Also, the two most important things for server storage are capacity and bandwidth, both of which SSDs are kind of poor at.

    Maybe this is for smaller servers or something. Or just a marketing gimmick.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      You're forgetting latency. Just count how many hops you have till you reach your storage array.

      This has a huge impact on certain loads.

    • I'd use it, it would be perfect for a VMware ESX install where all your VMs are on a SAN setup.
    • by drsmithy (35869)

      With fiber channel and infiniband becoming more common, servers are moving away from direct attach storage. It simply doesn't need to be there.

      Not everyone has storage needs on the scale a SAN provides. Particularly when it comes with a price tag in the ballpark of an order of magnitude higher.

      DASD isn't going anywhere.

  • This isn't such a bad idea. I mean, if you can have a boot drive on your mobo, then that's something you'd never have to mess with, and OS designers would be forced to keep their OS under that footprint.

    Just imagine, a computer where you knew that everything that was on the hard drive was expendable, and could be deleted without harming the system...

  • I wonder if this development has to do with this:

    http://www.enterprisestorageforum.com/technology/news/article.php/3809601 [enterprise...eforum.com]

    Which would mean, like processors & RAM, that it's designed to be replaced or upgraded as the need arises.
  • by eric2hill (33085) <eric@NOspaM.ijack.net> on Wednesday March 11, 2009 @11:14PM (#27161633) Homepage

    Everyone here seems to be missing the point.

    The integrated SSD probably has way more to do with being used as L2ARC [google.com] cache in ZFS than as the primary storage for the box. ZFS is a bit sluggish without any cache (every sync burns a minimum of 5 writes to disk at different places), but the L2ARC feature introduced in the latest builds of Solaris (and much earlier in OpenSolaris) gives ZFS a healthy performance boost. Sun is already selling SSD drives in their 7000 series storage appliances as L2ARC cache. It's turned on by default.

    And for those of you who think you can buy white-box servers cheaper, you're right. Sun's hardware is more expensive. However, Sun's servers come with integrated ILOM in all models, even the really cheap ones. ILOM is an absolute MUST for any server not deployed within a floor or two of your desk, and adding an ILOM/DRAC/iLO/whatever card to a stock server jumps the price by at least $250-300, with some cards costing over $700. Having an in-the-box, 100%-supported ILOM is well worth the typical $200 price difference between Sun and other vendors.
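The L2ARC setup described above can be sketched as follows (the pool name "tank" and the device name are hypothetical; the `zpool add ... cache` syntax is how ZFS attaches an L2ARC device):

```shell
# Attach an SSD to an existing ZFS pool as an L2ARC read cache
zpool add tank cache c1t5d0

# Verify: the SSD now appears under a separate "cache" section,
# and frequently read blocks spill from the in-RAM ARC onto it
zpool status tank
```

Note that L2ARC only accelerates reads; a separate log device (slog) would be the analogous trick for synchronous writes.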
