Networking

New Network Design Exploits Cheap, Power-Efficient Flash Memory 41

jan_jes writes: Researchers at MIT were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. Each server was connected to an FPGA, and each FPGA was in turn connected to flash chips and to the two FPGAs nearest it in the server rack. Because the FPGAs were connected to each other, they created a very fast network that allowed any server to retrieve data from any flash drive. Finally, the FPGAs executed the algorithms that preprocess the data stored on the flash drives.
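The division of labor the summary describes can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual system: the function names, record layout, and predicate are all made up, and the real design runs the filter in FPGA logic next to the flash, not in software.

```python
# Sketch: instead of shipping every raw record from flash to the server,
# the controller beside the flash runs the preprocessing step and ships
# only the survivors across the network.

def controller_side_filter(records, predicate):
    """Runs next to the flash chips: only matching records cross the network."""
    return [r for r in records if predicate(r)]

def server_side_filter(records, predicate):
    """Naive design: every record crosses the network, then gets filtered."""
    shipped = list(records)  # the full dataset is transferred
    return [r for r in shipped if predicate(r)], len(shipped)

records = [{"id": i, "value": i % 100} for i in range(10_000)]
wanted = lambda r: r["value"] > 95

near_data = controller_side_filter(records, wanted)
at_server, shipped = server_side_filter(records, wanted)

assert near_data == at_server  # same answer either way
print(f"shipped near-data: {len(near_data)} records")  # 400
print(f"shipped to server: {shipped} records")         # 10000
```

The answer is identical either way; what changes is how much data crosses the (relatively slow) link between storage and server.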
  • Throw in 4 gigs of RAM, too!

    • Well, I actually wonder if this isn't something new.

      So, think about it, because (I think) this kind of extends a Von Neumann architecture.

      "But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody's experimenting with different aspects of flash. We're just trying to establish another point in the design space."

      It gives you a larger degree of parallelism because all the little doohickies are using their loc

      • Something along those lines -- in a very different era -- was the CDC 6000 series of computers designed by Seymour Cray, and their successors.

        One or more central processors did the number crunching and general program logic, and some of the OS, with a bunch of smaller, not-so-bright "peripheral processors" doing I/O and certain low-level OS functions. (How not-bright? They couldn't do division or multiplication except by powers of two, for instance.)

        Supposedly, Gene Amdahl, designer of the IBM 360 series

    • by Anonymous Coward

      Well FPGAs often do throw in an ARM core or 4 these days:

      https://en.wikipedia.org/wiki/Field-programmable_gate_array

      e.g. the Xilinx Zynq-7000 includes a dual-core Cortex-A9 on the chip.

      Logic gates are fine, but sometimes you just need some serial execution of code!

    • Because general purpose processors are slower than a specific purpose module in an FPGA could be. Cheap is not an important consideration in this case.
    • by gweihir ( 88907 )

      Simple: That would not produce papers, PhDs and press-statements.

  • by Anonymous Coward on Friday July 10, 2015 @10:47PM (#50086753)

    Go back and look at the AS/400 -- later iSeries, then i5, and now called IBM i.

    Biggest machine I worked on: 32 cores and 3/4 TB of RAM. And that was still only 1/2 as big as it could be. But that 3/4 TB was not the total RAM in the machine; it had IOPs and IOAs that oversaw the disk drives. Those processors had large RAM of their own and were basically fast and faster cache. So our machine, with 900 drives in multiple RAID-6 groupings (IOPs) with multi-grouping in the IOAs, acted as a 900-drive RAID-0 (stripe) to the main cores. So when reading a file sequentially, all the drives would start supplying data -- first data in 50 ms -- but then the rest was "just there". Processing 4-billion-row history files was easy.
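The parent's striped-read behavior is easy to put rough numbers on. Only the 50 ms first-data latency and the 900-drive count come from the comment; the per-drive throughput and file size below are my own illustrative assumptions.

```python
# Back-of-envelope for a striped sequential read: the file pays one
# startup latency, then all drives in the stripe supply data in parallel.

def striped_read_seconds(file_gib, drives, seek_ms=50, mib_per_s_per_drive=100):
    aggregate = drives * mib_per_s_per_drive       # MiB/s across the stripe
    return seek_ms / 1000 + (file_gib * 1024) / aggregate

one_drive = striped_read_seconds(100, drives=1)    # 100 GiB from one drive
full_stripe = striped_read_seconds(100, drives=900)
print(f"1 drive:    {one_drive:8.2f} s")
print(f"900 drives: {full_stripe:8.2f} s")
```

With 900 drives the transfer time all but vanishes and the startup latency dominates, which is why the rest of the file feels "just there".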

  • The EMC Isilon is a cluster of FreeBSD nodes with a completely customized filesystem on top. All the nodes are connected to each other by InfiniBand, and redundancy is built into OneFS.

    I'm visualizing this as someone adding FPGA cards to Isilon nodes, and installing SSDs instead of the usual HDDs in the array. Innovative, but this isn't revolutionary by any means.

    • Big, slow, and cheap, with low power consumption:

      Flash memory -- the type of memory used by most portable devices -- could provide an alternative to conventional RAM for big-data applications. It's about a tenth as expensive, and it consumes about a tenth as much power.

      The problem is that it's also a tenth as fast. But at the International Symposium on Computer Architecture in June, MIT researchers presented a new system that, for several common big-data applications, should make servers using flash memory as effi

  • The article explains how it's cost-effective, and they use FPGAs contributed by their sponsors.

    If they had sponsors to give them free RAM somehow I imagine that would have tipped the price comparison the other direction.

    • Once you've got a large enough demand, you can throw away the FPGAs and use custom silicon.
      That would lower the power consumption even further.

  • Moving some processing out of the central processor and into processors that access storage is not exactly new.

    But I bet these servers don't look too terribly much like CDC 6000s. (Especially their FPGA parts.)

    The article should be an interesting read. Which I will get to soon, now that I've offered an uninformed opinion about TFA and incidentally exposed my geezerhood.

  • A bottom-of-the-range FPGA: 200 I/O pins with 216.5 Gb/s, for $20.

    And the one I have sitting on my desk at the moment has about 500 Gb/s over about 300 pins.

    You can't get that on an ARM CPU.
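For anyone checking the parent's arithmetic, aggregate I/O bandwidth here is just pins times per-pin signaling rate, so the per-pin figure falls out by division. The totals come from the comment; the calculation is mine.

```python
# Per-pin bandwidth implied by the comment's aggregate figures.

def per_pin_gbps(total_gbps, pins):
    return total_gbps / pins

print(f"cheap FPGA: {per_pin_gbps(216.5, 200):.2f} Gb/s per pin")  # ~1.08
print(f"desk FPGA:  {per_pin_gbps(500, 300):.2f} Gb/s per pin")    # ~1.67
```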
