Adapteva Parallella Supercomputing Boards Start Shipping

hypnosec writes "Adapteva has started shipping its $99 Parallella parallel-processing single-board supercomputer to initial Kickstarter backers. Parallella is powered by Adapteva's 16-core and 64-core Epiphany multicore processors, which are designed for parallel computing, unlike commercial off-the-shelf (COTS) devices such as the Raspberry Pi that don't support parallel computing natively. The first model to ship has the following specifications: a Zynq-7020 dual-core ARM A9 CPU paired with the Epiphany Multicore Accelerator (16 or 64 cores), 1GB RAM, a microSD card slot, two USB 2.0 ports, four optional expansion connectors, Ethernet, and an HDMI port." They are also releasing documentation, examples, and an SDK (brief overview; it's Free Software, too). The device runs GNU/Linux for the non-parallel parts (Ubuntu is the suggested distribution).
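For a feel of the programming model, here is a rough host-side sketch in C against the eSDK's e-hal API (e_init, e_open, e_load_group, e_read). It is a sketch, not a shipped example: the kernel image name "e_hello.srec", the 4x4 workgroup, and the 0x2000 mailbox offset are placeholder assumptions, so check the SDK's bundled examples for the exact conventions.

    /* Hypothetical host-side sketch for a 16-core Parallella, using the
     * eSDK's e-hal loader API. The kernel image "e_hello.srec" and the
     * 0x2000 mailbox offset are placeholders, not SDK-defined names. */
    #include <stdio.h>
    #include <unistd.h>
    #include <e-hal.h>

    int main(void)
    {
        e_epiphany_t dev;
        int msg = 0;

        e_init(NULL);                /* read the platform description     */
        e_reset_system();            /* put the Epiphany in a known state */
        e_open(&dev, 0, 0, 4, 4);    /* claim the full 4x4 workgroup      */

        /* load the device-side program onto all 16 cores and start it */
        e_load_group("e_hello.srec", &dev, 0, 0, 4, 4, E_TRUE);

        usleep(100000);              /* crude: give the cores time to run */

        /* read one word back from core (0,0)'s local memory */
        e_read(&dev, 0, 0, 0x2000, &msg, sizeof(msg));
        printf("core (0,0) wrote: %d\n", msg);

        e_close(&dev);
        e_finalize();
        return 0;
    }

The matching device-side kernel (not shown) would write its result to local memory at the agreed offset before halting.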
Comments:
  • by IAmR007 ( 2539972 ) on Tuesday July 23, 2013 @04:33PM (#44364933)
    I'm skeptical as to how useful this chip will be. High core counts are making supercomputing more and more difficult. Supercomputing isn't about massive parallelism for its own sake, but about high compute, memory, and interconnect performance. If you can get the same performance out of fewer cores, there is usually less stress on the interconnect. Parallel computing is a way around the limits on building insanely fast serial computers, not something that is ideal in itself. For easily parallelized work like graphics this isn't much of a problem, but collective operations on supercomputers with hundreds of thousands to millions of cores are among the largest bottlenecks in HPC code.

    Supercomputers are usually ranked by raw floating-point performance, but that's not really what makes a supercomputer a supercomputer. You can build a cluster of machines with high-end graphics cards, but that doesn't make it one; such clusters handle a narrower range of problems than true supercomputers because of their limited interconnect bandwidth. There was even debate over how useful GPUs would be in supercomputers, since memory bandwidth is the most common bottleneck. Supercomputers tend to use interconnects like InfiniBand in multidimensional torus topologies. These fast interconnects make it possible to work efficiently on problems where each region depends on its neighbors, and even then the interconnect remains a leading bottleneck. At millions of processors, even operations like FFT, which used to be sufficiently parallel, start to become a problem (see the MPI sketch after this thread).

    Things like the Parallella could be decent learning tools, but a large number of very weak cores isn't desirable for most applications.
  • by ShieldW0lf ( 601553 ) on Tuesday July 23, 2013 @05:25PM (#44365463) Journal

    This device in particular has only 16 or 64 cores, but the Epiphany architecture apparently scales up to 4,096 cores on a single chip. And the board itself is open source.

    So, if you developed software that needed more grunt than these boards provide, you could pay to get it made for you quite easily.

    That's a big advantage right there.
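To make the first comment's point about collectives concrete, here is a minimal MPI sketch (MPI is the standard HPC messaging layer and is not part of the Parallella SDK). MPI_Allreduce combines a value from every rank and hands the result back to all of them; the reduction is cheap at a few dozen ranks, but at hundreds of thousands to millions of ranks the collective itself becomes a leading bottleneck, which is the scaling problem described above.

    /* Minimal MPI sketch of a collective operation: every rank
     * contributes a partial sum and MPI_Allreduce combines them.
     * Compile with mpicc and run under mpirun. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;        /* each rank's partial result */

        /* combine all partials; every rank receives the total */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, global);

        MPI_Finalize();
        return 0;
    }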
