'Pi VizuWall' Is a Beowulf Cluster Built With Raspberry Pis (raspberrypi.org)
Why would someone build their own Beowulf cluster -- a high-performance parallel computing prototype -- using 12 Raspberry Pi boards? It's using the standard Beowulf cluster architecture found in about 88% of the world's largest parallel computing systems, with an MPI (Message Passing Interface) system that distributes the load over all the nodes.
Matt Trask, a long-time computer engineer now completing his undergraduate degree at Florida Atlantic University, explains how it grew out of his work on "virtual mainframes": In the world of parallel supercomputers (branded 'high-performance computing', or HPC), system manufacturers are motivated to sell their HPC products to industry, but industry has pushed back due to what they call the "Ninja Gap". MPI programming is hard. It is usually not learned until the programmer is in grad school at the earliest, and given that it takes a couple of years to achieve mastery of any particular discipline, most of the proficient MPI programmers are PhDs. And this is the Ninja Gap -- industry understands that the academic system cannot and will not be able to generate enough 'ninjas' to meet the needs of industry if industry were to adopt HPC technology.
As part of my research into parallel computing systems, I have studied the process of learning to program with MPI and have found that almost all current practitioners are self-taught, coming from disciplines other than computer science. Actual undergraduate CS programs rarely offer MPI programming. Thus my motivation for building a low-cost cluster system with Raspberry Pis, in order to drive down the entry-level costs. This parallel computing system, with a cost of under $1000, could be deployed at any college or community college rather than just at elite research institutions, as is done [for parallel computing systems] today.
The system is entirely open source, using only standard Raspberry Pi 3B+ boards and Raspbian Linux. The version of MPI that is used is called MPICH, another open-source technology that is readily available.
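For context on the MPICH piece: launching a job across a Beowulf cluster like this one typically takes nothing more than a machinefile naming each board and an `mpiexec` call. A minimal sketch follows; the hostnames and program name are hypothetical, not from the article.

```shell
# machinefile: one line per Pi node (hostnames made up for illustration)
#   node01
#   node02
#   ...
#   node12

# Launch one MPI rank per node using MPICH's process manager:
mpiexec -f machinefile -n 12 ./my_mpi_program
```

The same binary can be run on a single machine with `mpiexec -n 12 ./my_mpi_program`, which is why MPI is learnable without any cluster hardware at all.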
But there's an added visual flourish, explains long-time Slashdot reader iamacat. "To visualize computing, each node is equipped with a servo motor to position itself according to its current load -- lying flat when fully idle, standing up 90 degrees when fully utilized."
Its data comes from the /proc filesystem, and the necessary hinges for this prototype were all generated with a 3D printer. "The first lesson is to use CNC'd aluminum for the motor housings instead of 3D-printed plastic," writes Trask. "We've seen some minor distortion of the printed plastic from the heat generated in the servos."
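The load-to-angle idea described above can be sketched in a few lines of Python: sample the aggregate "cpu" line from /proc/stat twice, derive utilization from the jiffy deltas, and map it onto the 0-90 degree sweep of a hobby servo. The field layout (user, nice, system, idle, iowait, ...) follows proc(5); actually driving the servo is out of scope here, and the sample lines in the demo are synthetic.

```python
def parse_cpu_line(line):
    """Return (busy, total) jiffies from an aggregate 'cpu' line of /proc/stat."""
    fields = [int(x) for x in line.split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    total = sum(fields)
    return total - idle, total

def utilization(sample_a, sample_b):
    """Fraction of time spent busy between two /proc/stat samples."""
    busy_a, total_a = parse_cpu_line(sample_a)
    busy_b, total_b = parse_cpu_line(sample_b)
    dt = total_b - total_a
    return (busy_b - busy_a) / dt if dt else 0.0

def servo_angle(util):
    """Map utilization (0.0-1.0) to a servo angle: flat at idle, 90 at full load."""
    return round(90 * max(0.0, min(1.0, util)))

if __name__ == "__main__":
    # Two synthetic /proc/stat samples taken some interval apart:
    a = "cpu 100 0 100 700 100 0 0 0 0 0"
    b = "cpu 200 0 200 800 100 0 0 0 0 0"
    print(servo_angle(utilization(a, b)))
```

In a real deployment the sampling loop would sleep between reads of /proc/stat and send the angle to the servo's PWM channel.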
Actual Use? (Score:3)
Seems more of something to just mess around with. Everywhere you look people are having thermal problems with these, and that "wall" shows not a single board even equipped with a heatsink. Sure you can rely on thermal throttling, but this seems more like a gag setup than something actually used.
Re: (Score:1)
Exactly. You don't need a cluster to learn MPI; it works exactly the same on a shared-memory machine.
Re: (Score:3)
Obviously.
I love it!
Re: (Score:1)
And can it run Crysis?
Re: (Score:2)
What to do with low cost, low power systems?
In a world of low cost very powerful desktop computers, 4K and 8K displays?
Re:Actual Use? (Score:5, Informative)
Seems more of something to just mess around with.
That is literally there in the summary. A tool to learn how to program for clusters, with the only performance goal being that the cluster is cheap.
Re: (Score:2)
it's not cheap though, a single cheaper x86 machine could whip its pants off
Re: (Score:2)
So run a mess of VMs on one computer and link them all?
Hey! (Score:1)
Hey, it's not often I see my alma mater on Slashdot. Too bad it's not so easy (read: cheap) to build a reliable model of a large scale NUMA machine.
Re: (Score:1)
So real-world NUMA hardware tends to have very different timing for both local and distant memory accesses. MPI on a network isn't quite the same thing.
Re: (Score:2)
well sure or just teach parallel programming in the first place...
but this one gets the cool 3D-printed parts and shit that have nothing to do with the thing itself. yeah, CNC-machining parts, that's low cost.
Lying flat when fully idle, standing up 90 degrees (Score:5, Funny)
> Lying flat when fully idle, standing up 90 degrees when fully utilized
So he built a computer that can get a boner. Got it.
Re: (Score:3)
Better still, line them all up side by side, then offer a prize for anyone who can configure and apply a load that makes the units do a "Mexican wave".
Re: (Score:2)
Motor? (Score:3)
Servo motor? I guess it's kind of unique, but an LED display of some kind would be more useful.
Re:Motor? (Score:4, Interesting)
>"Servo motor? I guess it's kind of unique, but an LED display of some kind would be more useful."
I came to post the same thing. From the article:
"The original plan for this project was to make a 4ft Ã-- 8ft cluster with 300 Raspberry Pis wired as a Beowulf cluster running MPICH. When I proposed this project to my Lab Directors at the university, they balked at the estimated cost of $20â"25K and suggested a scaled-down prototype first." [...] "weâ(TM)ve seen some minor distortion of the printed plastic from the heat generated in the servos."
Almost all the complexity of the project is in the hinges and their mounting. It would be far easier, faster, smaller, and cheaper to use LEDs to indicate load, and it would use less power, too. I understand the "coolness" factor, but that wasn't the stated goal.
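The LED alternative this comment suggests really is simpler: the same utilization number that would set a servo angle can instead set a PWM duty cycle or light a bar graph. A minimal sketch, with the bar rendered as text here; on a real Pi the duty value would feed a GPIO PWM channel, which this snippet does not touch.

```python
def led_duty(util):
    """Utilization (0.0-1.0) expressed as a PWM duty cycle percentage."""
    return round(100 * max(0.0, min(1.0, util)))

def led_bar(util, leds=8):
    """Light the first N of `leds` LEDs in proportion to utilization."""
    lit = round(leds * max(0.0, min(1.0, util)))
    return "#" * lit + "." * (leds - lit)

if __name__ == "__main__":
    for u in (0.0, 0.5, 1.0):
        print(f"{led_duty(u):3d}% {led_bar(u)}")
```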
Re: (Score:3)
an LED display of some kind would be more useful
The project was a learning exercise, there's nothing "useful" about the finished product other than what Mr. Trask got out of building it.
Re: (Score:2)
LoB
Comment removed (Score:5, Funny)
Re: (Score:2)
Beowulf cluster... now there's a name I've not heard for a long, long time.
Previous work (Score:1)
Done before, for pretty much the same reasons, a few years ago: http://coen.boisestate.edu/ece/research-areas/raspberry-pi/
Cluster of 750 RPis has already been done (Score:5, Informative)
A 750-node Raspberry Pi cluster at Los Alamos was covered in 2017: https://slashdot.org/story/334... [slashdot.org]
An AC here referred to this 32-Raspberry-Pi cluster in 2013:
http://coen.boisestate.edu/ece... [boisestate.edu]
Re: (Score:2)
Cool(ing) solution (Score:2)
Am I the only one who thinks they made a novel solution to the cooling problem by just flapping the entire computers in the air? ... well, unless they are fully loaded all the time and thus stop flapping when they would need it the most.
If the goal is low cost? (Score:2)
Why bother with hardware at all? Can't you just virtualise many machines and get to learning to program? Even a standard performance graph provided by the hypervisor would be more useful than BonerPi
Re: If the goal is low cost? (Score:3)
Here is one with 33 (Score:1)
Here is a 33 node one from 6 years ago: https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
I cannot knock the guy's proof-of-concept. It looks cool, and it makes a point. However, I'm with you. If I were looking at building a "supercomputer", I'd look at scaled-up SBCs, be it ARM64 boards, or even PC-104 x86_64 machines with a decent amount of RAM in them. Or best of all, buy a chassis that has excellent cooling and a bunch of Xeon Phi boards for the best bang per buck. 72 cores on a PCIe card is hard to beat.
Of course, having a number of Raspberry Pis for other functions makes sense, and it
RAID storage required? (Score:2)
This cluster obviously needs storage to match. How about a five disk floppy RAID providing 4MB of Blistering Fast Storage [wired.com]?
This isnt exactly new... (Score:2)
Epson (Score:1)