
Scaling To a Million Cores and Beyond

mattaw writes "In my blog post I describe a system designed to test a route to the potential future of computing. What do we do when we have computers with a million cores? What about a billion? How about 100 billion? None of our current programming models or computer-architecture models applies to machines of this complexity (or to their corresponding component failure rates and other scaling issues). The current model of coherent memory, a single shared notion of time, and everything-can-route-to-everywhere simply can't scale to machines of this size. So the scientists at the University of Manchester (including Steve Furber, one of the ARM founders) and the University of Southampton turned to the brain for a new model. Our brains just don't work like any computer we currently make. They have far more than a million processing elements (more like 100 billion), none of which has any precise idea of time (a vague ordering of events, maybe) or a shared memory, and not everything routes to everything else. But anyone who argues the brain isn't a pretty spiffy processing system ends up looking pretty silly. In effect, modern computing bears as much relation to biological computing as the ordered world of sudoku does to the statistical chaos of quantum mechanics."
  • multi core design (Score:5, Insightful)

    by girlintraining ( 1395911 ) on Wednesday June 30, 2010 @02:22AM (#32741054)

    Simply put, there are some computational problems that work well with parallelization, and there are some that, no matter how you approach them, come back to a serial model. You could have a billion-core machine running at 1 GHz get stomped by a single-core machine running at 1.7 GHz for certain computational processes. We have yet to find a way, computationally or mathematically, to turn intrinsically serial problems into parallel ones. If we did, it would probably open up a whole new field of mathematics.
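
    A back-of-the-envelope way to see this is Amdahl's law, which caps the speedup by the serial fraction of the work. Here's a minimal sketch (Python, with invented serial fractions and treating clock speed as a simple multiplier) of how a billion 1 GHz cores can still lose to one 1.7 GHz core once enough of the job is serial:

        # Amdahl's law: speedup on n cores when a fraction s of the work is serial.
        def amdahl_speedup(serial_fraction, cores):
            """Fixed-problem-size speedup over a single core at the same clock."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        def effective_throughput(clock_ghz, serial_fraction, cores):
            """Rough 'work per second' figure: clock speed times parallel speedup."""
            return clock_ghz * amdahl_speedup(serial_fraction, cores)

        for s in (0.01, 0.5, 0.9):
            many = effective_throughput(1.0, s, 10**9)  # a billion 1 GHz cores
            one = effective_throughput(1.7, s, 1)       # a single 1.7 GHz core
            print(f"serial fraction {s:.2f}: billion cores ~{many:.1f}, single core ~{one:.1f}")

        # At s = 0.9 the billion-core machine tops out near 1.1, while the faster
        # single core delivers 1.7: the serial part dominates no matter the core count.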

  • by i-like-burritos ( 1532531 ) on Wednesday June 30, 2010 @02:43AM (#32741142)
    The things we use computers for are different from the things we use humans for.

    Computers are consistent and predictable. The human brain is not.

    We have billions of human brains cheaply available, so let's use those when we want a human brain. And let's use computers when we want computers.

  • Damaged Brains (Score:4, Insightful)

    by b4upoo ( 166390 ) on Wednesday June 30, 2010 @02:48AM (#32741168)

    Some folks with severely damaged brains seem to make better human computers than people with healthy brains. Rain Man leaps to mind, as do other savants. It seems that when some parts of the brain are impaired, the energy of thought is diverted to narrower functions. Perhaps we need to think of delivering more energy to fewer cores to make machines that do tasks that normal humans are not so good at doing.

  • Re:1 billion cores (Score:3, Insightful)

    by BhaKi ( 1316335 ) on Wednesday June 30, 2010 @02:49AM (#32741174)
    Wrong. You need 999999999 forks.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday June 30, 2010 @02:50AM (#32741180) Homepage Journal

    The main problem is that it's horribly hard to pass that many messages around without the network overhead exceeding the benefit of the parallelization. If they have found a way to reduce this problem, I'd call that a major novelty.
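
    A toy model of that trade-off (my own numbers, not from the article): if every extra core adds a small fixed messaging cost, the speedup curve peaks and then falls as cores keep being added.

        # Toy cost model: time(n) = serial + parallel/n + comm_cost_per_core * n
        def run_time(n, serial=1.0, parallel=1000.0, comm_per_core=0.001):
            return serial + parallel / n + comm_per_core * n

        base = run_time(1)
        best = max(range(1, 100_001), key=lambda n: base / run_time(n))
        print(f"best core count ~{best}, speedup ~{base / run_time(best):.0f}x")
        # Past that point the network overhead grows faster than the remaining
        # parallel work shrinks, and adding cores makes the job slower.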

  • by Animats ( 122034 ) on Wednesday June 30, 2010 @03:03AM (#32741252) Homepage

    This is very similar to the Inmos Transputer [wikipedia.org], a mid-1980s system. It's the same idea: many processors, no shared memory, message passing over fast serial links. The Transputer suffered from a slow development cycle; by the time it shipped, each new part was behind mainstream CPUs.

    This new thing has more potential, though. There's enough memory per CPU to get something done. Each Cell SPE, with only 256KB of local store, didn't have enough memory to do much on its own. Twenty CPUs sharing 1GB gives about 50MB per CPU, which has more promise. Each machine is big enough that this can be viewed as a cluster, something that's reasonably well understood. Cell CPUs are too tiny for that; they tend to be used as DSPs processing streaming data.

    As usual, the problem will be to figure out how to program it. The original article talks about "neurons" too much. That hasn't historically been a really useful concept in computing.
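
    The programming model that fits that sort of hardware is plain share-nothing message passing, cluster-style. A minimal sketch, using Python's multiprocessing as a stand-in for the real interconnect (MPI, or the chip's own fabric):

        # Share-nothing message passing: each worker owns its memory and only
        # communicates over an explicit channel (here a multiprocessing Pipe).
        from multiprocessing import Pipe, Process

        def worker(conn, chunk):
            conn.send(sum(chunk))  # do local work, send back a small result message
            conn.close()

        if __name__ == "__main__":
            data = list(range(1000))
            n_workers = 4
            chunks = [data[i::n_workers] for i in range(n_workers)]

            pipes, procs = [], []
            for chunk in chunks:
                parent_end, child_end = Pipe()
                proc = Process(target=worker, args=(child_end, chunk))
                proc.start()
                pipes.append(parent_end)
                procs.append(proc)

            total = sum(conn.recv() for conn in pipes)  # combine the partial results
            for proc in procs:
                proc.join()
            print(total)  # 499500, same as sum(range(1000))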

  • by Anonymous Coward on Wednesday June 30, 2010 @03:24AM (#32741346)
    Then all we have to do is generate an algorithm (that can run on a computer) that is good enough at generating algorithms to solve specific problems.

    Mods, feel free to mod off-topic
  • Given your statement, why would you link to a document entitled Reevaluating Amdahl's Law [ameslab.gov]? Did you even read what you linked to? Here's an excerpt:

    Our work to date shows that it is not an insurmountable task to extract very high efficiency from a massively-parallel ensemble, for the reasons presented here. We feel that it is important for the computing research community to overcome the "mental block" against massive parallelism imposed by a misuse of Amdahl's speedup formula; speedup should be measured by scaling the problem to the number of processors, not fixing problem size. We expect to extend our success to a broader range of applications and even larger values for N.
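
    For reference, the two formulas side by side (standard textbook forms, not quoted from the paper): Amdahl's law fixes the problem size, Gustafson's scaled speedup grows the problem with the machine.

        # Amdahl (fixed problem size) vs. Gustafson (problem scaled with cores);
        # s is the serial fraction, n the number of processors.
        def amdahl(s, n):
            return 1.0 / (s + (1.0 - s) / n)

        def gustafson(s, n):
            return s + (1.0 - s) * n

        s, n = 0.05, 1_000_000
        print(f"Amdahl:    {amdahl(s, n):.1f}x")     # ~20x, capped near 1/s
        print(f"Gustafson: {gustafson(s, n):.1f}x")  # ~950000x on the scaled problem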

  • Re:Dangerous idea (Score:5, Insightful)

    by ProfessionalCookie ( 673314 ) on Wednesday June 30, 2010 @03:33AM (#32741382) Journal
    From a science perspective, I'm pretty sure that either computers are already "sentient" or (IMHO, more likely) that we don't really understand what sentience is. At all.
  • by teazen ( 876487 ) on Wednesday June 30, 2010 @03:55AM (#32741488) Homepage
    Exactly! New is the new old. A million processors? Pah! Old hat. Lots of interesting research into parallel processing has been done in the past. Read the Connection Machine book [google.com.my]. It's a great read.

    Feynman was also involved with the machine at one point. There's a great writeup [longnow.org] on him and the machine that serves as a quick introduction: '.. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require 10^12 wires. Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. ..' (The wiring arithmetic is sketched below.)

    The C-5 looked [wikipedia.org] awesome as well. And I'll just keep quiet about all the cool Lisp stuff they did on it.
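
    To make that wiring arithmetic concrete (my own back-of-the-envelope sketch, using a 2^20-node cube as "a million processors"): a full mesh needs one wire per pair of nodes, a 20-dimensional hypercube needs only 20 links per node, and a node's neighbours are the addresses that differ from it in exactly one bit.

        # Full mesh vs. 20-dimensional hypercube for ~a million processors.
        DIM = 20
        N = 2 ** DIM                        # 1,048,576 nodes

        full_mesh_wires = N * (N - 1) // 2  # one wire per pair: ~5.5e11
        hypercube_wires = N * DIM // 2      # DIM links per node: ~1.0e7
        print(f"{full_mesh_wires:.1e} wires vs {hypercube_wires:.1e} wires")

        def hypercube_neighbours(node, dim=DIM):
            """Neighbours differ from `node` in exactly one address bit."""
            return [node ^ (1 << bit) for bit in range(dim)]

        print(hypercube_neighbours(0)[:5])  # [1, 2, 4, 8, 16]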
  • by master_p ( 608214 ) on Wednesday June 30, 2010 @04:04AM (#32741534)

    The brain does not do arithmetic, it only does pattern matching. That's what most people don't get and that's the obstacle to understanding and realizing AI.

    If you ask how humans can then do math in their heads, the answer is simple: they can't, but a pattern-matching system can be trained to do math by learning all the relevant patterns.

    If you further ask how humans can do logical inference in their brain, the answer is again simple: they can't, and that's the reason people believe in illogical things. Their answers are the result of pattern matching, just like Google returning the wrong results.
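
    A toy illustration of that claim (entirely made up, and nothing like a real neural system): a program that has never been given the rules of addition, only memorised examples, can still "do math" by recalling the nearest stored pattern, and it fails in exactly the fuzzy way described above when asked about something outside its experience.

        # "Arithmetic by pattern matching": memorise (a, b) -> a + b examples,
        # then answer queries by recalling the closest remembered pattern.
        import random

        random.seed(0)
        memory = {}
        for _ in range(300):  # training: rote-learned examples only, no rules
            a, b = random.randint(0, 20), random.randint(0, 20)
            memory[(a, b)] = a + b

        def recall_sum(a, b):
            """No addition performed here: return the answer of the closest memory."""
            key = min(memory, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
            return memory[key]

        print(recall_sum(3, 4))    # close to 7: this pattern or a neighbour was memorised
        print(recall_sum(400, 7))  # confidently wrong: far outside anything memorised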

  • by Wescotte ( 732385 ) on Wednesday June 30, 2010 @06:15AM (#32742162)

    but hopeless for calculating with a reasonable degree of accuracy the actual distance to that object- the margin of error for most people is on average going to be quite large.

    I disagree. How can we learn to throw a basketball into a tiny hoop from far away without having very accurate estimates? Think of any sport and just how many good estimates are made VERY quickly and pretty damn accurately. How can a painter look at a scene and recreate (to scale) what they see on canvas? I'd say our brains are pretty damn good at calculating with very high accuracy.

    Just because I can't say the hoop is exactly 32.74578453 feet from me doesn't mean I don't know how far away it is. If I can throw the ball into the hoop, then I have accurately calculated/predicted the distance.

    Look at how sometimes people are mid-conversation, talking about something they know in depth and suddenly they forget what they were going to say- this is because processing in the brain has gone completely off track.

    I'm having a hard time coming up with a good analogy, but I suspect these situations are similar to interrupts in computers. Something more important requires the brain's resources at that time. It's not like the information is forgotten; it's simply not accessible at that moment in time. The information is never "lost", it's just unavailable for a while. If it were lost, you wouldn't have the "oh yeah" moments when you remember it or look it up again. You recognize it because you already knew it.

    While I agree the brain isn't as effective at large-scale number crunching, I do believe it's something the brain can be trained to do. There are plenty of people out there who can do insanely complex arithmetic in their heads. I suspect the reason we don't all have such skills is that we don't need them.

    but there's a lot that current computers can do that the brain can't- serious large scale number crunching for example

    There is no real reason, in survival-of-the-fittest terms, for us to be able to accomplish such tasks. So those resources in the brain were put to use on other tasks, like accurately processing visual and audio data. I can hear or spot a predator very quickly and accurately in all types of environments and lighting conditions. If we use a computer to perform these tasks, we realize just how much computation is required. There is no reason these resources couldn't be allocated to general number crunching; it's just that evolution says they are better used for other tasks.

  • by James Manning ( 4620 ) on Wednesday June 30, 2010 @12:22PM (#32746274) Homepage

    How would you know that the calculations were wrong?

    You know later. It's similar to the EPIC architecture, where you might (for example) execute both sides of a branch in parallel but not 'commit' the results until the predicate bits arrive, at which point you throw away one side's results and commit the other side's.

    http://en.wikipedia.org/wiki/Explicitly_parallel_instruction_computing [wikipedia.org]

    Predicated execution is used to decrease the occurrence of branches and to increase the speculative execution of instructions. In this feature, branch conditions are converted to predicate registers which are used to kill results of executed instructions from the side of the branch which is not taken.

    http://en.wikipedia.org/wiki/Speculative_execution [wikipedia.org]
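
    A software caricature of the same idea (not how real EPIC hardware works, just the control flow): compute both arms eagerly, then let the late-arriving predicate decide which result is committed and which is discarded.

        # Caricature of predicated execution: evaluate both sides of a branch
        # speculatively, then commit only the side selected by the predicate.
        def speculate_both(then_fn, else_fn, predicate_fn):
            then_result = then_fn()     # speculatively run the taken side...
            else_result = else_fn()     # ...and the not-taken side, "in parallel"
            predicate = predicate_fn()  # the predicate arrives last (e.g. a slow load)
            return then_result if predicate else else_result  # commit one, kill one

        result = speculate_both(
            then_fn=lambda: 2 * 21,
            else_fn=lambda: sum(range(10)),
            predicate_fn=lambda: True,
        )
        print(result)  # 42: the else side's work (45) was computed but never committed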

"The four building blocks of the universe are fire, water, gravel and vinyl." -- Dave Barry

Working...