Qualcomm to Build Neuro-Inspired Chips
Bismillah writes "At the MIT Technology Review EmTech conference, Qualcomm announced that the company and its partners will design and make neural processing units, or NPUs, starting next year. NPUs mimic the brain's neural structures and its massively parallel way of processing information while remaining extremely power efficient, and they may end up in self-learning devices."
Obligatory (Score:5, Funny)
Re: (Score:1)
Re: (Score:1)
Not sure where that's coming from
http://www.imdb.com/title/tt0103064/
Re: (Score:2)
Re: (Score:2)
With a name like Connor, I'm wondering if you didn't travel back in time to post this message in an attempt to change the future.
Re: (Score:1)
A little thin on tech detail (Score:4, Interesting)
A quick Google search fails to reveal any detail about how it works, and TFA's explanatory diagram says very little (a drawing of a brain and some boxes - oh, so that's how it works?).
We can only assume this stems from Qualcomm's partnership with Brain Corp http://www.braincorporation.com/ [braincorporation.com]
Re:A little thin on tech detail (Score:4, Informative)
They're doing FPGAs that come with a programmer capable of partially reprogramming the FPGA on the fly.
The brain is just marketing.
Re:A little thin on tech detail (Score:4, Informative)
I'd assume that they're building general purpose hardware for running large neural networks [wikipedia.org] into the chips. Usually you'd set a goal for the network, and then "train" it, reinforcing the pathways that lead to successful outcomes. The theory is based on how our own brains learn, and can be very effective at solving certain problems "naturally", rather than the programming having to come up with an effective algorithmic solution.
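A minimal sketch of the idea the parent describes: set a goal (here XOR), run the inputs forward, and nudge the connection weights toward outcomes that match the goal. Plain numpy, purely illustrative, and not based on anything Qualcomm has published about its hardware.

# Toy two-layer network trained by backpropagation on XOR.
# Illustrative only; the sizes, learning rate and task are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # goal: XOR
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)                # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)                # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                                        # training loop
    h = sigmoid(X @ W1 + b1)                                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                      # backward pass
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)  # reinforce useful paths
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should approach [0, 1, 1, 0]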
Re: (Score:2)
programmer*
Re: (Score:2)
Sadly, you replaced the inevitable grammar Nazi chain with an anti-grammar Nazi chain. I, too, am now participating in this activity devoid of all value.
Re: (Score:2)
Re: (Score:2)
Me too! [tvtropes.org]
Re: (Score:2)
I imagine that for large neural network applications (machine vision and such) it might make sense to train the network using a conventional computer, or even a supercomputer for the big ones, then copy the trained network into a purpose-designed chip (some form of FPGA) to save space and power.
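A hedged sketch of that train-then-copy workflow: quantize the trained floating-point weights to 8-bit integers and dump them as a table that the device firmware or bitstream build could include. The scaling scheme and file name here are made up for illustration.

# Hypothetical export of trained weights for a fixed-function inference device.
import numpy as np

def quantize(w, bits=8):
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)        # symmetric scaling factor
    return np.round(w / scale).astype(np.int8), scale

trained_w = np.random.default_rng(1).normal(size=(64, 10))   # stand-in for real trained weights
q, scale = quantize(trained_w)
np.savez("npu_weights.npz", weights=q, scale=scale)          # ship this alongside the device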
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Power consumption, speed and possibly cost.
A lot of neural network use is in the unglamorous side of machine vision. Things like classifying apples on a high-speed conveyor belt as 'round' or 'dented' and triggering an actuator to knock the dented ones into a bin. If you're doing that for fifty apples a second, that's a lot of processing power. Which is the more practical option: a couple of Tesla cards in a PC drawing a kilowatt of power, or a neural net accelerator chip that can do the job on a few percent of that?
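Rough numbers behind that example (my own back-of-the-envelope, not from TFA): fifty apples a second leaves a 20 ms budget per apple; the software inference time below is an invented placeholder.

# Back-of-the-envelope latency budget for the conveyor-belt example.
items_per_second = 50
budget_ms = 1000.0 / items_per_second   # 20 ms available per apple
inference_ms = 35.0                     # hypothetical software inference time
print(f"budget: {budget_ms:.0f} ms/item, software inference: {inference_ms:.0f} ms")
print("needs an accelerator" if inference_ms > budget_ms else "software keeps up")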
Re: (Score:2)
Computer vision is also the 'classic' application of neural networks. It's the one you'll find used as an example in most textbooks, and an area where neural networks work particularly well.
Re: (Score:1)
ZISC (Score:1)
The idea's been tried before: http://en.wikipedia.org/wiki/Zero_instruction_set_computer . I wonder if they plan on making this mobile too.
And cue the (Score:1)
Skynet comments
Re:And cue the (Score:4, Funny)
John (Score:3, Interesting)
'Qualcomm to build neuro inspired chips'
Probably not. I interviewed with them in San Diego a few years ago and was quite shocked by the lack of technical skills of the people performing the interviews and by the chat-style technical interviewing (their lack of basic English skills might also have something to do with their inability to ask sensible questions).
They may just buy a reference design from ARM to build Snapdragon processors and be very successful with that, but I honestly do not see those people developing neuro-inspired chips. Not in a million years.
Re: (Score:2)
How many PhDs do you think it takes to design a chip?
A long time ago, I wrote some code to generate VHDL from a basic neural network framework. The network was trained on a PC, then migrated to compatible VHDL and microcode. The VHDL was then synthesized and loaded onto a Xilinx FPGA automatically.
That was not complicated to do ten years ago, and I am far from an expert. The performance gains were epic, although training is complicated.
Methinks that Qualcomm (based on their reported revenues) is quite able to do it.
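I don't know what the grandparent's generator looked like, but the general shape of emitting a VHDL constant from trained weights is roughly the following; the fixed-point format (Q1.7) and the identifiers are invented for this sketch.

# Hypothetical: turn trained weights into a VHDL constant array for synthesis.
import numpy as np

weights = np.random.default_rng(2).normal(scale=0.5, size=8)     # stand-in for trained weights
fixed = np.clip(np.round(weights * 128), -128, 127).astype(int)  # Q1.7 fixed point

entries = ",\n    ".join(f"std_logic_vector(to_signed({v}, 8))" for v in fixed)
vhdl = "constant LAYER0_WEIGHTS : weight_array_t := (\n    " + entries + "\n);"
print(vhdl)   # include the output in the generated entity before synthesis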
I'm Sorry (Score:2)
"I'm sorry Dave. I'm afraid I can't let you make that phone call"
Cool. (Score:3)
Neural nets are fast. Training them can be very slow, though. Backpropagation for multilayer perceptron nets is more computationally costly than simple feed-forward usage, and training a net can take many, many iterations if the training data set is large. Neural nets implemented in hardware could make this process much faster.
Of course, TFA doesn't have much detail. Are these chips going to be capable of "learning" like this? Or will you have to pre-load them with the appropriate matrix of interconnection-weights and only run them in feed-forward mode? If they can't actually do learning, I'd imagine the utility of such a device will be very limited.
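To put rough numbers on the asymmetry the parent describes, here is a quick, unscientific comparison of a forward-only pass against a forward-plus-backward training step on one dense layer; the exact ratio will vary, but a training step always does strictly more work.

# Crude timing: inference (forward only) vs one training step (forward + backward).
import time
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1024, 512))
W = rng.normal(size=(512, 256))
y = rng.normal(size=(1024, 256))

t0 = time.perf_counter()
out = np.tanh(X @ W)                      # forward pass only
t_fwd = time.perf_counter() - t0

t0 = time.perf_counter()
out = np.tanh(X @ W)
grad_out = (out - y) * (1 - out ** 2)     # backprop through tanh
W -= 0.01 * (X.T @ grad_out)              # extra matrix multiply + weight update
t_train = time.perf_counter() - t0

print(f"forward: {t_fwd * 1e3:.1f} ms, forward+backward: {t_train * 1e3:.1f} ms")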
Re: (Score:2)
Train once on the supercomputer. Then just write the trained weights into the processors for mass-production. Great for industrial production line tasks, where you need to be able to detect defective items on a high-speed conveyor belt.
Re: (Score:2)
The feed-forward computations are already sufficiently quick, and the benefit of implementing that part in hardware is lost on me. Especially as a discrete component.
Re: (Score:2)
Why would you want the end product to be capable of learning? It'd just be a support nightmare when they learn incorrectly.
The benefit of hardware is in speed and power usage, which in turn enables the use of much larger networks allowing for improved classification accuracy and more complex training. If you're doing mass-production, then a discrete NN-accelerator chip in conjunction with a cheap processor might also be cheaper than the high-end processor needed to run the net in software.
Re: (Score:2)
Why would you want the end product to be capable of learning? It'd just be a support nightmare when they learn incorrectly.
Artificial neural networks have been found to be useful for voice recognition, for example. While it is possible to train a single ANN to recognize words from a given language, better recognition accuracy can be achieved by tailoring the system to individual speakers. That, however, requires the ANN to continue learning after it has left the supercomputer and been shipped to end users, which would not be possible if the component doesn't support backpropagation.
That being said, I'm sure t
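A hedged sketch of the kind of on-device adaptation described above: keep the factory-trained layer frozen and nudge only the output layer using a handful of the user's own labelled utterances. The shapes, data and update rule here are invented for illustration.

# Hypothetical speaker adaptation: freeze the shipped hidden layer, update
# only the output layer from a few user-provided examples.
import numpy as np

rng = np.random.default_rng(4)
W_hidden = rng.normal(size=(40, 64))     # shipped, frozen (trained at the factory)
W_out = rng.normal(size=(64, 10))        # adapted on the device

def features(x):                         # frozen feature extractor
    return np.tanh(x @ W_hidden)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

user_x = rng.normal(size=(20, 40))       # stand-in for the user's utterances
user_y = rng.integers(0, 10, size=20)    # stand-in labels ("which word was said")

for _ in range(100):                     # tiny on-device training loop
    h = features(user_x)
    p = softmax(h @ W_out)
    p[np.arange(20), user_y] -= 1        # gradient of cross-entropy w.r.t. logits
    W_out -= 0.05 * (h.T @ p) / 20       # update only the last layer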
Re: (Score:1)
Re: (Score:2)