
Qualcomm to Build Neuro-Inspired Chips

Bismillah writes "At the MIT Technology Review EmTech conference, Qualcomm announced that the company and its partners will design and make neural processing units, or NPUs, starting next year. NPUs mimic neural structures and the way the brain processes information in a massively parallel, extremely power-efficient way, and may end up in self-learning devices."
  • Obligatory (Score:5, Funny)

    by MassiveForces ( 991813 ) on Friday October 11, 2013 @03:08AM (#45099343)
    "At the MIT Technology Review EmTech conference, Cyberdyne announced that the company and partners will design and make neural processing units or NPUs starting next year."
    • Not sure where that's coming from. The quotation marks would indicate it's from a third-party source, but Google could not find that quote... "Qualcomm chief technology officer Matt Grob said by next year, the company and its partners would design and manufacture neural processing units (NPUs) which function in a completely different manner to current processors." iTnews.
    • But Cyberdyne uses MOS 6502 processors for everything, no?
  • by tttonyyy ( 726776 ) on Friday October 11, 2013 @03:22AM (#45099399) Homepage Journal

    A quick Google search fails to reveal any detail about how it works, and TFA's explanatory diagram says very little (a drawing of a brain and some boxes; oh, so that's how it works?).

    We can only assume this stems from Qualcomm's partnership with Brain Corp http://www.braincorporation.com/ [braincorporation.com]

    • by Anonymous Coward on Friday October 11, 2013 @03:45AM (#45099469)

      They're doing FPGAs that come with a programmer capable of partially reprogramming the FPGA on the fly.

      The brain is just marketing.

    • by somersault ( 912633 ) on Friday October 11, 2013 @05:25AM (#45099833) Homepage Journal

      I'd assume that they're building general purpose hardware for running large neural networks [wikipedia.org] into the chips. Usually you'd set a goal for the network, and then "train" it, reinforcing the pathways that lead to successful outcomes. The theory is based on how our own brains learn, and can be very effective at solving certain problems "naturally", rather than the programming having to come up with an effective algorithmic solution.
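      To make that concrete, here is a throwaway sketch (plain numpy, nothing to do with whatever Qualcomm is actually building, and every value in it is invented): a tiny feed-forward net with one hidden layer, trained by backpropagation toward a fixed goal (XOR), which is the usual software analogue of "reinforcing the pathways that lead to successful outcomes".

```python
# Hypothetical illustration only: a minimal feed-forward net trained by backprop.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # the goal: XOR

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)        # feed-forward pass
    out = sigmoid(h @ W2 + b2)
    err = y - out                   # how far we are from the goal
    # "Training": nudge every weight in the direction that reduces the error,
    # i.e. strengthen the pathways that lead to the right answer.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ d_out; b2 += 0.5 * d_out.sum(axis=0)
    W1 += 0.5 * X.T @ d_h;   b1 += 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```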

      • programmer*

      • I imagine that for large neural network applications (machine vision and such) it might make sense to train the network on a conventional computer, or even a supercomputer for the big ones, then copy the trained network into a purpose-designed chip (some form of FPGA) to save space and power.

      • Not a helpful answer. There is no information anywhere on the web on whether these are continuous or discrete neural networks. As for putting it on the chip: if it is a discrete neural network, then there is no advantage over a CUDA-enabled neural network running on an NVIDIA Tesla. It is just Malibu Stacy with a new hat.
        • by Bengie ( 1121981 )
          I hope your neural network code isn't branchy because GPUs are horrible with branches.
          • I hope it isn't branchy either, because that would imply that I was completely ignorant of all modern neural algorithms. How'd you get modded up to three?
        • Power consumption, speed and possibly cost.

          A lot of neural network use is in the unglamorous side of machine vision: things like classifying apples on a high-speed conveyor belt as 'round' or 'dented' and triggering an actuator to knock the dented ones into a bin. If you're doing that for fifty apples a second, that's a lot of processing power. Which is the more practical option: a couple of Tesla cards in a PC drawing a kilowatt of power, or a neural net accelerator chip that can do the job on a few percent of that?

          • The math for a neural network accelerator chip is indistinguishable from the math for graphics; it's all the same multiplying of vectors and matrices. And most of the work is done up front in training the neural network, the results of which can be distributed to computers with less powerful processors. If Qualcomm is going to get a couple of teraflops out of a single core, then more power to them.
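            A back-of-the-envelope illustration of that point (all numbers here are made up): applying a graphics rotation and evaluating one fully connected layer are the same matrix-vector product.

```python
# Illustration only: a graphics transform and a neural-net layer are both W @ v.
import numpy as np

v = np.array([1.0, 2.0, 3.0])

# Graphics: rotate a 3-D point 90 degrees about the z axis.
rotate_z = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])
point_out = rotate_z @ v

# Neural net: one fully connected layer, plus a nonlinearity on top.
weights = np.random.default_rng(1).normal(size=(4, 3))
layer_out = np.tanh(weights @ v)

print(point_out, layer_out)  # both are just matrix-vector products
```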
  • by Anonymous Coward

    The idea's been tried before: http://en.wikipedia.org/wiki/Zero_instruction_set_computer . I wonder if they plan on making this mobile too.

  • Skynet comments

  • John (Score:3, Interesting)

    by Anonymous Coward on Friday October 11, 2013 @06:08AM (#45099967)

    'Qualcomm to build neuro inspired chips'

    Probably not. I interviewed with them in San Diego a few years ago and was quite shocked by the lack of technical skills of the people performing the interviews and by the chatty style of technical interviewing (their lack of basic English skills might also have had something to do with their inability to ask sensible questions).

    They may just buy a reference design from ARM to build Snapdragon processors and be very successful with that, but I honestly do not see those people developing neuro-inspired chips. Not in a million years.

    • by xtal ( 49134 )

      How many PhDs do you think it takes to design a chip?

      A long time ago, I wrote some code to generate VHDL from a basic neural network framework. The network was trained on a PC, then migrated to compatible VHDL and microcode. The VHDL was then synthesized and loaded onto a Xilinx FPGA automatically.

      That was not complicated to do ten years ago, and I am far from an expert. The performance gains were epic, although training is complicated.

      Methinks that Qualcomm (based on their reported revenues) is quite able to do it.
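      For the curious, the flow can be sketched in a few lines; this is a hedged toy version (the weight values, the Q8 fixed-point format and the constant names are all invented here, not the code described above): train in software, convert the learned weights to fixed point, and emit VHDL constants for a hand-written multiply-accumulate template to include.

```python
# Toy sketch: turn trained floating-point weights into VHDL integer constants.
import numpy as np

weights = np.array([0.73, -0.41, 0.18])   # stand-ins for weights learned on a PC
bias = 0.05
SCALE = 1 << 8                            # Q8 fixed point: 8 fractional bits

def to_q8(x):
    """Round a float to a signed integer with 8 fractional bits."""
    return int(round(x * SCALE))

lines = ["-- auto-generated: weights for one neuron, Q8 fixed point"]
lines.append("constant BIAS : integer := %d;" % to_q8(bias))
for i, w in enumerate(weights):
    lines.append("constant W%d : integer := %d;" % (i, to_q8(w)))

print("\n".join(lines))   # paste or splice into the VHDL neuron template
```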

  • "I'm sorry Dave. I'm afraid I can't let you make that phone call"

  • by NoImNotNineVolt ( 832851 ) on Friday October 11, 2013 @10:09AM (#45101611) Homepage
    This seems rather interesting. I've dabbled in artificial neural networks out of curiosity, and this seems like it could be really useful.

    Neural nets are fast. Training them can be very slow, though. Backpropagation for multilayer perceptron nets is more computationally costly than simple feed-forward usage, and training a net can take many, many iterations if the training data set is large. Neural nets implemented in hardware could make this process much faster.

    Of course, TFA doesn't have much detail. Are these chips going to be capable of "learning" like this? Or will you have to pre-load them with the appropriate matrix of interconnection-weights and only run them in feed-forward mode? If they can't actually do learning, I'd imagine the utility of such a device will be very limited.
    • Train once on the supercomputer. Then just write the trained weights into the processors for mass-production. Great for industrial production line tasks, where you need to be able to detect defective items on a high-speed conveyor belt.
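      Something like the following is all that step has to be; a minimal sketch (the blob format and the numbers are made up for illustration): quantise the trained coefficients to 8-bit integers and dump the image that gets written into every device.

```python
# Illustration only: pack trained weights into a small binary blob for flashing.
import numpy as np

# Stand-in for weights produced by the real (offline) training run.
trained = np.random.default_rng(2).normal(scale=0.3, size=(4, 8))

scale = np.abs(trained).max() / 127.0                       # one scale per tensor
q = np.clip(np.round(trained / scale), -128, 127).astype(np.int8)

with open("weights.bin", "wb") as f:
    f.write(np.float32(scale).tobytes())                    # device rescales with this
    f.write(q.tobytes())

print("wrote", 4 + q.size, "bytes")
```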

      • Sure, that works. Unless, of course, you want the mass-produced devices to be capable of learning. I thought that was the whole point.

        The feed-forward computations are already sufficiently quick, and the benefit of implementing that part in hardware is lost on me. Especially as a discrete component.
        • Why would you want the end product to be capable of learning? It'd just be a support nightmare when they learn incorrectly.

          The benefit of hardware is in speed and power usage, which in turn enables the use of much larger networks allowing for improved classification accuracy and more complex training. If you're doing mass-production, then a discrete NN-accelerator chip in conjunction with a cheap processor might also be cheaper than the high-end processor needed to run the net in software.

          • Why would you want the end product to be capable of learning? It'd just be a support nightmare when they learn incorrectly.

            Artificial neural networks have been found to be useful for voice recognition, for example. While it is possible to train one single ANN to recognize words from a given language, better recognition accuracy can be achieved by tailoring the system to individual speakers. That, however, requires the ANN to continue learning after it has left the supercomputer and been shipped to end users, which would not be possible if the component doesn't support backpropagation.

            That being said, I'm sure t

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...