Technology

Implementing Artificial Neural Networks 78

Floydian Slip wrote to us with an updated story about a company called Axeon that is aiming to use the concept of artificial neural networks in a processor called the "Learning Processor." It's an array of 256 8-bit [RISC] processors working in parallel. The company is targeting a number of markets: mobile communications, inertial navigation and image analysis. The article also gives some of the background of the "neural chips".
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Previous posters have mentioned that current hardware is really slow for neural nets. Okay, so why don't we create a distributed neural net? Sort of like SETI, but using the processor power for a learning organism... there might be one database and an ever-expanding (and contracting) supply of processor power.
  • by mewse ( 69150 ) on Wednesday September 15, 1999 @10:05AM (#1679764)
    Well, there's TD Gammon, which plays a pretty mean game of backgammon using a neural network (and is rated as a master-level player, no less!). The author, Gerald Tesauro, has written a paper on the subject (a rough sketch of the TD-style update it uses follows this comment) at:

    http://web.cps.msu.edu/rlr/pub/Tesauro2.html

    from the abstract:

    "TD Gammon is a neural network that is able to teach itself to play backgammon solely by playing against itself and learning from the results, based on the TD(Lambda) reinforcement learning algorithm (Sutton, 1988). Despite starting from random initial weights (and hence random initial strategy), TD Gammon achieves a surprisingly strong level of play. With zero knowledge built in at the start of learning (i.e. given only a "raw" description of the board state), the network learns to play at a strong intermediate level. Furthermore, when a set of hand crafted features is added to the network's input representation, the result is a truly staggering level of performance: the latest version of TD Gammon is now estimated to play at a strong master level that is extremely close to the world's best human players."


    The folks at Cyberlife also used neural networks at the core of their 'Creatures' games and other applications. I like Creatures as an example because the neural networks clearly show their strengths and weaknesses in that context. The networks quickly reach an uncanny level of 'intelligence', but that 'intelligence' vanishes rapidly if training continues for too long. (In the Creatures fanbase, this loss of intelligence is called the 'One Hour Stupidity Syndrome', since the Norns typically start showing signs of it about an hour after being hatched.)

    Cyberlife can be found at:
    http://www.cyberlife.co.uk/
    or
    http://www.creatures2.com/
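
    For the curious, here is a minimal sketch of the TD(lambda) idea the abstract describes, in Python. It is only a toy: TD-Gammon used a multilayer network over a backgammon board encoding, while this uses a plain linear-plus-sigmoid value function, and every name below is invented for illustration.

    import math

    class TDValue:
        # Toy value function v(s) = sigmoid(w . features(s)), trained with TD(lambda).
        def __init__(self, n_features, alpha=0.1, lam=0.7):
            self.w = [0.0] * n_features
            self.alpha, self.lam = alpha, lam

        def value(self, x):
            return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(self.w, x))))

        def learn_episode(self, states, outcome):
            # states: feature vectors of the successive positions of one self-play game
            # outcome: 1.0 for a win, 0.0 for a loss
            trace = [0.0] * len(self.w)
            for x, x_next in zip(states, states[1:]):
                self._step(x, self.value(x_next), trace)   # target = its own estimate of the next position
            self._step(states[-1], outcome, trace)         # only the last target is the real result

        def _step(self, x, target, trace):
            v = self.value(x)
            for i, xi in enumerate(x):
                trace[i] = self.lam * trace[i] + v * (1 - v) * xi   # eligibility trace
                self.w[i] += self.alpha * (target - v) * trace[i]

    Moves during self-play are chosen by picking the move whose resulting position gets the highest value(), so the program generates its own training data; that is the "teaches itself" part.
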
  • Thanks for the info. It doesn't seem like they've published anything recently, though...

    Note: If this message is double-posted, sorry (stupid enter key is too close to the shift key :-( :-( :-( ).
  • There may be some confusion :

    1) I'm afraid neural networks have nothing to do with AI. AI tries to enable a computer (or anything else) to react the right way to previously unknown situations. Having the right behaviour in known situations is not intelligence; it's only an algorithm, IMHO.

    2) Neural networks are basically a very powerful algorithm for building models out of examples. That means your NN is able to work out the behaviour of a system from the examples you give it. To be a bit more precise: you give the NN examples of inputs and outputs, and it learns. When it's done, you give it inputs and it gives you outputs. So basically, the NN reacts the way it learned to react (a minimal fit-then-predict sketch follows this comment). That's not intelligence.

    There are already many messages around here giving applications of this algorithm.

    Bye,
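
    A minimal illustration of point 2 above, assuming nothing beyond the Python standard library (the single artificial neuron and the AND example are purely illustrative; real networks stack many such units, but the fit-then-predict flow is the same):

    import math

    def train(examples, epochs=5000, lr=0.5):
        # examples: list of (inputs, target) pairs with targets in {0, 1}
        n = len(examples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, t in examples:
                y = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
                delta = lr * (t - y) * y * (1 - y)          # gradient step on squared error
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
        return w, b

    def predict(model, x):
        w, b = model
        return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

    # Learn the AND function from four examples, then query it:
    model = train([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
    print(round(predict(model, [1, 1])), round(predict(model, [0, 1])))   # should print: 1 0

    Once train() has run, predict() just maps inputs to outputs; that is all point 2 is claiming.
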
  • Could this thing be used as a generalised systolic-array-on-a-chip, to implement other signal processing algorithms, or is it only suitable for neural networks?
  • LISP is the language of the future

    And always will be.

    jsm




    sorry and all that, but when you think of a line like that, you've got to use it!
  • by Anonymous Coward
    Some folks at Georgia Tech, Dr. Glezer and a few others, have been employing neural networks to estimate the turbulent boundary layer roughly 10 time steps into the future (a horizon which depends heavily on the flow parameters) in order to actively tune their optical configuration. There are a number of other research efforts under way that employ neural networks as controllers for turbulent flow fields. I'm not talking about making a big trailing-edge flap move around; I'm talking about sensing the flow conditions at hundreds of simultaneous locations, passing the information through a neural network (essentially just an n-dimensional curve fit), and using the resulting output to drive hundreds/thousands/millions of surface-mounted actuators to alter the virtual aerodynamic (not necessarily physical) shape of an object (a simplified sketch of such a sense/predict/actuate loop follows this comment).
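
    A heavily simplified sketch of that loop in Python, purely for illustration: the sensor and actuator callbacks and the tiny network here are invented, and the real systems run on dedicated hardware at rates no interpreted loop could match.

    import math

    def forward(weights, x):
        # One hidden layer of tanh units and a linear output layer: the "n-dimensional curve fit".
        W1, W2 = weights
        hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        return [sum(w * h for w, h in zip(row, hidden)) for row in W2]

    def control_step(read_sensors, write_actuators, weights):
        x = read_sensors()               # hundreds of simultaneous flow readings (hypothetical callback)
        commands = forward(weights, x)   # map the sensed flow state to actuator commands
        write_actuators(commands)        # drive the surface-mounted actuators (hypothetical callback)
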
  • When I studied artificial neural nets at uni, one of the major obstacles to their widespread acceptance was that they were very difficult to verify. Although they tended to do certain jobs very well (pattern recognition, in the main), it was very difficult to explain mathematically why they worked. People are loath to trust something which they don't understand and which may break in some weird, unforeseen circumstance.

    It's also interesting that the founder of the company studied at Aberdeen Uni, where I did my first degree (but not in computing).
    --

  • ...in which he showed how neural nets implemented in analogue (sub-threshold transistor technology) were able to...
    I think the main reason for operating in the sub-threshold region is more to do with obtaining a non-linear transfer function than power saving, although power saving is an added bonus.
  • You have to train a neural net. Don't ever underestimate the time, difficulty, and fragility of this step. In order to use a neural net, you have to use a very large data set to initialize it (the dataset tends to grow exponentially as the complexity of what you are trying to do increases). You have to pick the right dataset, which can be extremely difficult if others haven't already figured it out.

    This is not unique to artificial neural networks (ANNs). Disregarding the research that tries to emulate biological neural nets, ANNs are actually statistical in nature. So the "problems" that you mention are not unique; they also apply to statistical regression and classification problems. The main cause for concern is that engineers and computer scientists fail to make the statistical link and approach ANNs from a wholly different and frequently wrong direction. Learn from the vast statistics literature! Warren Sarle, who maintains the comp.ai.neural-nets FAQ, correctly identifies multilayer perceptrons (MLPs) as a relabelling of multivariate multiple nonlinear regression models. See this postscript file [sas.com] and this jargon file [sas.com].

    ANNs are a classic example of engineers and computer scientists attempting to reinvent the wheel.
  • There's been a huge amount of hype around neural nets (NNs) some time ago, thankfully most of it has died down.

    Basically, NNs are highly tunable and very flexible nonlinear statistical models. That's it. Once you understand that you are dealing with statistical modeling and not bionic miracles, a lot of things fall into place and life generally becomes much easier. Note that many popular NN types are mathematically equivalent to well-known statistical models; only the names are different. For example, projection pursuit regression is basically a direct match for a three-layer feed-forward neural net (a small sketch of such a net written out as an ordinary regression function follows this comment).

    Because NNs are statistical models, you have to deal with all the classic statistical problems. You still have the problems of input selection, of overfitting, of regime switches in the process that you are trying to model, etc. etc. Basically, NNs are very useful and quite complicated statistical tools. To use them correctly demands considerable sophistication and more than a passing acquaintance with statistics.

    For the interested, there is a very good FAQ on neural nets maintained by Warren Sarle. It is technically the FAQ of comp.ai.neural-nets and can be found in the usual places.

    Kaa
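
    To make the "nonlinear statistical model" point concrete, here is an entire three-layer feed-forward net written out as an ordinary parametric function in plain Python (all names are illustrative; "training" just means choosing W1, b1, W2 and b2 to minimise squared error over the data, exactly as in other regression methods):

    import math

    def mlp(x, W1, b1, W2, b2):
        # y = W2 . tanh(W1 . x + b1) + b2: a flexible, overparameterized nonlinear regression model
        hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(W1, b1)]
        return [sum(w * h for w, h in zip(row, hidden)) + b
                for row, b in zip(W2, b2)]

    Everything that makes it a "neural net" is in that one formula; the rest is how you fit the parameters.
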
  • > When it comes down to it, a NN is just a
    > non-linear function.


    I completely agree with this statement. Neural networks are simply very overparameterized, flexible nonlinear models. The problems with training come from the fact that if there is noise in the training data set, the neural network will eventually learn the noise (i.e. garbage). The term used for this is overtraining.

    Specific nonlinear and linear model structures can be used to do the exact same things neural nets can do IF YOU KNOW THE CORRECT STRUCTURE. This is where the power of neural networks lies. A neural network can (I think) match arbitrary nonlinearity, given enough hidden nodes.

    -Alex
    Vote Cthulhu! Why settle for the lesser evil?
  • No problem..

    My enter key happened to fall off today.. Stupid cheap generic keyboard. Maybe it could turn into a poll..

    My enter key:
    Is too close to my shift key
    Fell off
    etc... =P
  • Okay, so basically what you're saying is:

    If we model a computer after a brain, it will make mistakes.

    The whole "flying into the sun" thing could be averted by installing three such nets and having them try to agree (a tiny two-out-of-three voting sketch follows this comment). Then, one of three things would happen:

    1. One would think, "hey, let's fly into the sun." It tells the other two, because the ship is programmed to get instructions from two nets, and the other two say, "whoa, buddy, that's a square, not a planet. Were you off the day they said planets were shaped like spheres?" Or vice versa, and into the sun we go.

    2. One would think, "hey, let's fly into the sun." The other two say no, so it takes over and flies into the sun while the other two desperately try to save themselves somehow.

    3. One would think, "hey, let's fly into the sun." The second one would say no, and the third would say Idunno. The whole net freezes, and the ship keeps going straight until it runs into Alpha Centauri.

    This is sort of like putting three people in charge of flying the ship. I would still trust it more than putting a normal computer in charge, because with a normal computer, if something happens that the programmers haven't thought of, you are bound to die.

    The Lord DebtAngel, Lord and Sacred Prince of all you owe
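
    The two-out-of-three arrangement above is just majority voting. A toy version in Python (the "nets" here are stand-ins, any callables returning a proposed heading, and the headings are assumed to be discretised so they can be compared):

    from collections import Counter

    def consensus_heading(nets, flight_state):
        # Each net proposes a heading; fly the one at least two of them agree on.
        proposals = [net(flight_state) for net in nets]
        heading, votes = Counter(proposals).most_common(1)[0]
        if votes >= 2:
            return heading                                  # two agree: the dissenter is outvoted (case 1)
        raise RuntimeError("no agreement; hold course and wake the crew")   # deadlock (case 3)
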
  • That would be cool but very hard to actually get working. One way to do it, I guess, would be to pit the neural net against a traditional AI and rank/feed back the results. The major problem, though, is that software-based neural nets are not yet perfected to a level even equal to a traditional game AI. The neural net that you described would be perfect on the Learning Processor or something similar, but neural nets are not yet powerful enough for video games using traditional hardware.
  • Does continuous voice recognition software use some sort of neural net technique to interpret the sounds?

    No. Most common speech recognition systems use something like Hidden Markov Modeling. Basically, some heavy mathematical analysis is done on the speech data to extract interesting features; then the features are matched to known patterns (a toy example of that matching step follows this comment).

    The "training" in current speech recognition systems is mostly a matter of tuning the mathematical models of speech components to a particular user.

    Speech recognition using neural networks might work well, if you can do the processing fast enough. I can't imagine any software-simulated neural net being run fast enough for real-time recognition. But a hardware solution could.

    Caveat: I am not a speech expert. Ask your local speech expert for more details.
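
    To illustrate the "match features to known patterns" step: a discrete HMM assigns each candidate word a likelihood for the observed feature sequence, and the recogniser picks the word whose model scores highest. The forward algorithm below is the standard recursion; the word models themselves, and the idea of one tiny HMM per word, are simplified for the example.

    def forward_likelihood(obs, start, trans, emit):
        # Probability that this word's HMM produced the observed symbol sequence.
        # start[i] = P(first state is i); trans[i][j] = P(i -> j); emit[i][o] = P(symbol o | state i)
        states = range(len(start))
        alpha = [start[i] * emit[i][obs[0]] for i in states]
        for o in obs[1:]:
            alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][o] for j in states]
        return sum(alpha)

    def recognise(obs, word_models):
        # word_models: e.g. {"yes": (start, trans, emit), "no": (start, trans, emit)}
        return max(word_models, key=lambda word: forward_likelihood(obs, *word_models[word]))
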

  • My PhD thesis was mostly about using neural networks to predict protein secondary structure. Neural networks are currently the most accurate method for predicting what parts of a protein (that you know nothing much about besides the sequence) will fold into various local structures. No, you can't currently apply this to nanotechnology.

    More info is available at my server... Go ahead and slashdot me! [ucsf.edu]

    JMC

  • Unfortunately, the naive implementation (where naive is defined as something I can dream up in a couple of minutes) is communication bound. The individual computations per node are quite trivial (a dot product followed by a sigmoid; see the sketch after this comment), but you have to tell a lot of other nodes about your results. So you really need specialised hardware to get around that.

    This is why the processor sounds trivial -- we've only seen the computational specs, but the magic is in the communication... I think.

    The above is all unseasoned speculation. Add condiments to taste.
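
    Seasoning the speculation slightly, here is the per-node arithmetic next to the per-layer fan-out, in illustrative Python. The arithmetic is a one-liner; the cost on real hardware is that every node in a layer needs every output of the previous layer, which is exactly the communication problem mentioned above.

    import math

    def node_output(weights, inputs):
        # The cheap part: a dot product followed by a sigmoid.
        return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))

    def layer(nodes, inputs):
        # The expensive part on real hardware: every node here needs every output of the
        # previous layer, so all of those values have to be communicated to all nodes.
        return [node_output(w, inputs) for w in nodes]
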
  • by Hobbyspacer ( 91941 ) on Wednesday September 15, 1999 @10:58AM (#1679784)
    Neural networks are now used in many commercial products:

    - Most OCR programs, such as the ones that now come free with your scanner, use neural networks for at least some of the steps in recognizing characters. See, for example, Caere OmniPage [caere.com] and Ligature [ligatureltd.com], which uses them in its "ocr-on-a-chip" that goes into its handheld "Quicktionary" pen.

    - Data mining programs use NNWs to analyse transactions for unusual patterns, e.g. credit card fraud. This is now a big-time business. See, for example, HNC Software [hnc.com], co-founded by Robert Hecht-Nielsen, a famous NNW guru at UC San Diego.

    - Sensory Inc. [sensoryinc.com] uses them in its voice recognition chips. They've sold millions of such chips, which recognize just a few words but with speaker independence, tolerance of high background noise, and low cost. See the recent article at EE Times: "Toys that talk... [eet.com]"

    - Synaptics [synaptics.com], co-founded by Carver Mead, uses analog hardware neural network techniques in the TouchPad that is used in many notebooks.

    Have I convinced you yet? Most of these applications are at the infrastructure level and don't get much PR, often for proprietary reasons. Calera, for example, was using NNWs in its OCR already in the late 80's but didn't say anything about them until Caere started bragging in ads in 1992 that it was using NNWs.

  • I tested a racing game that did this (at least the developers said it did.) When you started the game, you chose a player profile so the AI could keep track of who you were. Supposedly, the more you played the more it adapted to your playing style so the game was always challenging.
  • Idunno. The neat thing about brains (and neural nets) is that no two people/nets will learn exactly the same things. Yes, they would have to be trained separately; the idea is to make three different nets, not three identical nets (although, given a little time, those nets should start thinking differently anyway). Besides, while you would teach each net everything, in a real application you would stress different aspects to each net. That way, one really knows navigation, one really knows life support, one really knows maintenance, etc. But since they all know a little bit of everything, if the main one screws up, the other two can point it out.
  • You are completely right but with all the hype, it took me a long time to figure this out.
    I would have hoped that this would be clearly stated in all the books on neural nets.

    Zeb.
  • I hope they package it like the "CPU" from the movie "The Terminator." You remember: the little cluster of chips arranged in a grid. After all, that was a "learning computer" too. ;-)

    -eo
  • Is this what we are looking at? If so...

    nunnnahnaaa nunnnahnaa nunnnahnaa

    (My cheap attempt at sound effects)

  • I hope they make LISP the first language on it :)

    Actually, I remember SIMD coding on the CM-2 ages back... what was that thing? Eight 1-bit processors per chip, or 32 one-bit processors, or some nonsense like that?



  • >How close is Star Trek?
    About 35 minutes at warp 6. However, in emergency situations, we can go to warp 9.6 and get there within 30 seconds....
  • We could use this in place of Al Gore, and nobody would know the difference. Except maybe its responses would be more intelligible.
  • The only thing I ever heard of a neural net doing was simple character recognition. Is there any other successful neural network application? In my humble experiments with neural nets, I could never get anything interesting to work.
  • by Stigma ( 35884 ) on Wednesday September 15, 1999 @09:03AM (#1679794)
    I remember an experiment in which a neural network was used to distinguish between objects in photographs. Although successful, the researchers discovered that instead of noticing differences in the objects, what was actually being compared were such subtleties as the color of the sky in the background.

    One of the points brought up was that although neural nets hold interesting possibilities in the future, we first must find ways of dealing with the infinite number of possible relations that we take for granted in our own minds.

    --
    The only thing worse than being redundant is being redundant.
  • by PhiRatE ( 39645 ) on Wednesday September 15, 1999 @09:05AM (#1679795)
    There is a fairly large set of questions to be asked about this sort of project, especially where AI is being used as the sole method for optimising signals or navigation. The primary one is an age-old question which is just going to get harder: is it a feature or a bug?

    Take a neural network as the sole navigational utility. Sure, it's been trained through 100,000 generations to work out the optimal path in real time to fly out to Saturn and back, but when it finally comes down to it, do you trust it? There is no algorithm you can check; there is no definite way of predicting what it may do if it encounters some previously unthought-of situation.

    Imagine: you turn on your ship, give your target, the ship starts flying there no problem, then a meteor flies past you which happens to look remarkably like a square. The neural network gets a flashback to its initial training, when simple squares were used to indicate planets because it was simpler, and it makes a massive erroneous gravitational adjustment and starts flying towards the sun.

    That's bad enough, but you can't even tell whether its idea of flying towards the sun is a sudden insight about a slingshot that could get you to your destination faster, or whether it has just gone nuts and is trying to get you killed.

    The same, though less extreme, cases apply to most things. If the AI is the only thing doing signal adjustment on your cellphone, maybe it'll flip out for no discernible reason. What do you do then? You can't "fix" the bug; it's buried so deep in such a complex neural network that it'd be like trying to figure out why a mute human with no body language drew a picture of a frog when told to draw a picture of an apple.

    At least to start with, I think we are going to find that neural networks will only be good for tweaking certain aspects of a standard algorithmic system, and while those limitations are in place, they won't be able to show the huge advantages in signal tracking etc. that are proclaimed for them. It will be some time yet before we can figure out ways of making AI safe.
  • Why didn't analog neural nets become more popular? I thought that since you could use stuff like op-amps as "approximate" analog multiplication devices, you could pack a massive number of neurons onto a single piece of silicon (using today's silicon manufacturing techniques). Certainly a lot more efficiently than building a digital multiplier per neuron...

    Changing the subject suddenly, about how many small processors (like the 8-bit processors described in the above article) or perhaps like Z80s, plus memory, could you put on a single chip using modern manufacturing techniques? (And how fast could you make them go?)

    Would anybody know how to take advantage of such a beast?
  • "We are targeting four specific areas... These are mobile communications, automated image analysis, inertial navigation sensors, and network management for routers."

    Cool. Very very cool. But they have failed to mention one possible application that I see as the most promising use for artificial neural nets (a-nets? I just coined a word!): Artificial Intelligence. Those four applications mentioned above all involve learning and AI to some degree, but I see much broader implications for their technology. Think about our brain for a moment. It handles:
    • Speech recognition
    • Spatial perception and image analysis
    • Muscular control
    • Sensor input
    • Critical systems (blood, breathing, digestion, etc.)
    • Learning, memory, thinking


    Obviously, some systems such as muscular control and "critical systems" are better handled by traditional processors; they involve no skill, only timing. However, other systems are best controlled by an a-net (ding!).

    For example, speech recognition could be handled extremely easily by the Learning Processor. It deals primarily with fuzzy logic (i.e. what the word sounds most like), which is handled very poorly by traditional processors. Sensory input deals with conditioned responses -- butt sensors have low alert levels for pressure because we're used to sitting on chairs; however, extreme levels of heat should (hopefully) attract attention. This, too, is best handled by a-nets.

    I purposely have left the one biggie for last. It is... (drumroll)... learning and memory. This may seem too ambiguous to be useful; one might argue that the 4 applications mentioned in the quote all employ some form of AI. I am talking about something completely different: An AI with no practical value whatsoever! An AI that is designed to be a true intelligence, not just a limited intelligence with a clear purpose (translation, image analysis, etc). An AI that tries to come as close as possible to sentience. It could be outfitted with a robotic body or whatever. Remember COG?

    This goal would not be suitable for Axeon; they are a commercial company, and they have to make money somehow. However, once the Learning Processor becomes available to the public (or at least to research institutions), I can foresee such a machine coming into being.

    BTW, whatever happened to COG? Is that project still going on? I haven't heard anything about it recently.
  • At this point, learning computers are a question of when, not if.

    The first application that occurred to me was its potential power in some sort of 'universal translator'. If I understand correctly, one of the current problems with voice translation is that each voice/pronunciation has to be calibrated to the database of terms.

    Is this the end to that problem? Pretty soon the software will learn to understand variations in language. How close is Star Trek?
  • by trims ( 10010 ) on Wednesday September 15, 1999 @09:12AM (#1679800) Homepage

    We did a lot of neural network work at the Media Lab (using them with HMMs is really popular now in "intelligence" systems).

    I can see this being particularly useful for some applications, like the cellular network example the article had. However, there are several problems with neural nets that keep them from being a panacea, or a whiz-bang solution to duplicating the human brain.

    • You have to train a neural net. Don't ever underestimate the time, difficulty, and fragility of this step. In order to use a neural net, you have to use a very large data set to initialize it (the dataset tends to grow exponentially as the complexity of what you are trying to do increases). You have to pick the right dataset, which can be extremely difficult if others haven't already figured it out.
    • Neural nets are by no means generalized learning computers. You can't just set it up, turn it on, and it "learns" about something.
    • Programming is the biggest hurdle to usefulness of neural nets, not hardware. We haven't really figured out how to appropriately model many of the possible problems that neural nets might be useful to solve.

    The last point is the biggest hindrance to neural net usage - we don't really know how to apply them to generalized (or even many specific) problem areas.

    This is not to belittle the accomplishment. There are quite a few well-defined areas in which neural nets are extremely useful, and we should find more as time progresses and our knowledge increases.

    Just don't expect any kind of general intelligence system to be coming soon. It won't.

    -Erik

  • It begins to memorize the right answers, and has them fixed too firmly in mind when encountering new situations. It gets rigid, sort of like a dumb person with one idea in his head.

    -Matt

  • Well, this isn't going to replace conventional CPUs. So that's kind of a bummer. But I can think of a few uses for it. Having such a large array of processors would enable you to attack much more of a problem at once than a conventional processor. I'm sure the crypto guys will have a field day with that.

    Ironically, I can think of only one use for a chip like this right now - traffic controllers. The very same reason the original 8086 was developed by Intel.

    --

  • >> When it comes down to it, a NN is just a
    >> non-linear function.

    > Specific nonlinear and linear model structures
    > can be used to do the exact same things neural nets can do IF YOU KNOW THE CORRECT STRUCTURE.
    > This is where the power of neural networks lies.
    > A neural network can (I think) match arbitrary
    > nonlinearity, given enough hidden nodes.

    Yes, it can "match" it if you are lucky with the training. The problem is that there is no guarantee that this will occur.

    Also, it IS possible to extract a nonlinear function from a trained neural network. It just isn't pretty (or easy).

  • There's still work going on (for example, in Carver Mead's group at Caltech [caltech.edu]), but it certainly hasn't lived up to the hype of a few years ago. (Then again, what has, I guess.) Some interrelated problems are
    • Precision. Beyond a certain point, bits get very expensive in analog, and analog operations add noise. If you need to chain more than a few operations, it wins to A/D convert and do them in digital-land.
    • Power. It once looked like analog would have a big watt-per-bit advantage over digital, for low-precision stuff anyway. But digital VLSI just keeps getting better and better in this respect, and in some applications the analog advantage is no longer there.
    • Stability. It is hard in practice to keep analog calibrated, and taking care of this adds circuit complexity that is not at first obvious. Digital circuits by comparison (not to put too fine a point on it) only need to worry about being off or saturated.
    Still, in niches where the physics of the device just "naturally" does what you want, analog will be the way to go. See the above link for some possible examples.

    --Seen

  • I see a redundant statement in many of the posts here and it brings me to a question.

    I realize that the "learning time" involved in a system of this type seems to be pretty long, from what I've read. But how difficult would it be to take what has been learned and simply back it up and reinstall it on another system? Like once the vocabulary has been built, mass-producing the resulting system? (A small save/load sketch follows this comment.)

    Is that possible, or do I need another cup of coffee to wake up?
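
    No extra coffee needed: for a digital implementation the "learning" ends up as a set of numeric weights, and those can be copied like any other data (analog hardware can make this harder). A minimal sketch, with the file format and names invented here:

    import json

    def save_weights(weights, path):
        # weights: whatever nested lists of numbers the trained net uses
        with open(path, "w") as f:
            json.dump(weights, f)

    def load_weights(path):
        with open(path) as f:
            return json.load(f)

    # Train once (slow), then stamp out copies: every unit ships with the same weight file
    # and behaves identically, unless it is allowed to keep learning in the field.
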
  • I think it was PBS's "The Machine that Changed the World". One of the episodes near the end of the series.
  • OK, but you'll have to train them separately; you can't just make three copies of the same network.

    Also, they have to be trained with completely different training systems. Otherwise it'll go like this:

    AI1 : Hey, look at that! That looks like one of those old fashioned planets.
    AI2 : Yea, one of the square ones!
    AI3 : Ok then, Let's turn left. Towards the sun.
    AI2 & 1 : Sounds good!
  • Stop me if you've heard this one. I actually first heard about this in my college AI class.

    Apparently there was a military project to train a neural net to identify tanks hidden in surveillance photos. They took pictures with hidden tanks and pictures without hidden tanks and fed 'em to their proggy. The NN worked fine on the training pictures, but when they tried other pictures it was almost always incorrect. What could be wrong?

    DOH! The training pictures with tanks were all taken on a sunny day, and the sans tank pix were taken on a cloudy day. And voila...a neural net that could successfully determine whether you might need sunscreen that day. :)

    Dunno if it's true or an urban legend; I saw apparent slides of the training photos in class, though. As the saying goes, computers don't do what you want them to do but only what you tell 'em.
  • I wrote a little MP3 player which uses something that is not a neural network (but could be replaced by one) to learn your music listening patterns and adjust its random number generator accordingly. It would be great if some CS type who knows something about real neural networks could write one into x11amp or something.

    I was more concerned with the user interface issues of how to maximize the amount of information available to be learned than with the actual learning algorithm. Check out the code [gtf.org]; it's in Perl and uses Perl/GTK and mpg123, and it will crash because it was only meant as a proof of concept. (A small sketch of the weighted-random idea follows this comment.)

    Jeff
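
    For anyone who wants to bolt the idea onto another player: at its simplest it is just a weighted random pick driven by listening history. A toy version follows (the play-count bookkeeping and names are invented here, and it is in Python rather than the Perl of the original):

    import random

    def pick_track(play_counts):
        # play_counts: {filename: number of times the user let the track play to the end}
        tracks = list(play_counts)
        weights = [1 + play_counts[t] for t in tracks]    # the +1 keeps unplayed tracks in rotation
        r = random.uniform(0, sum(weights))
        for track, w in zip(tracks, weights):
            r -= w
            if r <= 0:
                return track
        return tracks[-1]

    def record_feedback(play_counts, track, finished):
        # Hearing a track out counts for it; skipping it counts against it.
        play_counts[track] = max(0, play_counts.get(track, 0) + (1 if finished else -1))
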
  • "Overfitting" is the AI terminology. When you train a neural network, you feed it some set of data that it "learns." Continuously iterating over that set, again and again, will always yield better and better results on that set. However, if that set is not completely and utterly representative of everything the net might be asked about when it's finished training, it will have learned too much information specific to the training set (which might be pure happenstance - say 80% of the pictures in the "good guys" set have a pixel of color #09C45A at coordinate (564, 345), while that's true of only 10% of the pictures in your "bad guys" set) to be of use with any set other than yours. (A tiny early-stopping sketch follows this comment.)
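
    A compact way to see this, and the usual guard against it: hold back a validation set the net never trains on, and stop when its error on that set stops improving. The sketch below assumes a hypothetical model object exposing train_one_epoch, error, get_state and set_state; those names are invented for illustration.

    def train_with_early_stopping(model, train_set, validation_set, max_epochs=1000, patience=10):
        # model is any object exposing train_one_epoch/error/get_state/set_state (hypothetical interface).
        best_err, best_state, bad_epochs = float("inf"), model.get_state(), 0
        for _ in range(max_epochs):
            model.train_one_epoch(train_set)       # error on the training set keeps falling...
            err = model.error(validation_set)      # ...so judge the net on data it never trains on
            if err < best_err:
                best_err, best_state, bad_epochs = err, model.get_state(), 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:         # validation error has stopped improving:
                    break                          # further iteration only learns the set's quirks
        model.set_state(best_state)
        return model
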
  • So? Sounds to me like this neural net decided to separate things out based on context... somewhat similar to how we parse out sentences when we don't know what a word means.

    --
  • We use them extensively in our blemish sorter for fresh fruit; properly applied they give amazing results.

  • That's exactly the point that I was trying to make. The difficulty in the experiment was not determining differences in the photographs, but the context in which to make the comparison.
  • Hmm, well, where I work we have our share of supercomputers -- massively parallel -- think millions of 8-bit RISC processors all chugging away. They are great at simulating fluid and aero dynamics. Each processor calculates the path of a molecule. Of course, each processor is limited in and of itself. There are no ALUs/FPUs etc. (Go play Quake on something else.)

    The future of this field is very, very exciting. Fluid-state supercomputers. Each bit corresponds to a specific molecule. Our next purchase will feature a _billion_ processors. Drool.

    Of course, the difficulty comes in programming the damned thing. I suspect the same holds true for neural nets.


  • Of course, you do realize that this same reasoning applies to the thousands of biological neural networks you deal with every day as well.

    In fact, they even have a long track record of general failures to perform reasonably. Certain such systems seem to lose their ability to reach logically correct conclusions altogether. And this after at least 100,000 generations...

    Witness politics...

    -
    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • I remember seeing an episode of Nova or something on that very project. The system was supposed to learn to identify friendly and enemy tanks visually. However, all of the training photos of friendly tanks were taken on a sunny day, and the photos of enemy tanks were taken on a cloudy day. So, consequently, the neural net really learned only to distinguish sunny vs. cloudy.

    And that just goes to show that the problem with neural nets is that you never really know exactly what they are learning.
  • Several companies have produced neural net chips... this is no big deal!
  • Some state is using an artificial neural network to read zip-codes off envelopes. Also, some credit card companies use artificial neural networks in evaluating credit histories n' stuff.

    The military likes to use neural networks all over the place. I don't know of any other applications actually at the commercial stage.

    Of course non-artificial neural networks are used for lots of good stuff :)

  • Yes, neural nets are useful in some applications, but the hard fact of the matter is that they have been completely oversold. If you have a messy problem that you have no clue how to solve, chances are that a neural net won't solve it either. You can waste weeks of CPU time trying to train the thing and never get the error down to anything usable.

    I suppose the main point to keep in mind is that NNETs are a special kind of AI. I've always preferred to think of them as... artificial stupidity programs. Yes, they can learn (if you know what you are doing), but, damn, will they take a long time, even if you DO know what you're doing. Of course, perhaps someday better supervised training algorithms will come along, but right now the bread and butter of the trade is the back-propagation error-correction scheme (a bare-bones sketch follows this comment). It is highly effective for some problems, but it always takes a long time for the network to converge to something useful.

    As far as the processor in the article goes, it sounds very interesting. If it can be made cheaply, great. Though I don't think it is going to show up in handhelds and other small devices as much as the article seems to imply. Training is what takes so horribly long with neural nets. But the great thing about them is that once they're trained, they will spit out answers in no time at all, because they are usually feed-forward, and hence not iterative. For many applications on handhelds, offline training will be fine.
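
    To give a feel for why training is the slow part and a query is the fast part, here is a bare-bones back-propagation loop for a one-hidden-layer net in plain Python. It is illustrative only: every constant and name is made up, and real tools do this far better.

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(net, x):
        W1, W2 = net
        inputs = x + [1.0]                                   # constant 1 acts as the bias input
        h = [sigmoid(sum(w * v for w, v in zip(row, inputs))) for row in W1]
        y = sigmoid(sum(w * v for w, v in zip(W2, h + [1.0])))
        return h, y

    def train(samples, hidden=6, lr=0.5, epochs=10000):
        n = len(samples[0][0])
        W1 = [[random.uniform(-1, 1) for _ in range(n + 1)] for _ in range(hidden)]
        W2 = [random.uniform(-1, 1) for _ in range(hidden + 1)]
        for _ in range(epochs):                              # training: the slow, iterative part
            for x, t in samples:
                h, y = forward((W1, W2), x)
                dy = (y - t) * y * (1 - y)                   # error term at the output
                dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
                for j, v in enumerate(h + [1.0]):
                    W2[j] -= lr * dy * v
                for j in range(hidden):
                    for i, v in enumerate(x + [1.0]):
                        W1[j][i] -= lr * dh[j] * v
        return W1, W2

    # Inference afterwards is a single forward pass per query, so it is effectively instant.
    net = train([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)])     # learn XOR
    print([round(forward(net, x)[1]) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])  # usually [0, 1, 1, 0]
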
  • The biggest problem with most neural nets is that they are inherently parallel algorithms implemented on a serial architecture. As such, they are horrendously slow. I think you'll find that they start to do useful things once implemented on truly parallel hardware.

    I once attempted to implement a very simple short-term memory system using a neural net when in college. In doing so, I managed to use up twice the CPU quota for the entire class in one evening on a VAX. At that point, it was decided that the assignment did not need completing.
  • Alan Perlis [cmu.edu] is a wonderful source for some relevant quotes. [yale.edu]

    In particular, he notes that

    When we write programs that "learn," it turns out that we do and they don't.

    This goes along nicely with Douglas Hofstadter's book ``Fluid Concepts and Creative Analogies'', where he outlines areas that are critical to language translation yet really tough to even think about algorithms for.

    Hofstadter asks the question ``What is the Chicago of Russia?'', which does not admit an unambiguous answer. I have paralleled this somewhat with the question What is the Moscow of New York? [hex.net], which has too many potentially valid answers for comfort.

    I think "Star Trek Computing" is about as near as "Star Trek Economics," which is to say, no way soon.

    There are certainly things to be learned; it's mostly humans that are doing the learning, not the computers...

  • I saw an interesting presentation recently by an academic at Sydney University, Richard Coggins, in which he showed how neural nets implemented in analogue (sub-threshold transistor technology) were able to be used for classification at much lower power levels (an order of magnitude less) than digital logic systems - and presumably many orders of magnitude less than microprocessor systems.

    His application was in implantable defibrillators, where you need extremely long battery life and need to be able to classify QRS waveforms as "needs a shock" or "leave them alone".

    It certainly got me thinking - who'd have thought analogue technology would be better than digital :-)

  • Please correct me if I am wrong.

    I thought that for a neural net to be successful, it has to be a dynamic process - that is, it learns via trial and error, and builds its pattern-recognizing ability from the errors and successes of previous tries.

    Then my question is: won't a hardware-based neural net thingy like the one reported above be kind of limiting?

    That is, if everything has been hardcoded, is there any room left for the dynamic process of "learning"?

    Again, please correct me if I am wrong.

    Thank you.

    What you're touching on is the essential problem of AI: nothing that AI does is easy for computers to do. In fact, most things AI researchers do are ridiculously computationally expensive (you know, 2^100-state optimization problems and the like) and simply cannot be approached by traditional algorithms. What AI researchers give up, explicitly, are the guarantees that you're concerned about. They say, "Well, let's see how well we can do on this problem if we give up the guarantee of completeness." And lo and behold, your 2^100-state problem becomes a lot easier: you can either get the best answer in a trillion years (actually quite a bit longer than that, if I did my math right) or a very good answer in a week.

    So the thing is: if you can't live without that guarantee, you probably can't solve your problem at all. (A toy contrast between exhaustive and greedy search follows this comment.)

    And incidentally, neural nets are, in practice, enormously profitable to use for systems in which there is no standard algorithmic solution. Face recognition, image processing, real-time traffic shaping- bunches of things. Not space-craft flying, though- NASA uses different AI techniques [nasa.gov] that have much better safety guarantees.
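
    As a toy illustration of trading the guarantee for tractability (the objective function and every name below are invented for the example): exhaustive search over n binary choices costs 2^n evaluations and is guaranteed optimal, while a greedy hill-climb with restarts gives up that guarantee for a cost that grows only polynomially in n and usually returns a good answer.

    import itertools, random

    def exhaustive_best(score, n):
        # Guaranteed optimal, but 2**n evaluations: hopeless for n around 100.
        return max(itertools.product([0, 1], repeat=n), key=score)

    def greedy(score, n, restarts=20):
        # No optimality guarantee, roughly n*n*restarts evaluations, usually a good answer.
        best = None
        for _ in range(restarts):
            x = [random.randint(0, 1) for _ in range(n)]
            improved = True
            while improved:
                improved = False
                for i in range(n):
                    y = x[:]
                    y[i] = 1 - y[i]                      # try flipping one choice at a time
                    if score(tuple(y)) > score(tuple(x)):
                        x, improved = y, True
            if best is None or score(tuple(x)) > score(tuple(best)):
                best = x
        return tuple(best)

    # score is any caller-supplied objective over an n-tuple of 0/1 choices (hypothetical).
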
  • It is not evident that a processor array designed to run neural networks would be useful for anything other than solving what amounts to nonlinear regression problems.

    For all that neural nets have been arousing excitement for a dozen years, they seem to have successfully avoided usefulness in general purpose computing environments.

  • What you have to remember is that all AI is just a good way of searching a very large, and sometimes dynamic, solution space in an efficient manner. If you keep this in mind, then things seem a bit less scary. Yes, you don't always get the best answer, but you get a good one most of the time. But just like doing complex math by hand, you should do sanity checks on your answer to make sure it is close.
  • Good points! Many people still have the idea of a neural network as a `magic' solution to problems (just throw your data at a network and hey presto! it'll magically `learn' the right answer...).

    Although the original work on neural networks was (in part) biologically inspired this soon became more of a hindrance than a help. These days, although there is work being done in computational neurology, most of the areas where neural networks are used for practical problem-solving treat them as a statistical method.

    In reading the article you need to forget ideas about artificial intelligence and the like. This sounds like a standard net implemented in hardware, which means that training on larger datasets may be possible. However, this (as pointed out before) is the BIG problem --- training is difficult.

    Anyway... basically there are a lot of rather out-dated perceptions about the use of neural nets floating around. When it comes down to it, a NN is just a non-linear function.
  • Analog neural nets work best in special purpose application -- where what you want to do is to take a stream of input vectors, feed it into a MLP, and generate output vectors, ideally at the maximum rate that the analog circuits work. Or, if you're into power instead of speed, push down the power used by the net until you're just barely making the timing. This works the best when there's integrated sensors (like photoreceptors or a silicon cochlea) or actuators (micromachined stuff) on the chip, so you dispense with an A/D or a D/A by going analog as well.

    There are a few products out there that use analog nets just in this way -- there's one in your Logitech Marble Trackball, computing the motion vectors of a pseudo-random dot pattern on the ball in IR.

    But most problems out there actually don't fit in that nice little niche -- instead, the neural net is part of a signal processing chain, and the adjacent steps don't fit well in analog, so you need a DSP anyway. And once you have the DSP, it's usually a loss to put an analog neural net on the die instead of just adding another ALU to the DSP, since the rest of the system is digital anyway.

    One hope is this networking protocol, the address-event representation, that lets analog neural net chips communicate with each other in a very digital way, while being very efficient to implement in analog. See this paper [berkeley.edu] for details ...

  • Some company makes a single-chip collection of 64k 1-bit microprocessors. I'm pretty sure those were being sold back when 0.35 micron was the standard, so these days a million ought to be a reasonable target.
  • Does continuous voice recognition software use some sort of neural net technique to interpret the sounds? It seems that they do due to the way you "train" them to your voice. Though I could be wrong.

    -------------
    The following sentence is true.
  • by chadmulligan ( 87873 ) on Wednesday September 15, 1999 @09:53AM (#1679837)
    Objectively they've made great progress. The new architecture is very powerful and orders of magnitude faster than anything done previously... "2.4 giga connections per second" is quite a bit better than the 100 to 1000 claimed by previous hardware :-)

    The question, which others have commented on here, is what neural networks are especially good at. Certainly narrow things like character recognition, face recognition and so forth seem to be a natural. Picking data out of noise is also promising - IIRC IBM was trying to use neural nets for one of their ultra-huge storage technologies, where the signal-to-noise ratio on the magnetic heads is way too low for traditional encoding. Communication technology has essentially the same problem and will benefit.

    Now, will you trust a neural network to pilot a Boeing you're on? Arguably, you already do - witness the recent disaster with a Korean jet which was ruled a pilot suicide. I doubt that we'll see such a "general AI" application in the field soon. Using dozens of small neural networks for sharply defined functions on an airplane, or in a car, will be more useful and will be done earlier... Mercedes' A-Class car's electronic suspension already uses serious heuristics to stabilize the car in dangerous situations; this sort of thing will probably be one of the first applications of neural networks.

    Personally I think the term is somewhat of a misnomer. It's based on an early and too reductionistic view of how neurons were hypothesized to work in the brain. Sort of like the steam-era metaphors in Freud's work being superseded by information-age metaphors in psychology. And there's the possibility that neurons aren't the basic "neural" building blocks at all... Roger Penrose, in his books "The Emperor's New Mind" [amazon.com] and "Shadows of the Mind" [amazon.com], proposes that each neuron's behaviour is actually defined by millions or more of tiny quantum switches in each neuron's microtubules. His theory is very well argued but still controversial among orthodox researchers... I like it personally. If he's right, we'd need first to build a million-element quantum processor building block and then build useful neural networks out of millions of such blocks - and even a single block wouldn't be mathematically simulatable by a conventional processor, even at very low speeds!

    Frankly, I believe any "strong AI" applications for a neural network chip are out for the next decade or so. All this talk about pattern recognition is nice, but as soon as you get into symbolic processing - meaning as soon as "meaning" is involved - you get into uncharted territory. "Meaning" is an emergent, bottom-up quality rather than a top-down macrofunction in the human brain (or any animal brain for that matter).

  • Hey, I've been hunting down AI info all day... and i found the cog web page:
    http://www.ai.mit.edu/projects/cog/
  • >Is this the end to that problem? Pretty soon the software will learn to understand variations in language. >How close is Star Trek?

    What if it was here already, and it's just that you haven't had a chance to look around and notice it? I believe we are almost there; with the new propulsion drive that some uni has made, we are really getting somewhere...

    I better not start...

    Anyway, back to this chip: I can't wait to get one :)

    But somehow, on my salary, I don't think I'll be getting one till they're out of date...
