Technology

CAM-Brain: Artificial Self-Teaching Brain

lostkluster writes "Genobyte is developing Robokoneko, a robot kitten (at first, computer-simulated) built around CAM-Brain technology, a self-teaching artificial brain with the goal of reaching a billion artificial neurons by the year 2001; as a first step, it will have 32,000 evolved neural network modules. The CAM-Brain project is even set to enter the Guinness Book as "Most Powerful Artificial Brain". More news and info at Prof. Dr. Hugo de Garis's homepage (he heads the Brain Builder Group at ATR), and at whatis.com. "
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • So is this another way of escalating the cat-and-mouse game? You don't need a smelly cat anymore. You can just turn on the kitty and say "find me a mouse and remove it from the house." Gotta love technology.
  • ... then this is perfect. Of course, a Robokoneko looks like it could kick Aibo's metal ass. Then again, any virtual robot is going to seem cool when compared to one that actually exists.
    I think we've found the subject for our next flame war.
  • by Otto ( 17870 )
    The original aim of our "CAM-Brain Project", as stated at its beginning in 1993, was to build an artificial brain with a billion artificial neurons, by the year 2001, using evolved cellular automata (CA) based neural circuit modules. In reality, 6.5 years later, this number will be maximum 75 million neurons and 64,000 modules. These CA based neural network modules grow and evolve at electronic speeds inside special FPGA based hardware called a CAM-Brain Machine (CBM), which can update CA cells at a rate of 130 Billion a second, and can evolve a neural net module in about 1 second.


    Questions I have:

    a) These CA modules... are they discrete units? Are you just tossing a bunch of gates onto a chip and then connecting them randomly, or what? Details, man... All I wanna know is what you consider to be the "individual neuron".

    b) How the hell are you gonna get a billion of these inside that little cat thing and still have room for wiring, motors, etc.? Build something bigger, like a good-sized tiger that you can ride around. :-)
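
    My naive guess at (a), purely as a toy sketch (my own invention, not whatever CA model the CBM actually uses): treat a module as a small grid of CA cells, mark a couple of cells as neuron bodies, and let the rest act as grown "wire" cells that relay 1-bit signals to their neighbours. The "individual neuron" would then be a designated cell plus whatever paths grow out of it. Something like:

    # Toy sketch only (invented for illustration, not the CBM's actual rules).
    # A "module" is a small grid of cells.  NEURON cells sum the signals on
    # their four neighbours and fire past a threshold; WIRE cells just relay
    # whatever signal reached them on the previous step.
    BLANK, WIRE, NEURON = 0, 1, 2

    def step(types, state):
        """One synchronous CA update: every cell looks only at its 4 neighbours."""
        h, w = len(types), len(types[0])
        new = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                nbrs = [state[y2][x2]
                        for y2, x2 in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= y2 < h and 0 <= x2 < w]
                if types[y][x] == WIRE:
                    new[y][x] = 1 if any(nbrs) else 0       # relay any incoming signal
                elif types[y][x] == NEURON:
                    new[y][x] = 1 if sum(nbrs) >= 2 else 0  # fire on enough input
        return new

    # A 3x5 module: two wire paths feeding a single neuron cell.
    types = [[WIRE,  WIRE,  WIRE,   BLANK, BLANK],
             [BLANK, BLANK, NEURON, WIRE,  WIRE],
             [WIRE,  WIRE,  WIRE,   BLANK, BLANK]]
    state = [[1, 0, 0, 0, 0],        # inject a pulse at the left end of each path
             [0, 0, 0, 0, 0],
             [1, 0, 0, 0, 0]]

    for t in range(6):
        state = step(types, state)
        print(t, state[1][2])        # the neuron cell fires once both pulses reach it

    If the real thing is anything like this, then "evolving a module" would mean searching over which cells are wire and where the paths run. But that's exactly the kind of detail I'd like them to spell out.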


    ---
  • by rkent ( 73434 )

    Only you can prevent the /. effect.

    There's a European mirror at http://foobar.starlab.net/~degaris/ [starlab.net]

  • They sound pretty optimistic. They have yet to build a 32,000-'neuron' unit, and they already think they can do a one-billion unit within a year of the 32,000 one?
  • 32000*1150~36000000 = ALOT of braincells to simulate.
  • This brings to mind the science-fiction idea of storing human consciousness via mechanical means, and having that machine consciousness interact with the world (I'm thinking of Greg Bear's Eon series, for example). Would the billion-neuron model be strong enough to start this line of enquiry, or is there still a lot about human neural mapping that we don't understand?
  • by lk ( 50892 )
    They're only going to keep the motor functions on the kitten itself and use infrared to link the remote (bigger) brain with its body...
  • I don't really remember the specs for Sony's robodog (what was his name?), but this thing sounds, um, very superior. Besides, I don't think Sony's dog could "learn", right?

    Well, one more step towards Terminator 2 :)
  • Their project is aimed at making a "real"-ish artificial brain, one that's capable of learning... I think it's way more serious than AIBO.

    Stop comparing them! =]
  • Sorry, but I'm still under the impression that these scientists were preoccupied with the technology, not with the repercussions. Don't get me wrong, this kitten is cute as can be, but has anyone here ever heard of Wintermute? The Matrix? Terminator? I'm not sure I want, within a few decades, neural networks that are much more complex than the human brain (like the website on the link says). I want to still be able to pull the plug before they get smarter than us. Call me paranoid; it doesn't mean they aren't out to get me.
  • Well, Sony's dog did learn. That was the appeal of it: it could learn to "play", it would learn how to move around and return to its charging bay when it was "hungry". Similar, I think, to the famous MIT turtles, just bigger and shinier and taking less intelligence to make "go".
    I also am of the opinion that this is big on talk, short on real technical info. I mean, the mechanical overview just points out which bits are the "shoulder" and the "paw". Come on... I'm not *that* dense!
  • Agreed! We need more laser fights.
  • A comment was made about movies like The Matrix, Terminator, etc., where AI becomes stronger and more intelligent than us and takes over. Well, those are very real possibilities, but it also cannot be prevented, nor should it be. We are going to create an AI neural system that will have the potential to far surpass our natural biological potential. That is fact. Due to its nature, it will produce major improvements for itself and the new offspring that it creates... without human intervention. We are going to have to adapt to that in some way. If the potential for technology is there, but the only obstacle is fear of what it will do... that obstacle will be surpassed ultimately.
  • So you want to kill everyone that's smarter than you?

    Being (along with the rest of earthlife) laughably stupid, I wouldn't mind having someone think through difficult things and explain them to me. Just like I don't at all mind having a calculator divide 78646/427 for me. And I most certainly don't want to 'pull their plug' because they can calculate better than me.
  • I had an opportunity to speak with Dr. de Garis over a year ago at a party thrown by an acquaintance of mine [speakeasy.org] who had interviewed de Garis for a documentary on Nanotechnology and AI. I found Dr. de Garis intelligent, personable and amusing.

    At the time he was rather pessimistic about the Robokoneko project, but mostly because of the cultural problems he was dealing with as a Britisher in Japan. However, he claimed that the artificial neuron work was proceeding well, even though they were doing it all with simulators. He predicted then that, before 2000, they would be creating silicon versions. From the information in the links it would seem that his prediction has come true. Only they are using FPGA chips instead of going to a foundry for CAM-specific VLSI.

    It is interesting to note that Dr. de Garis has made incredible progress by following a path the mainstream AI community has largely discounted -- that of modeling real neurons and real brain structures. I wonder what will come out of his next collaborative development at Starlab in Brussels [atr.co.jp]? From his statements to me I would certainly hope he would find the living and working arrangements more congenial.

    I do find it very interesting that he will be working with Lernout and Hauspie [lhs.com] (developers of Voice Recognition software). The spin-offs from that may be more important than the original research!

    Jack

  • Terminator?? The Matrix?? C'mon now, you know the truth. Bill Gates came from the future, where Linux drives behemoth machines towards cleansing the Earth. He's come back to the past to release Windows... a technology CERTAIN to always be less than its creator.

    Techno-militia: "Shit, he's got a gun! We should..." Blue screen takes over cyborg...
  • Reading the detailed paper [atr.co.jp], I can't help but see this "cat/brain" as essentially an implementation of Rodney Brooks's subsumption architecture. Maybe the "cat" will be capable of a few reactive behaviors, but it'll be just as brainless as its technical soulmate Cog. The real breakthrough in making an artificial brain will be when we figure out how to do it (i.e. what the architecture is), not when Moore's law brings the number of neurons or processing power within reach.
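
    For anyone who hasn't run into subsumption before, here's a minimal toy sketch of the idea (my own made-up example, not Brooks's code and nothing to do with Cog): a stack of simple reactive behaviours where the first higher-priority layer that has something to say suppresses everything below it.

    # Toy sketch of a Brooks-style subsumption controller (invented example).
    # Each layer is a reactive rule mapping sensor readings to a motor command
    # or None; layers are ordered from highest priority to lowest.
    def avoid_obstacle(sensors):
        # Highest priority: back away if something is too close.
        return "reverse" if sensors["range"] < 0.2 else None

    def seek_light(sensors):
        # Middle priority: steer toward the brighter side.
        if sensors["light_left"] > sensors["light_right"]:
            return "turn_left"
        if sensors["light_right"] > sensors["light_left"]:
            return "turn_right"
        return None

    def wander(sensors):
        # Lowest priority: default behaviour when nothing else fires.
        return "forward"

    LAYERS = [avoid_obstacle, seek_light, wander]

    def control(sensors):
        for layer in LAYERS:
            command = layer(sensors)
            if command is not None:              # first layer with an opinion wins
                return command

    print(control({"range": 0.1, "light_left": 0.5, "light_right": 0.9}))  # reverse
    print(control({"range": 0.8, "light_left": 0.5, "light_right": 0.9}))  # turn_right

    Note there's no world model and no learning anywhere in that loop, which is why "a few reactive behaviors" is about the most you should expect.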
  • "It is interesting to note that Dr. de Garis has made incredible progress by following a path the mainstream AI community has largely discounted -- that of modeling real neurons and real brain structures. I wonder what will come out of his next collaborative development at Starlab in Brussels? From his statements to me I would certainly hope he would find the living and working arrangements more congenial."

    Umm, no. The reason people stopped trying this is that (1) we can't model everything about the neuron, (2) what we did try didn't work, and (3) we don't know how real neurons learn.

    This is probably a big backpropagation net on a chip, thus after 10,000 trials it will learn some stuff, while forgetting everything else that it learned before. If you ask connectionist people if the brain is a big set of backprop nets and nothing else, they will say "no" (notably among them would be McClelland).
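
    To make the forgetting point concrete, here's a tiny made-up example (invented data, nothing to do with the CBM): a single logistic unit trained by plain gradient descent on one task, then on a second, conflicting task with no rehearsal, loses most of what it knew about the first.

    # Toy illustration of catastrophic forgetting (invented data, not the CBM).
    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(feature):
        # Task A keys on feature 0, task B on feature 1.
        X = rng.normal(size=(300, 2))
        y = (X[:, feature] > 0).astype(float)
        return X, y

    def train(w, b, X, y, epochs=500, lr=0.5):
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
            w = w - lr * X.T @ (p - y) / len(y)      # logistic-loss gradient steps
            b = b - lr * np.mean(p - y)
        return w, b

    def accuracy(w, b, X, y):
        return np.mean(((X @ w + b) > 0) == y)

    Xa, ya = make_task(0)
    Xb, yb = make_task(1)

    w, b = np.zeros(2), 0.0
    w, b = train(w, b, Xa, ya)
    print("task A accuracy after training on A:", accuracy(w, b, Xa, ya))
    w, b = train(w, b, Xb, yb)                        # no rehearsal of task A
    print("task A accuracy after training on B:", accuracy(w, b, Xa, ya))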

    The AIBOs Sony sells adapt their behavior parameters, but don't really learn. The modified AIBOs in RoboCup had some learning. For example, the team from CMU (which I worked on) had a vision system that would learn in a limited way.

    Machine learning right now depends mostly on the fact that problems are well broken up... Large scale, full "perception -> action" systems have so far been simplistic in what they learned, slow, or largely unsuccessful. I'll believe results, not speculations.

    Hard, Sobering Facts:
    All they've evolved so far is some primitive motions in a simulator. Rodney Brooks & Co. did that on a real robot several years ago. It was by no means a trivial task.

    It took Sony around 2 years to get a mobile quadruped working in the real world, after they already had a simulator for it in which it worked just fine.
  • I went to a seminar by Dr. de Garis last year.

    The gentleman is very intelligent and personable, as well as slightly mad. (I believe this is a prerequisite for working in his field.) Some of the main points in his speech:

    • There are ten billion neurons in the human brain. We regularly sell machines with 8 billion bytes of RAM for not that much money. (This was last year, remember.) It doesn't take much thought to realise that the capacity is affordable. The only problem is interconnections between neurons.
    • The easiest way to produce a neural net on that scale is to grow neural paths via a genetic algorithm within an array of fabricated cells. This has the added benefit that you can "work around" mis-fabricated cells. (Silicon fabrication is still imperfect.) There's a toy sketch of this idea just after this list.
    • Robokoneko is still very much a toy. They planned out the motions in advance (i.e. the kitten is working on pre-written reflexes) and are basically teaching the neural system to run the algorithms. This is fairly unexciting except as a proof of concept that a "real" implementation of CAM can learn at the appropriate rates.
    • He did speak at length with the FPGA foundry and told them what he would need in order to implement CAM. Reports were very encouraging, and it looks like that project succeeded.
    • Eventually, we will have the technology to build huge neural systems, the size of asteroids. There will be factions of humans who think this is a good idea and factions of those who don't. This will result in war on an unprecedented scale. (BTW, he brought a copy of one of Asimov's Foundation books in with him. Maybe he was reading it a bit too much?)
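
    A rough toy version of that second point, purely my own invention (the grid size, move encoding and fitness function are made up, and this is nothing like the real CAM-Brain growth rules): a genetic algorithm evolves a short "growth program" that runs a path across an array of cells while routing around cells marked as mis-fabricated.

    # Toy GA sketch: evolve a move sequence that grows a path from START to
    # TARGET across a cell grid while avoiding a band of "faulty" cells.
    import random

    SIZE = 16
    START, TARGET = (0, 0), (15, 15)
    FAULTY = {(x, 7) for x in range(4, 14)}       # mis-fabricated cells to route around
    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # genome symbols: E, W, N, S

    def grow(genome):
        """Run the growth program; return the final cell and a fault penalty."""
        x, y = START
        penalty = 0
        for g in genome:
            dx, dy = MOVES[g]
            x = min(max(x + dx, 0), SIZE - 1)
            y = min(max(y + dy, 0), SIZE - 1)
            if (x, y) in FAULTY:
                penalty += 1                      # stepped on a bad cell
        return (x, y), penalty

    def fitness(genome):
        (x, y), penalty = grow(genome)
        dist = abs(x - TARGET[0]) + abs(y - TARGET[1])
        return -(dist + 5 * penalty)              # closer is better, faults are costly

    def evolve(pop_size=60, genome_len=40, generations=200):
        pop = [[random.randrange(4) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 4]        # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(genome_len)
                child = a[:cut] + b[cut:]         # one-point crossover
                if random.random() < 0.3:         # occasional point mutation
                    child[random.randrange(genome_len)] = random.randrange(4)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("best fitness (0 means it reached the target cleanly):", fitness(best))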

    BTW, he's not a Britisher. He's an Australian. Not that most Japanese care. He describes Japan as a harsh environment for any Westerner to work in, and says he's only still there because the funding is good.

  • Even once we get to the point where we can extract the weights of the neurons, and simulate all the important parts of their physical properties (including propagation speed), there are still a number of things that make a neural net's thinking much different (I didn't say necessarily better) from a human's. These include:

    - Hormones. Irrationality. It is not known how much of a positive and how much of a negative force this is in learning, but most people agree it's definitely a factor. Would you have decided on your own theory if it hadn't been for your personal animosity towards your opponent? Would you have spent *more* time thinking about logical stuff if you hadn't spent so much time daydreaming in love? Without these constraining forces a computer would behave radically differently than its original human *even at the very beginning*.

    - Sleep and unconsciousness. These provide a "reset period" for the brain that is in some way important to it. I personally think that your subconscious has more of the brain to play with at that point and that that is where a lot of your intuitive and creative leaps come in.

    - The input devices we are hooked up to: eyes, ears, mouth, nose, our bodies. A machine without all of these (or without their flaws) would evolve in a different manner than the original person by virtue of the "handicap". People adapt to their problems, and so would a machine. But in so adapting it would become a different person, at least to some extent.

    - People would treat the computer differently than they would treat the original, and that has a profound effect on humans, so why not on a computer? The lack of hormones might alleviate this effect somewhat, though--I don't know.

    --John
  • This is so wrong, I don't know where to start.

    Where AI becomes stronger and more intelligent than us and takes over.

    Not if I can help it. You seem to be overestimating the human race's ability to put its 'best interests' before its wants. Specifically, I don't want AIs to take over; I'm pretty sure most of the world agrees, even if only out of fear of the unknown.

    Well, those are very real possibilities, but it also cannot be prevented, nor should it be.

    Why not? We have no obligation to help our little AI expand. If we don't like what it's doing, we should stomp on it.

    We are going to create an AI neural system that will have the potential to far surpass our natural biological potential. That is fact.

    No, it isn't. It is possible, maybe even likely, but it is certainly not a fact. Unless of course you don't include empirical evidence in your requirements for 'facts'.[0]

    Due to its nature, it will produce major improvements for itself and the new offspring that it creates... without human intervention.

    Again, baseless. We don't produce major improvements in ourselves, so why should a machine? Machinery is not inherently expandable and upgradable; complex organisms (like minds) do not take well to tinkering.

    We are going to have to adapt to that in some way.

    True. I vote for controlling it, and killing it if necessary. We are under no obligation to help it evolve, esp. if it runs counter to our interests.

    [0]: I hate getting in flamewars about AI. Simply stated, there's more reason to believe it won't work than it will. Evidence--or something resembling it--comes from systems theory, biology, neuroscience, philosophy and even computer science itself. I don't have any references handy, unfortunately, but people should do their own research, anyway.
  • 32000*1150~36000000 = ALOT of braincells to simulate.

    Sure. It's totally feasible if you use the right hardware. I remember back when de Garis was just starting his CAM-Brain stuff, he was using my old research group [mit.edu]'s hardware, the CAM-8 [mit.edu] (which is probably where he got the "CAM" part of his project's name). We did plenty of volumetric simulations on the CAM-8 with 2^24 sites in realtime, and that was back in 1994. Considering the tech advances since then, and that he's now using his own custom hardware, it sounds completely reasonable.

    The problem is, at least back when he was still collaborating with us, his stuff just didn't work very well, and his ideas were kinda flaky. (At least in the opinion of the MIT undergrad who worked in both groups.) But hopefully he's worked things out since then. I wish him the best of luck.

    Aside: There's something to be said for spatially organized computation, such as the CAM-Brain. Fundamentally, the physics that we use to do calculation constrains all interactions to be uniform and local. It's sometimes easy to forget when we're writing software (which seems very ethereal) that the actual computation is constrained by physical laws that really do impose limits on speed and efficiency. So any calculation that is spatially organized (such as most physical simulations) is inherently parallelizable, especially on fine-grained or SIMD hardware.
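
    A trivial, made-up illustration of what I mean (Conway's Life, nothing CAM-specific): one synchronous step of a uniform, purely local rule applied to every site at once. Since each site reads only its immediate neighbours, the whole update is a single data-parallel operation, which is exactly what fine-grained or SIMD hardware is good at.

    # One step of Conway's Life on a toroidal grid: uniform, local, data-parallel.
    import numpy as np

    def life_step(grid):
        # Sum the 8 neighbours of every site at once using shifted copies.
        neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

    rng = np.random.default_rng(0)
    grid = (rng.random((4096, 4096)) < 0.5).astype(np.uint8)  # 2^24 sites, CAM-8 scale
    for _ in range(10):
        grid = life_step(grid)
    print("live cells after 10 steps:", int(grid.sum()))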

  • Repercussions are for science-fiction writers. :)

    Seriously, though, there are a lot of technologies currently being researched which have disturbing implications: mite-sized cameras which can move themselves around; plastic-eating biotech creatures; energy generation from radioactive waste; artificially grown organ replacements; etc., etc.

    These are all being actively researched. Some will pan out in the near future, some will remain as mythical as the flying car. But either way, there's too much money, and too many people who think the technologies are cool, to stop them ... nor would it necessarily be a good thing to do so: for each of the nightmarish uses I can imagine all of these things being put to, there are an equal number of incredibly good uses, as well.

    As for AI? It's hard to tell, because it's hard to imagine what the use value of these experiments is right now. Some of the side-effects are clear: neural net technology could make things like internet search engines actually usable, and a friend of mine was recently talking about an interesting neural-net tech possibility that could provide a cure for writer's block. But in general, it seems like playing ... which probably makes the potential for a mistake more scary, because it's a risk we don't _have_ to take ... except that maybe we do.

    Maybe we have to know if we can do it. Maybe, as has been suggested in numerous science fiction novels, this is essentially evolution happening before our eyes, only we are creating the next step. Who knows? Don't you want to find out?
  • Maybe because the project is actually in Japan? Quote from the webpage: "I head the Brain Builder Group at ATR, a research lab in Kyoto, Japan." And in case you didn't notice, that URL is in Japan.
  • I realize it's a long way off yet, but as a profound cat-lover and techno-geek, I'd kill to get one of these.

    "A cute little robotic pet that's fun to be with!"
    And it mimics a real-life kitten? As long as they make it unbelievably cute, I'm hooked. :)
