CAM-Brain: Artificial Self-Teaching Brain
lostkluster writes "Genobyte is developing Robokoneko, a kitten (at first computer-simulated) built on CAM-Brain technology, a self-teaching artificial brain with the goal of reaching billions of artificial neurons by the year 2001; as a first step, it will have 32,000 evolved neural network modules. The CAM-Brain project is even slated to enter the Guinness Book of Records as "Most Powerful Artificial Brain".
More news and info at Prof. Dr. Hugo de Garis's homepage (head of the Brain Builder Group at ATR), and at whatis.com. "
A better mouse trap? (Score:2)
If you've already got Aibo, (Score:1)
I think we've found the subject for our next flame war.
Whoa (Score:1)
Questions I have:
a) These CA Modules.. Are they discrete units? Are you just tossing a bunch of gates onto a chip and then connecting them randomly or what? Details, man.. All I wanna know is what you consider to be the "individual neuron".
b) How the hell are you gonna get a billion of these inside that little cat thing, and still have room for wiring, motors, etc. Build something bigger, like a good sized tiger that you can ride around.
---
Mirror (Score:2)
Only you can prevent the /. effect.
There's a European mirror at http://foobar.starlab.net/~degaris/ [starlab.net]
optimistic (Score:1)
Is this possible? (Score:1)
Storing human consciousness (Score:2)
Re:Whoa (Score:1)
How does this compare to Sony's dog? (Score:1)
Well, one more step towards Terminator 2.
i think it has nothing to do with aibo (Score:1)
Stop comparing them! =]
Hey, did they stop to think (Score:1)
Re:How does this compare to Sony's dog? (Score:1)
I also am of the opinion that this is big on talk, short on real technical info. I mean, the mechanical overview only points out the "shoulder" and "paw". Come on.. I'm not *that* dense!
Re:Terminator (Score:1)
You Can't stop AI Evolution... nor should we. (Score:2)
Probably not, did you? (Score:1)
Being (along with the rest of earthlife) laughably stupid, I wouldn't mind having someone think through difficult things and explain them to me. Just like I don't at all mind having a calculator divide 78646/427 for me. And I most certainly don't want to 'pull their plug' because they can calculate better than me.
Dr. Hugo de Garis and CAM's (Score:2)
I had an opportunity to speak with Dr. de Garis over a year ago at a party thrown by an acquaintance of mine [speakeasy.org] who had interviewed de Garis for a documentary on Nanotechnology and AI. I found Dr. de Garis intelligent, personable and amusing.
At the time he was rather pessimistic about the Robokoneko project, but mostly because of the cultural problems he was dealing with as a Britisher in Japan. However, he claimed that the artificial neuron work was proceeding well, even though they were doing it all with simulators. He predicted then that, before 2000, they would be creating silicon versions. From the information in the links it would seem that his prediction has come true. Only they are using FPGA chips instead of going to a foundry for CAM-specific VLSI.
It is interesting to note that Dr. de Garis has made incredible progress by following a path the mainstream AI community has largely discounted -- that of modeling real neurons and real brain structures. I wonder what will come out of his next collaborative development at Starlab in Brussels [atr.co.jp]? From his statements to me I would certainly hope he would find the living and working arrangements more congenial.
I do find it very interesting that he will be working with Lernout and Hauspie [lhs.com] (developers of Voice Recognition software). The spin-offs from that may be more important than the original research!
Jack
Re:You Can't stop AI Evolution... nor should we. (Score:1)
Techno-militia: "Shit, he's got a gun! We should..." Blue screen takes over cyborg...
Company for Cog? (Score:1)
Not that new... (Score:2)
Umm, no. The reason people stopped trying this is that (1) we can't model everything about the neuron (2) what we did try didn't work (3) we don't know how real neurons learn.
This is probably a big backpropagation net on a chip, so after 10,000 trials it will learn some things while forgetting everything else it learned before. If you ask connectionist people whether the brain is a big set of backprop nets and nothing else, they will say "no" (notable among them would be McClelland).
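The "forgetting everything else" failure mode (catastrophic forgetting) is easy to demonstrate on even the smallest gradient-trained unit. A toy sketch with NumPy — the tasks, variable names, and hyperparameters here are my own illustration, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, X, y, epochs=2000, lr=1.0):
    # Plain gradient descent on a single sigmoid unit (logistic regression),
    # i.e. the degenerate one-layer case of backprop.
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                      # dLoss/dz for cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

w, b = rng.normal(size=2) * 0.1, 0.0

# Task A: input [1,0] -> 1, input [0,1] -> 0
XA = np.array([[1.0, 0.0], [0.0, 1.0]]); yA = np.array([1.0, 0.0])
w, b = train(w, b, XA, yA)
acc_A_before = ((sigmoid(XA @ w + b) > 0.5) == yA).mean()

# Task B: the opposite mapping for [1,0], trained with no rehearsal of A
XB = np.array([[1.0, 0.0]]); yB = np.array([0.0])
w, b = train(w, b, XB, yB)
acc_A_after = ((sigmoid(XA @ w + b) > 0.5) == yA).mean()

print(acc_A_before, acc_A_after)  # task A is learned, then clobbered
```

Training on task B alone drives the shared weights to serve B, so accuracy on task A drops from perfect to chance: new trials overwrite old knowledge unless the old data is replayed, which is exactly the complaint above.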
The AIBOs Sony sells adapt their behavior parameters, but don't really learn. The modified AIBOs in RoboCup had some learning. For example, the team from CMU (which I worked on) had a vision system that would learn in a limited way.
Machine learning right now depends mostly on the fact that problems are well broken up... Large scale, full "perception -> action" systems have so far been simplistic in what they learned, slow, or largely unsuccessful. I'll believe results, not speculations.
Hard, Sobering Facts:
All they've evolved so far is some primitive motions in a simulator. Rodney Brooks & Co. did that on a real robot several years ago. It was by no means a trivial task.
It took Sony around 2 years to get a mobile quadruped working in the real world, after they already had a simulator for it in which it worked just fine.
Re:Dr. Hugo de Garis and CAM's (Score:2)
I went to a seminar by Dr. de Garis last year.
The gentleman is very intelligent and personable, as well as slightly mad. (I believe this is a prerequisite for working in his field.) Some of the main points in his speech:
BTW, he's not a Britisher. He's an Australian. Not that most Japanese care. He describes Japan as a harsh environment for any Westerner to work in, and says he's only still there because the funding is good.
Re:Storing human consciousness (Score:2)
- Hormones. Irrationality. It is not known how much of a positive and how much of a negative force this is in learning, but most people agree it's definitely a factor. Would you have decided on your own theory if it hadn't been for your personal animosity towards your opponent? Would you have spent *more* time thinking about logical stuff if you hadn't spent so much time daydreaming in love? Without these constraining forces a computer would behave radically differently than its original human *even at the very beginning*.
- Sleep and unconsciousness. These provide a "reset period" for the brain that is in some way important to it. I personally think that your subconscious has more of the brain to play with at that point and that that is where a lot of your intuitive and creative leaps come in.
- The input devices we are hooked up to: eyes, ears, mouth, nose, our bodies. A machine without all of these (or without their flaws) would evolve in a different manner than the original person by virtue of the "handicap". People adapt to their problems, and so would a machine. But in so adapting it would become a different person, at least to some extent.
- People would treat the computer differently than they would treat the original, and that has a profound effect on humans, so why not on a computer? The lack of hormones might alleviate this effect somewhat, though--I don't know.
--John
Re:You Can't stop AI Evolution... nor should we. (Score:2)
Where AI becomes stronger and more intelligent than us, and they take over.
Not if I can help it. You seem to be overestimating the human race's ability to put its 'best interests' before its wants. Specifically, I don't want AIs to take over; I'm pretty sure most of the world agrees, even if only out of fear of the unknown.
Well, those are very real possibilities, but it also cannot be prevented, nor should it be.
Why not? We have no obligation to help our little AI expand. If we don't like what it's doing, we should stomp on it.
We are going to create an AI neural system that will have the potential to far surpass our natural biological potential. That is fact.
No, it isn't. It is possible, maybe even likely, but it is certainly not a fact. Unless of course you don't include empirical evidence in your requirements for 'facts'.[0]
Due to its nature, it will produce major improvements for itself and its new offspring that it creates...without human intervention.
Again, baseless. We don't produce major improvements in ourselves, so why should a machine? Machinery is not inherently expandable and upgradable; complex organisms (like minds) do not take well to tinkering.
We are going to have to adapt to that in some way.
True. I vote for controlling it, and killing it if necessary. We are under no obligation to help it evolve, esp. if it runs counter to our interests.
[0]: I hate getting in flamewars about AI. Simply stated, there's more reason to believe it won't work than it will. Evidence--or something resembling it--comes from systems theory, biology, neuroscience, philosophy and even computer science itself. I don't have any references handy, unfortunately, but people should do their own research, anyway.
Re:Is this possible? (Score:1)
Sure. It's totally feasible if you use the right hardware. I remember back when de Garis was just starting his CAM-brain stuff, he was using my old research group [mit.edu]'s hardware, the CAM-8 [mit.edu] (which is probably where he got the "CAM" part of his project's name.) We did plenty of volumetric simulations on the CAM-8 with 2^24 sites in realtime, and that was back in 1994. Considering the tech advance since then, and that he's now using his own custom hardware, it sounds completely reasonable.
The problem is, at least back when he was still collaborating with us, his stuff just didn't work very well, and his ideas were kinda flaky. (At least in the opinion of the MIT undergrad who worked in both groups.) But hopefully he's worked things out since then. I wish him the best of luck.
Aside: There's something to be said for spatially organized computation, such as the CAM-brain. Fundamentally, the physics that we use to do calculation constrains all interactions to be uniform and local. It's sometimes easy to forget when we're writing software (which seems very ethereal) that the actual computation is constrained by physical laws that really do impose limits on speed and efficiency. So any calculation that is spatially organized (such as most physical simulations) is inherently parallelizable, especially on fine-grained hardware or SIMD machines.
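The "uniform and local" point can be made concrete with an elementary cellular automaton, the same family of machine the CAM hardware runs. A generic rule-110 sketch (my own illustration, nothing specific to the CAM-8 or CAM-Brain): every cell's next state depends only on its 3-cell neighbourhood, so one uniform lookup updates the whole array at once.

```python
import numpy as np

def ca_step(cells, rule=110):
    # Each cell's next state depends only on its 3-cell neighbourhood,
    # so the whole row updates in one uniform, embarrassingly parallel pass.
    left  = np.roll(cells, 1)    # wraparound (toroidal) boundary
    right = np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right   # neighbourhood as a 3-bit index
    table = (rule >> np.arange(8)) & 1         # rule number -> lookup table
    return table[idx]

row = np.zeros(16, dtype=int)
row[8] = 1                       # single seed cell
for _ in range(4):
    row = ca_step(row)
print(''.join('#' if c else '.' for c in row))
```

Because the update is the same lookup everywhere and touches only neighbours, it maps directly onto SIMD machines or fine-grained FPGA fabric: you just carve the array into tiles and exchange one-cell-wide borders.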
Considering the repercussions is a writer's job. (Score:1)
Seriously, though: there are a lot of technologies currently being researched which have disturbing implications --- mite-sized cameras which can move themselves around; plastic-eating biotech creatures; energy generation from radioactive waste; artificially grown organ replacements; etc, etc.
These are all being actively researched. Some will pan out in the near future, some will remain as mythical as the flying car. But either way, there's too much money, and too many people who think the technologies are cool, to stop them.
As for AI? It's hard to tell, because it's hard to imagine what the use value of these experiments are right now. Some of the side-effects are clear: neural net technology could make things like internet search engines actually usable, and a friend of mine was recently talking about an interesting neural-net tech possibility that could provide a cure for writer's block. But in general, it seems like playing
Maybe we have to know if we can do it. Maybe, as has been suggested in numerous science fiction novels, this is essentially evolution happening before our eyes, only we are creating the next step. Who knows? Don't you want to find out?
Re:Why was it given a Japanese name? (Score:1)
When do they go on sale? (Score:1)
"A cute little robotic pet that's fun to be with!"
And it mimics a real-life kitten? As long as they make it unbelievably cute, I'm hooked.
Re:You Can't stop AI Evolution... nor should we. (Score:1)