Software

Cutting-Edge AI Projects?

Xeth writes "I'm a consultant with DARPA, and I'm working on an initiative to push the boundaries of neuromorphic computing (i.e. artificial intelligence). The project is designed to advance ideas on all fronts, including measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence. I'm conducting a wide search of these fields, but I wanted to know if anyone in this community knows of neat projects along those lines that I might overlook. Maybe you're working on a project like that and want to talk it up? No promises (seriously), but interesting work will be brought to the attention of the project manager I'm working with. If you want to start up a dialog, send me an email, and we'll see where it goes. I'll also be reading the comments for the story."
This discussion has been archived. No new comments can be posted.
  • by ascendant ( 1116807 ) <ascendant512+slashdot@gmail.com> on Monday June 23, 2008 @09:33PM (#23912143) Homepage Journal

    Just a small company, I'm sure no-one's noticed it.
    Cyberdyne Systems [wikipedia.org]

    • by Anonymous Coward on Monday June 23, 2008 @10:12PM (#23912399)

      Hello,

      I'm studying theoretical computer science, the kind that's often just called math (things like complexity theory, lambda calculus, even linear logic...).

      I've always loved AI, but I was often told that there is no research on it that is truly theoretical; that it's more like a collection of applied domains, like neural network learning or computer vision.

      So, what is the most theoretical aspect of AI research that you know? Or put otherwise, is there a branch of AI research where you prove theorems rather than write code?

      I know it's slightly off topic, but people working on that kind of thing are probably wondering if they should mention it here (wondering if it interests DARPA or not).

      • by debatem1 ( 1087307 ) on Monday June 23, 2008 @10:41PM (#23912583)
        A lot of the older AI research is pure theory, but in the last 20 years or so it has been driven by the realization that we don't really have the tools to meet some of the early expectations of the field. If you are interested in the theoretical foundations of AI, though, you might want to look into compression, data representation, and computability, as well as general information theory. Claude Shannon's work would be a good place to start, and is cited frequently enough to give you a guided tour through AI.
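
        To make the information-theoretic starting point mentioned above concrete, here is a minimal Python sketch that estimates the Shannon entropy of a symbol stream; the example strings are arbitrary, and this is only an illustration of the idea, not anything from the parent post.

        import math
        from collections import Counter

        def shannon_entropy(symbols):
            """H(X) = -sum p(x) log2 p(x), estimated from symbol frequencies."""
            counts = Counter(symbols)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        # A uniform stream carries more information per symbol than a skewed one.
        print(shannon_entropy("abcdabcdabcd"))   # 2.0 bits per symbol
        print(shannon_entropy("aaaaaaaaaaab"))   # ~0.41 bits per symbol
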
      • Re: (Score:3, Informative)

        by dominious ( 1077089 )

        Or put otherwise, is there a branch of AI research where you prove theorems rather than write code?
        It is called Automated Reasoning [wikipedia.org], and there are already theorem provers out there, like Otter [unm.edu] or Prover9 [unm.edu].
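
        To make the theorem-proving angle concrete, here is a minimal, hedged sketch of propositional resolution in Python (the core inference rule behind provers like Otter and Prover9, though those systems work in full first-order logic and are vastly more sophisticated). The example clauses are invented for illustration.

        # Minimal propositional resolution: a clause is a frozenset of literals,
        # and a literal is a (name, polarity) pair. To prove a goal, add its
        # negation and search for the empty clause (a contradiction).

        def resolve(c1, c2):
            """Yield every resolvent of two clauses."""
            for (name, pol) in c1:
                if (name, not pol) in c2:
                    yield frozenset((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))

        def entails(clauses, goal):
            clauses = set(clauses) | {frozenset([(goal[0], not goal[1])])}
            while True:
                new = set()
                for a in clauses:
                    for b in clauses:
                        if a is not b:
                            for r in resolve(a, b):
                                if not r:          # empty clause derived: contradiction
                                    return True
                                new.add(r)
                if new <= clauses:                 # nothing new: goal not entailed
                    return False
                clauses |= new

        # Example: from P and (P -> Q), i.e. the clause {~P, Q}, derive Q.
        kb = [frozenset([("P", True)]), frozenset([("P", False), ("Q", True)])]
        print(entails(kb, ("Q", True)))            # True
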
      • Re: (Score:2, Interesting)

        by xtracto ( 837672 ) *

        This is an interesting post. As far as I knew, things like logic are in some way part of AI. One of my PhD supervisors is a monster at logic, and he has published in the AI Journal [Elsevier] (one of the most prestigious journals for AI).

        I think a lot of research on agents is related to logic, including but not limited to CTL, ATL, and first-order logic, combined with game theory.

        What I would suggest is looking at academia, mostly in Europe (the UK, the Netherlands [they are good at theoretical research], Germany). If yo

    • Lots of words, nothing said.
      • by Xeth ( 614132 )
        Well, yes, I'm asking a question, not offering an answer. Is there something specific you'd like me to clarify?
        • by Plutonite ( 999141 ) on Tuesday June 24, 2008 @04:23AM (#23914251)

          Possibly: what exactly the research on self-awareness/sentient learning systems comprises for you guys. The reason I bring this up is that I left AI as a main research field because even the most flexible research goals set by academia today are just tiny steps of applicative advances in statistical inference and mathematical techniques in general. Not that these things are not useful (machine learning is quite amazing, actually), but the initial goals have little to do with all this. We have surpassed brains in many ways, but the massively parallel brain does not even "learn", let alone think (which is what we want), in the manner on which any AI system today is based. And yes, that includes adaptive systems that change their own set of heuristics.

          I spent a lot of my time thinking about neuroscience and reading psychology, and while I slowly moved towards rationalizing certain things, the main obstacles to what I needed to know were deeply biological. How exactly does the mind "light up" certain areas of memory while recalling things (sounds, sights, etc.) stored nearby? How "randomized" is this? And how can it be represented mathematically (on a von Neumann architecture)? Is there ANYTHING we do that is not entirely memory based (even arithmetic seems to stem from adaptive memory)? Why do we need to sleep, and what part of the whole "algorithm", if there is one, is dependent on this sleeping business? What exactly does it change in our reasoning?

          If we knew precisely some good answers, rather than the guesswork in literally all major textbooks, we could begin to model something useful and perhaps introduce much more powerful intelligence with all we have developed in NNs, probabilistic systems, etc. I think once sentience is conquered, human beings are going to be immediately made inferior. It's just this abstraction business that is so damn complicated.

  • Is it just me? (Score:4, Insightful)

    by Anonymous Coward on Monday June 23, 2008 @09:36PM (#23912163)

    The project is designed to advance ideas on all fronts, including measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence.


    Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?
    • Re:Is it just me? (Score:5, Interesting)

      by Xeth ( 614132 ) on Monday June 23, 2008 @10:47PM (#23912621) Journal
      I suspect that any assurances from me will mean little and less (you seem to have a well-defined opinion about what DARPA does and why), but I think that the ideas I'm pursuing here are sufficiently general that it would be foolish to shy away from them on the grounds that they might be used for some military application. You could say the very same about any advanced computing device.
    • Give me a break (Score:5, Interesting)

      by Louis Savain ( 65843 ) on Monday June 23, 2008 @10:50PM (#23912637) Homepage

      Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?

      Like it or lump it, you are in this boat with everyone else. If AI is solved, it will be used for good and evil. If your country does not use it for evil (extremely doubtful), somebody else's country will. Better yours than theirs. What I mean is that true AI will be an extremely powerful thing; if any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass. I don't think you'd like that very much.

      Having said that (and to get back on topic), I have been working on a general AI project called Animal [rebelscience.org] for some time. Animal is biologically inspired. It attempts to use a multi-layer spiking neural network to learn how to play chess from scratch using sensors, effectors, and a motivation mechanism based on reward and punishment. It is based on the premise that intelligence is essentially a temporal signal-processing phenomenon [rebelscience.org]. I just need some funding. The caveat is that my ideas are out there big time, and there is a bunch of people in cyberspace who think I am a kook. LOL. But hey, send me some money anyway. You never know. :-D
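
      For readers who haven't met spiking models, here is a minimal, hedged Python sketch of a single leaky integrate-and-fire neuron, just to illustrate the kind of temporal signal processing the parent post is describing. It is not code from the Animal project, and every parameter is an arbitrary placeholder.

      import random

      def lif_neuron(input_spikes, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0, w=0.3):
          """Leaky integrate-and-fire: the membrane potential leaks toward rest,
          jumps by w for each incoming spike, and emits a spike on crossing threshold."""
          v, out = v_reset, []
          for spike in input_spikes:
              v += (-v / tau) * dt      # leak toward the resting potential
              v += w * spike            # integrate the incoming spike
              if v >= v_thresh:         # fire and reset
                  out.append(1)
                  v = v_reset
              else:
                  out.append(0)
          return out

      random.seed(0)
      train = [1 if random.random() < 0.4 else 0 for _ in range(50)]
      print(sum(lif_neuron(train)), "output spikes from", sum(train), "input spikes")
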

      • Humanity's Problem (Score:2, Insightful)

        by Anonymous Coward

        If your country does not use it for evil (extremely doubtful), somebody else's country will. Better yours than theirs.

        Yes, that is the precise kind of thinking that demonstrates why mankind deserves to be wiped off the planet. Man, what a wonderful place the universe would be without this single species.

        • by Culture20 ( 968837 ) on Tuesday June 24, 2008 @06:56AM (#23914973)
          Lions kill cubs sired by other lions. Penguins and monkeys steal babies from other parents when their own babies die prematurely. Ants wage war. Almost all female mantises engage in cannibalism. We have no monopoly on evil. If you believe in a struggle between good and evil, humans are in a unique position to clean things up; we understand we're evil and can admit it.
      • if any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass.

        I can think of one country [wikipedia.org] that wouldn't kick the US's ass (in a military confrontation) or anyone else's if its citizens solved the Strong AI problem.
      • if any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass.
        I see this line of thinking as very dangerous. Many European countries, including the one I live in, are certainly able to obtain nuclear weapons but simply refuse to do so. So, what if "bad guys" (say, terrorists) detonate a nuclear bomb here? First, society is reasonably certain our military is not *humiliating* civilians in other countries by randomly killing (especially children) or
      • by Cheesey ( 70139 )

        Well, good luck. But I can see why they think you're a kook. This AI stuff is a bit too close to "free energy" and "cold fusion" for my liking; decades of research and no progress to speak of, wild promises of what *might* happen if all the problems were magically solved, and an army of AI believers who won't let go of the dream. That's not to say you can't do it, just that lots of smart people have already tried and failed, which is generally a bad sign.

        But AI has already succeeded in some ways. It's contr

      • Louis, could you send me an email please, j@ww.com

        thanks !

    • by nurb432 ( 527695 )

      Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?
      Why not? Everything else is.
  • Yea, lots (Score:5, Funny)

    by mlwmohawk ( 801821 ) on Monday June 23, 2008 @09:40PM (#23912195)

    Yea, I have lots of ideas and things I've been working on.

    Fund me! :-)

  • by markybob ( 802458 ) on Monday June 23, 2008 @09:40PM (#23912197)
    If DARPA is now so desperate as to seek out totally random and unknown readers of slashdot...my god the US is screwed.
    • by Anonymous Coward on Monday June 23, 2008 @09:44PM (#23912215)

      "If DARPA is now so desperate as to seek out totally random and unknown readers of slashdot...my god the US is screwed."

      Don't be an idiot; genius and smarts reside in unexpected places. I'm sure many slashdotters have come across some very smart people.

      • I'm sure many slashdotters have come across some very smart people.

        We have, but not on slashdot.

    • by stor ( 146442 )

      If DARPA is now so desperate as to seek out totally random and unknown readers of slashdot...my god the US is screwed.
      Huh? Slashdot is the most appropriate place on the Internet to search for Artificial Intelligence.

      -Stor

  • by edwebdev ( 1304531 ) on Monday June 23, 2008 @09:46PM (#23912233)
    ... I had a totally sweet artificial intelligence lead, but I already told China about it, and they said I shouldn't tell anyone else.

    :-/
  • Games (Score:2, Insightful)

    by garphik ( 996984 )
    Games? It is the best scratch pad for AI experiments.
    • by julesh ( 229690 )

      Games? It is the best scratch pad for AI experiments.

      Bollocks. Very few games have AI that's even approximately interesting. The most advanced techniques in common use are things like map-navigation and obstacle-avoidance algorithms that were basically mastered by the robotics community in the late 80s and early 90s.

      Show me a game that does something truly novel in terms of AI, and I'll be impressed. I don't see any, though.
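
      For context, the kind of thing that was "basically mastered" back then looks roughly like the following hedged, minimal A* grid-search sketch in Python; the grid and costs are made up, and any real game engine adds waypoint graphs, smoothing, and steering on top.

      import heapq

      def astar(grid, start, goal):
          """A* on a 4-connected grid; cells containing 1 are walls."""
          def h(p):                     # Manhattan-distance heuristic
              return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
          frontier = [(h(start), 0, start, [start])]
          seen = set()
          while frontier:
              _, cost, pos, path = heapq.heappop(frontier)
              if pos == goal:
                  return path
              if pos in seen:
                  continue
              seen.add(pos)
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  r, c = pos[0] + dr, pos[1] + dc
                  if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                      heapq.heappush(frontier,
                                     (cost + 1 + h((r, c)), cost + 1, (r, c), path + [(r, c)]))
          return None                   # no route around the obstacles

      level = [[0, 0, 0, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 0]]
      print(astar(level, (0, 0), (2, 0)))   # walks around the wall in row 1
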

    • by Ihmhi ( 1206036 )

      The AI in games is not as free-thinking as you might think.

      AIs, for instance, cannot figure out how a map works in a first-person shooter entirely on their own. Bots in older games (or on maps without waypoints) will often walk into a wall, stop, get their bearings, and then move in another direction. I loved watching Foxbots in TFC just stand around on a CTF map, walking in circles around the flag.

      In Half Life 2, the enemy AI runs on paths. There are multiple plotted paths, and it follows them and

  • don't call it Skynet.
  • An obvious one. (Score:5, Informative)

    by v(*_*)vvvv ( 233078 ) on Monday June 23, 2008 @09:53PM (#23912289)

    Numenta [numenta.com]

    It's mainly a teaching-plus-learning framework for a system with inputs and outputs. I don't see anything built with it answering rational questions or coming up with new ideas anytime soon, but if you do AI and don't know about them, you'd better catch up.

  • Hi (Score:4, Funny)

    by martin-boundary ( 547041 ) on Monday June 23, 2008 @09:55PM (#23912301)
    I'm a consultant from Slashdot. For a low fee, I can point you towards research materials to save you the time and effort of doing it yourself, and if you elect to pay for the premium service, I also guarantee that all provided materials are not fake. Send me an email and we'll see where it goes :)

  • It would be great to hear of any interesting original research. It seems to me that most of the news in this space is more about applications of already well-known ideas rather than new, well-publicized developments.

    The 'Semantic Web' companies that are springing up all over like Twine, AdaptiveBlue, etc. are the best examples. They seem to be using some basic NLP, classifiers and statistical models to provide various services on the web. This may not be cutting edge artificial intelligence research but,
  • Just started working on this project again...

    http://sourceforge.net/projects/ebla [sourceforge.net]
    http://acl.ldc.upenn.edu/W/W03/W03-0607.pdf [upenn.edu]

  • It looks like DARPA is trying new methods [slashdot.org] to get some more funding.

  • Blue Brain (Score:3, Informative)

    by Usquebaugh ( 230216 ) on Monday June 23, 2008 @10:12PM (#23912395)

    Take a look at the project http://bluebrain.epfl.ch/ [bluebrain.epfl.ch]

  • A problem, divided (Score:4, Interesting)

    by Lije Baley ( 88936 ) on Monday June 23, 2008 @10:17PM (#23912431)

    You've got to quit trying to advance on separate fronts. People have been exploring and reinventing the same old niches for sixty years. Little has changed except for the availability of powerful hardware with which to realize these disconnected bits and pieces. What is needed is a way to bring the many different segments of the AI and robotics communities together, because the solution is not to find the "winning approach" but to realize the value of the various perspectives and combine efforts. This is not a new idea; it is an old one which apparently just doesn't fit into the established research environments. Go to the library and read some old books on AI if you really want an appreciation of how pathetic the progress of ideas (not hardware) has been. To whet your appetite, try some of Marvin Minsky's old papers - http://web.media.mit.edu/~minsky [mit.edu] - he recognized this situation nearly 40 years ago.

  • Why doesn't the Government start working on making Congress work?

    Oh, wait..... They already are robots.

    • Why do you think he needs better AI? They aren't currently acting very human; a lot of us don't believe they are.

  • Dear Friend (Score:5, Funny)

    by trainsnpep ( 608418 ) <mikebenza@nOSpAM.gmail.com> on Monday June 23, 2008 @10:34PM (#23912537)

    Dear Friend,

    Compliment of the day to you and your entire family how are you today? Hope all is well with you I hope this email meets you in a perfect condition. I am using this opportunity to thank you inform you that I have come upon a large repository of AI source code left to me by my brother, Prince Abdullah of Nigeria.

    It is my desire to transfer this source of of my home country to a place where it will be safe, and I wish your association in this business matter. I've been recommended to you by Mr. Smith of New York. I would like to transfer the source to your FTP server as an escrow service. In recompense, I will offer you 10% of the code, which is LoC 150,000,000.

    To complete this transaction which will be beneficial to both of us, please contact my secretary with the following information:

    1. YOUR FTP SERVER ADDRESS.
    2. YOUR USERNAME.
    3. YOUR PASS WORD.

    The name and contact address of MY SECRETARY is as follows below.

    MR.Brwon Adebayor
    14 Island Street Lagos Nigeria
    E-MAIL brwonadebayor@yahoo.com
    TEL +2348083322221

    In the moment, I am very busy here in Paraguay because of the investment projects which myself and my new partner are having at hand IN PARAGUAY.Finally, remember that I have forwarded instruction to my SECRETARY MR.Brwon Adebayor, his E-mail, (brwonadebayor@yahoo.com) to assist you on your behalf to send the source code to you as soon as you contact him.

    Please I will like you to accept this grant offer with good faith as this is from the bottom of my heart. You should contact my secretary for the claim of you'r 10% which i willingly offer to you immediately you receive this mail, Presently I am in Paraguay.

    pls make sure that you inform me as soon as you collect the bank draft so that we can share the joy together. Thanks and God bless you and your family.

    Best Regards,
    MR. RICHARD WANG
    PRESENTLY IN PARAGUAY

  • by Kainaw ( 676073 ) on Monday June 23, 2008 @10:36PM (#23912551) Homepage Journal

    For many decades, there has been a push to have an AI that acts just like a human. In other words, it makes rash decisions based on bad anecdotes and stereotypes, is full of mistakes, and then tries to rationalize that everything was planned with intelligence.

    AI should understand the failings of human intelligence and fix them. For example, I have the sad job of normalizing health data. Every day, I dread coming into work and going through another million or so prescriptions. Doctors and nurses seem to continually find new ways to screw up what should be a very simple job: What is the name of the medication? What is the dosage? How often should it be taken? When should the prescription start? When should it end? How many refills/extensions on the prescription are allowed before a new prescription must be written? Instead of something reasonable like "Coreg 20mg. Every evening. 2008-06-10 to 2008-07-10. 5 Refills." -- I get: "Correk 20qd. 10/6/08x5." It seems to me that some form of AI could learn how stupid humans are and easily make sense of the garbage. Of course, there's no reason the AI couldn't replace the doctor and write the prescriptions itself in a very nice normalized form.
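
    As a hedged illustration of how even crude pattern matching can start to clean up an entry like the one above, here is a minimal Python sketch. The toy formulary, abbreviation table, and decision rules are invented for the example; a real normalizer needs vastly more coverage and safety checking.

    import re
    from difflib import get_close_matches

    KNOWN_DRUGS = ["Coreg", "Coumadin", "Lipitor"]        # toy formulary
    FREQ = {"qd": "every day", "bid": "twice a day", "qhs": "every evening"}

    def normalize(raw):
        """Best-effort parse of a free-text prescription like 'Correk 20qd. 10/6/08x5.'"""
        m = re.match(r"(?P<name>[A-Za-z]+)\s*(?P<dose>\d+)\s*(?P<freq>[a-z]*)", raw)
        if not m:
            return None
        drug = get_close_matches(m.group("name"), KNOWN_DRUGS, n=1, cutoff=0.6)
        refills = re.search(r"x\s*(\d+)", raw)
        return {
            "drug": drug[0] if drug else m.group("name"),
            "dose_mg": int(m.group("dose")),
            "frequency": FREQ.get(m.group("freq"), m.group("freq") or "unknown"),
            "refills": int(refills.group(1)) if refills else 0,
        }

    print(normalize("Correk 20qd. 10/6/08x5."))
    # {'drug': 'Coreg', 'dose_mg': 20, 'frequency': 'every day', 'refills': 5}
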

    • I thought the idea of having an AI working like a human (strong AI anyways) was not so it can have all the flaws we have, but so we can have a conscious machine. We are the conscious beings that we know the best, so conscious and human-like becomes largely the same thing.

      That being said, standardizing human behavior is possible. The easiest way is to set up a standard form for prescriptions with nice fields for name, type, dosage, etc., and then add a nice 'other' field just in case. You know, just li
    • There's an indefinable line out there where AI and human intelligence will meet, in much the same way that alien life and life as we know it will also meet, but will they cross over?

      Sure, it'll be cool describing your symptoms to a robot Doc one day when you get a sore throat, and having it drum up some perfectly scripted scrip that actually fixes your throat, but what happens when the robot Doc gets a sore throat?

      And it will.

      I know we're only talking about the kind of AI that can answer phone calls
    • by Xest ( 935314 )

      A system that performs set tasks isn't necessarily intelligent nor does it necessarily require intelligence. Or to put it another way, just because humans perform some tasks doesn't mean that task requires intelligence to perform.

      Intelligence is defined by the likes of free thought, the ability to learn from mistakes, come up with ideas, adapt fluidly to changing situations and so on. If it's not capable of making mistakes it's not capable of learning how to deal with the unpredictable.

      What you're after isn

  • No. (Score:3, Insightful)

    by God of Lemmings ( 455435 ) on Monday June 23, 2008 @10:36PM (#23912557)

    Maybe you're working on a project like that and want to talk it up?
    Not under this administration.
    • by mabu ( 178417 )

      I agree. But at the same time I understand why they're asking. The current administration seems to have depleted its supply of natural intelligence.

  • by CrazyJim1 ( 809850 ) on Monday June 23, 2008 @10:43PM (#23912597) Journal
    My AI page, which has several links that go deeper into older write-ups, is at www.fossai.com [fossai.com]

    Basically, I say that the better computer vision you make, the better software you can write for advanced bots, leading up to AI. I see AI as being something we'll naturally get to even if no one makes an effort toward it: our 3D cards are getting better, video games are making better 3D worlds, memory is getting bigger, and computer speeds are getting faster. Even if you couldn't hold AI in a current computer's memory, you have wireless internet that links up with a supercomputer to make thin-client bots. So there really isn't anything in current technology that is holding us back except computer vision.

    Now, I am not so good in the computer vision field, but as I see it (excuse the pun), there are two ways to do vision.

    1) Exact matching. You model an object in 3D via CAD, Pixar-style, or using Video Trace [acvt.com.au]. First you database all the objects that your AI will see in its environment, then you make a program that identifies objects it "sees" with computer cameras and laser range-finding devices. The AI can then reconstruct its environment in its head, and then it can perceive doing actions on the objects.

    I'm currently not in the loop here. I can't talk to anyone at Video Trace because I'm just a person, and they don't want to let me in on their software. So I can't database my desk. So I can't make the program that would identify things.

    2) Even better than exact matching is similar matching. No two people look alike besides twins, so you can't really just database in a person and say that is a human. And as humans go, there are different categories such as male and female, and some are androgynous, so we can't tell their sex. Similar matching has a lot of potential in its ability to detect things like trees and rocks. Similar matching is good for environments that are tougher to handle with exact matching. So just from this information alone, I wouldn't start on similar matching unless you had exact matching working in a closed environment. I'm not saying that some smart individual couldn't come up with similar matching before exact matching. I'm just saying that for myself, I'd start with exact matching and then extend it with similar matching. There are a lot of clues you can pick up on if you know the exact locations of things.

    And then once you have single-location vision working, you can add multi-point vision. Multi-point vision would mean that if you had more robotic eyes on a scene, you'd gain more detail about it. You could even get as advanced as conflict resolution when one robotic eye thinks it sees something but another thinks it is something different. The easiest application to picture would be a robotic car driving behind a normal semi truck and another robotic car in front of the semi. The robotic car in the back can't see past the semi to guess the traffic conditions that will make the semi slow down, but the car in front of the truck can see well, so they can signal information to each other that would let the car behind the semi truck follow more closely. If you get enough eyes out there, you could really start to put together a big virtual map of the world to track people.

    I wouldn't say AI that learns like humans is desirable. After all, you'd have to code in trust algorithms to know whom to listen to. I'd say AI that downloads its knowledge from a reliable source is the way to go. It is easy to see: sit in class for years until you learn a skill, or download it all at once like Neo in the training chair.

    Anyway, you can do a lot with robots that have good computer vision. The thing that has to be done next is natural language understanding. So far we've discussed the AI viewing a snapshot of a scene and being able to identify the objects. Next you'll have to introduce verbs and moving.
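
    For the "similar matching" idea above, here is a minimal, hedged sketch using OpenCV's present-day Python bindings: it matches ORB keypoints between a stored template and a query image instead of requiring an exact 3D model. The file names are placeholders, and real recognition would add geometric verification and a database of many object models.

    import cv2

    # Placeholder image paths; substitute real files before running.
    template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
    query = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)              # keypoint detector + binary descriptor
    kp1, des1 = orb.detectAndCompute(template, None)
    kp2, des2 = orb.detectAndCompute(query, None)

    # Brute-force Hamming matching with a ratio test to drop ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # Crude decision rule: enough consistent keypoint matches => similar object present.
    print("good matches:", len(good),
          "-> object likely present" if len(good) > 25 else "-> no match")
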
    • You wouldn't happen to be a student of the late Prof. Sheldon Klein, would you?

      http://pages.cs.wisc.edu/~sklein/sklein.html [wisc.edu]

      [In one of his classes, we tried (in a very rudimentary way) to give computers a "3D imagination space" by extracting spatial information from natural language and displaying it in a virtual reality environment. (We could visualize sentences such as "the chair is behind the table"). There was also much discussion of visual/spatial metaphors that humans use to understand abstract sen

    • by Xest ( 935314 )

      Better visual recognition and understanding of natural language would certainly aid us in producing better systems but I'm not convinced they'll allow us to simply create intelligent robots.

      The search for an intelligent machine requires more than this; it's not just about sensing your environment, of which vision is just one facet (blind people, for example, are still intelligent).

      I've had a quick read of your website and whilst interesting I'm not sure that it's entirely correct, I think it overs

  • by CoolGuySteve ( 264277 ) on Monday June 23, 2008 @10:45PM (#23912605)

    I recently threw together a prototype for my company using OpenCV. That OpenCV exists for this sort of thing is a godsend. One of our interns recently completed a UI research project that also relied on OpenCV.

    But one of the problems I had while doing it was that whenever I searched for more documentation about the algorithms I was trying to write, all I could find were either papers describing how some researcher's system was better than mine, or some magic MATLAB code that worked on a small set of test images. There were no solid implementations written in C for any of these systems.

    I would love to dick around for weeks implementing all these research papers and then evaluating their results and real-world performance, but I don't think my boss or my company's shareholders would enjoy that. As at every company, resources are limited for anything that isn't making money.

    With that said, the best way to further AI research, particularly in the highly marketable fields of machine learning and computer vision (but probably others as well), is to add implementations of cutting edge research to existing BSD-licensed libraries like OpenCV for companies to evaluate. If products that use that research become profitable, private companies are likely to throw a lot more money and researchers at the problem, all competing to one-up the other.

    If you think I'm being unrealistic, you should check out the realtime face detection that recent Canon cameras use for autofocus. Once upon a time, object recognition was considered a cutting-edge AI problem.
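
    As a concrete example of the "once cutting-edge, now in a point-and-shoot" point, here is a minimal, hedged sketch of realtime face detection using the stock Haar cascade that ships with OpenCV (assuming a recent opencv-python build that exposes cv2.data.haarcascades); the webcam index and threshold parameters are just the usual defaults.

    import cv2

    # Viola-Jones style detector bundled with OpenCV, the same family of
    # technique behind in-camera face-priority autofocus.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)                  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
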

  • by Theovon ( 109752 ) on Monday June 23, 2008 @10:47PM (#23912617)

    Let's see.... what I'm working on....

    Pure pareto multiobjective genetic algorithms (just submitted a paper to IEEE TEVC)
    Hinge-loss function discriminative training of neural nets as classifiers
    Computer vision as a KNOWLEDGE problem (i.e. not just mostly signal processing and statistics)
    Persistent surveillance (entity tracking)
    Sensor asset allocation (using a GA)
    Various things involving abductive inference

    http://www.cse.ohio-state.edu/~millerti/ [ohio-state.edu]
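
    For readers unfamiliar with the first item in that list, here is a minimal, hedged Python sketch of the Pareto-dominance selection step at the heart of multiobjective GAs; the objective values are invented, and a real algorithm (NSGA-II and relatives) adds front ranking, crowding distance, and variation operators.

    def dominates(a, b):
        """a Pareto-dominates b if it is no worse on every objective and strictly
        better on at least one (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        """Return the nondominated members of a population of objective vectors."""
        return [p for p in population
                if not any(dominates(q, p) for q in population if q != p)]

    # Toy two-objective population: (cost, error) pairs, both to be minimized.
    pop = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (5.0, 5.0), (6.0, 1.0)]
    print(pareto_front(pop))   # [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0), (6.0, 1.0)]
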

    • Damn it! And I thought I was clever because I'm automating creation of Xen instances for a MySQL cluster. Oh well, at least it pays better than graduate work.

  • I hear that AI is only about 15 years away, so you could try just waiting until then. Unfortunately, that estimate hasn't changed for 30 years.

    Given the slow progress in AI research, I think a radical approach is in order. I doubt that we'll see any breakthroughs from a small crew of programmers with quad cores and c++.

    The human brain is a massively parallel, self-reconfiguring network of nodes. How far have we come in building any sort of scalable technology that can operate in such a manner? I know there

  • Matrix Logic (Score:4, Interesting)

    by Darth Cider ( 320236 ) on Monday June 23, 2008 @11:17PM (#23912821)
    The Matrix Logic series of books by August Stern should give you some ideas. Maybe DARPA has the resources to test whether isospin of oxygen is really the basis of intelligence, as Stern considers plausible due to the vector basis of "logicspace." Look for that missing particle predicted by logic groups while you're at it. I don't know why those books aren't cited more, or why symbolic logic is still taught as it always has been, when matrix logic makes things so much clearer and more consistent. The vector approach to logic can also replace standard programming structures in everyday code. Instead of if-then or case structures that query a truth table or test for equivalence term by term (the usual practice in conventional logic, too), a matrix multiplication can calculate the answer directly, if the terms are properly conceptualized. The books are easy to read, too, very clear and straightforward. Everybody oughta check 'em out.
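
    As a hedged illustration of the general vector/matrix treatment of truth functions (the generic construction only, not Stern's specific formalism): encode TRUE and FALSE as the unit vectors [1, 0] and [0, 1], and each binary connective becomes a 2x4 matrix applied to the Kronecker product of its arguments.

    import numpy as np

    T = np.array([1, 0])          # TRUE as a unit vector
    F = np.array([0, 1])          # FALSE as a unit vector

    NOT = np.array([[0, 1],
                    [1, 0]])
    # Columns correspond to the argument pairs (T,T), (T,F), (F,T), (F,F).
    AND = np.column_stack([T, F, F, F])
    OR = np.column_stack([T, T, T, F])

    def apply2(M, u, v):
        """Evaluate a binary connective with a single matrix multiplication."""
        return M @ np.kron(u, v)

    print(apply2(AND, T, F))           # [0 1] -> FALSE
    print(apply2(OR, F, T))            # [1 0] -> TRUE
    print(NOT @ apply2(AND, T, T))     # [0 1] -> NOT(T AND T) = FALSE
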
  • Xeth,

    What truth do you know of the following statement?

    CENNS stands for Core Engine Neural Network System, and started as a research consolidation project under DARPA's Intelligent Systems and Software program in 1995. It was a joint effort with the RAND institute to leverage all A.I. research in the past 50 years under a single initiative.

    Project SUR paved the way for systems HARPY and HEARSAY-I, then abandoned until 1984, under the Strategic Computing Program. HEARSAY-II introduced the concept of a comm

    • Re: (Score:3, Informative)

      by julesh ( 229690 )

      I doubt he'll comment. But I will: it sounds like bullshit to me. Unless you can propose how exactly somebody might interface a neural network to a knowledge-based system. That's substantially more advanced than any ANN system I've encountered so far, and I've looked at some fairly esoteric ANN designs.

  • Cutting Edge AI?!? (Score:5, Interesting)

    by Bill, Shooter of Bul ( 629286 ) on Monday June 23, 2008 @11:42PM (#23912977) Journal
    I have the perfect project: a smart knife. Think about it: knives are deadly, deadly weapons. People get stabbed every day. Even innocent people stab themselves while trying to prepare the simplest of dishes. The solution is simple: build a knife that knows its target, with an active memory metal that blunts itself to the sharpness of a baseball bat if it's positioned at anything other than its target. Furthermore, it will dynamically alter its blade to ensure the optimal cut of the material, taking into consideration the grain, moisture, temperature, and density of the object. It also has ZigBee wireless mesh networking built in to communicate with other intelligent kitchen objects. The cutting board will communicate with the knife to let it know how close it is to the board. It will speak with the oven to let it know the specific moisture and condition of the meat, allowing the oven to set the temperature and time of cooking to an optimal level. It will also probe for bacterial, viral, or prion content, communicating with any compatible devices to warn the user of the danger.

    The smart knife. Cutting-edge AI at its finest. Prospective investors, feel free to contact me @ bill_AT_ultimatesalsaparty_DOT_COM
  • we are working on integrating cutting-edge planners (currently the award-winning Fast Forward planner FF, see http://members.deri.at/~joergh/ff.html [members.deri.at]) with controllers for dynamic worlds, like Golog (this means we make robots that react to changes in the world and decide faster what to do to achieve a goal): http://www.computational-logic.org/content/projects/wisslogc.php?id=53 [computational-logic.org]
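
    For readers who haven't used a classical planner, here is a minimal, hedged Python sketch of forward state-space search over STRIPS-style actions. It is deliberately naive breadth-first search rather than FF's relaxed-plan heuristic, and the toy robot-and-box domain is invented for illustration.

    from collections import deque

    class Action:
        """A STRIPS-style action: preconditions, add effects, delete effects."""
        def __init__(self, name, pre, add, delete):
            self.name = name
            self.pre, self.add, self.delete = frozenset(pre), frozenset(add), frozenset(delete)

    def plan(init, goal, actions):
        """Plain breadth-first forward search over world states (no heuristic)."""
        init, goal = frozenset(init), frozenset(goal)
        frontier, seen = deque([(init, [])]), {init}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for a in actions:
                if a.pre <= state:
                    nxt = (state - a.delete) | a.add
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [a.name]))
        return None

    # Toy domain: a robot pushing a box from room A to room B.
    acts = [
        Action("go(A,B)", {"at(robot,A)"}, {"at(robot,B)"}, {"at(robot,A)"}),
        Action("go(B,A)", {"at(robot,B)"}, {"at(robot,A)"}, {"at(robot,B)"}),
        Action("push(box,A,B)", {"at(robot,A)", "at(box,A)"},
               {"at(robot,B)", "at(box,B)"}, {"at(robot,A)", "at(box,A)"}),
    ]
    print(plan({"at(robot,A)", "at(box,A)"}, {"at(box,B)"}, acts))   # ['push(box,A,B)']
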
  • I am actually working on a neural processor. It is primarily a platform for developing neural applications, as opposed to an application itself, similar to how a database provides middleware functionality. It is temporarily coined Neurox.

    Neurox is subdivided into two parts:
    Firstly, a database where neurons have position and are allowed to move or create new connections (plasticity) in a more permanent manner; this can be a slower process. And secondly, a processing node, or cluster of nodes, where a slice o

  • Peter Turney (whose programs have achieved human level performance on the SAT verbal analogy test) and I have been discussing an experimental test of Ockham's Razor in AI. This is a question that is both fundamentally important and experimentally tractable.

    I recommend you read our discussion [wordpress.com] of an experiment to test Ockham's Razor (and related theories such as MDL, algorithmic probability...).
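
    As a hedged, toy illustration of the Ockham's razor / MDL flavor of such a test (not the experiment in the linked discussion), the sketch below prefers the polynomial degree that minimizes a crude two-part code: bits for the parameters plus bits for the residuals. The bit costs are stand-ins chosen only to show the trade-off.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 50)
    y = 1.5 * x - 0.7 * x**2 + rng.normal(0, 0.1, x.size)   # data from a degree-2 model

    def description_length(degree):
        """Crude two-part code: model bits plus Gaussian code length for residuals."""
        coeffs = np.polyfit(x, y, degree)
        resid = y - np.polyval(coeffs, x)
        model_bits = 32.0 * (degree + 1)                     # 32 bits per parameter
        data_bits = 0.5 * x.size * np.log2(max(resid.var(), 1e-12))  # up to a constant
        return model_bits + data_bits

    best = min(range(1, 8), key=description_length)
    print("degree preferred by the two-part code:", best)    # expect 2
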

  • Cycorp, though it is already working for the NSA, has the most advanced AI system I am aware of. I am not sure Cyc is an improvement over Eurisko, its predecessor, but it managed to make its creator raise a few dozen million dollars.

    Also, dear DARPA official, don't you think that an AI researcher could have ethical reservations about working with the US Army? I am not trying to troll here; this story is already tagged 'skynet'. Don't you think that many AI researchers are very worried about the mix of milit
  • I'm working on an augmented visual display system that uses a network of firing signals to forge 'paths' in an ever-evolving AI processor for visual recognition. My goal is to completely replace my PC mouse so I can dominate my foes in Warcraft III. Stay away from the USEAST servers or prepare to be dominated.
  • by synaptic ( 4599 ) on Tuesday June 24, 2008 @04:20AM (#23914235) Homepage

    I have had an interest in AI over the years and have found Gerald Edelman's books particularly insightful.

    See:
    _Neural Darwinism_ (ISBN 0-19-286089-5)
    _Bright Air Brilliant Fire: On the Matter of the Mind_ (ISBN 0-465-00764-3)

    The ideas in these books might be outdated by now, but I doubt it. I think the works of Norbert Wiener are still relevant.

    I particularly liked the NEAT project, however crude it may be. I like the changing neural topology via genetic evolution concept and think this is consistent with what Edelman tells us really happens in biology.

    See: http://www.cs.ucf.edu/~kstanley/neat.html

    My other suggestion is to define the many different scopes of the AI. For some, it seems the bar has been placed at natural language processing and full-on human cognition. Without the frame of reference and body of experience of a human though, this seems to be an unrealistic goal. I just don't think we can "program" a computer to do it. To pull it off, this would seem to require duplicating the nervous system of a human to enough of a degree that the AI can experience sensory input compatible with our shared human experience. Think about how many years it takes for a human to reach the level of intelligence we are seeking in AI. I don't think there are any overnight solutions here. We need to teach it like a baby, child, adolescent, and adult. While we may be able to speed train an AI, it may be that there is something to the lack of interesting input that enables us to reflect and refine our mental models of the world. The AI must also continue to interact with the human world in order to stay current.

    But AI doesn't have to match a human. There are much simpler organisms we can model as a start that may pay off in other ways. Nature seems to excel at reusing novel patterns, and we should exploit that code/model library. The AI produced from this research may not be able to hold a conversation, but it can probably keep an autonomous robot alive and on its mission, whatever that may be. And I think it's a better foundation for the eventual human equivalent and beyond.

    For some possible hardware platforms, see:

    http://www.neurotechnology.neu.edu/
    http://biorobots.cwru.edu/
    http://birg.epfl.ch/

  • Umm, this guy works for DARPA and he's asking for help on here? Something is amiss.

  • by curmudgeon99 ( 1040054 ) on Tuesday June 24, 2008 @06:38AM (#23914875)
    An AI system must at its heart understand the two hemispheres of the human brain and how they process information differently. Though, for example, both hemispheres receive inputs from both eyes, how they process information is radically different. The right brain looks first at the outline of an object. Then, as that outline is sketched out, it feeds that information up the column and more specificity is gained. The left hemisphere--being used to process information in a linear, sequential manner--looks at individual items inside the image and tries to name them. These two separate processes pass information constantly across the corpus callosum, and that is how we get our consciousness. An AI system must do this cross-pollination. I have been working on various aspects of this idea for years in the Godwhale Project [google.com]. The first stop on anyone's journey to write this code is none other than Dr. Roger Sperry [Nobel Prize, 1981].
  • "understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence."

    Maybe begin with a bit of background on the complexity of shotgunning the task: some Hofstadter, maybe some Dennett, maybe something like John Pollock's "How to Build a Person: A Prolegomenon".

    Then define, in the sense of a formal systems analysis, the #1 task DARPA would have an AI system perform in 5-10 years and then specialize and concentrate and specialize and concentrate some more in resea
