Cutting-Edge AI Projects? 346

Xeth writes "I'm a consultant with DARPA, and I'm working on an initiative to push the boundaries of neuromorphic computing (i.e. artificial intelligence). The project is designed to advance ideas on all fronts, including measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence. I'm conducting a wide search of these fields, but I wanted to know if anyone in this community knows of neat projects along those lines that I might otherwise overlook. Maybe you're working on a project like that and want to talk it up? No promises (seriously), but interesting work will be brought to the attention of the project manager I'm working with. If you want to start up a dialog, send me an email, and we'll see where it goes. I'll also be reading the comments for the story."
  • by edwebdev ( 1304531 ) on Monday June 23, 2008 @10:50PM (#23912273)
    Too late - the British already did that [wikipedia.org].
  • by Anonymous Coward on Monday June 23, 2008 @10:54PM (#23912293)
  • Re:Dear Slashdot... (Score:5, Interesting)

    by Xeth ( 614132 ) on Monday June 23, 2008 @11:06PM (#23912363) Journal

    As if I didn't see that coming? I think my UID says I've been here awhile.

    It's not that I'm asking Slashdot to do my work for me; I've already got some very strong leads to work on. However, Slashdot occasionally surprises me with people that are thoughtful and working in interesting fields, so I figured I'd give it a shot. Most of the changes in my life have come from sudden and unexpected directions; I wanted to see what serendipity might bring me that deliberation would not.

  • by Anonymous Coward on Monday June 23, 2008 @11:12PM (#23912399)

    Hello,

    I'm studying theoretical computer science, the kind that's often just called math (things like complexity theory, lambda calculus, even linear logic...).

    I always loved AIs, but I was often told that there is no research on it which is that theoretical; that it's more like a collection of applied domains, like learning neural networks or computer vision.

    So, what is the most theoretical aspect of AI research that you know? Or put otherwise, is there a branch of AI research where you prove theorems rather than writing code?

    I know it's slightly off topic, but people working on that kind of thing are probably wondering if they should mention it here (wondering if it interests DARPA or not).

  • A problem, divided (Score:4, Interesting)

    by Lije Baley ( 88936 ) on Monday June 23, 2008 @11:17PM (#23912431)

    You've got to quit trying to advance on separate fronts. People have been exploring and reinventing the same old niches for sixty years. Little has changed except for the availability of powerful hardware with which to realize these disconnected bits and pieces. What is needed is a way to bring the many different segments of the AI and robotic communities together, because the solution is not to find the "winning approach", but to realize the value of the various perspectives and combine efforts. This is not a new idea, it is an old one which apparently just doesn't fit into the established research environments. Go to the library and read some old books on AI if you really want an appreciation of how pathetic the progress of ideas (not hardware) has been. To whet your appetite try some of Marvin Minsky's old papers - http://web.media.mit.edu/~minsky [mit.edu] He recognized this situation nearly 40 years ago.

  • by Kainaw ( 676073 ) on Monday June 23, 2008 @11:36PM (#23912551) Homepage Journal

    For many decades, there has been a push to have an AI that acts just like a human. In other words, it makes rash decisions, based on bad anecdotes and stereotypes, full of mistakes, and then tries to rationalize that everything was planned with intelligence.

    AI should understand the failings of human intelligence and correct for them. For example, I have the sad job of normalizing health data. Every day, I dread coming into work and going through another million or so prescriptions. Doctors and nurses seem to continually find new ways to screw up what should be a very simple job: What is the name of the medication? What is the dosage? How often should it be taken? When should the prescription start? When should it end? How many refills/extensions on the prescription are allowed before a new prescription must be written? Instead of something reasonable like: "Coreg 20mg. Every evening. 2008-06-10 to 2008-07-10. 5 Refills." -- I get: "Correk 20qd. 10/6/08x5." It seems to me that some form of AI could learn how stupid humans are and easily make sense of the garbage. Of course, there's no reason the AI couldn't replace the doctor and write the prescriptions itself in a very nice normalized form.
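    Even short of AI, part of the cleanup described above is tractable with fuzzy string matching. A minimal sketch, assuming a hypothetical three-drug formulary and the invented example string shown; a real system would match against a proper drug vocabulary such as RxNorm:

    ```python
    import re
    from difflib import get_close_matches

    # Hypothetical formulary for illustration; a real system would use a
    # full drug database.
    KNOWN_DRUGS = ["Coreg", "Lipitor", "Metformin"]

    FREQ_MAP = {"qd": "every day", "bid": "twice daily", "tid": "three times daily"}

    def normalize_rx(text):
        """Best-effort cleanup of free text like 'Correk 20qd. 10/6/08x5.'"""
        m = re.match(r"(\w+)\s+(\d+)\s*(qd|bid|tid)?", text)
        if not m:
            return None
        name, dose, freq = m.groups()
        # Fuzzy-match the drug name against the formulary to catch misspellings.
        match = get_close_matches(name, KNOWN_DRUGS, n=1, cutoff=0.6)
        return {
            "drug": match[0] if match else name,
            "dose_mg": int(dose),
            "frequency": FREQ_MAP.get(freq, "unspecified"),
        }

    print(normalize_rx("Correk 20qd. 10/6/08x5."))
    ```

    The fuzzy match recovers "Coreg" from "Correk"; the date and refill fields would need their own (messier) rules.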

  • by CrazyJim1 ( 809850 ) on Monday June 23, 2008 @11:43PM (#23912597) Journal
    My AI page, which has several links that go deeper into older write-ups, is at www.fossai.com [fossai.com]

    Basically I say that the better the computer vision you have, the more advanced the bots you can write, leading up to AI. I see AI as being something we'll naturally get to even if no one makes a deliberate effort toward it: our 3d cards are getting better, video games are making better 3d worlds, memory is getting bigger, and computer speeds are getting faster. Even if you couldn't hold AI in a current computer's memory, you have wireless internet that links up with a supercomputer to make thin-client bots. So there really isn't anything in current technology holding us back except computer vision.

    Now I am not so good in the computer vision field, but as I see it (excuse the pun), there are two ways to do vision.

    1) Exact matching. You model an object in 3d via CAD, Pixar-style, or using Video Trace [acvt.com.au]. First you database all the objects that your AI will see in its environment, then you make a program that identifies objects it "sees" with computer cameras and laser range-finding devices. So the AI can reconstruct its environment in its head. Then the AI can perceive doing actions on the objects.

    I'm currently not in the loop here. I can't talk to anyone at Video Trace because I'm just a person, and they don't want to let me in on their software. So I can't database my desk. So I can't make the program that would identify things.

    2) Even better than exact matching is similar matching. No two people look alike besides twins, so you can't really just database in a person and say that is a human. And as humans go, there are different categories such as male and female, and some are androgynous so we can't tell their sex. Similar matching has a lot of potential in its ability to detect things like trees and rocks. Similar matching is good at an environment that is tougher to put into exact matching situations. So just from this information alone, I wouldn't start on similar matching unless you had exact matching working in a closed environment. I'm not saying that some smart individual couldn't come up with similar matching before exact matching. I'm just saying that for myself, I'd start with exact matching, and then extend it with similar matching. There are a lot of clues you can pick up on if you know exact locations of things.
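    The exact-vs-similar distinction above can be sketched in a few lines. The feature vectors here are invented stand-ins for whatever shape descriptors a real vision pipeline would extract; the point is only the lookup logic:

    ```python
    import math

    # Toy "object database": invented feature vectors standing in for
    # descriptors extracted from 3d models.
    DATABASE = {
        "mug":  [0.9, 0.1, 0.3],
        "tree": [0.2, 0.8, 0.7],
        "rock": [0.3, 0.7, 0.2],
    }

    def exact_match(features):
        """Exact matching: the observation must equal a stored model."""
        for name, vec in DATABASE.items():
            if vec == features:
                return name
        return None

    def similar_match(features, threshold=0.5):
        """Similar matching: accept the nearest stored model within a tolerance,
        so no-two-alike objects (people, trees, rocks) can still be categorized."""
        best_name, best_dist = None, float("inf")
        for name, vec in DATABASE.items():
            dist = math.dist(vec, features)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= threshold else None

    print(exact_match([0.9, 0.1, 0.3]))       # found verbatim in the database
    print(similar_match([0.25, 0.75, 0.65]))  # near "tree", not identical
    ```

    As the comment suggests, similar matching is a relaxation of exact matching: same database, looser acceptance test.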

    And then once you have single-viewpoint vision working, you can add multi-point vision. Multi-point vision would mean that if you had more robotic eyes on a scene, you'd gain more detail about it. You could even get as advanced as conflict resolution when one robotic eye thinks it sees something, but another thinks it is something different. The easiest application to think of would be if you had a robotic car driving behind a normal semi truck and another robotic car in front of the semi. The robotic car in the back can't see past the semi to guess when traffic conditions will make the semi slow down, but the car in front of the truck can see well, so they can signal information to each other that would let the car behind the semi truck follow closer. If you get enough eyes out there, you could really start to put together a big virtual map of the world to track people.

    I wouldn't say AI that learns like humans is desirable. After all, you'd have to code in trusting algorithms to know who to listen to. I'd say AI that downloads its knowledge from a reliable source is the way to go. It is easy to see: sit in class for years until you learn a skill, or download it all at once like Neo in the training chair.

    Anyway, you can do a lot with robots that have good computer vision. The thing that has to be done next is natural language understanding. So far we've discussed the AI viewing a snapshot of a scene and being able to identify the objects. Next you'll have to introduce verbs and moving.
  • by CoolGuySteve ( 264277 ) on Monday June 23, 2008 @11:45PM (#23912605)

    I recently threw together a prototype for my company using OpenCV. That OpenCV exists for this sort of thing is a godsend. One of our interns recently completed a UI research project that also relied on OpenCV.

    But one of the problems I had while doing it was that whenever I searched for more documentation about the algorithms I was trying to write, all I could find were either papers describing how some researcher's system was better than mine, or some magic MATLAB code that worked on a small set of test images. There were no solid implementations written in C for any of these systems.

    I would love to dick around for weeks implementing all these research papers and then evaluating their results and real-world performance, but I don't think my boss or my company's shareholders would enjoy that. As at every company, resources are limited for anything that isn't making money.

    With that said, the best way to further AI research, particularly in the highly marketable fields of machine learning and computer vision (but probably others as well), is to add implementations of cutting edge research to existing BSD-licensed libraries like OpenCV for companies to evaluate. If products that use that research become profitable, private companies are likely to throw a lot more money and researchers at the problem, all competing to one-up the other.

    If you think I'm being unrealistic, you should check out the realtime face detection that recent Canon cameras use for autofocus. Once upon a time, object recognition was considered a cutting-edge AI problem.
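    One reason recognition became commodity is that the core primitives are simple. Below is a toy, pure-Python version of sliding-window template matching; OpenCV's cv2.matchTemplate does the same thing, vastly faster and with better scoring options:

    ```python
    def match_template(image, template):
        """Slide the template over the image and return the (row, col) offset
        with the smallest sum of squared differences (best match)."""
        ih, iw = len(image), len(image[0])
        th, tw = len(template), len(template[0])
        best, best_pos = float("inf"), None
        for r in range(ih - th + 1):
            for c in range(iw - tw + 1):
                ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                          for i in range(th) for j in range(tw))
                if ssd < best:
                    best, best_pos = ssd, (r, c)
        return best_pos

    # A 5x5 grayscale "image" with a bright 2x2 blob at row 2, col 1.
    img = [[0] * 5 for _ in range(5)]
    for i in range(2, 4):
        for j in range(1, 3):
            img[i][j] = 9

    print(match_template(img, [[9, 9], [9, 9]]))  # (2, 1)
    ```

    Real detectors (Haar cascades, as in those cameras) replace the raw template with learned features, but the scan-and-score structure is the same.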

  • by Theovon ( 109752 ) on Monday June 23, 2008 @11:47PM (#23912617)

    Let's see.... what I'm working on....

    Pure pareto multiobjective genetic algorithms (just submitted a paper to IEEE TEVC)
    Hinge-loss function discriminative training of neural nets as classifiers
    Computer vision as a KNOWLEDGE problem (i.e. not just mostly signal processing and statistics)
    Persistent surveillance (entity tracking)
    Sensor asset allocation (using a GA)
    Various things involving abductive inference

    http://www.cse.ohio-state.edu/~millerti/ [ohio-state.edu]
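    For readers unfamiliar with the Pareto term in that list: in multiobjective optimization, a solution is kept if no other solution beats it in every objective at once. A minimal illustration (minimizing both objectives; the sample points are invented):

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization):
        a is no worse in every objective and strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(points):
        """Keep exactly the points not dominated by any other point."""
        return [p for p in points if not any(dominates(q, p) for q in points)]

    pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
    print(pareto_front(pts))  # (3, 4) and (5, 5) are dominated and dropped
    ```

    A "pure Pareto" GA ranks its population by this dominance relation instead of collapsing the objectives into one weighted score.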

  • Re:It it just me? (Score:5, Interesting)

    by Xeth ( 614132 ) on Monday June 23, 2008 @11:47PM (#23912621) Journal
    I suspect that any assurances from me will mean little (you seem to have a well-defined opinion about what DARPA does and why), but I think the ideas I'm pursuing here are sufficiently general that it would be foolish to shy away from them on the grounds that they might be used for some military application. You could say the very same about any advanced computing device.
  • Give me a break (Score:5, Interesting)

    by Louis Savain ( 65843 ) on Monday June 23, 2008 @11:50PM (#23912637) Homepage

    Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?

    Like it or lump it, you are in this boat with everyone else. If AI is solved, it will be used for good and evil. If your country does not use it for evil (extremely doubtful), somebody else's country will. Better yours than theirs. What I mean is that true AI will be an extremely powerful thing; if any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass. I don't think you'd like that very much.

    Having said that (and to get back on topic), I have been working on a general AI project called Animal [rebelscience.org] for some time. Animal is biologically inspired. It attempts to use a multi-layer spiking neural network to learn how to play chess from scratch using sensors, effectors, and a motivation mechanism based on reward and punishment. It is based on the premise that intelligence is essentially a temporal signal-processing phenomenon [rebelscience.org]. I just need some funding. The caveat is that my ideas are out there big time, and there is a bunch of people in cyberspace who think I am a kook. LOL. But hey, send me some money anyway. You never know. :-D
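    For context, the basic unit of a spiking network like the one described is commonly a leaky integrate-and-fire neuron: the membrane potential leaks over time, accumulates input, and fires when it crosses a threshold. A bare-bones sketch with arbitrary parameters (this is the generic textbook model, not Animal's actual design):

    ```python
    def lif_run(inputs, leak=0.9, threshold=1.0):
        """Leaky integrate-and-fire neuron over a sequence of input currents.
        Each step: potential decays by `leak`, adds the input, and emits a
        spike (then resets to zero) when it reaches `threshold`."""
        v, spikes = 0.0, []
        for current in inputs:
            v = v * leak + current
            if v >= threshold:
                spikes.append(1)
                v = 0.0  # reset after spiking
            else:
                spikes.append(0)
        return spikes

    # Weak inputs integrate up to a spike; strong inputs spike faster.
    print(lif_run([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))
    ```

    Because spike timing carries the information, networks of these units are naturally temporal signal processors, which is the premise the comment leans on.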

  • Re:Dear Slashdot... (Score:2, Interesting)

    by M1000 ( 21853 ) on Tuesday June 24, 2008 @12:06AM (#23912759)

    Exactly, now all we need is someone to make us look like newbies.

    It could be worse ;-)
  • Matrix Logic (Score:4, Interesting)

    by Darth Cider ( 320236 ) on Tuesday June 24, 2008 @12:17AM (#23912821)
    The Matrix Logic series of books by August Stern should give you some ideas. Maybe DARPA has the resources to test whether the isospin of oxygen is really the basis of intelligence, as Stern considers plausible due to the vector basis of "logicspace." Look for that missing particle predicted by logic groups while you're at it. I don't know why those books aren't cited more, or why symbolic logic is still taught as it always has been, when matrix logic makes things so much clearer and more consistent. The vector approach to logic can also replace standard programming structures in everyday code. Instead of if-then or case structures querying a truth table or testing for equivalence term by term (the usual practice in conventional logic, too), a matrix multiplication can calculate the answer directly, if the terms are properly conceptualized. The books are easy to read, too, very clear and straightforward. Everybody oughta check em out.
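    The claim that a matrix multiply can stand in for branch logic can at least be illustrated with a toy encoding. To be clear, this is a generic one-hot construction, not Stern's actual formalism: booleans become basis vectors, gates become matrices, and evaluation is matrix-vector multiplication instead of if-then:

    ```python
    # Booleans as one-hot vectors: False = (1, 0), True = (0, 1).
    FALSE, TRUE = (1, 0), (0, 1)

    # NOT as the 2x2 matrix that swaps the two components.
    NOT = [[0, 1],
           [1, 0]]

    def apply(matrix, vec):
        """Matrix-vector product over the one-hot encoding."""
        return tuple(sum(row[i] * vec[i] for i in range(len(vec))) for row in matrix)

    def apply2(matrix, a, b):
        """Binary gate: act on the tensor (outer) product of the two inputs,
        a length-4 one-hot vector over the basis (FF, FT, TF, TT)."""
        tensor = tuple(x * y for x in a for y in b)
        return tuple(sum(row[i] * tensor[i] for i in range(4)) for row in matrix)

    # AND as a 2x4 matrix over that basis: only TT maps to True.
    AND = [[1, 1, 1, 0],
           [0, 0, 0, 1]]

    print(apply(NOT, TRUE))          # (1, 0), i.e. False
    print(apply2(AND, TRUE, TRUE))   # (0, 1), i.e. True
    print(apply2(AND, TRUE, FALSE))  # (1, 0), i.e. False
    ```

    Whether this buys clarity over a plain `and` is debatable, but it shows the mechanical point: the gate's entire truth table lives in the matrix, and evaluation is a single linear operation.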
  • Cutting Edge AI?!? (Score:5, Interesting)

    by Bill, Shooter of Bul ( 629286 ) on Tuesday June 24, 2008 @12:42AM (#23912977) Journal
    I have the perfect project: a smart knife. Think about it: knives are deadly, deadly weapons. People get stabbed every day. Even innocent people stab themselves while trying to prepare the simplest of dishes. The solution is simple: build a knife that knows its target, with an active memory metal that blunts itself to the sharpness of a baseball bat if it's positioned at anything other than its target. Furthermore, it will dynamically alter its blade to ensure the optimal cut of the material, taking into consideration the grain, moisture, temperature, and density of the object. It also has ZigBee wireless mesh networking built in to communicate with other intelligent kitchen objects. The cutting board will communicate with the knife to let it know how close it is to the board. It will speak with the oven to let it know the specific moisture and condition of the meat, allowing the oven to set the temperature and time of cooking to an optimal level. It will also probe for bacterial, viral, or prion content, communicating with any compatible devices to warn the user of the danger.

    The smart knife. Cutting-edge AI at its finest. Prospective investors, feel free to contact me @ bill_AT_ultimatesalsaparty_DOT_COM
  • Re:An obvious one. (Score:1, Interesting)

    by Anonymous Coward on Tuesday June 24, 2008 @01:39AM (#23913249)

    I'm attending a conference put on by Numenta this week. As a Masters student in AI, I've been interested in finding good companies that are working on core AI algorithms (as opposed to applications) and that actually have a chance of advancing the field. The options seem somewhat limited, and Numenta certainly seems like one of the most promising. They are really taking AI in new directions by trying to incorporate more information about how the brain works.

    As to whether deep belief networks are a better approach, my impression is that DBNs are quite different (in particular, they lack the element of training using time). I'm certainly no DBN expert (I've only seen one video of a talk by Hinton on them), but it seems to me that both approaches are promising avenues of future research.

  • A neural processor. (Score:2, Interesting)

    by Grimace1975 ( 618039 ) on Tuesday June 24, 2008 @01:47AM (#23913279)

    I am actually working on a neural processor. It is primarily a platform for developing neural applications, as opposed to an application itself, similar to how a database provides middleware functionality. It is temporarily coined Neurox.

    Neurox is subdivided into two parts:
    Firstly, a database where neurons have position and are allowed to move or create new connections (plasticity) in a more permanent manner; this can be a slower process. Secondly, a processing node, or cluster of nodes, where a slice of the stored network is processed.

    Certain optimizations can be made because what matters is distance, or time of travel, rather than Cartesian location: only the lengths between connections, and therefore travel times, are needed for processing; 3d coordinates are not required. A fully parallel environment must also be provided where all interactions occur at once; otherwise certain critical behaviors, such as cyclic interactions, will spiral to their death. A simple method is used to provide the parallelism, similar to cellular automata processors. A derivative of time is taken: all objects have a before-state and an after-state; evaluations are made based on the before-state, and results are stored in the after-state. When a series of evaluations has completed, the after-state becomes the before-state and the cycle is repeated. Derived time has advanced.

    • Systems Required:
    • Parallel processing
    • Plasticity in neuron locations, connection and strengths
    • Point-to-point as well as spatial communications: synapse (point-to-point) and aquas (spatial) communications, respectively.
    • Source of neural stimulations
    • Externally defined neuron behavior, with system-provided storage and thread space.
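    The before-state/after-state scheme described above is the standard double-buffered update from cellular automata: every cell reads only the frozen old state, writes only the new one, and the buffers swap, so all updates are logically simultaneous. A minimal sketch (the ring topology and example rule are invented for illustration):

    ```python
    def step(before, rule):
        """One derived-time step: evaluate every cell against the frozen
        before-state, collect results in a fresh after-state, and return it.
        No cell ever sees a neighbor's mid-step value."""
        n = len(before)
        return [rule(before[(i - 1) % n], before[i], before[(i + 1) % n])
                for i in range(n)]

    # Example rule: a cell fires next step iff exactly one neighbor fired.
    rule = lambda left, me, right: int(left + right == 1)

    state = [0, 0, 1, 0, 0]  # a single firing cell on a ring of five
    for _ in range(2):
        state = step(state, rule)  # the returned list becomes the new before-state
    print(state)
    ```

    Updating in place instead would let early writes corrupt later reads, which is exactly the class of broken cyclic interaction the comment warns about.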

    -- Not dedicated to it; the last posting is sorta old. I also developed an extremely small-footprint XML-like processor called XOL (extensible out-of-band language) for the processing side (it uses out-of-band data instead of in-band like XML): http://sourceforge.net/projects/neurox/ [sourceforge.net]


    Sky Morey
    moreys@digitalev.com
    Digital Evolution Group
    Overland Park, KS 66210
  • by synaptic ( 4599 ) on Tuesday June 24, 2008 @05:20AM (#23914235) Homepage

    I have had an interest in AI over the years and have found Gerald Edelman's books particularly insightful.

    See:
    _Neural Darwinism_ (ISBN 0-19-286089-5)
    _Bright Air Brilliant Fire: On the Matter of the Mind_ (ISBN 0-465-00764-3)

    The ideas in these books might be outdated by now, but I doubt it. I think the works of Norbert Wiener are still relevant.

    I particularly liked the NEAT project, however crude it may be. I like the concept of changing neural topology via genetic evolution, and I think this is consistent with what Edelman tells us really happens in biology.

    See: http://www.cs.ucf.edu/~kstanley/neat.html

    My other suggestion is to define the many different scopes of AI. For some, it seems the bar has been placed at natural language processing and full-on human cognition. Without the frame of reference and body of experience of a human, though, this seems to be an unrealistic goal. I just don't think we can "program" a computer to do it. To pull it off would seem to require duplicating the nervous system of a human to enough of a degree that the AI can experience sensory input compatible with our shared human experience. Think about how many years it takes for a human to reach the level of intelligence we are seeking in AI. I don't think there are any overnight solutions here. We need to teach it like a baby, child, adolescent, and adult. While we may be able to speed-train an AI, it may be that there is something to the lulls in interesting input that enable us to reflect on and refine our mental models of the world. The AI must also continue to interact with the human world in order to stay current.

    But AI doesn't have to match a human. There are much simpler organisms we can model as a start that may pay off in other ways. Nature seems to excel at reusing novel patterns, and we should exploit that code/model library. The AI produced from this research may not be able to hold a conversation, but it can probably keep an autonomous robot alive and on its mission, whatever that may be. And I think it's a better foundation for the eventual human equivalent and beyond.

    For some possible hardware platforms, see:

    http://www.neurotechnology.neu.edu/
    http://biorobots.cwru.edu/
    http://birg.epfl.ch/

  • by Plutonite ( 999141 ) on Tuesday June 24, 2008 @05:23AM (#23914251)

    Possibly: what exactly the research on self-awareness/sentient learning systems comprises for you guys. The reason I bring this up is that I left AI as a main research field because even the most flexible research goals set by academia today are just tiny steps of applicative advances in statistical inference and mathematical techniques in general. Not that these things are not useful (machine learning is quite amazing, actually), but the initial goals have little to do with all this. We have surpassed brains in many ways, but the massively parallel brain does not even "learn", let alone think (which is what we want), in the manner on which any AI system today is based. And yes, that includes adaptive ones that change their own set of heuristics.

    I spent a lot of my time thinking about neuroscience and reading psychology, and while I slowly moved towards rationalizing certain things, the main obstacles to what I needed to know were deeply biological. How exactly does the mind "light up" certain areas of memory while recalling things (sounds, sights, etc.) stored nearby? How "randomized" is this? And how can it be represented mathematically (on a von Neumann architecture)? Is there ANYTHING we do that is not entirely memory-based (even arithmetic seems to stem from adaptive memory)? Why do we need to sleep, and what part of the whole "algorithm", if there is one, depends on this sleeping business? What exactly does it change in our reasoning?

    If we knew precisely some good answers, rather than the guesswork in literally all major textbooks, we could begin to model something useful and perhaps introduce much more powerful intelligence with all we have developed in NN, probabilistic systems, etc. I think once sentience is conquered, human beings are going to be immediately made inferior. It's just this abstraction business that is so damn complicated.

  • by xtracto ( 837672 ) * on Tuesday June 24, 2008 @06:48AM (#23914627) Journal

    This is an interesting post. As far as I knew, things like logic are in some way part of AI. One of my PhD supervisors is a monster in logic, and he has published in the AI Journal [Elsev.] (one of the most prestigious journals for AI).

    I think a lot of research on agents is related to logic, including but not limited to CTL, ATL, and first-order logic, combined with game theory.

    What I would suggest is looking at academia, mostly in Europe (UK, Netherlands [they are good at theoretical research], Germany). If you are good, you can surely get a post-doc (you can easily do a PhD, if you get funded by someone in your home country) and afterwards a full-time post which will lead you to a professorship.

    Of course, if what you want is to specifically work for the government, I do not think that any government agency (besides the ones related to academia) would fund theoretical research (as it is assumed that such work is done at universities).

  • by curmudgeon99 ( 1040054 ) on Tuesday June 24, 2008 @07:38AM (#23914875)
    An AI system must at its heart understand the two hemispheres of the human brain and how they process information differently. Though, for example, both hemispheres receive inputs from both eyes, how they process information is radically different. The right brain looks first at the outline of an object; then, as that outline is sketched out, it feeds that information up the column and more specificity is gained. The left hemisphere, being used to processing information in a linear, sequential manner, looks at individual items inside the image and tries to name them. These two separate processes constantly pass information across the corpus callosum, and that is how we get our consciousness. An AI system must do this cross-pollination. I have been working on various aspects of this idea for years in the Godwhale Project [google.com]. The first stop on anyone's journey to write this code is none other than Dr. Roger Sperry [Nobel Prize, 1980].
  • Artificial Life (Score:2, Interesting)

    by tj111 ( 1275078 ) on Tuesday June 24, 2008 @11:34AM (#23917869)
    Artificial Life is a relatively newer idea than Artificial Intelligence. Artificial Intelligence basically revolves around writing software that can mimic intelligence, although current techniques make that quite a difficult goal. Artificial Life sounds more like something you'd be interested in. As described by the ISCID [iscid.org]:

    Artificial Life does overlap with Artificial Intelligence but the two areas are very different in their approach and history. Artificial Life is concerned with specific life-oriented algorithms such as genetic algorithms which can mimic nature and its laws and therefore relates more to biology, whereas Artificial Intelligence tends to look at how human intelligence can be replicated, therefore relating more to psychology. ( http://www.iscid.org/encyclopedia/Artificial_Life [iscid.org])
