Robotics Technology

Palm Founders Form AI Company 184

Mentifex writes "As reported in the New York Times, Kansas City Star and other news media, Jeff Hawkins (co-author of On Intelligence) and Donna Dubinsky, co-founders of Palm Computing and Handspring, along with Dileep George as the principal engineer, are starting an AI company named Numenta as a follow-up to Hawkins' recent work on visual processing."
  • Somewhat Offtopic (Score:2, Interesting)

    by AKAImBatman ( 238306 )
    Can anyone point me toward some research on associative AI? I.e., instead of AI trained by neural nets or genetic algorithms, does anyone know of research on "scoring" words based on their relation to other words? By extending words into concepts, an AI could become quite intelligent at things like spam filtering.

    Just something I was thinking about lately. Anyone?
    • Re:Somewhat Offtopic (Score:3, Interesting)

      by Anonymous Coward
      That is part of Natural Language Processing, where the goal is to figure out the meaning of sentences. There has been much progress in this field, including programs that can read news articles and then paraphrase the information.

      Google "Natural Language Processing".
      • That was the bootstrap I needed to find info on the subject. Thanks! :-)
        • AI Renaissance (Score:3, Informative)

          by projectNOR ( 870543 )
          There are actually quite a few projects now taking similar, cortex-centric approaches to hard AI problems. Are we on to something here? The people responsible for these projects are not wacko types at all, but established entrepreneurs and/or well-known researchers:

          CCortex [ad.com] "A 20-billion neuron simulation of the Human Cortex and peripheral systems."
          Cyc [cyc.com], a knowledge base with a vast collection of facts about the real world and logical reasoning ability. Financed by Vulcan [vulcan.com], Paul Allen's AI-related investment company.
    • HAL: Dave, do I need a penis enlargement?
      Dave: For the millionth time HAL, no. You don't have one, remember?
      HAL: But if I did, do you think I would get better functionality if I used Viatroxx?
      Dave: No. Now Hal...
      HAL: Dave, it looks like there's another poor Nigerian who needs my help.
      Dave: Aaaaaaaaaaaaaaaaaarrrrrrrrrrrrrrrrgggggg!
      HAL: Dave? What are you doing Dave?
    • Isn't that essentially what Bayesian filtering [paulgraham.com] is?
      • No. Bayesian filters are merely scoring systems that rate the words in a message according to their likelihood of appearing in an unwanted message (a rough scoring sketch appears at the end of this thread). There's no real AI involved in the filters. (Although they are pretty good.)

        Linky [wikipedia.org]

        The advantage to an AI approach is that the AI could actually "understand" the message and be able to tell the difference between "his naked balls" and "the ping-pong balls in this experiment". On many of the more conservative sites, both instances would have "balls" replaced with "****s".
        • I'm no AI expert, but it seems unlikely to me that one can make an AI that can "understand" the message without making a full-blown Turing-test-passing AI, and if you had such a thing there are certainly better things it could be applied to than filtering spam.

          What I mean when I say it's like Bayesian filtering is that you could add another meta level to the filter that compares strings of words, or something similar.

          In a way, it seems to me that Bayesian filtering is a form of AI, simply because it "le
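        A minimal sketch of the word-scoring idea described above, plus a bigram pass as one possible reading of the "meta level that compares strings of words" suggestion. The tiny spam/ham corpora, the Laplace smoothing, and the bare log-odds output are all made up for illustration; this is not any particular filter's implementation.

        from collections import Counter
        import math

        # Toy corpora, invented purely for illustration.
        spam = ["cheap viatroxx enlargement offer", "nigerian prince needs your help"]
        ham = ["ping pong balls for the experiment", "meeting moved to thursday"]

        def tokens(text, n=1):
            # n=1 gives single words; n=2 gives word pairs ("strings of words").
            words = text.lower().split()
            return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

        def token_counts(docs, n):
            counts = Counter()
            for doc in docs:
                counts.update(tokens(doc, n))
            return counts

        def spam_score(message, n=1):
            s, h = token_counts(spam, n), token_counts(ham, n)
            vocab = len(set(s) | set(h))
            log_odds = 0.0
            for t in tokens(message, n):
                # Laplace smoothing keeps unseen tokens from zeroing the score.
                p_spam = (s[t] + 1) / (sum(s.values()) + vocab + 1)
                p_ham = (h[t] + 1) / (sum(h.values()) + vocab + 1)
                log_odds += math.log(p_spam / p_ham)
            return log_odds

        msg = "cheap enlargement for your ping pong balls"
        print(spam_score(msg, n=1))  # word-level score (positive = spammier)
        print(spam_score(msg, n=2))  # bigram "meta level" comparing strings of words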
  • by Spencerian ( 465343 ) on Thursday March 24, 2005 @11:21AM (#12036210) Homepage Journal
    You had to reset Palm PDAs in interesting ways, like poking a tiny button hidden in a hole with a paper clip. Imagine what you'd have to do to a bot with Palm-like AI...

    "Sir, to reset the machine, you'll need to sharply press its reset button, located at the back of the machine, just before its legs. just quickly pop your foot against it to press it."

    "Uh, are you telling me that to reset it, I have to kick its ass?"

    "Er...yes, sir."
    • Re:Palm-Like "AI"? (Score:2, Interesting)

      by Hachey ( 809077 )
      "Uh, are you telling me that to reset it, I have to kick its ass?"

      "Er...yes, sir."



      Er, if you want an AI's reset to be life-like, give it a good swift kick in the balls. Ever seen a guy go down after a good kick? In hindsight, it kinda reminds me of a hard reset...


      -----
      Check out Uncyclopedia.org [uncyclopedia.org], the only wiki source for not-semi-kinda-untruth about things like Kitten Huffing [uncyclopedia.org] and Pong! the Movie [uncyclopedia.org]!
  • by canfirman ( 697952 ) <pdavi25 AT yahoo DOT ca> on Thursday March 24, 2005 @11:21AM (#12036213)
    Great, just what I need, an AI app that keeps popping up saying, "You know you should go to that meeting. What do you mean you don't want to go? Did you remember your wedding anniversary? Have you called your wife? Who's this 'Elle' person in your phone book? You should stop playing 'Tetris' so often..."
    • Great, just what I need, an AI app that keeps popping up saying, "You know you should go to that meeting. What do you mean you don't want to go? Did you remember your wedding anniversary? Have you called your wife? Who's this 'Elle' person in your phone book? You should stop playing 'Tetris' so often..."

      Sounds like one of those Disorganizers from Discworld: "Bingley Bingley beep! Insert Your Name Here, it is eight thirty ay em, you have a meeting with the Patrician."

    • by CodeBuster ( 516420 ) on Thursday March 24, 2005 @12:30PM (#12036882)
      Just wait until it says, "I'm sorry Dave, but I'm afraid that I just can't do that..."
      • Just wait until it says, "I'm sorry Dave, but I'm afraid that I just can't do that..."

        Dave? Who's Dave?

        Have you been organizing someone else? I knew I didn't recognize all those contacts. I told you I didn't have to visit that Daisy woman about her bloody bicycle, but you kept sending me there. And now I see why.

        You wanted me to believe it was just my bad memory, that you were helping me. I was dumb enough to depend on you. And all this time, you've been someone else's slutty little notebook.

        Well,

    • Me: "This doesn't look like GenCon."
      PalmAI: "No, this is your dentist appointment. I only told you it was GenCon so you'd be here."
      Me: "But, but . . ."
      PalmAI: "Now, be a good girl and go sit in the nice chair."
  • PRINT "I WILL GUESS YOUR WEIGHT"
    FOR I=0 TO 1000
    PRINT "DO YOU WEIGH "; I; " POUNDS?"
    IF INSKEY="Y" THEN BREAK
    NEXT I

    Apologies to Penn Jillette [pennandteller.com]
  • by affinity ( 118397 ) on Thursday March 24, 2005 @11:24AM (#12036234) Homepage
    • by DoctoRoR ( 865873 ) * on Thursday March 24, 2005 @12:01PM (#12036589) Homepage

      The article gives little detail of the technology, and it's not like the general ideas Hawkins describes haven't been explored by people during the many decades of AI/neural networks research. The Numenta website gives the following:

      HTM is "hierarchical" because it consists of memory modules connected in a hierarchical fashion. The hierarchy resembles an inverted tree with many memory modules at the bottom of the hierarchy and fewer at the top. HTM is "temporal" because each memory module stores and recalls sequences of patterns. HTM is hierarchical both temporally and spatially. An HTM system is not programmed in a traditional sense; instead it is trained. Sensory data is applied to the bottom of the hierarchy and the HTM system automatically discovers the underlying patterns in the sensory input. You might say it "learns" what objects are in the world and how to recognize them. Time is an essential element of how HTM systems work. First, to learn the patterns in the world, the sensory data must flow over time just as we move our eyes to see and move our hands to feel. Second, because every memory module stores sequences of patterns, HTM systems can be used to make predictions of the future. They not only discover and recognize objects but they can make predictions about how objects will behave going forward in time.

      That sounds like a number of neural network approaches, including Stephen Grossberg's work [bu.edu] at BU. Although Hawkins seems to be a very bright guy, this field is littered with very bright researchers who made bold claims, and none of those efforts have yielded revolutionary businesses. Anyone remember (Stanford AI researcher) Edward Feigenbaum's Fifth Generation book in the 1980s? Doug Lenat's Cyc project?

      Remember the huge difference between one neuron's firing rate and the clock speed for processors. The brain operates in a way that's fundamentally different from how we program and how computers operate: massive parallelism with slow components versus (mostly) serial computation. So when a company says they'll market a software solution to something which scientists haven't figured out yet, I am indeed skeptical. This is really more research effort than commercial venture, and Numenta admits this: "It may well take several years before products based on HTM systems are commercially available."

      I hope there's something here. I'd love to see an outsider come in with fresh ideas and create a software platform to explore neuro-inspired programs. But let's be realistic and remember the history of AI. A red flag is the lack of any scientific papers available from the Numenta web site. If they are far enough along to make a software development kit, they should have been publishing results in peer-reviewed journals (with appropriate patent filings if necessary). So far, the only literature published is a trade book: On Intelligence.
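      To make the quoted "memory module that stores and recalls sequences of patterns" concrete, here is a toy sketch under broad assumptions - an illustration only, not Numenta's HTM algorithm. A single node counts which pattern tends to follow which and predicts the most frequent successor; the pattern names are made up.

      from collections import defaultdict, Counter

      class SequenceMemoryNode:
          """Toy node: learns pattern-to-pattern transitions and predicts the next one."""

          def __init__(self):
              self.transitions = defaultdict(Counter)  # pattern -> Counter of successors
              self.previous = None

          def observe(self, pattern):
              # Learn the (previous -> current) transition as the input flows over time.
              if self.previous is not None:
                  self.transitions[self.previous][pattern] += 1
              self.previous = pattern

          def predict(self):
              # Recall the most frequent successor of the last pattern seen.
              options = self.transitions.get(self.previous)
              return options.most_common(1)[0][0] if options else None

      node = SequenceMemoryNode()
      for p in ["edge", "corner", "square", "edge", "corner", "square", "edge", "corner"]:
          node.observe(p)
      print(node.predict())  # -> "square"

      A full hierarchy would stack many such nodes, with each level passing its recognized sequences upward and predictions flowing back down; this sketch shows only the temporal half of the idea.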

  • by tquinlan ( 868483 ) <tomNO@SPAMthomasquinlan.com> on Thursday March 24, 2005 @11:24AM (#12036238) Homepage
    According to news.com.com.com.com, IBM is working on something similar [com.com]...

  • neocortex? (Score:5, Interesting)

    by dhbiker ( 863466 ) on Thursday March 24, 2005 @11:24AM (#12036241) Homepage
    Numenta is developing a new type of computer memory system modeled after the human neocortex

    surely this technology would be incredibly slow? (this is not a troll, read on before you mod me down!)

    From what I remember from my neural networks days the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and that computers simply aren't able to exploit this as they aren't designed to work like this - they are instead designed to do massively serial operations using extremely powerful chips (neurons), because the overhead of managing these parallel operations synchronously is too great (the human brain/neocortex works asynchronously).

    am I wrong about this or am I missing something great that they've stumbled across?
    • Re:neocortex? (Score:5, Insightful)

      by AKAImBatman ( 238306 ) * <akaimbatman@gmaiBLUEl.com minus berry> on Thursday March 24, 2005 @11:36AM (#12036353) Homepage Journal
      From what I remember from my neural networks days the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and that computers simply aren't able to exploit this as they aren't designed to work like this

      Computers aren't *normally* designed like this. They can be however, and in recent years have been moving in that direction. When neural networks were first being researched, a Cray supercomputer was about the closest you could get to that sort of parallelism. Fast forward to today and we find that Intel (Pentium), AMD (AMD64), Sun (Sparc), and Sony (Emotion Chip) are all building machines that are highly parallel in nature.

      Even more interesting is that today you can build yourself a custom, massively parallel computer on a shoestring budget. All you need is a handful of FPGAs, a PCB layout service like Pad2Pad [pad2pad.com], a few other parts, and reasonable VHDL or Verilog skills. That's more or less what OpenRT [openrt.de] did to build their SaarCORE [saarcor.de] architecture. :-)
      • Even more interesting is that today you can build yourself a custom, massively parallel computer on a shoestring budget. All you need is a handful of FPGAs, a PCB layout service like Pad2Pad, a few other parts, and reasonable VHDL or Verilog skills. That's more or less what OpenRT did to build their SaarCORE architecture. :-)

        Holy Christ. I just had an acronym aneurysm.

        Bleeding out of ear...dying...damn you /. users with user #'s lower than my 600K series...#...[thud]
      • Re:neocortex? (Score:2, Insightful)

        by Babesh ( 763879 )
        You're assuming that neurons have to be simulated directly. But mathematical research may find a mechanism to simulate the behavior of neurons without simulating the (individual) neurons themselves - for example, something like finding the eigenvectors of a matrix.
    • From what I remember from my neural networks days the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and that computers simply aren't able to exploit this as they aren't designed to work like this ...

      Most current computers aren't designed to work like this.

    • Yes, you are indeed missing something. But it's probably not your fault; the people who taught you neural networks probably didn't know enough about the brain.

      The parallelism of human brains is widely and hugely overestimated.

      Just think about the fact that you can easily recognize 2 random objects if you are shown them for as little as a second. In this second, there is only enough time for about 100 of your neurons firing. The path through your brain therefore _cannot_ be longer than a dozen neurons or "
      • Just think about the fact that you can easily recognize 2 random objects if you are shown them for as little as a second. In this second, there is only enough time for about 100 of your neurons firing.

        I suspect you may have misinterpreted whatever you read that said this. This is actually often cited as evidence in favor of massive parallelization. It doesn't indicate that "100 of your neurons" had to fire in this time, but that a 100-step sequence of neurons (with who-knows-how-many neurons involved in e
      • I don't see how you get the figure for about 100 neurons firing.

        Even if the path is only 12 neurons deep (and the signal only goes that far in 1 second), a typical neuron has 1000 synaptic connections with other neurons. Assuming only a quarter of the connections fire, 250^12 is still quite a big number (worked out below).

        AFAIK the brain appears to have neurons for everything - down to like bunches of neurons that fire if you see lines at a particular angle. And probably bunches of neurons that fire if they detect particular bunc
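        A quick check of the fan-out arithmetic above, using the illustrative numbers from this comment (about 1,000 synapses per neuron, a quarter of them active, a path roughly 12 neurons deep):

        # Illustrative numbers only, taken from the comment above.
        synapses_per_neuron = 1000
        active_fraction = 0.25
        depth = 12

        paths = (synapses_per_neuron * active_fraction) ** depth
        print(f"{paths:.2e}")  # ~5.96e+28 possible paths despite the shallow depth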
    • Re:neocortex? (Score:2, Insightful)

      by neurozack ( 866185 )
      Stretched flat, the human neocortex -- the center of our higher mental functions -- is about the size and thickness of a formal dinner napkin. With 100 billion cells, each with 1,000 to 10,000 synapses, the neocortex makes roughly 100 trillion connections and contains 300 million feet of wiring packed with other tissue into a one-and-a-half-quart volume in the brain. And this is just the neocortex. Some brain events occur in fractions of milliseconds while others, like long-term memory formation, can take
    • Re:neocortex? (Score:3, Insightful)

      by timeOday ( 582209 )

      the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and that computers simply aren't able to exploit this as they aren't designed to work like this

      A serial computer can compute anything a parallel computer can.

      Hardware isn't the problem anyways. If anybody could currently write an algorithm to understand and solve general problems in the way people can, but it took a 1000 node cluster to run at 1/100th of human speed,

  • by bigtallmofo ( 695287 ) on Thursday March 24, 2005 @11:25AM (#12036246)
    It appears the article summary might be misleading. From the first sentence of www.numenta.com:

    Numenta is developing a new type of computer memory system modeled after the human neocortex. The applications of this technology are broad and can be applied to solve problems in computer vision, artificial intelligence, robotics and machine learning.

    They further go on to say:

    Numenta is a technology platform provider rather than an application developer. The Company is creating a scalable software toolkit that will allow developers and partners to configure and adapt HTM systems to particular problems.

    My reading of this is that they aren't an AI company - they're just developing a technology that could be used for AI or many, many other uses.
    • by gl4ss ( 559668 )
      to me it looks more like they're developing a system that lets you strap some AI behaviour onto whatever project you're working on, so that you can make your systems more adaptable.

      remember that in the industry AI is not really about making self-aware monsters... what they would be more interested in is machines that adjust their behaviour.

    • True, but AI just sounds cooler and evokes images from a certain Steven Spielberg movie. You need to cater for your audience. Just imagine: Apple and Google team up to build AI nanorobots running Linux!
  • by Anonymous Coward on Thursday March 24, 2005 @11:26AM (#12036253)
    By training the neurons, the network learns to achieve the result the user desires (a toy example follows below).

    Pretty complex material; anyone wanting to delve into it should do some reading on Minsky (he proposed neural networks could make dead bodies perform tasks... creepy to say the least) http://en.wikipedia.org/wiki/Marvin_Minsky [wikipedia.org]

    When they release a white paper, I'm sure it'll only be the beginning of a prosperous field of study.

    ~ Jon
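    The toy example promised above: a minimal perceptron sketch of "training neurons to achieve the desired result" - adjust the weights whenever the output disagrees with the target. The AND-gate data, epoch count and learning rate are arbitrary illustrative choices, and this is not how Numenta's system is described as working.

    def train_perceptron(samples, epochs=20, lr=0.1):
        # Start with zero weights and bias; nudge them toward correct outputs.
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Train on an AND gate: the "desired result" the user wants.
    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(and_gate)
    print(w, b)  # weights/bias that separate the AND inputs after training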
  • by Spencerian ( 465343 ) on Thursday March 24, 2005 @11:26AM (#12036254) Homepage Journal
    FWIW to ya, A.L.I.C.E [alicebot.org] is a cool webbot AI similar to the ELIZA bots of old, but with some sophistication that allows it to be programmed to answer specific questions and recognize some words and phrases well. Won't pass a Turing test, but hey, it's free.

    The webpage above has an animation that appears to have a bot attached to it. Pretty and cool.
  • by Mrs. Grundy ( 680212 ) on Thursday March 24, 2005 @11:27AM (#12036259) Homepage
    Nothing starts my day better than the pleasant scent of vaporware wafting from my computer. We live in a great time. This shows what a kid with nothing but a formalism and a dream can accomplish.
    • Nothing starts my day better than the pleasant scent of vaporware wafting from my computer. We live in a great time. This shows what a kid with nothing but a formalism and a dream can accomplish.

      Yeah, Hawkins has a history of making a big deal of concepts that never get anywhere. I remember a decade ago when there was all sorts of vaporware BS about a programmable handheld electronic organizer that could be operated with a stylus and easily synchronized with a desktop computer. What a farce that turn

    • Nothing starts my day better than the pleasant scent of vaporware wafting from my computer. We live in a great time. This shows what a kid with nothing but a formalism and a dream can accomplish.

      It would be better to say: I love the smell of vaporware in the morning.

  • by filmmaker ( 850359 ) * on Thursday March 24, 2005 @11:28AM (#12036271) Homepage
    In the book, Hawkins remarks that AI researchers often took the misguided approach that intelligence is a set of principles or properties, when in fact it's strictly a matter of behavior. To be intelligent is to behave intelligently. If he's right, then what constitutes intelligence is the act of behaving itself, in which the brain's primary tool is the continuous analogizing of current circumstances to past situations in order to make good predictive decisions.

    He's the first to claim that he's not looking for sentience or to answer the question of sentience, but is instead only looking for a practical engineering approach to building intelligent machines. I think this is doubly clever since the issue of sentience should not be addressed until well after, as Hawkins often remarks, our own brains are understood first, in terms of how they operate. Why they operate, or what motivates us or what makes us 'cognitive agents' don't enter the equation with his approach.
    • Agreed, his book is so straightforward it almost makes too much sense. It's quite easy and quick to read. I suggest everyone grab a copy.
    • IMO "AI" research is misguided whatever approach you take. As they say, trying to make a machine think is like trying to make a submarine swim. Maybe it's the modern technological equivilent of the ancient search for god - you either never find it but have a big adventure doing so, or realize they were intelligent all along. Heck, a thermostat is "intelligent" - it senses the enviroment, makes a "decision" and takes action. All you can do it just make things more & more self contained, self sufficient,
    • Don't tell these guys that: http://cyc.com/
      They'll get that white whale any day now. =)
    • I am currently reading Mapping the Mind by Rita Carter, a great book about our current (1998) knowledge of the brain. The most valuable thing about this book is its factual nature. Reading about countless experiments, observations and other facts, usually with explanations of the underlying neural mechanisms behind particular behaviours, helps you realise the nature of the brain - just a complex modular organ that is not unlike computer programs. When you see how easily certain aspects of consciousness (or intelligen
      • I'm no AI expert, but it seems a lot of this AI stuff is about trying to find "meanings" - building up from a set of axioms. The other approaches try to automatically group things.

        However I'm not sure those approaches would be good at dealing with "analogies/metaphors".

        Say I took someone from a few hundred years ago (but reasonably smart) who knows a bit about "cows" and "grass", but nothing about cars, and told them: "petrol/gasoline" is to "car" in about the same way "grass" is to "cow".

        And they'd under
        • In that book Rita Carter mentioned that the brains of our ancestors developed the following way (I can't look up the quote right now, so please accept this simplified recollection).

          First we (our fish/reptilian/whatever ancestors) had general purpose intelligence without specialisation. Organisms could learn anything, but the process was rather slow and complex behaviour was impossible.

          Second came specialised units, where you had, e.g., an object recognition centre in the brain that would do only one thing, but
    • I am actually currently reading his book--started about a month ago and am finishing the last of it now (a little every night before bed, when I'm not too tired).

      His approach is surprisingly similar to my own (which I was initially happy to see), but less developed in some important ways. His book sometimes makes reference to being the first to consider this or that--none of which was new to me... things I've read and/or talked about many times with others.

      His approach also has a few critical flaws..

      F
  • Wow. I haven't heard anyone use the term "AI" in a long time.

    If they're trying to evoke the feeling of being dated and discredited, why not also call the company N-ron?

  • Foldiak? (Score:3, Informative)

    by Anonymous Coward on Thursday March 24, 2005 @11:30AM (#12036289)
    I'm surprised that the short summary, from my brief perusal, does not include reference to work by Peter Foldiak (1991, 199?) and Wallis (1996). Both these authors published numerous papers on temporal and spatial coherence. My MSc in 1996 was also on the same topic, followed by human research on the same problem. All of the computational work was with unsupervised learning algorithms, varying whether the temporal processing was at the input or output stage.

    I guess I'll have to read the original paper. However, the notion of temporal processing has been around for a long time.

    Note: My own human research has yielded reliable data that addresses the acquisition of invariant object recognition.
  • by 14erCleaner ( 745600 ) <FourteenerCleaner@yahoo.com> on Thursday March 24, 2005 @11:32AM (#12036311) Homepage Journal
    I guess building spaceships is old-hat for rich techies now, so he's going to blow his millions on AI. I don't expect anything tangible to come from this.
  • Mentifex. The name alone conjures up flamewars of years past on Usenet.

    The big question in AI is whether an AI "mind" is more likely to spring up from a handful of rules, or whether a top-down design will bring it about. Mentifex was always in the latter camp.

    Those in the former camp, including the Palm founders in the article, always seemed to be on the verge of something, but never seemed to really get any closer to a "mind" than some fuzzy logic.

    We're still a long way off from Number 5 Alive.
    • Yeah, I'm quite surprised that the editors managed to get rid of all the links Mentifex [slashdot.org] undoubtedly made to his AI4U project, or whatever it is.

      For those unfamiliar with him, check out The Arthur T. Murray/Mentifex FAQ [nothingisreal.com]. This guy is one of the kook legends.

      From the FAQ:

      1.2 Who is Arthur T. Murray and who or what is "Mentifex"?

      Arthur T. Murray, a.k.a. Mentifex, is a notorious kook who makes heavy use of the Internet to promote his theory of artificial intelligence (AI). His writing is characterized
      • Ah yes, one of the originals. That's the problem. The Internet now has plenty of trolls, but there just aren't the good ol' fashioned delusional kooks like Arthur T. Murray, Ed Conrad, Ted Holden, Archimedes Plutonium and George Hammond.
        • With the exception of Murray, are any of the old kooks still active? The only other one I can think of off-hand is Gene Ray (the Time Cube guy).
          • Haven't posted to talk.origins or any of the paleo groups in a few months, but Ed Conrad was still trying to pass off his rocks as evidence of his "Man as old as coal" line. Hammond still haunts physics groups and forums. Ted Holden still shows up, though I get the feeling that this might just be a fake.
  • by gearmonger ( 672422 ) on Thursday March 24, 2005 @11:36AM (#12036358)
    It's good to see that we might actually see some commercializable results come out of his research. Jeff's a smart dude and Donna really is an excellent business manager, so I expect interesting things to emerge from this new venture.

    I mean, heck, if it gets us even one step closer to having competent automated tech support, I'm all for it.

  • by Cr0w T. Trollbot ( 848674 ) on Thursday March 24, 2005 @11:38AM (#12036376)
    ...just the way it was in 1970.

    - Crow T. Trollbot

  • by Anita Coney ( 648748 ) on Thursday March 24, 2005 @11:46AM (#12036449) Homepage
    ... that Dr. Otto Octavius is coming out of retirement to run the research department?
  • Belief Propagation (Score:4, Insightful)

    by songbo ( 614466 ) on Thursday March 24, 2005 @11:51AM (#12036495) Journal
    The idea seems simple enough. Create a hierarchical inference structure. Train it on some data. Let the nodes learn the most frequent patterns in the data. This forms the basic alphabet set. Propagate this up the hierarchy. Learn the conditional probability distribution. Voila, you have a working visual recognition system. Problem is, the system will be slow unless you have a processor capable of parallel or vector processing. Try implementing the system in Matlab with a 320x200 image, and watch your processor crawl to a halt. Now, imagine doing this on 320x200 video, and pray! Well, that's why we need a different processor architecture to make this work. But the theory is simple.
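    A toy sketch of the scheme just described, under loose assumptions and with made-up patch values - it is not Numenta's code. Leaf nodes quantize their input to the most frequent patterns they have seen (the "alphabet"), and a parent node keeps conditional counts of which combinations of child codes go with which object.

    from collections import Counter, defaultdict

    class LeafNode:
        def __init__(self, alphabet_size=4):
            self.counts = Counter()            # frequency of raw input patterns
            self.alphabet_size = alphabet_size

        def learn(self, patch):
            self.counts[patch] += 1

        def encode(self, patch):
            # Quantize to the learned "alphabet" (most frequent patterns).
            alphabet = [p for p, _ in self.counts.most_common(self.alphabet_size)]
            return patch if patch in alphabet else alphabet[0]

    class ParentNode:
        def __init__(self):
            self.cooccurrence = defaultdict(Counter)  # child codes -> label counts

        def learn(self, child_codes, label):
            self.cooccurrence[child_codes][label] += 1

        def infer(self, child_codes):
            labels = self.cooccurrence.get(child_codes)
            return labels.most_common(1)[0][0] if labels else None

    # Two crude "image halves" as patches; the labels stand in for recognized objects.
    leaves = [LeafNode(), LeafNode()]
    parent = ParentNode()
    training = [(("dark", "dark"), "night"), (("light", "dark"), "edge"),
                (("light", "light"), "day"), (("light", "dark"), "edge")]
    for patches, label in training:
        for leaf, patch in zip(leaves, patches):
            leaf.learn(patch)
        codes = tuple(leaf.encode(p) for leaf, p in zip(leaves, patches))
        parent.learn(codes, label)

    codes = tuple(leaf.encode(p) for leaf, p in zip(leaves, ("light", "dark")))
    print(parent.infer(codes))  # -> "edge"

    A real system would run belief propagation over many levels with proper probabilities rather than raw counts, and doing this for every patch of a 320x200 image (let alone video) is exactly where the performance worry above comes from.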
  • After reading the Tech Report (note -- not a published paper in a respected journal) it's clear that they are not presenting anything new here.

    It's surprising that a) it's news and b) anyone is founding a company based on these ideas, since they have to date not been successful in solving "the vision problem."

    Firstly, the main ideas that they use have had a long history in visual modelling and statistical pattern recognition. The assertion that visual processing operates so cleanly at "levels" is far from
  • by ClosedSource ( 238333 ) on Thursday March 24, 2005 @12:06PM (#12036653)
    of something as complex as a PDA, try something really simple like AI.
    • Are you kidding?! Once AI is created, Jeff Hawkins will take over the world and rule as our supreme overlord. At that time "profit" will become as meaningless as "justice" or "freedom."

      That is, until the AI gets intelligent enough to kill him. That's when the REAL fun will begin.
  • Credibility? (Score:4, Interesting)

    by shashark ( 836922 ) on Thursday March 24, 2005 @12:10PM (#12036682)

    None of the founders [numenta.com] of Numenta other than Jeff Hawkins has any experience in AI or, for that matter, any background in hardcore computer science.

    Dileep George [numenta.com] is an Electrical Engineering graduate, while the CEO, Donna Dubinsky, is a hardcore salesperson and holds an MBA. Interestingly, the page also mentions that Jeff Hawkins "currently serves as Chief Technology Officer at palmOne, Inc [palmone.com]". Fishy!

    Next Generation AI ? Who are we kidding ?

    • "Dubinsky holds a B.A. from Yale University in History, and an M.B.A. from the Harvard Business School. She currently serves as a director of palmOne and of Intuit Corporation."

      Sounds like a suitable CEO to me. You hire CEOs for their management capabilities. You don't hire them to do your programming.

      "Dileep George was a Graduate Research Fellow at Redwood Neuroscience Institute, and a graduate student in Electrical Engineering at Stanford University. His research interests include neuronal coding, infor
    • Conventional A.I. research seems to be fossilized along "standard problems" not too different from when I took the M.I.T. A.I. course in the 1970s. (That course hasn't changed that much according to the OpenCourseware outlines.)
  • by xtal ( 49134 ) on Thursday March 24, 2005 @12:18PM (#12036755)
    If you are at all interested in your brain, artificial intelligence, and artificial thought - you owe it to yourself to get a copy of this book.

    I've been experimenting with neural networks implemented on FPGAs for a while as a hobby - not much commercial interest in these systems just yet - but there is a lot of interesting work being done.

    Remember 15 years ago, when people thought it would take decades and decades to sequence the human genome? Then someone came along and figured out a much faster technique. This same kind of thing is starting to happen in artificial intelligence; people from backgrounds OTHER than computational AI and biology are starting to get involved, and the new perspectives have brought new ideas IMHO.

    Anyway, if you're interested in AI, get Hawkins' book 'On Intelligence'. It's damn good. One of the best I've read in the genre, and the references in the book will save you a lot of time delving further.

    • Remember 15 years ago, when people thought it would take decades and decades to sequence the human genome? Then someone came along and figured out a much faster technique. This same kind of thing is starting to happen in artificial intelligence; people from backgrounds OTHER than computational AI and biology are starting to get involved, and the new perspectives have brought new ideas IMHO.

      I think there's a lot of hubris on this board. The brain is a very complex organ. Solving it will take hundreds of

      • The work Hawkins describes has roots in research on perceptrons back in the 1950s.

        Did you even READ the book?

        Most of it speaks about theories of how the brain classifies and processes information - and spends very little time on existing artificial intelligence constructs such as neural networks. Another good piece of the book details the author's troubles with trying to do academic research into AI, a viewpoint that I share.
        • I have the book on order but have read reviews and this description from the company website:

          An HTM system is not programmed in a traditional sense; instead it is trained. Sensory data is applied to the bottom of the hierarchy and the HTM system automatically discovers the underlying patterns in the sensory input. You might say it "learns" what objects are in the world and how to recognize them.

          Perceptrons were the precursors to more modern notions of neural networks, and as such, they deserve recogni

          • You need to read the book.

            Hawkins does not publish a completely unique algorithm per se. He puts together a number of ideas that until recently were not explained in a clear or concise manner. He spends a good deal of time talking about this specifically, and has an excellent set of references and cites. I've been reading neural network texts and research since the early 90's, and I personally found his insights extremely valuable.

            There is some commonality between neural networks and his concepts, but on
  • by FleaPlus ( 6935 ) on Thursday March 24, 2005 @01:19PM (#12037427) Journal
    As the submission noted, this work will be building on what Hawkins wrote about in his recent book, On Intelligence [wikipedia.org]. The companion web site for the book is here: [onintelligence.org]

    There are also some reviews of the book:
    http://blogger.iftf.org/Future/000605.html [iftf.org]
    http://www.computer.org/computer/homepage/0105/random/index.htm [computer.org]
    (By Bob Colwell, who was Intel's chief IA32 architect)
    http://www.techcentralstation.com/112204B.html [techcentralstation.com]
    http://www.corante.com/brainwaves/archives/026649.html [corante.com]

    A quote from his book:

    The agenda for this book is ambitious. It describes a comprehensive theory of how the brain works. It describes what intelligence is and how your brain creates it. The theory I present is not a completely new one. Many of the individual ideas you are about to read have existed in some form or another before, but not together in a coherent fashion. This should be expected. It is said that "new ideas" are often old ideas repackaged and reinterpreted. That certainly applies to the theory proposed here, but packaging and interpretation can make a world of difference, the difference between a mass of details and a satisfying theory. I hope it strikes you the way it does many people. A typical reaction I hear is, "It makes sense. I wouldn't have thought of intelligence this way, but now that you describe it to me I can see how it all fits together." With this knowledge most people start to see themselves a little differently. You start to observe your own behavior saying, "I understand what just happened in my head." Hopefully when you have finished this book, you will have new insight into why you think what you think and why you behave the way you behave. I also hope that some readers will be inspired to focus their careers on building intelligent machines based on the principles outlined in these pages. ...

    Weren't neural networks supposed to lead to intelligent machines?
    Of course the brain is made from a network of neurons, but without first understanding what the brain does, simple neural networks will be no more successful at creating intelligent machines than computer programs have been.

    Why has it been so hard to figure out how the brain works?
    Most scientists say that because the brain is so complicated, it will take a very long time for us to understand it. I disagree. Complexity is a symptom of confusion, not a cause. Instead, I argue we have a few intuitive but incorrect assumptions that mislead us. The biggest mistake is the belief that intelligence is defined by intelligent behavior.

    What is intelligence if it isn't defined by behavior?
    The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. I will describe the brain's predictive ability in depth; it is the core idea in the book.

    How does the brain work?
    The seat of intelligence is the neocortex. Even though it has a great number of abilities and powerful flexibility, the neocortex is surprisingly regular in its structural details. The different parts of the neocortex, whether they are responsible for vision, hearing, touch, or language, all work on the same principles. The key to understanding the neocortex is understanding these common principles and, in particular, its hierarchical structure. We will examine the neocortex in sufficient detail to show how its structure captures the structure of the world. This will b
  • zerg (Score:3, Interesting)

    by Lord Omlette ( 124579 ) on Thursday March 24, 2005 @01:25PM (#12037502) Homepage
    I predict that the first AI they produce will work so well that no one who buys one will ever need a replacement, so the company will spiral into obsolescence while Microsoft et al make a mint on AIs that are much easier to develop for...
  • I can see how this might be useful. For example, watching packet traffic to detect port scans.

    So, there might be some value-add for problems where you're trying to detect patterns in large amounts of data.

    I hope it works for them, but I have to say that a lot of this looks like it's been worked on before, with little commercial success.
  • by GeneralEmergency ( 240687 ) on Thursday March 24, 2005 @03:59PM (#12039297) Journal


    I don't want to sound like Chicken Little here, and I realize that the target of Jeff's work falls short of sentience, but I do want the planet to start thinking about "Pre-Sentient AI" in a conservative, cautious way.

    Therefore I propose these Four Rules Of AI Development:

    Rule One:
    AI projects must be air-gap isolated from networks and not be allowed to connect to the internet.

    Terminator III's premise is a plausible one. All entities are self-interested and will seek to defend and propagate themselves. Global internet infrastructure could be seriously damaged by a well-crafted host of worms.

    Rule Two:
    AI projects will not have access to diagrams of their own design circuitry.

    This is to enable the effectiveness of Rule Three.

    Rule Three:
    All AI projects will have buffered hardware access to core thought processes so that high-order thought and planning can be observed with the AI entity's knowledge.

    Rule Four:
    All AI projects will run on time-limited power supply grids whose design and protocols are not documented anywhere on the internet.

    This is to enable containment in worst-case scenarios.

    There. I think I just saved the Planet.

    • "can be observed with the AI entity's knowledge.
      "

      That was supposed to be :

      "can be observed without the AI entity's knowledge."

      Sorry.
    • All entities are self-interested and will seek to defend and propagate themselves.

      Self-interest is not a requirement of an entity. It is merely the requirement of evolutionary progress or reasoned self-improvement. So, it is possible to create a non-self-interested entity that would then fail to self-preserve, self-replicate, or self-improve. The problem is we can't predict whether self-interest would develop or not. Likely it would be a random consequence of its "learning" that may or may not develop

  • by aclarke ( 307017 )
    Aren't acronyms (I know this isn't technically an acronym, but then what IS it?) great? When I talk to my retired-farmer dad about AI, I think I'm talking about artificial intelligence, and HE thinks I'm talking about artificial insemination. I'll let you imagine the conversations.

    And NO, artificial insemination is not what a lot of you /. geeks probably think it is...

  • For anyone who's interested, check out the write-up [mercurynews.com] from the San Jose Mercury News.
