Supercomputing Science

U.S. Plan For "Thinking Machines" Repository

An anonymous reader writes "Information scientists organized by the US's NIST say they will create a 'concept bank' that programmers can use to build thinking machines that reason about complex problems at the frontiers of knowledge — from advanced manufacturing to biomedicine. The agreement by ontologists — experts in word meanings and in using appropriate words to build actionable machine commands — outlines the critical functions of the Open Ontology Repository (OOR). More on the summit that produced the agreement here."
  • Shit (Score:3, Funny)

    by Peter_The_Linux_Nerd ( 1292510 ) on Wednesday May 28, 2008 @06:21PM (#23578525)
    Shit, we really are going to have to start watching and learning from the terminator films now.
    • Re: (Score:2, Funny)

      by phtpht ( 1276828 )

      Shit, we really are going to have to start watching and learning from the terminator films now.

      At a geometric rate?

      • by Cybrex ( 156654 )
        Yes, and you'd better hurry. You've got until 2:14 am Eastern time, August 29th. (We'll fudge the year.)
  • Awesome (Score:5, Insightful)

    by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday May 28, 2008 @06:25PM (#23578571) Homepage Journal
    If computer history tells us anything, they will create more data than we can understand in a short amount of time.
    • Re: (Score:3, Interesting)

      by pilgrim23 ( 716938 )
      The Thinking Machine is a creation of Jacques Futrelle, if I recall his name right, and is actually Professor Van Dusen. That is the title given to a collection of detective stories about Van Dusen.
        Futrelle died aboard the Titanic.
      • by geekoid ( 135745 )
        That is the third time today I have read a reference to 'Van Dusen'.
        I'll need to do some research into these detective stories.
    • I'm sure the answer will be 42
      • by geekoid ( 135745 )
        Didn't see that one coming~

        So is 42 old, or has it become 'kitsch'?

        ob.Simpson:
        Gunter: You've gone from hip to boring. Why don't you call us when you get to kitsch?
        Cecil: Come on, Gunter, Kilto. If we hurry, we can still catch the heroin craze.
    • Re: (Score:2, Interesting)

      by umghhh ( 965931 )
      why would anybody want to understand this amount of data?

      I wonder sometimes why we humans do things, and after all these years spent here I still do not know. Let us take this little idea of building 'thinking' machines. So members of the human race are trying to build thinking machines - how splendid - while the majority of us cannot even spllel properly, not to mention read with understanding, some of us are arrogant enough to attempt to build a 'thinking' machine. Besides the technical challenges in the process -

      • by fugue ( 4373 )
        I think you answered your own question. Humans are a failure. Time for something better!
      • why would anybody want to understand this amount of data?

        Because data can be used to predict the future or get the future to do what you want.

        The financial field uses Quants [wikipedia.org] nearing super-genius levels - some tend to be autistic people who are really good at math - hired by the largest financial firms in the world to attempt to predict market trends.

        Imagine, if you would, an intelligent machine which could simply process the information given to it and provide something useful as a prediction as someth
    • Comment removed based on user account deletion
  • by Crayboff ( 1296193 ) on Wednesday May 28, 2008 @06:28PM (#23578605)
    Wow, this can be scary. I hope the US is investing in a primitive non-computerized emergency plan to destroy this project, in case of the uprising. There have to be strict limitations placed on this sort of system, not just 3 rules. This is one time when the lessons learned from fictional books/movies would come in handy. I'm serious too.
    • This is one time when the lessons learned from fictional books/movies would come in handy. I'm serious too.

      Like the bit in Star Wars when Luke Skywalker almost asked Leia out and, well, they would have had kids together and everything OMG! And lucky that C3P0 was such a patsy and ruined it for them. It was almost incestuous!

      Not that I've ever come across that in real life, but definitely brother-sister relationships are a no-no.

      (For example)

      • by Chris Burke ( 6130 ) on Wednesday May 28, 2008 @06:50PM (#23578881) Homepage
        Like the bit in Star Wars when Luke Skywalker almost asked Leia out and, well, they would have had kids together and everything OMG! And lucky that C3P0 was such a patsy and ruined it for them. It was almost incestuous!

        Not that I've ever come across that in real life, but definitely brother-sister relationships are a no-no.


        I know. I'm an only child -- as far as I know. So whenever I get shot down by a woman, I just remember the lesson of Star Wars, and figure that she was probably just my long lost sister so I'm better off anyway.
    • by somersault ( 912633 ) on Wednesday May 28, 2008 @06:45PM (#23578815) Homepage Journal
      Considering computers can't even truly understand the meaning behind stuff like 'do you want fries with that?' (sure you could program a computer to ask that and give the appropriate response.. in fact no understanding is required at all to work in a fast food store, but that's beside the point :p ), I don't think you need to worry so much about limiting their consciousness just yet.
      • by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday May 28, 2008 @06:50PM (#23578879) Homepage Journal
        You don't need to understand to think.
        Thinking doesn't mean cognition either.
          • Depends on your definitions really ;) I had a heated debate with one of my exes about the semantics of stuff like this before. It was rather stupid in hindsight; people shouldn't necessarily have to have exactly the same concept in their mind for words, as long as they understand that other people may be using them slightly differently. I used to try to point out that we meant the same thing but were expressing the ideas differently, which is sometimes true, but sometimes probably just a subtle attempt at manip
          • Re: (Score:2, Funny)

            by Anonymous Coward
            One of your exes? So it passed the Turing test?
            • Yeah she got the prize for best grades in her tri-state area apparently, she must have had to think a coupla times in her life.
      • Then again I'm not particularly worried about the conscious computers. I'm worried when the computer programmed to "find the best way to reduce national crime rate" decides the best way to do so is by triggering a nuclear war to wipe out the population.

        Note a computer that could do that is probably simpler than a computer that can understand "do you want fries with that".
        • Fair point. I think in that case, ask the computer *how*, but don't give it any guns or giant mechanised tanks ;) And it would probably be better to examine the crime rates, taxes, police, health and education spending etc and let the computer examine variations of those, rather than use capital punishment (though that could be a valid method too if it's shown to work well as a deterrent.. :s I don't think it does work well as a deterrent though, does it?)
          • Fair point. I think in that case, ask the computer *how*, but don't give it any guns or giant mechanised tanks ;) And it would probably be better to examine the crime rates, taxes, police, health and education spending etc and let the computer examine variations of those, rather than use capital punishment (though that could be a valid method too if it's shown to work well as a deterrent.. :s I don't think it does work well as a deterrent though, does it?)

            Asking for input, instead of allowing it to act, and limiting the options and variables it can use can help us avoid an undesirable solution.

            But the computers will keep getting smarter, and no matter how many safeguards we devise we're going to have to deal with the fact that it will be making decisions and plans we have no hope of understanding.

            • Seems it would be pretty easy to understand if you were keeping a sensible log of what is happening. It might take a while to review those logs, but I don't see how it's beyond understanding. Computers currently can't do anything that humans don't understand. Once we get them to the same level of 'understanding' as us, then perhaps they will be able to improve upon themselves faster than we can improve upon them. But if done properly, we would still have a record of how they reached the conclusions that the
              • The point where they start improving upon themselves is called the technological singularity.

                Now the definition of "smarter" is tricky when considering computers; by some metrics a wristwatch is smarter than any human alive, and that's part of the issue. But there are already instances where a computer is solving equations of the form Ax=y where no human understands the intricacies of the formula or the full effects of all the different x and y values; they just know it maps well to their real-world problem.

                A
                • Okay, fair enough. I at first suspected you were just one of those people who had watched too many movies but not thought about it ;) Having read a lot of Asimov when I was younger, he thought of lots of interesting situations, which hopefully would encourage people to be cautious. I didn't know what the singularity was that people were referring to before. There are probably several different types of singularity that could develop in that case though, as a computer that knows lots about microprocessor and
      • Yeah but what is understanding? When we understand someone's utterance our brains are engaging in a complex, physical, mechanical (yes, algorithmic) process involving a few different areas of our brains, and yet we are said to 'understand' the words spoken. It seems to me if we had a computer with a sufficiently complex algorithm we could accomplish 'understanding' in the same sense. This might be a good time to start coming up with working (we've already had plenty of theoretical) groundrules as to 1) wh
        • Yep, that makes sense. There are different types of understanding too though - like visualising how an object is going to move through 3D space before taking an action, and speech. I think those are usually linked in humans though, as they're both part of the left brain, and most humans manifest their best visual-spatial stuff with their right hand (though personally I'm left handed/slightly ambidextrous :P ).
    • Yesterday I spent a long time trying to swat a fly. The little bastard was extremely effective at self-preservation. Now most people would argue that a fly does not think, but it is clearly able to perform some sort of processing.

      Computer thought is probably no more advanced than that of a bug. Mars rovers etc. can only execute canned move sequences and don't operate autonomously. Some robots are more autonomous, but are still pretty limited when it comes to any biological equivalent.

      As much as people h

      • by mrbluze ( 1034940 ) on Wednesday May 28, 2008 @07:42PM (#23579493) Journal

        Now most people would argue that a fly does not think, but it is clearly able to perform some sort of processing.

        Not wanting to labour the point too much, but...

        It's no different to a script that moves a clickable picture away from the mouse cursor once it approaches a critical distance such that you can never click on the picture (unless you're faster than the script).

        A fly's compound eye is a highly sensitive movement sensor and the fly will react to anything big that moves, but if you don't move the fly doesn't see you (its brain wouldn't cope with that much information).

        Flies can learn, but only a limited amount, and I would argue a computer could well behave as a fly and perform a fly's functions. But is the fly thinking? I don't think the fly is consciously deciding anything except that repeated stimuli that 'scare' it result in temporary sensitization to any other movement.

        Bacteria show similar memory behaviour but I wouldn't go so far as to call it 'thought'.
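
        To the parent's point, the cursor-dodging script it describes really is only a few lines. A minimal sketch in Python/tkinter (the window size and reaction distance are invented for illustration):

        import tkinter as tk
        import random

        CRITICAL = 80  # px; the "fly" reacts only when the pointer gets this close

        root = tk.Tk()
        canvas = tk.Canvas(root, width=400, height=300)
        canvas.pack()
        fly = canvas.create_rectangle(180, 130, 220, 170, fill="black")

        def dodge(event):
            # pure stimulus-response: no state, no learning, no "decision"
            x1, y1, x2, y2 = canvas.coords(fly)
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            if abs(event.x - cx) < CRITICAL and abs(event.y - cy) < CRITICAL:
                canvas.move(fly, random.randint(-120, 120), random.randint(-120, 120))

        canvas.bind("<Motion>", dodge)
        root.mainloop()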

        • Humans can learn, but only a limited amount, and I would argue a computer could well behave as a human and perform a human's functions. But is the human thinking? I don't think the human is consciously deciding anything except reacting to stimuli by applying previously learnt actions they have stored in their memory.
        • A bug that thinks? I find the very idea offensive
      • by Anonymous Coward on Wednesday May 28, 2008 @09:10PM (#23580389)

        Computer thought is probably no more advanced than that of a bug

        That's the frightening part.

        Next time you find a bidirectional trail of ants in your home, try this little experiment:

        1) Monitor a 6-inch square. For the next 5 minutes, kill every ant entering that square. Use the same piece of paper towel and smear their guts a bit when you squish 'em.
        2) After 5 minutes, stop killing ants. Just watch individual ants for the next 30 minutes.
        3) Go to sleep. Look around the house 24-72 hours later. You'll find a completely different ant trail.

        "A human is smart. A mob of humans is dumb."
        - Men in Black

        Ants don't work like that.
        "An ant is stupid. A colony of ants is smart."

        Ants taught me what the word alien meant.

        • All you did was use the paper towel to wipe away the scent trail the ants were following.

          Ant trails are created by scouts doing a random walk. When they find something tasty they follow their own trail back to the nest, and all the other ants follow that same trail and also strengthen it.

          Occasionally an ant gets lost, starts a random walk, and often runs into the line again. If this new path is faster it will tend to displace the original line. Once the food is gone the ants will disperse where the food was
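
          A toy sketch of that reinforcement loop in Python (the deposit and evaporation constants are invented) shows the faster route ending up with nearly all the scent:

          import random

          # Two routes to the same food; the shorter one is traversed faster,
          # so it gets re-marked more often per unit time.
          length = {"short": 5, "long": 9}
          pheromone = {"short": 1.0, "long": 1.0}  # no initial preference

          for _ in range(2000):
              # each ant follows scent probabilistically...
              route = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
              pheromone[route] += 1.0 / length[route]  # ...and strengthens the trail it used
              for r in pheromone:
                  pheromone[r] *= 0.999  # evaporation erases unused trails

          print(pheromone)  # "short" dominates, without any single ant knowing why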
    • I, for one, believe in this [singinst.org], and welcome my new artificially intelligent overlord.
    • by khallow ( 566160 )
      Well, we better do the same for libraries, universities, religious buildings, markets, and other potential sources of ontology.
    • This has been a long time in the coming and has been bugging the hell out of me. This is where I see a lot of the "Community Contributions" involving Jeff Hawkins's recent endeavors [numenta.com]. If you take a look at some of the details of his models, the fact that DARPA and Lockheed Martin [cyperus.com] have taken an interest in his work, and his recent projects, things start to look scary.

      It is easy to envision the possible uses for his recent mundane technologies [itpro.co.uk]. Itinerary analysis and keyword-triggered speech recognition and rec
      • keyword triggered speech recognition and recording?

        Er...

        Wouldn't you have to be doing the speech reco in the first place to identify the keyword?

        That's a lot of processing unless the surveillance is fairly tightly targeted.

        I don't see it as a threat - Hawkins seems to be a touch overhyped from what I read.

  • I for one would like to welcome our thinking machine overlords...
    Singularity here we come!
    • As my experience with Singularity [emhsoft.com] has shown, "thinking machines" aren't all that good at thinking. Nine times out of ten, they decide to build a datacenter on the moon, and some jerk of a scientist with a telescope goes "Hey! That's a moon base", and before you know it he concludes that this means AI must currently exist, and somehow some strange virus of human design wipes out every single bit of the AI.

      Wait, wait, that was a game. In that case, all hail our thinking machine overlords. Please don't try t
    • Re: (Score:3, Insightful)

      by geekoid ( 135745 )
      Singularity is a myth.
      Like 'heaven', it is a distant-time concept for people who can't imagine what's next.

      When they can imagine, then we will need to be careful, because at that point we become a competitor.
      Of course symbiont might be a better term, until we automate all the steps to generate power for the machines.
  • by Anonymous Coward
    Just imagine how fast they could post catchphrases! They will hunt down low numbered users. AC is humanity's last hope for survival.
  • from TFA: OOR users, tasked with creating a computer program for manufacturing machines, for example, would be able to search multiple computer languages and formats for the unambiguous words and action commands.

    from my experience, the ambiguous words are the documentation, followed closely by the comments.

    "unambiguous words and action commands"? Is this what "experts in words" call a computer language syntax? now we're going from "you don't need to be no stinkin'
  • Forget about the "reasoning". The agreement is about creating standard ontologies in different fields (contexts). Personally, I think it will be very difficult, because first they will have to gather experts in all those fields (be it biomedicine or business processes) and define a way to express all this knowledge. Of course, OWL is the ontological language to use, but they will need a serious bunch of guidelines to keep the model consistent.
    • You forgot to mention that it will fail for the exact same reasons that Good Old-Fashioned AI has always failed. All the classifications in the ontology, when actually applied to any real-world problem, will turn out to be unexpectedly and hopelessly fragile.
  • Somebody claims to be able to build a ''thinking machine''. All efforts so far have failed. There is reason to believe all efforts in the foreseeable future will also fail. It is even possible that all efforts ever will fail, as currently we do not even have theoretical results that would indicate this is possible.

    So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: Funding. This is something people without a clue about information technology
    • Every few years the same thing. Somebody claims to be able to reach India by navigating westward from Europe. All efforts so far have failed.


      So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: Funding. This is something people without a clue about geography, but with money to give away, can relate to.

      • by gweihir ( 88907 )
        Not comparable: there was indication the earth was round and it was known that India exists, so there was at least a strong possibility this was feasible. With ''computer intelligence'' all current indicators in AI research say ''will be infeasible'', as there is absolutely no hint that it could work.
    • by somersault ( 912633 ) on Wednesday May 28, 2008 @06:58PM (#23578987) Homepage Journal
      What reason do you have to believe that all efforts will fail? A computer powerful enough to simulate all the cells in a brain would presumably be able to do everything a brain can do? Brains are like blank slates that then take 25 years of training before they are regarded as fit for specialised jobs - a computer that was capable of forming semantic links and organising them properly would be able to give the illusion of understanding, and in fact can do a passable job in limited domains (think, for example, of medical 'knowledge base' type systems which take symptoms and work out possible causes). It is beyond our current understanding to build a proper thinking computer, but that doesn't mean we shouldn't work towards it. If we did it properly then we really would be able to build computers that could work out logical and more objective conclusions for problems (given enough factual input data to allow them to make unbiased 'decisions').

      Unless you want to say that there is some mystical element to brains, there is nothing precluding the eventual design and building of 'sentient' computers, surely? Beyond our own fear of what would happen if we did such a thing, as evidenced by plenty of 20th century fiction. Building sentient computers could even be regarded as a type of evolution, as they would then be able to improve upon themselves at an exponential rate..
      • I called my cable company the other day and got an automated response that asked questions and responded, not only with words and instructions but also with a modem reset. The computer system could ask questions, determine responses and perform actions. Yes, it was limited, but in decades past it would have been considered awe-inspiring and doubtless would have been dubbed both a successful artificial intelligence and a thinking machine.

        What then is the proper definition of a thinking machine? We already have c

        • by slarrg ( 931336 )
          That's certainly more thinking than I've come to expect from the employees in my cable company.
      • by gweihir ( 88907 )
        What reason do you have to believe that all efforts will fail? A computer powerful enough to simulate all the cells in a brain would presumably be able to do everything a brain can do?

        That is completely unknown. First, it is possible that this computer cannot be built. Remember that there is indication the brain uses quantum effects. Second, it may well be impossible to program it, if it can be built. And third (without going religious here), it is possible that the brain alone is not enough. In short: We do
    • by FleaPlus ( 6935 ) on Wednesday May 28, 2008 @07:16PM (#23579177) Journal
      Actually the researchers themselves aren't saying anything at all about "thinking machines" -- that was just added by the blog summary. In fact, if you had read the document describing their plans [cim3.net], you would have seen that it doesn't even include the words "thinking," "AI," or "intelligence." All they want to do is create an Internet-accessible database of ontologies and ways for ontology-related services to interoperate. Your smears of them as "unethical" and "parasites" are completely uncalled for.
  • by clang_jangle ( 975789 ) on Wednesday May 28, 2008 @06:49PM (#23578863) Journal
    The summary isn't terribly clear, but according to TFA:

    The ontology wordsmiths envision an electronic OOR in which diverse collections of concepts (ontologies) such as dictionaries, compendiums of medical terminology, and classifications of products, could be stored, retrieved, and connected to various bodies of information. OOR users, tasked with creating a computer program for manufacturing machines, for example, would be able to search multiple computer languages and formats for the unambiguous words and action commands. Plans call for OOR's inventory to support the most advanced logic systems such as Resource Description Framework, Web Ontology Language and Common Logic, as well as standard Internet languages such as Extensible Markup Language (XML).


    It's merely intended as a convenient resource for programmers.
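
    For the curious, storing and querying such a collection is mundane in practice. A sketch using Python's rdflib (the namespace URL and the terms are invented for illustration, not taken from any real ontology):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    # A hypothetical slice of a medical-terminology ontology, as RDF triples.
    MED = Namespace("http://example.org/ontology/medicine#")

    g = Graph()
    g.add((MED.Aspirin, RDF.type, MED.Drug))
    g.add((MED.Aspirin, MED.treats, MED.Headache))
    g.add((MED.Headache, RDFS.subClassOf, MED.Symptom))
    g.add((MED.Aspirin, RDFS.label, Literal("acetylsalicylic acid")))

    # An OOR-style client could then ask: what does Aspirin treat?
    for _, _, what in g.triples((MED.Aspirin, MED.treats, None)):
        print(what)  # http://example.org/ontology/medicine#Headache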
  • by mangu ( 126918 ) on Wednesday May 28, 2008 @06:49PM (#23578865)
    It seems that computers with a capacity equivalent to human brains will be developed in the next twenty years or so.


    OK. I know, this prediction has been made before, but now it's for real, because the hardware capacity is well within the reach of Moore's law. To build a cluster of processors with the same data-handling capacity as a human brain is today well within the range of a mid-size research grant.


    Unfortunately, they have cried "wolf" too many times now, so most people will doubt this, but it's a reasonable prediction if one calculates how much total raw data-handling capacity the neurons in a human brain have. Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.
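
    The back-of-envelope arithmetic behind that kind of claim, using commonly cited round numbers (all of them rough, and estimates diverge by orders of magnitude):

    synapses = 1e14  # ~100 trillion synapses in a human brain
    rate_hz = 1e2    # ~100 Hz ceiling on neuron firing rates

    events_per_sec = synapses * rate_hz
    print(f"~{events_per_sec:.0e} synaptic events per second")  # ~1e16

    # How many FLOPS one synaptic event is "worth" is exactly where such
    # estimates disagree, and where the perennial 20-year horizon comes from.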

     

    • Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.

      But we will need much, much better hardware if we intend to program it in 20 years. You only need to look at Vista to see that programmers today don't care to, or can't, program with limited resources. And even when we get the hardware, no programming method has been found to replicate the human mind, meaning that we will need even more hardware to make it work, and even more hardware for the futuristic programming methods that will make Vista seem like it is well-coded. You only need to look at speech recog

      • Re: (Score:3, Insightful)

        I think you'd be wrong about that. I suspect we'll get this working with a small but well designed framework running on a low overhead OS, because part of the deal with these things is that so much of it is self-organizing (or at least, organizes itself based on a template). Once we get the model right (and it might be very similar to cockroach-esque models currently working), most of the resources should be directly usable for the e-brain.
      • Re: (Score:2, Interesting)

        by Anonymous Coward
        Speech recognition has improved dramatically in the last 20 years. Dragon NaturallySpeaking on an inexpensive PC can take dictation faster than most people. In the '80s the best supercomputers would struggle with a small speaker-dependent vocabulary. Better hardware has clearly made a huge difference.

        Better hardware is a necessary yet insufficient requirement for strong AI. There is still a lot to learn about how the human brain works and how to write software to emulate it. However, when you look at
      • by slarrg ( 931336 )

        Slow and error prone seems, to me, to be a large part of the human condition. Especially when you start sharing the information between and among various people. The human mind has fairly simple mechanisms (though they're difficult to study empirically) which mainly consist of networks of neurons. So you end up with a lot of data that is interconnected in very precise networks from which meaning is created. Often, these connections are not consistent in every individual (or perhaps never consistent for any

    • by PPH ( 736903 )

      It seems that computers with a capacity equivalent to human brains will be developed in the next twenty years or so.
      At which time they will spend all of their resources searching for porn on the 'net.
    • by QuantumG ( 50515 ) *
      A couple of years ago a survey was made of AI researchers. The questions were:

      1. Do you think there will be a major advance in general intelligence in the next 20 to 30 years?
      2. Is your research likely to be a contributing factor to this advance in general intelligence?

      The majority of respondents answered: Yes. No.

      So basically, everyone thinks something big is going to happen soon but few to no researchers are actually working on it.

    • ...but given enough hardware, developing the software is a matter of time.

      but guaranteed that time is more than 20 years. I've already lived through multiple 20 year "it must be possible by then" projections.

      it's like the ubiquitous 6 month projection to get a large project to a usable state. This goes all the way back too. No one has a clue, but it just seems like 6 months ought to be long enough to do it.

      to give you an idea of how empty the
      • ... put "reasoning like a human" at the 20 year mark and then, devoid of any thought of what technology might need to be developed, start working backwards with benchmarks of achievement that approach "reasoning like a human".

        The Stone Age lasted a few hundred thousand years. When we learned how to use metals, the Bronze Age lasted a few thousand years. Then came the Iron Age. We only learned how to make steel on an industrial scale in the nineteenth century; the Steel Age only lasted a hundred years, then

        • Technology accelerates exponentially, it's very risky to extrapolate from the past. We cannot work backwards and expect to get any reasonable predictions for the future.

          no, backwards from 20 years from now to today. what kind of steps would be needed over the next 20 years to get to "reasoning like a human", and when is all this acceleration going to take place, because there sure isn't anything taking place now.

          in other words, there is no basis for a 20 year proj
    • >> To build a cluster of processors with the same
      >> data-handling capacity of a human brain today
      >> is well within the range of a mid-size research grant

      Nope. The brain is a hundred billion neurons, connected by 100 trillion synapses. Sure the "clock frequency" is very low, but even taking this into account, those figures far exceed what could be built with today's technology. Not to mention that scientists today have absolutely no clue how major parts of the brain work, so even if hardware
  • When I first read the headline I thought it was referring to Thinking Machines of Danny Hillis fame. You know, the hypercubic CM series. "Do you know anyone who can network three Connection Machines and debug 2 million lines of code for what I bid for this job?"
  • Let's take the example of a simple idea: a pun. This is a word that, in a given context, can have more than one possible interpretation. One can classify either one or both of the interpretations as the ideas expressed, but that would be incorrect. Oftentimes it is the presence of both meanings that gives the pun a new meaning that joins the two contexts.

    It is the interconnections between contexts that generally give new insight into subjects. Repositories of existing concepts can only be used to explore

    • Fortunately, someone else has pointed out that this is not at all about thinking machines, only a repository of word concepts.

      However, the problem remains the same as that with most failed-and-doomed-to-failure AI research: the researchers have an excellent grasp of technology, but are usually operating on a fundamentally flawed model of how consciousness works.

      Essentially, we think about abstract things by repurposing the connections used for understanding the physical world (see Philosophy in the Flesh [amazon.com] an
  • Would be to create a computer-based system for assisted thinking. By that I mean something along the lines of what the visual thesaurus people have created, only which would allow people to populate their own interconnections. Something that would allow people to form easy ways of presenting the data they think about as well as interconnecting it. Currently we are sinking under the weight of the cross-referencing. It takes half a lifetime to train someone in some narrow subject because of interwoven n
  • by giampy ( 592646 ) on Wednesday May 28, 2008 @07:28PM (#23579327) Homepage

    The guys at Cyc [cyc.com] (look for the Wikipedia entry too) are already halfway there. Last time I checked there were already something like 5 million facts and rules in the database, and the point where new facts could be gathered automatically from the internet was very close.

    Many years ago I remember the founder (Doug Lenat) saying that practical general-purpose intelligence could be reached at ten million facts....

    We'll see within the next decade, I guess.

  • What sense is there in trying to encapsulate "concepts", particularly when phrased in language? Both of these are fluid and evolving. Attempting to archive a particular static state is at best a waste. Ontologists above all should know this.

    And maybe that's the point. For centuries ontology has existed primarily to serve itself and secondarily to trade favors with other branches of philosophy. The proposed project has the primary result of providing gainful employment outside the halls of academic philosophy
  • Building a standard "ontological repository" would seem to require establishing a structure within which its objects and relationships can be contained.

    While this might seem to be of benefit to extending the capabilities of some tasks like machine translation into broader fields, I think this might cause problems at the cutting edge, that is: machine reasoning.

    Reasoning about complex problems at the frontiers of knowledge (to paraphrase TFA) requires identifying new links and relationships between objects. Na

  • by TRAyres ( 1294206 ) on Wednesday May 28, 2008 @08:03PM (#23579715) Homepage

    Lots of people are making posts about this vs. skynet, terminator, etc. But there are some problems with that (overly simplistic and totally misguided) comment.


    There are numerous formal logic solvers that are able to come to either the correct answer (in the case of deterministic systems, for instance) or to the answer with the highest degree of success. The difference between the two should be made clear. Say I give the computer:

    A) All Italians are human. B) All humans are lightbulbs.

    What is the logical conclusion? The answer is that all Italians are lightbulbs. Of course, the premises of such an argument are false, but a computer could work out the formally correct conclusion.
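
    A toy forward-chaining solver that derives exactly that conclusion, as a sketch in Python (the representation is ad hoc, not any lab's pet language):

    # "All X are Y" rules, as (premise, conclusion) pairs.
    rules = {("italian", "human"), ("human", "lightbulb")}

    # Forward chaining: apply transitivity until no new rule appears.
    changed = True
    while changed:
        changed = False
        for a, b in list(rules):
            for c, d in list(rules):
                if b == c and (a, d) not in rules:
                    rules.add((a, d))  # from "all a are b" and "all b are d"
                    changed = True

    print(("italian", "lightbulb") in rules)  # True: all Italians are lightbulbs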


    The problem these people seem to be solving is that there needs to be a unified way to input such propositions, and a properly robust and advanced solver that is generic and agreed upon. Basically this is EXACTLY what is needed in order to move beyond a research stage, where each lab uses its own pet language.


    I mentioned determinism, because the example I gave contained the solution in the premises. What if I said, "My chest hurts. What is the most likely cause of my pain?" An expert system (http://en.wikipedia.org/wiki/Expert_system) can take a probability function and return that the most likely cause is... (whatever, I'm not a doctor!). But what if I had multiple systems? The logic becomes more fuzzy! So there needs to be an efficient way to implement it, AND draw worthwhile conclusions. Such conclusions can be wrong, but they are the best guess (the difference between omniscient and rational, or bounded rational).
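
    At its simplest, the probabilistic version is just weighted scoring over a symptom table. A naive sketch in Python (every weight here is invented, and a real expert system would also weigh priors and absent findings):

    # Invented weights, roughly P(symptom | condition); nothing medical about them.
    knowledge_base = {
        "heartburn":     {"chest pain": 0.6, "sour taste": 0.7},
        "pulled muscle": {"chest pain": 0.5, "pain on movement": 0.8},
        "angina":        {"chest pain": 0.9, "shortness of breath": 0.6},
    }

    def most_likely(symptoms):
        def score(condition):
            weights = knowledge_base[condition]
            s = 1.0
            for sym in symptoms:
                s *= weights.get(sym, 0.05)  # small default for unexplained symptoms
            return s
        return max(knowledge_base, key=score)

    print(most_likely({"chest pain", "shortness of breath"}))  # "angina", here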


    None of these things are relating to some kind of 'skynet' intelligence.


    IF you DID want to get skynet-like intelligence, having a useful logic system (like what is planned here) would be the first step, and would allow you to do things like planning, for instance. If I told a robot, "Careful about crossing the street," it would be too costly to try to train it to replicate human thought exactly. But it records and understands language well (at this point), so what can we extract from that language?


    Essentially, this is from the school of thought that we need to play to computers' strengths when thinking about designing human-like intelligence, rather than replicating the human thought processes from the ground up (which will happen eventually, either through artificial neurons, or through simulation of increasingly large batches of neurons). On the other hand, if such simulations lead to the conclusion that human-level consciousness requires more than the model we have, it will lead to a revolution in neuroscience, because we will require a more complex model.


    I really can't wait to get more into this, and really hope it isn't just bluster.


    Also:

    The 'Thinking Machines' title is inflammatory and incorrect, if we use the traditional human as the gauge for the term 'thought'. It is a highly formalized and rigorous machine interpretation of human thought that is taking place, and it will not breed human-level intelligence.

  • by idlemachine ( 732136 ) on Wednesday May 28, 2008 @08:27PM (#23579975)
    I'm really over this current misuse of "ontology", which is "the branch of metaphysics that addresses the nature or essential characteristics of being and of things that exist; the study of being qua being". Even if you accept the more recent usage of "a structure of concepts or entities within a domain, organized by relationships; a system model" (which I don't), there's still a lot more involved than knowing "appropriate words to build actionable machine commands".

    Putting tags on your del.icio.us links doesn't make you an ontologist any more than using object oriented methodologies makes you a platonist. I think the correct label for those who misappropriate terminology from other domains (for no other seeming reason than to make them sound clever) is "wanker". Hell, call yourselves "wankologists" for all I care, just don't steal from other domains because "tagger" sounds so lame.

  • Does this remind anybody else of the prime number ontological schema talked about in the Baroque Cycle (by Neal Stephenson)?
  • Maybe it's time for MIT and other tech Universities to start a Mentat degree?
  • This really ties in with this article: http://news.slashdot.org/article.pl?sid=08/05/28/2217230 [slashdot.org]

    So, we don't want to fund proper science, or proper education, but we want to build machines that can think for us, so we can concentrate on the important things, like believing that the war in Iraq is about bringing freedom and democracy to the poor people and that the world was created in 6 days (BTW, how can one even talk about days before the creation of Heaven and Earth, and crucially the sun?)

    Not that this k
  • Despite the capitalization, this is not a reference to the Thinking Machines Corporation [wikipedia.org], recently featured [thedailywtf.com] on the DailyWTF.
  • All this "agreement" is about is to have a repository for everybody's "ontology" data. It's like SourceForge, only less useful.

    Most of what goes in is tuples of the form (relation item1 item2); stuff like this:

    (above ceiling floor)
    (above roof ceiling)
    (above floor foundation)
    (in door wall)
    (in window wall)
    ...

    The idea is supposed to be that if you put in enough such "facts", intelligence will somehow emerge. The Cyc crowd has been doing that for twenty years, and it hasn't led to much.
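
    And the mechanical part really is that thin: about all such tuples support out of the box is transitive closure. A sketch in Python over the facts above:

    facts = {
        ("above", "ceiling", "floor"),
        ("above", "roof", "ceiling"),
        ("above", "floor", "foundation"),
        ("in", "door", "wall"),
        ("in", "window", "wall"),
    }

    # Derive (above x z) from (above x y) and (above y z) until nothing new appears.
    changed = True
    while changed:
        changed = False
        for r1, a, b in list(facts):
            for r2, c, d in list(facts):
                if r1 == r2 == "above" and b == c and ("above", a, d) not in facts:
                    facts.add(("above", a, d))
                    changed = True

    print(("above", "roof", "foundation") in facts)  # True - and that's about it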

    The class
