Supercomputing Science

U.S. Plan For "Thinking Machines" Repository

An anonymous reader writes "Information scientists organized by the US's NIST say they will create a "concept bank" that programmers can use to build thinking machines that reason about complex problems at the frontiers of knowledge — from advanced manufacturing to biomedicine. The agreement by ontologists — experts in word meanings and in using appropriate words to build actionable machine commands — outlines the critical functions of the Open Ontology Repository (OOR). More on the summit that produced the agreement here."
  • Awesome (Score:5, Insightful)

    by geekoid ( 135745 ) <dadinportland&yahoo,com> on Wednesday May 28, 2008 @07:25PM (#23578571) Homepage Journal
    If computer history tells us anything, they will create more data than we can understand in a short amount of time.
  • by GuardianBob420 ( 309353 ) on Wednesday May 28, 2008 @07:30PM (#23578635) Homepage
    I for one would like to welcome our thinking machine overlords...
    Singularity here we come!
  • by gweihir ( 88907 ) on Wednesday May 28, 2008 @07:40PM (#23578745)
    Somebody claims to be able to build a "thinking machine". All efforts so far have failed. There is reason to believe all efforts in the foreseeable future will also fail. It is even possible that all efforts ever will fail, as currently we do not even have theoretical results that would indicate this is possible.

    So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: funding. This is something people without a clue about information technology, but with money to give away, can relate to. Basically the same scam the speech recognition people have been pulling for something like 40 years now. Personally I find this highly unethical. When you confront these people, they typically admit the issue but claim that other good things come from their research. My impression is more that they are parasites indulging themselves at the expense of honest researchers who work on things that are both highly needed and actually have a good chance of producing usable results.
  • by somersault ( 912633 ) on Wednesday May 28, 2008 @07:45PM (#23578815) Homepage Journal
    Considering computers can't even truly understand the meaning behind stuff like 'do you want fries with that?' (sure you could program a computer to ask that and give the appropriate response.. in fact no understanding is required at all to work in a fast food store, but that's beside the point :p ), I don't think you need to worry so much about limiting their consciousness just yet.
  • by mangu ( 126918 ) on Wednesday May 28, 2008 @07:49PM (#23578865)
    It seems that computers with a capacity equivalent to human brains will be developed in the next twenty years or so.


    OK, I know, this prediction has been made before, but now it's for real, because the hardware capacity is well within the reach of Moore's law. To build a cluster of processors with the same data-handling capacity as a human brain is today well within the range of a mid-size research grant.


    Unfortunately, they have cried "wolf" too many times now, so most people will doubt this, but it's a reasonable prediction if one calculates how much total raw data-handling capacity the neurons in a human brain have. Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.
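
    A rough back-of-envelope version of that calculation, in the spirit of the estimate above (every figure below is a commonly cited ballpark number, not a measurement, and the "one operation per synaptic event" convention is an assumption):

```python
# Back-of-envelope estimate of the brain's raw "data-handling capacity".
# All numbers are commonly cited ballpark figures, not measurements.

def brain_ops_per_second(neurons: float = 1e11,
                         synapses_per_neuron: float = 1e4,
                         firing_rate_hz: float = 100.0) -> float:
    """Crude estimate: one 'operation' per synaptic event per spike."""
    return neurons * synapses_per_neuron * firing_rate_hz

if __name__ == "__main__":
    estimate = brain_ops_per_second()
    print(f"~{estimate:.1e} synaptic ops/sec")  # on the order of 1e17
```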

     

  • by mangu ( 126918 ) on Wednesday May 28, 2008 @07:52PM (#23578919)
    Every few years the same thing. Somebody claims to be able to reach India by navigating westward from Europe. All efforts so far have failed.


    So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: Funding. This is something people without a clue about geography, but with money to give away, can relate to.

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Wednesday May 28, 2008 @08:00PM (#23579007) Homepage Journal
    The singularity is a myth.
    Like 'heaven' or any other far-off concept, it's a placeholder for people who can't imagine what comes next.

    When machines can imagine, then we will need to be careful, because at that point we become competitors.
    Of course, 'symbiont' might be a better term, at least until we automate all the steps needed to generate power for the machines.
  • by mrbluze ( 1034940 ) on Wednesday May 28, 2008 @08:42PM (#23579493) Journal

    Now most people would argue that a fly does not think, but it is clearly able to perform some sort of processing.

    Not wanting to labour the point too much, but...

    It's no different from a script that moves a clickable picture away from the mouse cursor once it approaches within a critical distance, so that you can never click on the picture (unless you're faster than the script); a rough sketch of that rule, as a plain coordinate simulation, is at the end of this comment.

    A fly's compound eye is a highly sensitive movement sensor, and the fly will react to anything big that moves, but if you don't move, the fly doesn't see you (its brain couldn't cope with that much information).

    Flies can learn, but only a limited amount, and I would argue a computer could well behave as a fly and perform a fly's functions. But is the fly thinking? I don't think the fly is consciously deciding anything, except that repeated stimuli that 'scare' it result in temporary sensitization to any other movement.

    Bacteria show similar memory behaviour but I wouldn't go so far as to call it 'thought'.
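
    For what it's worth, the evasion rule mentioned above can be sketched as a plain coordinate simulation (no GUI; the critical distance and jump size are arbitrary):

```python
import math

# Minimal simulation of the "un-clickable picture" rule described above:
# if the cursor comes within a critical distance, the picture jumps
# directly away from it. Distances and step size are arbitrary choices.

CRITICAL_DISTANCE = 50.0   # start evading when the cursor gets this close
JUMP_DISTANCE = 120.0      # how far the picture jumps away

def evade(picture, cursor):
    """Return the picture's new (x, y) given the cursor's (x, y)."""
    px, py = picture
    cx, cy = cursor
    dx, dy = px - cx, py - cy
    dist = math.hypot(dx, dy)
    if dist >= CRITICAL_DISTANCE or dist == 0:
        return picture  # cursor too far away (or exactly on top): stay put
    # Move along the cursor->picture direction, away from the cursor.
    scale = JUMP_DISTANCE / dist
    return (px + dx * scale, py + dy * scale)

print(evade(picture=(100.0, 100.0), cursor=(80.0, 100.0)))  # jumps away to the right
```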

  • by TRAyres ( 1294206 ) on Wednesday May 28, 2008 @09:03PM (#23579715) Homepage

    Lots of people are making posts about this vs. skynet, terminator, etc. But there are some problems with that (overly simplistic and totally misguided) comparison.


    There are numerous formal logic solvers that are able to come to either the correct answer (in the case of deterministic systems, for instance) or to the answer with the highest probability of being correct. The difference between the two should be made clear. Say I give the computer the following premises:

    A) All Italians are human. B) All humans are lightbulbs.

    What is the logical conclusion? The answer is that all Italians are lightbulbs. Of course, the premises of such an argument are false, but a computer could work out the formally correct conclusion (a toy sketch of such a solver appears at the end of this comment).


    The problem these people seem to be solving is that there needs to be a unified way to input such propositions, and a properly robust and advanced solver that is generic and agreed upon. Basically this is EXACTLY what is needed in order to move beyond a research stage, where each lab uses its own pet language.


    I mentioned determinism, because the example I gave contained the solution in the premises. What if I said, "My chest hurts. What is the most likely cause of my pain?" An expert system (http://en.wikipedia.org/wiki/Expert_system) can take a probability function and return that the most likely cause is... (whatever, I'm not a doctor!). But what if I had multiple systems? The logic becomes more fuzzy! So there needs to be an efficient way to implement it, AND draw worthwhile conclusions. Such conclusions can be wrong, but they are the best guess (the difference between omniscient and rational, or bounded rational).


    None of these things are relating to some kind of 'skynet' intelligence.


    IF you DID want to get skynet-like intelligence, having a useful logic system (like what is planned here) would be the first step, and would allow you to do things like planning, for instance. If I told a robot, "Be careful about crossing the street," it would be too costly to try to train it to replicate human thought exactly. But it records and understands language well (at this point), so what can we extract from that language?


    Essentially, this is from the school of thought that we need to play to computers' strengths when thinking about designing human-like intelligence, rather than replicating human thought processes from the ground up (which will happen eventually, either through artificial neurons or through simulation of increasingly large batches of neurons). On the other hand, if such simulations lead to the conclusion that human-level consciousness requires more than the model we have, it will lead to a revolution in neuroscience, because we will require a more complex model.


    I really can't wait to get more into this, and really hope it isn't just bluster.


    Also:

    The 'Thinking Machines' title is inflammatory and incorrect, if we use the traditional human as the gauge for the term 'thought'. What is taking place is a highly formalized and rigorous machine interpretation of human thought, and it will not breed human-level intelligence.
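
    A toy sketch of the deterministic case described above, as a forward-chaining pass over "all X are Y" premises (the rule format is invented for illustration and is not the OOR's actual representation):

```python
# Toy forward-chaining solver for "All X are Y" premises, illustrating the
# deterministic syllogism example above. The representation is made up for
# illustration only.

def derive_all(rules):
    """Given pairs (X, Y) meaning 'all X are Y', return their transitive closure."""
    facts = set(rules)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))   # all A are B, all B are D => all A are D
                    changed = True
    return facts

premises = [("Italian", "human"), ("human", "lightbulb")]
conclusions = derive_all(premises)
print(("Italian", "lightbulb") in conclusions)  # True: formally valid, premises false
```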

  • by BiggerIsBetter ( 682164 ) on Wednesday May 28, 2008 @09:10PM (#23579789)
    I think you'd be wrong about that. I suspect we'll get this working with a small but well designed framework running on a low overhead OS, because part of the deal with these things is that so much of it is self-organizing (or at least, organizes itself based on a template). Once we get the model right (and it might be very similar to cockroach-esque models currently working), most of the resources should be directly usable for the e-brain.
  • by idlemachine ( 732136 ) on Wednesday May 28, 2008 @09:27PM (#23579975)
    I'm really over this current misuse of "ontology", which is "the branch of metaphysics that addresses the nature or essential characteristics of being and of things that exist; the study of being qua being". Even if you accept the more recent usage of "a structure of concepts or entities within a domain, organized by relationships; a system model" (which I don't), there's still a lot more involved than knowing "appropriate words to build actionable machine commands".

    Putting tags on your del.icio.us links doesn't make you an ontologist any more than using object oriented methodologies makes you a platonist. I think the correct label for those who misappropriate terminology from other domains (for no other seeming reason than to make them sound clever) is "wanker". Hell, call yourselves "wankologists" for all I care, just don't steal from other domains because "tagger" sounds so lame.

  • by cumin ( 1141433 ) on Wednesday May 28, 2008 @09:28PM (#23579985)

    I called my cable company the other day and got an automated response that asked questions and responded, not only with words and instructions but also with a modem reset. The computer system could ask questions, determine responses and perform actions. Yes, it was limited, but decades ago it would have been considered awe-inspiring and doubtless would have been dubbed both a successful artificial intelligence and a thinking machine.

    What then is the proper definition of a thinking machine? We already have computers that can follow complex logic paths to arrive at unexpected results (bugs?) and offer solutions we would not have foreseen on our own. The result is similar to having a conversation with an expert in an unfamiliar field.

    As machines, both hardware and software, become more complex and capable, we are already raising the bar for what we consider artificial intelligence. Doubtless we will continue to do so for quite some time, but when you can talk with a machine built on the ability to work with volumes of processable knowledge such as is being compiled in the OOR, how will we raise the bar?

    Historically, humanity has considered people it deemed unlike itself to be less than fully human. As the majority of our species progresses toward a more inclusive standard, our language and perception are becoming inadequate to differentiate a human from a very advanced machine. Already most of us consider race, language, geography, age and affiliation irrelevant to defining what makes someone human. Biology is even a wavering standard, since we consider people with prosthetics to have human rights, while human bodies without the ability to think (vegetative states) have none. We are left with the ability to think, rather than biology, as the standard, but the definition of thinking is somewhat hazy to say the least.

    I think therefore I am, but what does it mean to say "I think" and how do you define thinking without biology in an external entity?
