U.S. Plan For "Thinking Machines" Repository
An anonymous reader writes "Information scientists organized by the US's NIST say they will create a 'concept bank' that programmers can use to build thinking machines that reason about complex problems at the frontiers of knowledge — from advanced manufacturing to biomedicine. The agreement by ontologists — experts in word meanings and in using appropriate words to build actionable machine commands — outlines the critical functions of the Open Ontology Repository (OOR). More on the summit that produced the agreement here."
Shit (Score:3, Funny)
Re: (Score:2, Funny)
At a geometric rate?
Re: (Score:2)
Awesome (Score:5, Insightful)
Re: (Score:3, Interesting)
Futrelle died aboard the Titanic.
Re: (Score:2)
I'll need to do some research into these detective stories.
Re: (Score:3, Funny)
Re: (Score:2)
So is 42 old, or has it become 'kitsch'?
ob.Simpson:
Gunter: You've gone from hip to boring. Why don't you call us when you get to kitsch?
Cecil: Come on, Gunter, Kilto. If we hurry, we can still catch the heroin craze.
Re: (Score:2, Interesting)
I sometimes wonder why we humans do things, and after all these years spent here I still do not know. Let us take this little idea of building 'thinking' machines. So members of the human race are trying to build thinking machines - how splendid - while the majority of us cannot even spllel properly, not to mention read with understanding; some of us are arrogant enough to attempt to build a 'thinking' machine. Besides the technical challenges in the process -
Re: (Score:2)
Re: (Score:2)
Because data can be used to predict the future or get the future to do what you want.
The largest financial firms in the world hire quants [wikipedia.org] nearing super-genius levels, some of them autistic people who are really good at math, to attempt to predict market trends.
Imagine, if you would, an intelligent machine that could simply process the information given to it and provide something useful as a prediction, someth
Re: (Score:2)
Ok, humanity is screwed (Score:3, Informative)
Re: (Score:1)
Like the bit in Star Wars when Luke Skywalker almost asked Leia out and, well, they would have had kids together and everything OMG! And lucky that C3P0 was such a patsy and ruined it for them. It was almost incestuous!
Not that I've ever come across that in real life, but definitely brother-sister relationships are a no-no.
(For example)
Re:Ok, humanity is screwed (Score:4, Funny)
Not that I've ever come across that in real life, but definitely brother-sister relationships are a no-no.
I know. I'm an only child -- as far as I know. So whenever I get shot down by a woman, I just remember the lesson of Star Wars, and figure that she was probably just my long lost sister so I'm better off anyway.
Re:Ok, humanity is screwed (Score:4, Insightful)
Re:Ok, humanity is screwed (Score:4, Informative)
Thinking doesn't mean cognition either.
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Note that a computer that could do that is probably simpler than one that can understand "do you want fries with that".
Re: (Score:2)
Re: (Score:2)
Fair point. I think in that case, ask the computer *how*, but don't give it any guns or giant mechanised tanks ;) And it would probably be better to examine crime rates, taxes, police, health and education spending, etc. and let the computer examine variations of those, rather than use capital punishment (though that could be a valid method too, if it's shown to work well as a deterrent... :s I don't think it does work well as a deterrent though, does it?)
Asking for input, instead of allowing it to act, and limiting the options and variables it can use can help us avoid an undesirable solution.
But the computers will keep getting smarter, and no matter how many safeguards we devise we're going to have to deal with the fact that it will be making decisions and plans we have no hope of understanding.
Re: (Score:2)
Re: (Score:2)
Now the definition of "smarter" is tricky when considering computers; by some metrics a wristwatch is smarter than any human alive, and that's part of the issue. But there are already instances where a computer is solving equations of the form Ax=y where no human understands the intricacies of the formula or the full effects of all the different x and y values; they just know it maps well to their real-world problem.
A
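To make the parent's point concrete, here is a minimal sketch in Python/NumPy. The random matrix is a stand-in (my assumption, purely for illustration) for a model nobody fully understands:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))   # stand-in for an opaque real-world model
    y = rng.normal(size=5)        # observed outcomes we want to reproduce

    x = np.linalg.solve(A, y)     # the machine finds the answer...
    print(x)
    print(np.allclose(A @ x, y))  # ...and we can only verify it, not interpret it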
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What is this "thinking"? (Score:3, Interesting)
Computer thought is probably no more advanced than that of a bug. Mars rovers etc. can only execute canned move sequences and don't operate autonomously. Some robots are more autonomous, but are still pretty limited compared to any biological equivalent.
As much as people h
Re:What is this "thinking"? (Score:4, Insightful)
Not wanting to labour the point too much, but...
It's no different to a script that moves a clickable picture away from the mouse cursor once it approaches a critical distance such that you can never click on the picture (unless you're faster than the script).
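For what it's worth, that trick is only a few lines. A rough sketch using Python's tkinter; the 80-pixel trigger distance and canvas sizes are arbitrary choices of mine:

    import random
    import tkinter as tk

    TRIGGER = 80  # pixels: how close the cursor may get before the picture "flees"

    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=300)
    canvas.pack()
    box = canvas.create_rectangle(180, 130, 220, 170, fill="red")

    def flee(event):
        x1, y1, x2, y2 = canvas.coords(box)
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if (event.x - cx) ** 2 + (event.y - cy) ** 2 < TRIGGER ** 2:
            # Jump to a random new spot; you can never quite click it.
            nx, ny = random.randint(0, 360), random.randint(0, 260)
            canvas.coords(box, nx, ny, nx + 40, ny + 40)

    canvas.bind("<Motion>", flee)
    root.mainloop()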
A fly's compound eye is a highly sensitive movement sensor, and the fly will react to anything big that moves; but if you don't move, the fly doesn't see you (its brain couldn't cope with that much information).
Flies can learn a limited amount, and I would argue a computer could well behave as a fly and perform a fly's functions. But is the fly thinking? I don't think the fly is consciously deciding anything; repeated stimuli that 'scare' it simply result in temporary sensitization to any other movement.
Bacteria show similar memory behaviour but I wouldn't go so far as to call it 'thought'.
Re: (Score:2)
Re: (Score:2)
Re:What is this "thinking"? (Score:4, Interesting)
That's the frightening part.
Next time you find a bidirectional trail of ants in your home, try this little experiment:
1) Monitor a 6-inch square. For the next 5 minutes, kill every ant entering that square. Use the same piece of paper towel and smear their guts a bit when you squish 'em.
2) After 5 minutes, stop killing ants. Just watch individual ants for the next 30 minutes.
3) Go to sleep. Look around the house 24-72 hours later. You'll find a completely different ant trail.
"A human is smart. A mob of humans is dumb."
- Men in Black
Ants don't work like that.
"An ant is stupid. A colony of ants is smart."
Ants taught me what the word alien meant.
Colonies of ants are still dumb! (Score:2)
Ant trails are created by scouts doing a random walk. When they find something tasty, they follow their own trail back to the nest, and all the other ants follow that same trail and strengthen it further.
Occasionally an ant gets lost, starts a random walk, and often runs into the line again. If this new path is faster, it will tend to displace the original line. Once the food is gone, the ants will disperse where the food was
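That reinforcement loop is easy to caricature in code. A rough Python sketch, with all parameters invented; "short" and "long" are two paths to the same food source:

    import random

    pheromone = {"short": 1.0, "long": 1.0}
    length = {"short": 1, "long": 2}   # the long path takes twice as long to walk

    for _ in range(1000):
        # Each ant picks a path with probability proportional to its pheromone.
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        # Shorter round trips deposit pheromone more often per unit time,
        # modelled here as a deposit inversely proportional to path length.
        pheromone[path] += 1.0 / length[path]
        # Evaporation lets stale trails fade once the food is gone.
        for p in pheromone:
            pheromone[p] *= 0.99

    print(pheromone)   # the short path ends up holding almost all the pheromone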
Friendly AI (Score:2)
Re: (Score:2)
Re: (Score:2)
It is easy to envision the possible uses for his recent "mundane technologies" [itpro.co.uk]. Itinerary analysis and keyword-triggered speech recognition and rec
Re: (Score:2)
Er...
Wouldn't you have to be doing the speech reco in the first place to identify the keyword?
That's a lot of processing unless the surveillance is fairly tightly targeted.
I don't see it as a threat - Hawkin seems to be a touch overhyped from what I read.
Singularity on the way (Score:2, Insightful)
Singularity here we come!
Re: (Score:1)
Wait, wait, that was a game. In that case, all hail our thinking machine overlords. Please don't try t
Re: (Score:3, Insightful)
Like 'heaven' or any other distant-time concept, it's for people who can't imagine what's next.
When the machines can imagine, then we will need to be careful, because at that point we become a competitor.
Of course, 'symbiont' might be a better term, until we automate all the steps needed to generate power for the machines.
I, for one... (Score:1, Funny)
Re: (Score:2)
unambiguous? (Score:2)
In my experience, the most ambiguous words are in the documentation, followed closely by the comments.
"Unambiguous words and action commands"? Is this what "experts in words" call a computer language syntax? Now we're going from "you don't need to be no stinkin'
they're just building a big central ontology (Score:1)
You forgot to mention something... (Score:2, Informative)
Every few years the same thing. (Score:1, Insightful)
So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: Funding. This is something people without a clue about information technology
1492 called, they want their arguments back... (Score:3, Insightful)
So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: Funding. This is something people without a clue about geography, but with money to give away, can relate to.
Re: (Score:2)
Re:Every few years the same thing. (Score:5, Informative)
Unless you want to say that there is some mystical element to brains, there is nothing precluding the eventual design and building of 'sentient' computers, surely? Beyond our own fear of what would happen if we did such a thing, as evidenced by plenty of 20th-century fiction. Building sentient computers could even be regarded as a type of evolution, as they would then be able to improve upon themselves at an exponential rate.
This calls for a word war (Score:3, Insightful)
I called my cable company the other day and got an automated response that asked questions and responded, not only with words and instructions but also with a modem reset. The computer system could ask questions, determine responses, and perform actions. Yes, it was limited, but decades ago it would have been considered awe-inspiring and doubtless would have been dubbed both a successful artificial intelligence and a thinking machine.
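Under the hood, such a system is little more than a finite-state machine. A toy Python sketch; the menu states, prompts, and modem-reset action are all invented for illustration:

    def reset_modem():
        print("(sending reset signal to the modem...)")

    MENU = {
        "start":    ("Say 'internet' or 'billing'.",
                     {"internet": "internet", "billing": "agent"}),
        "internet": ("Shall I reset your modem? (yes/no)",
                     {"yes": "reset", "no": "agent"}),
    }

    state = "start"
    while state in MENU:
        prompt, transitions = MENU[state]
        answer = input(prompt + " ").strip().lower()
        state = transitions.get(answer, "agent")   # anything unrecognized -> agent

    if state == "reset":
        reset_modem()
    else:
        print("Transferring you to a human agent.")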
What then is the proper definition of a thinking machine? We already have c
Re: (Score:2)
Re: (Score:2)
That is completely unknown. First, it is possible that this computer cannot be built; remember that there are indications the brain uses quantum effects. Second, it may well be impossible to program it, even if it can be built. And third (without going religious here), it is possible that the brain alone is not enough. In short: We do
Re: (Score:2)
You could always copy the state of a human brain in its developed state and simulate from that (if you had advanced enough scanners), though that raises even more ethical issues IMO.
I wasn't suggesting that a sentient computer *has* to be built by simulating a brain either. I don't see why e
Re: (Score:2)
It is completely unknown whether that would be a possibility, even if only a theoretical one. At this time we do not know. There are some indications that the brain uses quantum effects, which could prohibit such a scan.
Re: (Score:2)
Re: (Score:2)
Currently researchers think so, i.e. the current models can be simulated. It is not known whether they are exact or only approximations with additional effects. But one big problem is already known: the effort of simulating them is a lot higher than the effort in the quantum system itself (I think it may be squared or worse), i.e. any simulation is vastly inferior to the real thing. Also, there is indeterminism that cannot be simulated to a satisfactory degree.
An
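For a sense of scale, a toy Python illustration; this assumes plain state-vector simulation, where an n-qubit system needs 2^n complex amplitudes, and the real overhead depends on the method:

    # Memory needed just to hold the state of an n-qubit quantum system,
    # at 16 bytes per complex amplitude (complex128).
    for n in (10, 30, 50):
        amplitudes = 2 ** n
        print(f"{n} qubits: {amplitudes:.2e} amplitudes, "
              f"{amplitudes * 16:.2e} bytes")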
Re: (Score:2)
Re: (Score:2)
You seem to have contracted the brain==computer meme. Here, let me give you a thought to ponder: The brain is analog.
Neurons don't just have an on/off, it's-working/it-ain't-working status; they also vary in intensity and have degrees of activity.
On top of that, neurons do not act in a vacuum; they interact through, and are influenced by, a myriad of other factors, which include but are not limited to neurotransmitters, particular nutrients, sugar levels, etc.
One can imagine accounting for each and every
Re: (Score:2)
Re:Every few years the same thing. (Score:5, Informative)
Not about thinking machines (Score:4, Informative)
It's merely intended as a convenient resource for programmers.
Re: (Score:2)
shouldn't someone tell them about Google?
Full Human Equivalence (Score:3, Insightful)
OK. I know, this prediction has been made before, but now it's for real, because the hardware capacity is well within the reach of Moore's Law. Building a cluster of processors with the same data-handling capacity as a human brain is today well within the range of a mid-size research grant.
Unfortunately, they have cried "wolf" too many times now, so most people will doubt this, but it's a reasonable prediction if one calculates how much total raw data-handling capacity the neurons in a human brain have. Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.
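A back-of-envelope version of that calculation in Python, using commonly cited round figures; the per-node throughput at the end is an assumed number, purely for illustration:

    neurons = 1e11               # ~100 billion neurons
    synapses_per_neuron = 1e3    # ~1000 synapses each
    firing_rate_hz = 1e2         # ~100 pulses/second

    events = neurons * synapses_per_neuron * firing_rate_hz
    print(f"{events:.0e} synaptic events/second")      # ~1e16

    node_ops = 1e13              # assumed simple operations/second per node
    print(f"~{events / node_ops:.0f} cluster nodes")   # ~1000 nodes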
Re: (Score:2)
Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.
But we will need much, much better hardware if we intend to program it in 20 years. You only need to look at Vista to see that programmers today don't care to, or can't, program with limited resources. And even when we get the hardware, no programming method has been found to replicate the human mind, meaning we will need even more hardware to make it work, and still more hardware for the futuristic programming methods that will make Vista seem like it is well-coded. You only need to look at speech recog
Re: (Score:3, Insightful)
Re: (Score:2, Interesting)
Better hardware is a necessary yet insufficient requirement for strong AI. There is still a lot to learn about how the human brain works and how to write software to emulate it. However, when you look at
Re: (Score:2)
Slow and error-prone seems, to me, to be a large part of the human condition, especially when you start sharing information between and among various people. The human mind has fairly simple mechanisms (though they're difficult to study empirically), which mainly consist of networks of neurons. So you end up with a lot of data that is interconnected in very precise networks, from which meaning is created. Often, these connections are not consistent in every individual (or perhaps never consistent for any
Re: (Score:2)
Re: (Score:2)
1. Do you think there will be a major advance in general intelligence in the next 20 to 30 years?
2. Is your research likely to be a contributing factor to this advance in general intelligence?
The majority of respondents answered: Yes. No.
So basically, everyone thinks something big is going to happen soon but few to no researchers are actually working on it.
Re: (Score:2)
but I guarantee that time is more than 20 years. I've already lived through multiple 20-year "it must be possible by then" projections.
It's like the ubiquitous six-month projection for getting a large project to a usable state. That goes all the way back, too. No one has a clue, but it just seems like six months ought to be long enough to do it.
to give you an idea of how empty the
Accelerating timescales (Score:2)
The Stone Age lasted a few hundred thousand years. When we learned how to use metals, the Bronze Age lasted a few thousand years. Then came the Iron Age. We only learned how to make steel on an industrial scale in the nineteenth century, and the Steel Age lasted only a hundred years, then
Re: (Score:2)
No, work backwards from 20 years from now to today: what kind of steps would be needed over the next 20 years to get to "reasoning like a human", and when is all this acceleration going to take place? Because there sure isn't anything taking place now.
In other words, there is no basis for a 20-year proj
You got that wrong (Score:2)
>> data-handling capacity of a human brain today
>> is well within the range of a mid-size research grant
Nope. The brain is a hundred billion neurons, connected by 100 trillion synapses. Sure, the "clock frequency" is very low, but even taking this into account, those figures far exceed what could be built with today's technology. Not to mention that scientists today have absolutely no clue how major parts of the brain work, so even if hardware
Re: (Score:2)
We have a pretty good estimate, on an order of magnitude basis. About 100 billion neurons, each with an average of 1000 synapses, firing 100 pulses/second.
Sure, but that's what averages are for. There are also different types of transistors: junction transistors, NPN and PNP, MOS types, N-channel and P-channel, etc. There are AND gates, NAN
Re: (Score:2)
And what's important to know is that we also know how quickly we can run mathematical models of these things with reasonable accuracy. So, if one presumes Moore's Law holds up, it becomes pretty simple to make a reasonable guess when we'll have sufficient compute power to directly model as many neurons as there are in the human brain in essentially real
Bah! (Score:2)
Re: (Score:2)
Re: (Score:2)
it might be more complicated than that (Score:2)
Let's take the example of a simple idea: a pun. This is a word that in a given context can have more than one possible interpretation. One could classify either one or both of the interpretations as the ideas expressed, but that would be incorrect: often it is the presence of both meanings that gives the pun a new meaning joining the two contexts.
It is the interconnections between contexts that generally give new insight into subjects. Repositories of existing concepts can only be used to explore
Re: (Score:2)
However, the problem remains the same as that with most failed-and-doomed-to-failure AI research: the researchers have an excellent grasp of technology, but are usually operating on a fundamentally flawed model of how consciousness works.
Essentially, we think about abstract things by repurposing the connections used for understanding the physical world (see Philosophy in the Flesh [amazon.com] an
a much more productive idea (Score:2)
cyc is already halfway there (Score:5, Interesting)
The guys at Cyc [cyc.com] (look for the Wikipedia entry too) are already halfway there. Last time I checked, there were already something like 5 million facts and rules in the database, and the point where new facts could be gathered automatically from the internet was very close.
Many years ago I remember the founder (Doug Lenat) saying that practical-purpose intelligence could be reached at ten million facts...
We'll see within the next decade, I guess.
Re: (Score:2)
Isn't the microLenat [catb.org] the fundamental unit of bogosity in quantum bogodynamics?
Re: (Score:2)
people write on the internet
cyc learns from the internet
cyc tells lies
I always had a hard time... (Score:2)
Its name means "boob" in my first language
Logical fallacy? (Score:2)
And maybe that's the point. For centuries, ontology has existed primarily to serve itself and secondarily to trade favors with other branches of philosophy. The proposed project has the primary result of providing gainful employment outside the halls of academic philosophy
Bad Idea? (Score:2)
Building a standard "ontological repository" would seem to require establishing a structure within which its objects and relationships can be contained.
While this might seem to be of benefit to extending the capabilities of some tasks like machine translation into broader fields, I think this might cause problems at the cutting edge, that is: machine reasoning.
Reasoning about complex problems at the frontiers of knowledge (to paraphrase TFA) requires identifying new links and relationships between objects. Na
Intelligence vs. Appropriate Formal Logic (Score:4, Insightful)
Lots of people are making posts about this vs. Skynet, Terminator, etc. But there are some problems with that (overly simplistic and totally misguided) comparison.
There are numerous formal logic solvers that are able to arrive either at the correct answer (in the case of deterministic systems, for instance) or at the answer with the highest degree of likelihood. The difference between the two should be made clear. Say I give the computer:
A) All Italians are human. B) All humans are lightbulbs. What is the logical conclusion? The answer is that all Italians are lightbulbs. Of course, the premises of such an argument are false, but a computer can work out the formally correct conclusion.
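A minimal Python sketch of that deterministic case, with the two premises above hard-coded: forward-chain over "all X are Y" pairs until nothing new can be derived.

    rules = {("italian", "human"), ("human", "lightbulb")}

    def closure(pairs):
        derived = set(pairs)
        changed = True
        while changed:
            changed = False
            for a, b in list(derived):
                for c, d in list(derived):
                    # all A are B, all B are D  =>  all A are D
                    if b == c and (a, d) not in derived:
                        derived.add((a, d))
                        changed = True
        return derived

    # True: the conclusion is formally valid even though the premises are false.
    print(("italian", "lightbulb") in closure(rules))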
The problem these people seem to be solving is that there needs to be a unified way to input such propositions, and a properly robust and advanced solver that is generic and agreed upon. Basically this is EXACTLY what is needed in order to move beyond a research stage, where each lab uses its own pet language.
I mentioned determinism because the example I gave contained the solution in its premises. What if I said, "My chest hurts. What is the most likely cause of my pain?" An expert system (http://en.wikipedia.org/wiki/Expert_system) can take a probability function and return that the most likely cause is... (whatever, I'm not a doctor!). But what if I had multiple systems? The logic becomes fuzzier! So there needs to be an efficient way to implement it AND draw worthwhile conclusions. Such conclusions can be wrong, but they are the best guess (the difference between omniscient and rational, or bounded-rational).
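And a toy Python sketch of the probabilistic case, naive-Bayes style; every number and condition is invented for illustration, so don't diagnose chest pain with it:

    priors = {"indigestion": 0.6, "muscle strain": 0.3, "heart attack": 0.1}
    p_pain = {"indigestion": 0.3, "muscle strain": 0.4, "heart attack": 0.9}

    # P(cause | chest pain) is proportional to P(chest pain | cause) * P(cause).
    scores = {c: p_pain[c] * priors[c] for c in priors}
    total = sum(scores.values())
    posterior = {c: s / total for c, s in scores.items()}

    best = max(posterior, key=posterior.get)
    print(best, round(posterior[best], 2))   # the best guess, not a certainty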
None of these things are relating to some kind of 'skynet' intelligence.
If you DID want to get Skynet-like intelligence, having a useful logic system (like what is planned here) would be the first step, and would allow you to do things like planning, for instance. If I told a robot, "Careful about crossing the street," it would be too costly to try to train it to replicate human thought exactly. But it records and understands language well (at this point), so what can we extract from that language?
Essentially, this is from the school of thought that we need to play to computers' strengths when thinking about designing human-like intelligence, rather than replicating human thought processes from the ground up (which will happen eventually, either through artificial neurons or through simulation of increasingly large batches of neurons). On the other hand, if such simulations lead to the conclusion that human-level consciousness requires more than the model we have, it will lead to a revolution in neuroscience, because we will require a more complex model.
I really can't wait to get more into this, and really hope it isn't just bluster.
Also:
The 'Thinking Machines' title is inflammatory and incorrect, if we use the traditional human as the gauge for the term 'thought'. What is taking place is a highly formalized and rigorous machine interpretation of human thought, and it will not breed human-level intelligence.
Tagging your links doesn't make you an ontologist (Score:4, Insightful)
Putting tags on your del.icio.us links doesn't make you an ontologist any more than using object-oriented methodologies makes you a Platonist. I think the correct label for those who misappropriate terminology from other domains (for no other apparent reason than to make themselves sound clever) is "wanker". Hell, call yourselves "wankologists" for all I care; just don't steal from other domains because "tagger" sounds so lame.
Baroque Cycle anyone? (Score:2)
Thumbs up for the Butlerian Jihad tag! (Score:2)
Good grief (Score:2)
So, we don't want to fund proper science or proper education, but we want to build machines that can think for us, so we can concentrate on the important things, like believing that the war in Iraq is about bringing freedom and democracy to poor people, and that the world was created in six days. (BTW, how can one even talk about days before the creation of Heaven and Earth, and crucially the sun?)
Not that this k
Not the company (Score:2)
The "ontology" thing is overrated (Score:2)
All this "agreement" is about is to have a repository for everybody's "ontology" data. It's like SourceForge, only less useful.
Most of what goes in is tuples of the form (relation item1 item2); stuff like this:
(above ceiling floor)
...
(above roof ceiling)
(above floor foundation)
(in door wall)
(in window wall)
The idea is supposed to be that if you put in enough such "facts", intelligence will somehow emerge. The Cyc crowd has been doing that for twenty years, and it hasn't led to much.
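To be fair, such tuples do support simple mechanical inference. A Python sketch over the facts listed above that derives "the roof is above the foundation," even though no tuple says so directly:

    facts = [("above", "ceiling", "floor"), ("above", "roof", "ceiling"),
             ("above", "floor", "foundation"),
             ("in", "door", "wall"), ("in", "window", "wall")]

    def is_above(x, y):
        # Depth-first search along stored "above" facts, starting from x.
        stack = [b for (r, a, b) in facts if r == "above" and a == x]
        seen = set()
        while stack:
            cur = stack.pop()
            if cur == y:
                return True
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(b for (r, a, b) in facts if r == "above" and a == cur)
        return False

    print(is_above("roof", "foundation"))   # True: derived, never stated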
The class
Re: (Score:2)
University and ACM members ought to be able to download it from here [acm.org].
That's a fine answer you've got there (Score:1)
Re: (Score:1)