Opinion: Artificial Intelligence Hits the Barrier of Meaning (nytimes.com) 217
Machine learning algorithms don't yet understand things the way humans do -- with sometimes disastrous consequences. Melanie Mitchell, a professor of Computer Science at Portland State University, writes: As someone who has worked in A.I. for decades, I've witnessed the failure of similar predictions of imminent human-level A.I., and I'm certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today's A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, "I wonder whether or when A.I. will ever crash the barrier of meaning." To me, this is still the most important question.
The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today's programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways. I'll give a few examples. "The bareheaded man needed a hat" is transcribed by my phone's speech-recognition program as "The bear headed man needed a hat." Google Translate renders "I put the pig in the pen" into French as "Je mets le cochon dans le stylo" (mistranslating "pen" in the sense of a writing instrument). Programs that "read" documents and answer questions about them can easily be fooled into giving wrong answers when short, irrelevant snippets of text are appended to the document.
Similarly, programs that recognize faces and objects, lauded as a major triumph of deep learning, can fail dramatically when their input is modified even in modest ways by certain types of lighting, image filtering and other alterations that do not affect humans' recognition abilities in the slightest. One recent study showed that adding small amounts of "noise" to a face image can seriously harm the performance of state-of-the-art face-recognition programs. Another study, humorously called "The Elephant in the Room," showed that inserting a small image of an out-of-place object, such as an elephant, in the corner of a living-room image strangely caused deep-learning vision programs to suddenly misclassify other objects in the image.
So it's basically an old-school overtraining (Score:5, Interesting)
I wonder if these AI vision systems that input millions of images are actually doing deep learning, or are just canvassing pretty much every image possibility, such that any possible live image is just a tiny automated delta calculation away from an answer.
This would explain why tweaking the input in the described ways would throw the AI into a tizzy -- the tweaked input isn't within a tiny delta of any of the millions of categorized images.
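Here's a minimal sketch of that memorization hypothesis, using made-up numpy data and a plain nearest-neighbour lookup in place of any real vision system; the numbers and the "elephant patch" are purely illustrative:

```python
# Toy "memorize everything, answer by tiny delta" model: hypothetical data,
# a 1-nearest-neighbour lookup standing in for a trained vision system.
import numpy as np

rng = np.random.default_rng(0)
memorized = rng.normal(size=(1000, 64))        # 1,000 memorized "images" as 64-dim vectors
labels = rng.integers(0, 10, size=1000)        # their (arbitrary) category labels

def classify(query):
    dists = np.linalg.norm(memorized - query, axis=1)
    nearest = np.argmin(dists)
    return labels[nearest], dists[nearest]     # answer plus how far the nearest memory is

# An input close to something already seen really is a tiny delta calculation away.
near = memorized[42] + rng.normal(scale=0.05, size=64)
print(classify(near))                          # small distance: effectively a lookup

# Paste an out-of-place "elephant" into the corner and the query is no longer
# within a tiny delta of anything memorized, yet the system still returns a label,
# with no notion that it is now extrapolating.
weird = memorized[42].copy()
weird[:8] += 10.0
print(classify(weird))                         # large distance, answer given anyway
```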
Re: (Score:3)
It's a lack of thought. Their algorithms recognize; we do that too, as a first pass ... then we reason about what we are seeing, a messy, unbounded process completely unlike anything the perceptron networks AI researchers keep polishing up every few decades can do.
Re:So it's basically an old-school overtraining (Score:4, Informative)
Re: (Score:2)
The problem with any machine vision recognition system is that there is really no way of looking at a still image (or even a still scene with depth) and /knowing/ which Objects are discrete and independent
This is not a problem. If you have enough data, the machine will find the patterns.
And if it has a window, how is that different from a picture on the wall?
A window usually lets in light, whereas a picture doesn't. Also, the perspective is usually different, as well as the type of objects you can see in a picture vs through a window.
It's the elephant in the roadway. (Score:2)
The problem with any machine vision recognition system is that there is really no way of looking at a still image (or even a still scene with depth) and /knowing/ which Objects are discrete and independent
This is not a problem. If you have enough data, the machine will find the patterns.
You are solving the wrong problem with this data abundance. The problem is knowing whether the machine learned the right pattern. The surprise is that in many cases it's actually harder to know whether the right pattern has been learned than to learn the right pattern in the first place. Mind-bending, but there are quantitative proofs about this, the so-called no-free-lunch theorems for generalization. The machine will always find a pattern, and with enough data its use of that pattern will defeat your ability to cross-validate whether it learned the right one.
One day you will see the queen of hearts and enter a suggestible hypnotic state, put on an elephant costume, and walk across a road and be run down by the Uber that had a pathological classification caused by seeing elephants in the middle of the road.
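A toy scikit-learn sketch of that cross-validation trap, with entirely made-up data: a nuisance feature (photo quality, say) happens to track the label in everything you collected, so held-out splits of the same collection cannot reveal that the wrong pattern was learned:

```python
# Hypothetical data: the "right answer, wrong pattern" failure that
# cross-validation on the collected data cannot detect.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
label = rng.integers(0, 2, size=n)

signal = label + rng.normal(scale=2.0, size=n)       # the weak "real" feature
nuisance = label + rng.normal(scale=0.1, size=n)     # photo quality, lighting, etc.
X = np.column_stack([signal, nuisance])

model = LogisticRegression()
print(cross_val_score(model, X, label, cv=5).mean())  # ~0.99: looks solved

# In deployment the nuisance correlation is gone (every photo equally grainy),
# and accuracy collapses toward chance because the model leaned on the wrong feature.
X_deploy = np.column_stack([label + rng.normal(scale=2.0, size=n),
                            rng.normal(scale=0.1, size=n)])
model.fit(X, label)
print(model.score(X_deploy, label))
```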
Re:So it's basically an old-school overtraining (Score:5, Interesting)
There was a military experiment years ago trying to teach a computer to distinguish between friendly and enemy tanks. They showed it thousands of photos of each, and in the test bed, it was very, very accurate. When used under battlefield conditions, however, it went to hell in a handbasket.
Turned out they hadn't taught it to distinguish between US and Russian tanks, they had taught it to distinguish between high quality photos (used for marketing meetings with Congresscritters for funding), and crappy, grainy Polaroids (which was all they had of the Russian tanks).
They'll learn what you teach them, but what you teach them may not have anything to do with what you want them to learn.
Re:So it's basically an old-school overtraining (Score:5, Interesting)
There was a military experiment years ago trying to teach a computer to distinguish between friendly and enemy tanks. They showed it thousands of photos of each, and in the test bed, it was very, very accurate. When used under battlefield conditions, however, it went to hell in a handbasket.
Turned out they hadn't taught it to distinguish between US and Russian tanks, they had taught it to distinguish between high quality photos (used for marketing meetings with Congresscritters for funding), and crappy, grainy Polaroids (which was all they had of the Russian tanks).
They'll learn what you teach them, but what you teach them may not have anything to do with what you want them to learn.
That's a great story and perfectly illustrates the pitfalls of machine learning. I (a mechanical engineer) took a data science class, and the main takeaway I got was that machine learning basically fits a curve of predicted behavior based on input variables. The "training" dataset is what you feed it to figure out the curve. Then you test it on a different dataset to make sure it isn't bonkers. Removing or adding one input variable can dramatically change the influence strength or even the sign (+/-) of the other variables in the prediction formula that the process generates. If you have hundreds of input variables it becomes completely impossible for a human to understand all the relationships between the variables in the prediction function. So even if the machine learning software can generate a good predictive function, a human may not be able to understand how that predictive function works if few or none of the input variables are dominant.
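For what it's worth, the sign-flip effect described above is easy to reproduce with ordinary least squares on made-up data (hypothetical variables x1 and x2; a sketch of the phenomenon, not anything from a particular class or dataset):

```python
# Two strongly correlated inputs: fit with one of them and its coefficient
# comes out with the opposite sign it has when both are included.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)                      # x2 is mostly x1 plus noise
y = 1.0 * x1 - 2.0 * x2 + rng.normal(scale=0.5, size=n)

def ols(X, y):
    X = np.column_stack([np.ones(len(y)), X])           # prepend an intercept column
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(x1.reshape(-1, 1), y))            # x1 alone: its coefficient is about -1
print(ols(np.column_stack([x1, x2]), y))    # x1 and x2 together: about +1 and -2, sign flipped
```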
Re: So it's basically an old-school overtraining (Score:2)
That's a cute story, but there's no way that any such system was actually used "under battlefield conditions". It sounds like you're retelling a corruption of a much earlier story, all versions of which are almost certainly apocryphal. The original can be seen here:
https://www.jefftk.com/p/detec... [jefftk.com]
Re: (Score:2)
The late great Rick Riolo had a story about this, involving genetic algorithms.
The Air Force gave his team a contract to develop the most fuel-efficient drone flight algorithms they could. So they got access to the Air Force's best simulation environment, and set up a genetic algorithm optimization that maximized fuel conservation. A few months later they came back, and discovered that every surviving algorithm had more fuel than it started with. The optimization had found a flaw in the simulation environment.
Re: So it's basically an old-school overtraining (Score:2)
Human brain evolved with safety checks against optical illusions.
That would explain alcohol... and paper bags.
It's about compression. Everything in fact is. (Score:4, Interesting)
Entropy. Compression. Same thing. The whole world is thermodynamics and your state of knowledge about the world is also limited by thermodynamics. There will never be a computer that can predict the future of the universe before the universe arrives simply because you can't store a representation of the universe inside the universe itself.
Lossy compression is therefore how we get around that and are able to compute/think/predict an approximate future state of the universe.
The goal is to align the losses of the compression with the parts of the input space that do not exist. For example, if a living room is too small to actually contain an elephant, then no realizable living-room image contains one, and any mapping of images with and without elephants to the same compressed representation is a good compression. To say it differently, the compressed state is a many-to-one mapping back to the original state. If for every compressed state there is only one realizable original state, then it's invertible. The images in the original space that could never happen are also mapped to the same compressed state, but because they could never exist we lose nothing by ignoring them.
Thus compression and prediction are the same thing.
AI fails when it either over-compresses to a space too small to hold every realizable state, or compresses poorly so that it unnecessarily conflates two possible real states. For example, the Uber car that mistook the woman in the road for blowing trash.
On the other hand, it's often very valuable to over-compress as long as you are tolerant of mistakes in the prediction. That is, the Uber car in question was able to do a great job of driving most of the time because it made fast choices that were nearly always good enough. The cheetah can't just chase the antelope; it needs to try to guess and cut corners a bit. As long as most of its guesses are good, it wins. In the case of the cheetah, a mistake just means a missed meal, which is tolerable. But in the case of the Uber car or an ICBM nuclear-missile failsafe system, our tolerance for error is a bit lower.
Thus a little over-compression is actually good for generalizing rather than parroting.
A lot of overcompression leads to bad predictions.
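As a toy illustration of that over-compression point (made-up feature vectors, not any real perception stack): a lossy code that still separates two realizable road scenes is fine, but compress a little too far and they land on the same compressed state, so every downstream prediction has to treat them identically.

```python
# Crude lossy compressor: quantize each feature to a fixed number of bins.
import numpy as np

def compress(x, levels):
    return tuple(np.floor(np.clip(x, 0.0, 0.999) * levels).astype(int))

plastic_bag = np.array([0.30, 0.72, 0.10])   # hypothetical features of two genuinely
pedestrian  = np.array([0.41, 0.66, 0.22])   # different road scenes

print(compress(plastic_bag, 16) == compress(pedestrian, 16))  # False: still distinguishable
print(compress(plastic_bag, 2)  == compress(pedestrian, 2))   # True: conflated by over-compression
```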
Great! (Score:4)
Re: (Score:2)
The question (which the writer didn't ask or answer) is how the machine learning systems can be improved to be more resistant against such simple modifications.
Re: (Score:2)
I think the statement is more that ML systems use the wrong approach to identifying reality and get a very fragile performance as a result.
Re: (Score:3)
I think the statement is more that ML systems use the wrong approach to identifying reality and get a very fragile performance as a result.
Yes, that's the writer's hunch, but nowhere does she show why we need a different approach rather than an improved version of the current one.
Re: (Score:2)
Well, we do not really have a different approach and we do not really know how to improve the existing one either.
Re:Great! (Score:4, Insightful)
Re: (Score:2)
The great question is whether extracting 'meaning' is in some sense simply a deep learning system that is better trained and able to use additional layers to provide context or whether 'meaning' is some categorically new thing that current approaches to machine learning are fundamentally missing.
Well I think it's clearly missing some abstract underlying model. Like if you showed it cats and non-cat statues, could it generate a cat statue? If you show it cats and dead animals, could it plausibly create a dead cat? If you show it cats and paintings, can it make a painting of a cat? Can it even create a black and white cat from color swatches and cats of other colors? Will it think a human in a cat costume is a cat if it's only seen cats and humans in normal clothes? If you've only shown it pictures o
Re: (Score:2)
Re:Great! (Score:4, Interesting)
The question (which the writer didn't ask or answer) is how the machine learning systems can be improved to be more resistant against such simple modifications.
https://www.quantamagazine.org... [quantamagazine.org]
When human beings see something unexpected, we do a double take. It’s a common phrase with real cognitive implications — and it explains why neural networks fail when scenes get weird.
Most neural networks lack this ability to go backward. It’s a hard trait to engineer. One advantage of feed-forward networks is that they’re relatively straightforward to train — process an image through these six layers and get an answer. But if neural networks are to have license to do a double take, they’ll need a sophisticated understanding of when to draw on this new capacity (when to look twice) and when to plow ahead in a feed-forward way. Human brains switch between these different processes seamlessly; neural networks will need a new theoretical framework before they can do the same.
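To make the "one pass, no going back" point concrete, here is a bare-bones feed-forward pass in numpy (random, untrained weights, purely illustrative): once the input has flowed through the layers, the network has committed to an answer, and adding a double take would require an outer loop that decides when to re-examine the input.

```python
# A purely feed-forward pass: each layer sees only the previous layer's output,
# and nothing ever flows backward to re-examine the input.
import numpy as np

rng = np.random.default_rng(0)
hidden = [rng.normal(scale=0.1, size=(64, 64)) for _ in range(5)]
readout = rng.normal(scale=0.1, size=(64, 10))

def forward(x):
    for W in hidden:
        x = np.maximum(x @ W, 0.0)       # one sweep through the layers
    logits = x @ readout
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # class "probabilities"

probs = forward(rng.normal(size=64))
print(probs.max())                       # even if this is barely above chance, the
                                         # network has no built-in way to look twice
```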
Re:Great! (Score:5, Insightful)
When human beings see something unexpected, we do a double take
Of course, you first need to see something unexpected. In the famous video of people in white and black shirts passing a ball, very few people noticed the gorilla. They never did a double take. https://www.youtube.com/watch?... [youtube.com] This happens all the time in real life.
Re: (Score:3)
Re: (Score:2)
Thanks a lot. You totally screwed that video up for me. You're the type that tells all your friends the endings of books before they read them, aren't you?
Re: (Score:2)
Compare:
"Some programs can fail dramatically when"
with
"Some black people can murder people when"
Now tell me again what a great idea posting your comment was!
How different is meaning from (Score:2)
I think the issue is that the AIs have not experienced / perceived / taken in data about enough different kinds of situations, and specifically, have not been aimed at the problem of "what if I am an agent with goals in all these different situation types."
Right now in AI, mostly we are training the "visual cortex" or the "language parsing centre" of the brain.
The algorithms are not being applied to the general agent problem. The low hanging fruit of c
Re: (Score:2)
If you do not understand that understanding is different, then you do not have understanding. Sorry. Does make you part of the larger crowd though.
Re: (Score:2)
All I need to get "understanding" into the AI is to give it a self-model...
Maybe you can enlighten us as to how you plan to set parameters for this AI's "self". Perhaps you could even give us an example of one of your own (very un-snowflakish human) parameters of self. And don't forget that our (very un-snowflakish human) consciousness is itself comprised of subtler levels of consciousness (both lower and higher).
Re: (Score:2)
Qualia of consciousness remains a mystery, but is irrelevant to the implementation of general intelligence.
When we observe other humans, all we can ever do is note their sequence of behaviour; (actions, communication), and from that sequence, and also from previously learned generalizations and examples of same, we infer internal mental states for them such as their situation-models,
Re: (Score:2)
One thing that needs to be crystal clear. We do not have to achieve qualia of consciousness for an AI to understand the world.
My conscious experience probably differs from yours - probably everyone has a different conscious experience. What I know about consciousness is that it's one of 4 pieces of 1 part of what we are. Those pieces are:
Ego
Mind
Intelligence
Consciousness
In order for you to have any piece of any of that, you need at least a little bit of the others.
Re: (Score:2)
Re: (Score:2)
One thing that needs to be crystal clear. We do not have to achieve qualia of consciousness for an AI to understand the world.
That is a completely baseless assumption. In fact, the only entities we have that can understand the world have consciousness and heavily use it in that process. Hence the only reasonable default assumption is that it is needed and everything else would need extraordinary proof. You do not even have regular proof...
Re: (Score:2)
What can I say, I just have a beef with natural stupidity, and there is plenty of that in the AI threads.
Meaning is just better Pattern Recognition (Score:3)
Of course, it will need really good training and algorithms to figure out sentences like "I wrote about the pigs using my pen," but there is no reason to assume that there is some barrier to AI doing that. The compsci department round the corner has colleagues working on text and speech recognition, and I'm sure this type of thing is something they are dealing with; I doubt Google translate is that close to state-of-the-art.
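A toy version of that idea, with hand-made cue sets rather than anything a real translation system uses: pick the sense of "pen" whose typical context words overlap the sentence most. The cue lists are invented for illustration; real systems learn a far richer version of the same pattern matching.

```python
# Toy word-sense disambiguation by context overlap (hand-made cue sets).
SENSE_CUES = {
    "enclosure": {"pig", "pigs", "sheep", "barn", "farm", "gate"},
    "writing instrument": {"wrote", "write", "ink", "paper", "signed"},
}

def pen_sense(sentence):
    words = set(sentence.lower().replace(".", "").split())
    scores = {sense: len(words & cues) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get)

print(pen_sense("I put the pig in the pen"))             # enclosure (Google got this wrong)
print(pen_sense("I wrote about the pigs using my pen"))  # tie: both senses score, which is
                                                         # the point; it is genuinely ambiguous
```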
Re: (Score:2)
Aaaaaand, fail. You can only recognize patterns if the number of patterns is small enough to be cataloged. That is not the case here. Of course, you can in theory write a book (so not even an active agent) that has all the responses a specific truly exceptionally intelligent person would give to any question imaginable, but that does not mean this person has an internal dictionary where the answers get looked up. The mechanism is fundamentally different and not understood at all at this time.
Re: (Score:3)
literally, i thought that. for a fraction of a second i was sure that some unfortunate head amputee was struggling to make it in t
Re: (Score:2)
maybe, sort of (Score:2)
Of course we need better pattern recognition, but I don't think we get there through larger nets and better training. I suspect we've reached a level of training and individual net size that is already adequate. It is actually very surprising that the individual nets we've created can compete as well as they can with humans because they are doing it without the feedback of thousands of other nets that the human brain has.
What is most needed is not better trained specialized nets. We need many nets trained i
Re: (Score:2)
i.e. more training with better algorithms.
This is the solution to all cognitive problems, both human and machine.
Re: (Score:2)
I doubt Google translate is that close to state-of-the-art.
It's actually new and shiny technology [wikipedia.org], built in collaboration with Stanford. (Personally I think it produces worse results than the older method, but who knows.)
Re: (Score:2)
Douglas Hofstadter made a good point in his essay 'Waking up from the Boolean Dream': humans don't seem to do pattern recognition the way AI researchers are trying to program computers to. A lot happens in the sub-200-millisecond delay between seeing an image and recognising it, and we don't yet know how that works.
When you see a picture of your Grandma, you go 'Grandma' immediately. It is a stretch to say that your visual cortex manages this by comparing a picture of Grandma to pictures of houses, tigers a
Re: (Score:2)
I doubt Google translate is that close to state-of-the-art.
Think about it. Is there any reason for it not to be, any reason at all?
Hardware, time and data.
Re: (Score:2)
Google, having more hardware than any research department.
Google, having more man-hours to spend than any research department.
Google, having more data than all research departments put together.
Even if we assume all of that is true, it still makes sense for them to release their translation server somewhere between the moment it was better than the old one, and before it was 100% perfect.
Re: (Score:2)
So after the time the university researchers would release theirs.
Pretty much what I have been saying all along... (Score:2)
But the tech-fanatics want their flying cars...
I also should add that there is no indicator at all that machines will ever get there.
Re: (Score:2)
The asphalt lobbyists don't want us to have flying cars. "Roads? Where we're going, we don't need roads."
But more seriously, there is a real desire for fast travel that isn't limited by long waits like in a train schedule or unpredictable travel time like in heavy traffic. If you take the whole "flying car" thing from mid-20th-century Popular Science magazines overly literally, we're probably very far away from that. But we are moving towards technology that addresses similar demands for convenience and will
Re: (Score:2)
I have absolutely no issue with that. But it is not the same thing. It is problem-driven. Much of the AI hype is fantasy-driven, about the new slaves we are all going to get or, alternatively, the overlords that will kill us. And that is nonsense.
Re: (Score:2)
ML/DL/etc. got a lot of people really hopeful since they were SO much easier and you could throw hardware at them, plus they produced great marketing and search results... but in many ways we are pretty much where we were in the '70s or '80s in terms of actual AI development when it comes to actual intelligence.
Modelling and dissection (Score:2)
I believe at least two things will have to happen. First, the bot will have to generate candidate models of reality and evaluate them against the input for the most viable fit. These models may be physical in some cases, such as a 3D reconstruction of a face or room; conceptual in others, such as social relationship diagrams; and logic/deduction models, perhaps using CYC-like rule bases.
Second, these models and the rules that generated them will need to be comprehensible by 4-year-degree analysts so enoug
Re: (Score:2)
Re: (Score:2)
see http://abstractionphysics.net/ [abstractionphysics.net]
And you are right, the tech industry does not like it, because it requires the third primary user interface to be given to the users, not withheld.
How to become wealthy: make people need you. That's done in the tech industry by leaving the end user, in analogy, with only two of the primary colors needed to paint a rainbow.
Re: (Score:2)
No surprises there (Score:2)
Re: (Score:2)
The only reason for the previous AI winter was the fact that the AI at that time could not be monetized. We are way beyond that now. AI is making profit, and therefore there is continued effort to improve it and make even more money.
Re: (Score:2)
The only reason for the previous AI winter was the fact that the AI at that time could not be monetized. We are way beyond that now. AI is making profit, and therefore there is continued effort to market it and make even more money.
FTFY. "marketability" and "improvement" are not necessarily synonymous.
Re: (Score:2)
Re: (Score:2)
How does some one (Score:2)
Seems to me everything today in what is called AI/Machine Learning is little more than (to simplify) a huge case statement/if-elseif/search engine feeding back possible answers. Where the answers themselves must be evaluated, getting back results that in turn need to be evaluated.
Until you are able to encode conceptualization, feeling, understanding and sense of in relation to hard and soft data you may very well just end up in
Re: (Score:2)
Hope. (Score:2)
Modern AI (Score:2)
Modern AI isn't that much different than the AI I learned in school 25 years ago. There are two things that enable AI to be much more useful now, and often seem more powerful than it is:
1) Processing power
2) Dataset size
Both of those are multiple orders of magnitude greater today than 25 years ago, and that is what enables the kind of "flashy" AI that people get to interact with directly. Things like Siri, and photo albums on our phone that can automatically tag images with search terms (li
It'll never not have problems so long as (Score:2)
...the long-running ethics violation of the tech/software industry continues. see: http://3seas.org/EAD-RFI-respo... [3seas.org]
It should be obvious and in time it will be and then what will be thought of the tech/software industry?
Heck, even (Score:2)
"Recognize speech"
and
"Wreck a nice beach"
can trip up speech-to-text engines.
Current AI systems are based on the 1970's percept (Score:3)
This is all old school and nothing new. Computers advanced to the point where people realized they could practically use it. Neural networks are what brains use. Biological brains, though, have networks of networks. Neural networks are like Fourier transforms: they identify a signal from noise. They work on correlations, though, over set data. They are literally educated-guessing machines.
A real brain has neural networks that work together in sets. And on top of that there is a genetic cheat sheet for the neural nets: how big they are and how they should feed back into each other. There are even neural nets active in youth that function as trainers or biasing, to bootstrap brains. An insect has more intelligence than modern implementations. Modern systems are more akin to the pre- and post-processing that occurs locally in the optic nerve and spinal cord.
The big snake in the grass is the term Intelligence. It is a fuzzy concept in itself that depends on context.
How many decades of failure must pass before (Score:2)
AI researchers finally admit that human intelligence cannot be duplicated by machines?
Currently there's a fundamental assumption that awareness of self (and thus intelligence) is the result of the right mix of brain chemistry and electrical impulses, and therefore a silicon-based machine can be just as good as a carbon-based meat machine. But what if this assumption is.... wrong?
Now don't start ranting at me about the nonexistence of Jeebus and Yaweh, yeah I get it, you hate them. But many (probably majorit
Do they LEARN? (Score:2)
read the definition before you answer:
https://www.google.com/search?... [google.com]
gain or acquire knowledge of or skill in (something) by study, experience, or being taught.
"they'd started learning French"
A system that 'only' categorizes, sorts, and manipulates data does not actually relate to it as representational of the real world; in other words, it still has no 'knowledge' of the objects. They don't ACTUALLY learn, they are trained. They no more learn any topic than a parrot learns to talk.
Not to say they aren't
Re: (Score:2)
So, translating a sentence from one language to another does not involve knowledge or skill ?
Finally, a comment on AI that I can support (Score:5, Interesting)
The brain is way more complicated than we know.
For example: there are two stable isotopes of lithium. Chemically they are identical, but they do not have the same effect on the brain. One is useful as a drug to treat mental illness and the other is not. This means there is something more subtle about how our brain works than interconnections and electrochemistry.
It is however a worthy challenge because the journey will teach us much about who we really are and how we work.
Re: (Score:3)
For example: there are two stable isotopes of lithium. Chemically they are identical,
No, they are not. For example, one of the methods of separating them is the COLEX process https://en.wikipedia.org/wiki/... [wikipedia.org] which exploits their different chemical properties.
Re: (Score:3)
It's fascinating that something this subtle can have such a profound effect on the brain. Brain chemistry is very complex.
Re: (Score:2)
For example: there are two stable isotopes of lithium. Chemically they are identical, but they do not have the same effect on the brain.
That's a pretty radical assertion; I would sure like to see a reference. I'm googling lithium isotopes and mental illness, but so far nothing.
Re: (Score:2)
Bwahahahahaha (Score:2)
Seems we finally have real world verification for Searle's Chinese Room [wikipedia.org] situation. Thank you researchers for finally proving a conjecture from thirty years ago that you continually and blindly ignored. Some of you even argued against it. And now look at the egg on your face.
Ha!
Re: (Score:2)
Searle's Chinese Room is one of the stupidest ideas ever proposed.
That said, you're not even applying it correctly. The premise of the Chinese Room is that the room produces behavior indistinguishable from a real Chinese speaker. Any time you can point to a failure of an AI system, it is clearly violating that premise.
UI - Unartificial Intelligence (Score:2)
Isn't it absurd to add 'Unartificial' to real intelligence systems?
How do they work? In real life, in all species, there is an element of inherited knowledge. In humans, this is minimal and we must learn from experience and from our mentors. Generally speaking we learn, as all animals and mammals, by experimenting. What doesn't kill us makes us smarter.
We, all of us from microbes to humans, learn by exploring our world without prejudice, in hopes of finding something beneficial to our survival and welfare.
People (Score:2)
People make the same mistakes. Language is complicated, evolving, and misused constantly. If you told me to type out that sentence, I might assume the guy had a bear for a head also.
"Understanding" (Score:2)
I'd like to see a concise definition of what it means for a machine to "understand" something. It's easy to give examples of machines not "understanding" something, but if a machine suddenly dealt with all those examples correctly, could we then say that it "understands" those situations? Or would we find more examples that it gets wrong and say it still doesn't understand?
People are not perfect at interpreting images, either; it's fairly easy to construct an image that a person gets wrong, for instance u
Re: (Score:3)
Re: (Score:3)
Gross oversimplification.
Re: (Score:3)
But there is no denying we love boobies.
About 140 billion neurons in the human brain. Grossly oversimplifying, that's (140 billion)^2 possible interconnects (the actual number is far lower). We can't even store state information for the synapses (input weights), much less model the chemistry in the synapse.
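A back-of-envelope version of that storage point, taking the comment's own figure at face value (assumed numbers, and N^2 is only an upper bound; real synapse counts are orders of magnitude lower):

```python
# Rough upper bound on storing one weight per possible neuron-to-neuron connection.
neurons = 140e9                       # the figure used in the comment above
possible_pairs = neurons ** 2 / 2     # unordered pairs, roughly 1e22
bytes_needed = possible_pairs * 4     # one 32-bit weight each, nothing else
print(bytes_needed)                   # ~4e22 bytes, i.e. tens of zettabytes
```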
Re: (Score:3)
Re: (Score:2)
Well, at least this gets admitted by now. Calling it "AI" is still grossly misleading to any non-expert, but I can live with a statement like yours.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
They are taking these baby-steps because attempts at larger steps have failed. It is quite possible this attempt will fail too, but if not the results will be groundbreaking and very, very useful.
However the approach to mimic what humans do is probably not a good one, as common sense is not very common among humans.
Re: (Score:2)
Pardon Me while I Hijack This Useless Thread (Score:2)
If only we had a system that was designed from the ground up to provide some common sense [cyc.com] for AI.
Re: (Score:2)
Your signature is wrong: It will be like it always is:
Wrapped in the flag and carrying a bible: 'murika!
Why can't it be both?
(it can)
Re: (Score:2)
I should add it will also be wearing a black mask.
Re: (Score:2)
Or carrying walmart tiki torches and chanting about Jews and race and other such nonsense.
Re: (Score:2)
Re: (Score:2)
How did the UK eliminate slavery without a Civil War like the USA had?
Re: (Score:3)
By not passing protectionist tariffs that crippled half the states [dailyprogress.com] into law.
Re: (Score:2)
Thanks for the link. I read the article and many of the comments.
What do you think about this one?
William W. Bergen
This long-discredited argument borders on silly, and it is discouraging to see it being revived in our local paper. No country in the history of the world ever split asunder over tariffs, and whatever someone says now, Southerners at the time were clear as to why they fought. As Alexander Stephens, the Confederate Vice President, put it a month before the Civil War began: "Our new Government i
Re:They say... (Score:4, Interesting)
Thanks for the link. I read the article and many of the comments.
What do you think about this one?
The same thing I think about anyone who claims that a major moment in human history boils down to 1 factor - it's bullshit, man.
Yes, slavery was a factor, but not the only factor. Consider the tariffs I linked to, then ask yourself: under those trade rules, how would the Southern states have managed to survive without the use of slave labor? The fact is, they wouldn't have, so in a way the Northern states forced the South to rely on slavery, then punished those states for it.
The fact is, our American Civil War was complicated, both the reasons for its beginning and for its continuation (fun fact: Lincoln floated the idea of leaving slavery legal in some states, to preserve the Union). What's interesting as an American is that the angle historians take on the conflict tends to be defined by where you get your education: Northern states tend to teach the "Civil War was about slavery" concept, Southern states lean towards the "states' rights" ideology, and border states (like where I'm from) tend to take a more middle-of-the-road, "both of you are assholes" mentality.
Re: (Score:2)
Reading arguments like these always make me shake my head as it just underscores the fact that 99% of the population are unable to understand the difference between a proximal and an ultimate cause. [wikipedia.org] See also Aristotle's Four Causes. [wikipedia.org]
Re: (Score:2)
Well. It definitely sounds like you paid attention in philosophy class. Congratulations on being a well-rounded person.
I'm afraid though that I am no further enlightened in this particular "debate" by your instructive comments.
Re: (Score:2)
The purpose of AI: TO serve Man (Score:2)
It's a cookbook. And since this whole article is merely about extraction of semantic meaning in ambiguous cases, I will assert that the phrase "As someone who has worked in A.I. for decades" is literally a statement about the matrix and their occupancy within it.
And please could you take a step back because you are pixelating in my vision and I don't like the reminder that you are not real
Re: (Score:2)
People have worked in robotics and autonomous subsea vehicles for decades. You want a robotic system that can follow an underwater search path along a pipeline or around an oil rig and sound an alert when it finds something anomalous. Otherwise you need a team of operators watching CCTV cameras and manipulating controls, all working shifts for weeks. And one team for every non-autonomous ROV. Cost of hardware is low enough that you can afford ten or so ROV's.
Some sonar systems have a range of 10km, but at a