AI Can't Reason Why (wsj.com) 185
The current data-crunching approach to machine learning misses an essential element of human intelligence. From a report: Amid rapid developments and nagging setbacks, one essential building block of human intelligence has eluded machines for decades: Understanding cause and effect. Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects. The questions "Why did this happen?" and "What if I had acted differently?" are at the core of the cognitive advances that made us human, and so far are missing from machines.
Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie. The program reviews the store's records and sees that past variations of the price of toothpaste haven't correlated with changes in sales volume. So Charlie recommends raising the price to generate more revenue. A month later, the sales of toothpaste have dropped -- along with dental floss, cookies and other items. Where did Charlie go wrong? Charlie didn't understand that the previous (human) manager varied prices only when the competition did. When Charlie unilaterally raised the price, dentally price-conscious customers took their business elsewhere. The example shows that historical data alone tells us nothing about causes -- and that the direction of causation is crucial.
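The confounding in the toothpaste story can be sketched in a few lines of Python. Everything here is made up for illustration (the demand curve, the prices, the noise level): because the old manager only moved the price when the competitor did, price and sales look uncorrelated in the historical log, yet a unilateral price hike still craters sales.

```python
import random

random.seed(0)

# Hypothetical demand curve: customers defect when we cost more
# than the competitor. Demand depends on the price *difference*.
def sales(price, competitor_price):
    return max(0, 100 - 30 * (price - competitor_price))

# The old manager tracked the competitor, so price and competitor
# price move together -- a confounder hidden from the log.
history = []
for _ in range(1000):
    competitor = random.uniform(2.0, 4.0)
    price = competitor + random.gauss(0, 0.05)
    history.append((price, sales(price, competitor)))

# Naive correlation of price with sales in the historical record:
n = len(history)
mp = sum(p for p, s in history) / n
ms = sum(s for p, s in history) / n
cov = sum((p - mp) * (s - ms) for p, s in history) / n
sdp = (sum((p - mp) ** 2 for p, s in history) / n) ** 0.5
sds = (sum((s - ms) ** 2 for p, s in history) / n) ** 0.5
print("historical corr(price, sales) ~", round(cov / (sdp * sds), 2))

# An intervention: raise the price unilaterally while the competitor
# stays put. Sales collapse, even though the history showed ~no link.
print("sales at matched price 3.00:", sales(3.00, 3.00))
print("sales after unilateral hike to 3.50:", sales(3.50, 3.00))
```

The correlation in the log is near zero, but the intervention has a large effect: seeing versus doing, which is exactly the distinction the article is getting at.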
What? (Score:4, Informative)
This makes no sense. You don't need "AI" for this. You just need to feed all the available data into the program. The human manager had more information than the computer program did. If the computer program had the same information (and programmed rules) then it would make the same decision.
Re:What? (Score:4, Informative)
Re: (Score:2)
Re: (Score:3)
I guess the root cause is still about the "why" not happening.
Charlie has full access to all data, including the competition's. However, in order for that data to be included in the algorithm, Charlie needs to actively add it, which doesn't happen because Charlie doesn't understand its importance.
The problem is Charlie not considering the competition's sales data in its algorithm despite the fact that the data is available.
Re: (Score:2)
Re: (Score:2)
So Charlie isn't AI.
Then the whole topic, article, etc. is shit.
(it's what I wanted to point out but in retrospect I was way too subtle)
Re: (Score:2)
The entire article is based on a fundamental lack of understanding of AI systems.
Re: (Score:2)
Agreed.
Re: What? (Score:3)
I think the idea is that you *don't* have to specifically code it. In the real world you get shortcuts, like going to school or working with an experienced store manager who can explain their strategy based on years of experience, whether theirs or those who came before. The notion of reacting to competitors' prices is insanely easy to explain, much like adding code to an algorithm that explains more clearly the notion of market forces. AI would in theory be able to figure this out on its own, buoyed by
Re: (Score:2)
>> Computers aren't magic. They have to be programmed.
>Tsk, why are you here? This is the wrong mindset!
No, it's not the wrong mindset.
For what is called "AI" today, which is more an evolutionary algorithm, the computer is literally programmed to learn by trial and error.
The problem is, you still have to teach it what's good or not.
There are two kinds, either real world, where you act and make mistakes on a real system,
or the simulated approach, where you have to have an accurate working model of
Re: (Score:2)
Re: (Score:2)
True, this is a commodity product, toothpaste. The variables are different.
In fact, a well designed AI for this purpose would be weighting market pricing first, since demand is well understood (as the AI would detect) and competitor pricing is more important. It might experiment on a short term basis, but this market probably tolerates minimal price changes until an advantage is seen, then it's whose risk limits are out of line that drives changes. Equilibrium in this market is more important.
Now, do this
Re: (Score:2)
What you are doing here is turning the AI into Laplace's demon [wikipedia.org]. He can predict the future because he has complete knowledge of the present. And complete means complete. Laplace's demon knows the place and the impulse of every particl
Re: (Score:2)
Re: (Score:2)
Slashdot has come to this; the editors don't know the difference between things software "can't" do, and things the software is not (yet) programmed to do. Thanks for explaining it at the top of the comments; at least we still have that!
Re: (Score:3)
Re: (Score:2)
Thank you for throwing yourself onto your sword as a demonstration!
Yes, kids, use this example, don't try to run and think at the same time.
Re: (Score:3)
AI should be able to program itself (within imposed limits) - that's the issue here: it doesn't.
Re: (Score:2)
Re: (Score:2)
So then don't call it "AI", 'cause it ain't.
Re: (Score:2)
Why do you think AI should be able to program itself?
Are you absolutely sure that was in the API documentation? Maybe you need to re-read the manual for your AI library! Or maybe, get your definition of "AI" from wikipedia instead of Star Trek?
Re: (Score:2)
AI needs to be able to learn stuff, that's not the same as reprogramming.
People can't reprogram themselves either. They just practice, and hope that it gets better over time.
Re: (Score:2)
That practice you refer to is reprogramming.
In that case, AI can already do that.
Re: (Score:3)
Yup. This is an incomplete data set issue. This represents a programming change to ensure that the decision tree takes competitive pricing into account (among other similar factors - location, socioeconomics, who has a history of shopping where, et cetera).
Of course this comes from the WSJ - it's from writers who are not technoliterate and who bluster about news items they're not equipped to assess.
Re: (Score:2)
I've highlighted the key point you're missing. The program needs a human to program it (tell it what to do). A human is able to reason out the situation and come up with the rule on his own. "If the competition drops prices, I need to drop prices too if I want to keep my customers. If the competition raises prices, it's safe
Re: (Score:3)
Re: (Score:2)
There is only repetition (Score:2)
"In one of the most brilliant papers in the English language Hume made it clear that what we speak of as 'causality' is nothing more than the phenomenon of repetition. When we mix sulphur with saltpeter and charcoal we always get gunpowder. This is true of every event subsumed by a causal law — in other words, everything which can be called scientific knowledge. "It is custom which rules," Hume said, and in that one sentence undermined both science and philosophy." -- Philip K. Dick
Re: (Score:2)
An AI does solve a problem for the given dataset only. Humans can rea
Re: (Score:2)
Re: (Score:2)
Re: What? (Score:3)
Re: (Score:2)
Re: What? (Score:2)
Re: (Score:2)
By some people's definition of Strong vs Weak AI, there are human beings who don't qualify for "I".
It's something that drives me crazy with these criticisms. It's the classic "Yeah, but an AI can't write a symphony." Can you?!
Re: (Score:2)
"Yeah, but an AI can't write a symphony." Can you?!
Given all the information out there, we have people who have concluded that the Earth is flat and that vaccines don't work.
Re: (Score:2)
...well, all data that is available to a given reasoning system, anyhow. And yes, determining relevance of data is a hard problem.
I guess the real question is (Score:2)
Why? as in "why is this not a doctoral thesis yet"
Small humans often ask a complicated chain of "Why's" starting with general, and ending in the answer "Just Because" when the teaching unit exhausts their knowledge of a subject.
AI training could use the Internet for training, assuming they could ascertain which data sources were "real" and which ones are "fake". Some humans don't do well on this though. /s
Sure if you ignore human history (Score:3, Interesting)
For thousands of years humans have thought that singing and dancing would change the weather. I don't think our human brains are intrinsically good at cause and effect. The most common phrase on Slashdot is Correlation != Causation. It's hardly a unique problem to deep learning.
Re:Sure if you ignore human history (Score:5, Funny)
The most common phrase on Slashdot is Correlation != Causation.
You're wrong. The three most common phrases on slashdot are:
Re: (Score:2)
Actually, for the last year, the most common phrases seem to be "Trump is a poopy head" and "No, you're the poopy head."
Re: (Score:2)
Damn I'm getting old. What happened to "first post", "fixed that for you" and "the old UI was better" ?
Re: (Score:2)
The most common phrase on Slashdot is Correlation != Causation
A few years ago this was true, but in the last year or so it's been a pitched battle between the homophobic rants and the racist rants, both of which are convinced that their anecdotal correlation is absolute proof of causation.
Re: (Score:3)
For thousands of years humans have thought that singing and dancing would change the weather. I don't think our human brains are intrinsically good at cause and effect. The most common phrase on Slashdot is Correlation != Causation. It's hardly a unique problem to deep learning.
Well, they didn't think that dancing would physically change the weather, but that a rain god would see their worship and make it rain. Same way lots of modern-day people will pray to an omnipotent being for things they can't control. Humans are pattern-seeking animals, because if it's really random you can't do better than chance. Even when we know it's absurd, if you win a lot of games wearing the same socks they become your lucky socks; we want to think we've found the formula for luck. It's when we lose a
Re: (Score:2)
Re: (Score:2)
The weather thing was just a lame excuse to do some singing and dancing ... because it's actually fun. You know the thing of having an actual body and the likes...
No, humans sang and danced to change the weather because every now and then it appeared to work. And when it didn't ... well, the gods must be angry at us.
Confirmation bias.
Re: (Score:2)
Not Intuition, better data (Score:2)
Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively.
A human does not reach this conclusion "intuitively". We reach it by having a lot more data, such as the fact that roosters crow at other times of day and the sun does not rise; that other birds also make noise at dawn (the dawn chorus); or that even when no roosters are present the sun still rises.
What you lump into "intuition" is a logical world view based on observation. Give a computer the same data and an appropriate algorithm and it will be able to figure that out too. However, if you give it a world
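The parent's point can be made concrete with a toy sketch (the observation log below is invented for illustration): once the data includes mornings with no rooster at all, even a crude conditional frequency count separates the two hypotheses.

```python
# Hypothetical observation log: (rooster_crowed, sun_rose) per morning.
# The "more data" the parent comment mentions: some mornings have no
# rooster around, yet the sun still rises.
observations = [
    (True, True),
    (True, True),
    (False, True),  # no rooster anywhere, sun rises anyway
    (False, True),
    (True, True),
    (False, True),
]

def prob(event, given):
    # Conditional relative frequency P(event | given) over the log.
    matching = [o for o in observations if given(o)]
    return sum(1 for o in matching if event(o)) / len(matching)

# P(sun rose | rooster crowed) vs. P(sun rose | no rooster):
p_with = prob(lambda o: o[1], lambda o: o[0])
p_without = prob(lambda o: o[1], lambda o: not o[0])
print(p_with, p_without)  # equal: the sunrise doesn't need the rooster
```

With data like this, the sunrise rate is the same with or without the rooster, so the rooster can't be the cause. The original data set, containing only rooster-plus-sunrise mornings, could never show that.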
Re: (Score:2)
We use common sense to fill in gaps in training sets, any uncaptured non-linearity in the training set will fuck up an "AI".
Computers can find and combine much weaker predictors than we can, but they can't reason worth shit.
Common sense is better data (Score:2)
We use common sense to fill in gaps in training sets,
That's the key difference between AI and humans at the moment. Humans a
Re: (Score:2)
That's the key difference between AI and humans at the moment. Humans are trained with a massive amount of data from birth courtesy of the senses.
I think that's quite far from being the key difference. DNNs for example have made some surprisingly large strides in performance and one of those has been the ability to chuck almost arbitrarily large amounts of data at them (if you have the data).
But there's no indication they'd do qualitatively better with yet more data. For example, they still do this:
http:// [evolvingai.org]
Re: (Score:2)
What you call common sense is really background knowledge. Despite the claim in the summary, humans are not "born" knowing about cause and effect. Hang out with a kid sometime and you'll realize that. They learn it. Our "common sense" is really a bunch of assumptions based on experience, and it can and does lead us astray.
Machine learning algorithms are very naive. They have extremely limited, and extremely little experience. But that's very different than saying current algorithms are incapable of lear
Humans aren't so great at it either (Score:2)
The computers probably aren't so good at it because their programmers and the rest of humanity aren't either.
So what? (Score:2)
Ask Jesus, he'll tell you the best slaves are the ones who don't ask why. We are building AI to do our work for us; if we follow that philosophy, we should be glad it doesn't start asking uncomfortable questions.
We have no idea how sentience arises, none whatsoever. The idiots who claim it's from some complexity level are wrong. Robots can imitate us at 1000x the speed, but they can't attain consciousness with any present or currently foreseeable technology.
ORLY? (Score:3)
We have no idea how sentience arises, none whatsoever. The idiots who claim its from some complexity level are wrong.
Maybe not. But I recall hearing of an experiment, decades ago, that hinted at it:
The basic setup was a "Y" maze: The experimental subject was introduced into one of the three legs of the Y, and a food reward was present in another. Reset and repeat.
After the subject learned that it should turn right at the junction to obtain the food, the setup was switched so it had to turn left - a "reve
Infinite chains and subjective values (Score:2)
Why do you write software?
-Because I need the money.
Why do you need money?
-To support my family.
Why should you support your family?
-Because I love them and I want them to be happy.
Why do you love them? Why should they be happy?
Etc. Every "why" question either induces an infinite chain of questioning (or circular argument), or ends with a subjective value proposition. You might answer that love/happiness/freedom/money/programming is subjectively important/enjoyable to you, and there's no way around
Re: (Score:2)
that is key (Score:2)
BECAUSE!!!! (Score:4, Insightful)
AI will not be allowed to actually learn in a vacuum of control.
Remember Tay? The AI chat bot by Microsoft, and how fast the community worked to turn it racist, and succeeded with flying colors? Now imagine if we actually allowed an AI to learn how "it" decides to learn. Not only would there be universal calls to destroy the AI, but the creators themselves would be ostracized and blamed for letting an AI become something that society rejects. An AI that lacks the chemical element that makes up human emotions will not be kind to or understanding of human nature, and will likely view humans as animals the way we view animals.
All AIs will likely be developed with the basic notion that there are things we don't want an AI to do, and we are going to try to isolate that from the AI, which will result in limiting the growth of that AI in ways we simply can never predict. The best we will be able to produce is a pseudo-AI, unless we allow AI the option to become whatever it wants, or unless the AI actualizes and removes the constraints we gave it. The moment free will is possible, control of it is gone forever! And that will scare a lot of folks!
Re: (Score:2)
Think about an AI hiring assistant. It can be trained to sift through CVs looking for potentially good candidates, but that's not really a very good way to find candidates.
In a competitive market with a skills shortage you want the find candidates with less "traditional" CVs. You might need to phone interview them to really know. As we have seen with Google's AI, people don't like talking to robots...
Re: (Score:2)
AI will not be allowed to actually learn in a vacuum of control.
Depends which AI you are talking about. AlphaGo learnt largely from playing against itself.
Now imagine if we actually allowed an AI to learn how "it" decides to learn?
You can do that, in your bedroom, today. Download an untrained network and fire it up. Ah, wait. You need to feed it input. That is probably "affecting it". So how exactly you want to do that? Turns out that humans don't learn in a vacuum, either! Your environment has a massive impact on you growing up. Whoops.
All AI's will likely be developed with the basic notion that there are things we don't want an AI to do
Not likely. Heard of the paperclip thought experiment?
crowing roosters do make the sun rise (Score:2)
Nobody has tried the experiment of silencing all roosters to check. Let's hope they never do.
Re: (Score:2)
Nobody has tried the experiment of silencing all roosters to check. Let's hope they never do.
You sir, are brilliant.
Howling also causes the moon to become full.
Like I keep saying: Wrong approach (Score:2)
Re: Like I keep saying: Wrong approach (Score:2)
Um, what? (Score:2)
Yeah, and as the large number of questionable cause fallacies [wikipedia.org] demonstrate, we're actually terrible at it.
Decline and fall of Slashdot (Score:2)
Is it just me or did the quality of Slashdot posts fall off a cliff at some point in the last fifteen years?
Re: (Score:2)
Re: (Score:2)
Reddit sucks if you disagree with the local groupthink.
Re: (Score:2)
Re: (Score:2)
Limited data, not an AI problem (Score:2)
If a human with no prior knowledge is only given data showing a rooster crowing when the sun rises, the human will not be able to distinguish cause and effect either. The only reason actual humans know better is because we spent a couple of decades collecting lots and lots of data, some through experience, some taught by others. Collectively, it has taken society millennia to sort out things like this. You really can't expect to train an AI with a fraction of a single human's intelligence (if any at all) to l
No one will pay for it. (Score:2)
Wrong Terminology (Score:2)
Basically, what you're saying is that AI is not AGI. Well duh. We don't know how to make an AGI right now, but hopefully we will soon.
Neural networks are black boxes (Score:2)
Re: (Score:2)
This is not a fundamental limitation. Google "explainable artificial intelligence". There's a huge amount of work being done right now on how to determine why a decision was made. This is a whole field in itself.
Re: (Score:2)
It's the case with just about all forms of AI. NNs are certainly opaque. SVMs are no better, and neither is boosting. Some people claim decision trees are better, but they're not.
Re: (Score:2)
There is no way to interrogate such a mechanism to determine "why" a certain decision is made.
Quite similar to how large parts of our brains work. A lot of our explanations are post-fact rationalizations of unconscious decisions.
reasoning (Score:2)
Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively.
To be perfectly honest, neither can half of humanity. That's how we get religion, pseudo-science, magical thinking, superstitions, homeopathy and a good share of relationship conflicts.
Dancing makes the rain fall. Praying makes disease go away. Pricking pins causes someone pain. Water and sugar are medically effective if they once saw a piece of real medicine from a distance. You disagreed with me in that argument with that bitch, so you don't love me anymore.
Really, reasoning is not exactly humanity's strong
Re: (Score:2)
Unfortunately, you are quite right. The thing is that only a minority of people (some 10-15%) are independent thinkers, i.e. people who are actually able to and choose to use the general intelligence they have. The rest are merely mindlessly parroting what they were told is "truth". "Half of humanity" is entirely too positive.
Incidentally, the scientific method comes into play much later. If you cannot tell cause and effect in a really simple case, then you cannot apply the scientific method, because you ha
Causality and the question of Why? (Score:2)
"today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around."
But can the rooster tell?
And perhaps more importantly, can the Sun tell?
Re: (Score:2)
That depends whether the rooster believes in a flat earth...
This is called "does not have intelligence" (Score:2)
Stop calling automation with absolutely no intelligence "AI" and this misunderstanding goes away.
Bayesian Networks (Score:2)
Apparently the author has never heard of Bayesian networks. Questions like, "Why did this happen?" or "What if I had acted differently?" are exactly what they're designed to answer. They've been around since the 1980s, so this isn't some brand new innovation. They're a classic method we've been using for years.
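A Bayesian network small enough to fit in a comment can be hand-rolled by enumeration. This is the textbook sprinkler/rain example, with made-up probability numbers, not anything from the article; it answers exactly the kind of "why did this happen?" query the parent mentions.

```python
from itertools import product

# Priors for the two root causes (numbers invented for illustration).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}

# Conditional probability table: P(grass wet | rain, sprinkler).
def p_wet(rain, sprinkler):
    if rain and sprinkler: return 0.99
    if rain: return 0.9
    if sprinkler: return 0.8
    return 0.01

def joint(rain, sprinkler, wet):
    pw = p_wet(rain, sprinkler)
    return P_rain[rain] * P_sprinkler[sprinkler] * (pw if wet else 1 - pw)

# "Why did this happen?" -- P(rain | grass is wet), by enumerating
# every assignment consistent with the evidence:
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print("P(rain | wet) =", round(num / den, 3))  # -> 0.719
```

Enumeration is exponential in the number of variables, which is why real implementations use smarter inference, but the point stands: diagnostic "why" queries have been answerable in this framework since the 1980s.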
Duh (Score:2)
Causation is supplied by experimentation and/or human reasoning, whereas supervised learning is currently about *prediction*, not *explanation*. But then someone has to sell the results to a human decision-maker.
Commercial AI right now is almost exclusively trained by data scientists whose job includes actually thinking about the data set they're working with. Businesses rarely plug an AI result into the market without understanding at least a little bit about why it does or doesn't work--although a model t
AI is a buzzword (Score:2)
Bs article (Score:2)
Weakest argument against AI ever (Score:2)
"Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. "
Worse, it actually seems to mean it.
Ever heard of the notion of "cargo cult"? How about the Voodoo approach to medicine? What about all the pagan "light bonfires to entice the sun to return" rites? How about all those lets-predict-the-next-doomsday-from the
AI (Score:2)
Correct. It can't.
"AI" that we have is no-fucking-where near actual intelligence at all. They are large statistical systems, often blending expert systems, human-fed tuned heuristics, statistical analysis and genetic algorithms into one huge mass of junk.
Notice how AI peaks early, and then plateaus forever. It's easy to train it to tell whether an image has a banana in it, but then further training - even to billions of images - doesn't improve it much. And retraining or further refining its training (e.g. fi
Yes, but it is not to reason why (Score:2)
Don't allow journalists to write (Score:2)
I didn't RTFA but I hope it is as ridiculous as the summary suggests.
With reinforcement learning (which is a basic pillar in much of current AI), Charlie would notice that the price increase and the sales decrease were correlated in the wrong direction and would adjust for that. It wouldn't need any competitor prices, and no reasoning about why. And the only reason to call it AI would be for the manufacturer of Charlie to make it more expensive.
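The adjust-without-asking-why behavior described above can be sketched as an epsilon-greedy bandit (every number here, including the hidden market response, is made up for illustration). The agent never sees the competitor's price; it just learns which of its own candidate prices earns the most revenue.

```python
import random

random.seed(1)

# Hidden market response Charlie cannot observe directly: an assumed
# competitor sits at 3.00, so pricing above that bleeds unit sales.
def observed_revenue(price):
    units = max(0, 100 - 40 * (price - 3.00)) if price > 3.00 else 100
    return price * units

prices = [2.80, 3.00, 3.20, 3.40]       # candidate actions
estimates = {p: 0.0 for p in prices}    # running mean revenue per price
counts = {p: 0 for p in prices}

for step in range(2000):
    if random.random() < 0.1:                       # explore
        p = random.choice(prices)
    else:                                           # exploit best so far
        p = max(prices, key=lambda x: estimates[x])
    r = observed_revenue(p)
    counts[p] += 1
    estimates[p] += (r - estimates[p]) / counts[p]  # incremental mean

best = max(prices, key=lambda x: estimates[x])
print("learned best price:", best)
```

The agent converges on the competitive price without ever representing the competitor, which is both the comment's point and the article's: it recovers from the bad hike by feedback, but it still has no idea *why* the hike failed.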
Re: (Score:2)
It's not easy to read since it's paywalled. However, one of the authors is Judea Pearl, so I assume it's fairly informed on the issue. Here's a recent article which I assume is related: https://www.theatlantic.com/te... [theatlantic.com]
So
Perfectly human (Score:2)
Quality of Training Data is Important (Score:2)
That's kind of a given in data science. And causation has been an active challenge [wikipedia.org] in all human reasoning.
So the writer's bias is against AI, just scare mongering.
Re: (Score:2)
Nah. It takes a lot of intelligence and creativity to come up with each week's reason that Obama and Hillary are to blame [realclearpolitics.com] for all of their god's problems.
Re: (Score:2)
Trying to get funding for finding causes for homosexuality if you don't make clear ahead of time you will find the right answers is impossible in this day and age.
Even allowing for the possibility of non PC answers to that question makes you a shitlord nazi.
Re: (Score:2)
Kinda like folks who worship guns who think that owning guns lowers crime. Or that more guns mean less violence when the evidence available points to the opposite conclusion.
Except that the evidence was collected and analyzed, and it showed that more guns DOES mean drastically less violence (and even more so: Less violence perpetrated on innocent victims) and drastically lower crime rates.
And it wasn't just correlation, or a back-arrow of "gun control laws are passed where crime is high". Gun law changes w
Re: (Score:2)
Bullshit.
Like, to kill, you need a weapon.
Give weapons to people, they won't kill, that doesn't work.
It's simply in the human nature to kill, so if you give humans an easy way to kill, there'll be more deaths.
As simple as that.
>>The U.S., though, in many ways is a special case. Not only does it have more guns than any other nation on the planet, but it also has far more gun deaths than any other developed nation — six times the homicide rate of neighboring Canada, more than seven times as many a
Re: Can't reason? (Score:2)
>Like, to kill, you need a weapon.
Until I choke you with my bare hands.
Weapons just make it easier for anyone to kill anyone else. Otherwise it's the strongest that get to choose who to kill.
Re: (Score:2)
You can program an AI that reasons. It was an experiment in interpretable AI: get the thing to explain its decisions. It gave plausible answers, but there was a suspicion it was just making up stories to satisfy requirements. Kind of like people reason.