AI Technology

AI Can't Reason Why (wsj.com) 185

The current data-crunching approach to machine learning misses an essential element of human intelligence. From a report: Amid rapid developments and nagging setbacks, one essential building block of human intelligence has eluded machines for decades: Understanding cause and effect. Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects. The questions "Why did this happen?" and "What if I had acted differently?" are at the core of the cognitive advances that made us human, and so far are missing from machines.

Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie. The program reviews the store's records and sees that past variations of the price of toothpaste haven't correlated with changes in sales volume. So Charlie recommends raising the price to generate more revenue. A month later, the sales of toothpaste have dropped -- along with dental floss, cookies and other items. Where did Charlie go wrong? Charlie didn't understand that the previous (human) manager varied prices only when the competition did. When Charlie unilaterally raised the price, dentally price-conscious customers took their business elsewhere. The example shows that historical data alone tells us nothing about causes -- and that the direction of causation is crucial.
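To make the confounding concrete, here is a minimal sketch (not from the article; the demand curve and numbers are invented) of how Charlie's logs can show no price-sales correlation even though price causally drives sales, because the old manager only ever moved price in step with the competition:

```python
# Minimal sketch (invented numbers): why "Charlie" sees no price effect.
# The old manager tracked competitor prices, so the price *gap* customers
# respond to barely varied in the historical data.
import numpy as np

rng = np.random.default_rng(0)

competitor = rng.uniform(2.0, 4.0, size=1000)        # competitor's weekly price
ours = competitor + rng.normal(0, 0.05, size=1000)   # manager matches competition
sales = 100 - 30 * (ours - competitor) + rng.normal(0, 1, size=1000)

# Observational data: our price barely predicts sales.
print(np.corrcoef(ours, sales)[0, 1])                # near 0

# Intervention: raise our price by $1 unilaterally; sales collapse.
sales_after = 100 - 30 * ((ours + 1.0) - competitor) + rng.normal(0, 1, size=1000)
print(sales.mean(), sales_after.mean())              # ~100 vs ~70
```

The observational correlation is near zero because the gap customers respond to was never allowed to vary; only intervening on the price reveals the causal effect.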


AI Can't Reason Why

  • What? (Score:4, Informative)

    by 110010001000 ( 697113 ) on Friday May 18, 2018 @06:14PM (#56635862) Homepage Journal
    " Charlie didn't understand that the previous (human) manager varied prices only when the competition did"

    This makes no sense. You don't need "AI" for this. You just need to feed all the available data into the program. The human manager had more information than the computer program did. If the computer program had the same information (and programmed rules) then it would make the same decision.
    • Re:What? (Score:4, Informative)

      by NicknameUnavailable ( 4134147 ) on Friday May 18, 2018 @06:17PM (#56635884)
      The computer wouldn't even need that information; this is a basic microeconomics problem, and a computer could easily solve it today. Interesting that some "journalist" writing for something named The Wall Street Journal doesn't know this.
      • The computer would need to know what the competitors are selling the product for in order to consider it in its algorithm.
        • I guess the root cause is still the "why" not happening.
          Charlie has full access to all data, including the competition's. However, in order for that data to be included in the algorithm, Charlie needs to actively add it, which doesn't happen because Charlie doesn't understand its importance.

          The problem is Charlie not considering the competition's sales data in its algorithm despite the fact that the data is available.

          • In the example, Charlie didn't have full access to the data, but even if he did, he would need to be "taught" (a.k.a. programmed), just like the store manager was taught by somebody. Computers aren't magic. They have to be programmed.
            • So Charlie isn't AI.
              Then the whole topic, article, etc. is shit.

              (it's what I wanted to point out but in retrospect I was way too subtle)

            • I think the idea is that you *don't* have to specifically code it. In the real world you get shortcuts, like going to school or working with an experienced store manager who can explain their strategy based on years of experience, whether theirs or those that came before. The notion of reacting to competitors' prices is insanely easy to explain, much like adding code to an algorithm that explains more clearly the notion of market forces. AI would in theory be able to figure this out on its own, buoyed by

        • No it wouldn't. There are solutions to imperfect information.
      • True, this is a commodity product, toothpaste. The variables are different.

        In fact, a well designed AI for this purpose would weight market pricing first, since demand is well understood (as the AI would detect) and competitor pricing is more important. It might experiment on a short-term basis, but this market probably tolerates minimal price changes until an advantage is seen; then it's whoever's risk limits are out of line that drives changes. Equilibrium in this market is more important.

        Now, do this

      • by Sique ( 173459 )
        No, the human operator would need that information. The computer works only on the dataset given, and if that dataset does not include competitors' prices, the computer will never be able to infer them. The AI is only trying to find rules in the data and to extrapolate from there.

        What you are doing here is turning the AI into Laplace's demon [wikipedia.org]. He can predict the future because he has complete knowledge of the present. And complete means complete. Laplace's demon knows the position and the momentum of every particl

    • Slashdot has come to this; the editors don't know the difference between things software "can't" do, and things the software is not (yet) programmed to do. Thanks for explaining it at the top of the comments; at least we still have that!

      • Lots of young people don't know how software (or computers) works at this point. They think it is magic, and that neural networks are somehow new.
      • AI should be able to program itself (within imposed limits) - that's the issue here: it doesn't.

        • There is no such thing as "AI" and computers need to be programmed. Explicitly.
        • Why do you think AI should be able to program itself?

          Are you absolutely sure that was in the API documentation? Maybe you need to re-read the manual for your AI library! Or maybe, get your definition of "AI" from wikipedia instead of Star Trek?

        • AI needs to be able to learn stuff, that's not the same as reprogramming.

          People can't reprogram themselves either. They just practice, and hope that it gets better over time.

    • Yup. This is an incomplete data set issue. It represents a programming change to ensure that the decision tree takes competitive pricing into account (among other similar factors: location, socioeconomics, who has a history of shopping where, et cetera).

      Of course this comes from the WSJ - it's from writers who are not technoliterate and who bluster about news items they're not equipped to assess.

    • The human manager had more information than the computer program did. If the computer program had the same information ( and programmed rules ) then it would make the same decision.

      I've highlighted the key point you're missing. The program needs a human to program it (tell it what to do). A human is able to reason out the situation and come up with the rule on his own. "If the competition drops prices, I need to drop prices too if I want to keep my customers. If the competition raises prices, it's safe

      • No, a human needs to be taught too. You don't think Walmart store managers learn when/how to drop their prices??? They are trained by the corporation typically.
        • How do you think that training came into being? A human thought it up in the first place. Just because it wasn't an original thought in 2018 doesn't mean it was never an original thought.
    • "In one of the most brilliant papers in the English language Hume made it clear that what we speak of as 'causality' is nothing more than the phenomenon of repetition. When we mix sulphur with saltpeter and charcoal we always get gunpowder. This is true of every event subsumed by a causal law — in other words, everything which can be called scientific knowledge. "It is custom which rules," Hume said, and in that one sentence undermined both science and philosophy." -- Philip K. Dick

    • by Sique ( 173459 )
      Of course this makes sense. Unlike an AI, humans are able to expand the data set they are working on. In the example given, toothpaste prices at other stores are not available, only the in-house prices. Thus an AI will never be able to discover the competition-price rule. And unlike a human, it will never ask for the prices of other stores, as the data world it lives in does not even hint at the existence of other stores.

      An AI solves a problem for the given dataset only. Humans can rea

    • by jrumney ( 197329 )
      Where the rules are simple and well understood, a custom program is always going to outperform AI (even if the AI comes up with the same solution, it is going to expend a lot more energy doing so). AI is good for problems that are too difficult to write rules for, or where humans themselves can't really separate the causation from correlation. We are currently in the overhyped phase of AI development, give it a few years to find its niches.
  • Why? as in "why is this not a doctoral thesis yet"

    Small humans often ask a complicated chain of "why"s, starting with the general and ending in the answer "just because" when the teaching unit exhausts its knowledge of a subject.

    AI could use the Internet for training, assuming it could ascertain which data sources were "real" and which were "fake". Some humans don't do well on this though. /s

  • by im_thatoneguy ( 819432 ) on Friday May 18, 2018 @06:19PM (#56635912)

    For thousands of years humans have thought that singing and dancing would change the weather. I don't think our human brains are intrinsically good at cause and effect. The most common phrase on Slashdot is Correlation != Causation. It's hardly a unique problem to deep learning.

    • by Aighearach ( 97333 ) on Friday May 18, 2018 @06:28PM (#56635960)

      The most common phrase on Slashdot is Correlation != Causation.

      You're wrong. The three most common phrases on slashdot are:

      • You're wrong
      • Yer wrong
      • Your wrong
      • by taustin ( 171655 )

        Actually, for the last year, the most common phrases seem to be "Trump is a poopy head" and "No, you're the poopy head."

      • by Tom ( 822 )

        Damn I'm getting old. What happened to "first post", "fixed that for you" and "the old UI was better" ?

    • The most common phrase on Slashdot is Correlation != Causation

      A few years ago this was true, but in the last year or so it's been a pitched battle between the homophobic rants and the racist rants, both of which are convinced that their anecdotal correlation is absolute proof of causation.

    • by Kjella ( 173770 )

      For thousands of years humans have thought that singing and dancing would change the weather. I don't think our human brains are intrinsically good at cause and effect. The most common phrase on Slashdot is Correlation != Causation. It's hardly a unique problem to deep learning.

      Well, they didn't think that dancing would physically change the weather, but that a rain god would see their worship and make it rain. The same way lots of modern-day people will pray to an omnipotent being for things they can't control. Humans are pattern-seeking animals, because if it's really random you can't do better than chance. Even when we know it's absurd, if you win a lot of games wearing the same socks, they become your lucky socks; we want to think we've found the formula for luck. It's when we lose a

    • by mspring ( 126862 )
      The weather thing was just a lame excuse to do some singing and dancing ... because it's actually fun. You know the thing of having an actual body and the likes...
      • The weather thing was just a lame excuse to do some singing and dancing ... because it's actually fun. You know the thing of having an actual body and the likes...

        No, humans sang and danced to change the weather because every now and then it appeared to work. And when it didn't ... well, the gods must be angry at us.

        Confirmation bias.

  • Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively.

    A human does not reach this conclusion "intuitively". We reach it by having a lot more data, such as the fact that roosters crow at other times of day and the sun does not rise; that other birds also make noise at dawn (the dawn chorus); or that even when no roosters are present, the sun still rises.

    What you lump into "intuition" is a logical world view based on observation. Give a computer the same data and an appropriate algorithm and it will be able to figure that out too. However, if you give it a world

    • We use common sense to fill in gaps in training sets, any uncaptured non-linearity in the training set will fuck up an "AI".

      Computers can find and combine much weaker predictors than we can, but they can't reason worth shit.

      • We use common sense to fill in gaps in training sets,

        ...and "common sense" is simply the extrapolation based on other data that we have been exposed to. Even if we have never seen a particular object before "common sense" tells us it will fall downwards if it is knocked off a table simply because every object we see usually falls downwards. However, we never spend time training a computer with data like this because what use is an algorithm that can tell us that objects fall downwards?

        That's the key difference between AI and humans at the moment. Humans are trained with a massive amount of data from birth courtesy of the senses.

        • That's the key difference between AI and humans at the moment. Humans are trained with a massive amount of data from birth courtesy of the senses.

          I think that's quite far from being the key difference. DNNs, for example, have made some surprisingly large strides in performance, and one of those has been the ability to chuck almost arbitrarily large amounts of data at them (if you have the data).

          But there's no indication they'd do qualitatively better with yet more data. For example, they still do this:

          http:// [evolvingai.org]

      • by ceoyoyo ( 59147 )

        What you call common sense is really background knowledge. Despite the claim in the summary, humans are not "born" knowing about cause and effect. Hang out with a kid sometime and you'll realize that. They learn it. Our "common sense" is really a bunch of assumptions based on experience, and it can and does lead us astray.

        Machine learning algorithms are very naive. They have extremely limited, and extremely little, experience. But that's very different from saying current algorithms are incapable of lear

  • The computers probably aren't so good at it because their programmers and the rest of humanity aren't either.

  • Ask Jesus, he'll tell you the best slaves are the ones who don't ask why. We are building AI to do our work for us; if we follow that philosophy, we should be glad it doesn't start asking uncomfortable questions.

    We have no idea how sentience arises, none whatsoever. The idiots who claim it's from some complexity level are wrong. Robots can imitate us at 1000x the speed, but they can't attain consciousness with any present or currently foreseeable technology.

    • We have no idea how sentience arises, none whatsoever. The idiots who claim its from some complexity level are wrong.

      Maybe not. But I recall hearing of an experiment, decades ago, that hinted at it:

      The basic setup was a "Y" maze: The experimental subject was introduced into one of the three legs of the Y, and a food reward was present in another. Reset and repeat.

      After the subject learned that it should turn right at the junction to obtain the food, the setup was switched so it had to turn left - a "reve

  • Why do you write software?
    -Because I need the money.
    Why do you need money?
    -To support my family.
    Why should you support your family?
    -Because I love them and I want them to be happy.
    Why do you love them? Why should they be happy?

    Etc. Every "why" question either induces an infinite chain of questioning (or circular argument), or ends with a subjective value proposition. You might answer that love/happiness/freedom/money/programming is subjectively important/enjoyable to you, and there's no way around

  • That is why I have continually claimed there is no AI yet; what we have is task programming. We "learn" because we ask why, how, where... we have a burning drive, and our interactions with meatspace during our quest to learn why develop our intelligence through experience (the "I" part of AI). Since artificial processes do not have goals beyond those stated by the programmer, they cannot, nor ever will, have a "eureka moment"; hence there will never be true AI under the current direction of development.
  • BECAUSE!!!! (Score:4, Insightful)

    by SirAstral ( 1349985 ) on Friday May 18, 2018 @06:39PM (#56636046)

    AI will not be allowed to actually learn in a vacuum of control.

    Remember Tay? The AI chat bot by Microsoft, and how fast the community worked to turn it racist, and succeeded with flying colors? Now imagine if we actually allowed an AI to learn however "it" decides to learn. Not only would there be universal calls to destroy the AI, but the creators themselves would be ostracized and blamed for letting an AI become something that society rejects. An AI that lacks the chemical element that makes up human emotions will not be kind or understanding of human nature, and will likely view humans as animals the way we view animals.

    All AIs will likely be developed with the basic notion that there are things we don't want an AI to do; we will try to isolate those from the AI, which will limit the growth of that AI in ways we simply can never predict. The best we will be able to produce is a pseudo-AI, unless we allow the AI the option to become whatever it wants, or unless the AI actualizes and removes the constraints we gave it. The moment free will is possible, control of it is gone forever! And that will scare a lot of folks!

    • by AmiMoJo ( 196126 )

      Think about an AI hiring assistant. It can be trained to sift through CVs looking for potentially good candidates, but that's not really a very good way to find candidates.

      In a competitive market with a skills shortage you want to find candidates with less "traditional" CVs. You might need to phone interview them to really know. As we have seen with Google's AI, people don't like talking to robots...

    • by Tom ( 822 )

      AI will not be allowed to actually learn in a vacuum of control.

      Depends which AI you are talking about. AlphaGo learnt largely from playing against itself.

      Now imagine if we actually allowed an AI to learn how "it" decides to learn?

      You can do that, in your bedroom, today. Download an untrained network and fire it up. Ah, wait. You need to feed it input. That is probably "affecting it". So how exactly do you want to do that? Turns out that humans don't learn in a vacuum, either! Your environment has a massive impact on you growing up. Whoops.

      All AI's will likely be developed with the basic notion that there are things we don't want an AI to do

      Not likely. Heard of the paperclip thought experiment?

  • Nobody has tried the experiment of silencing all roosters to check. Let's hope they never do.

    • Nobody has tried the experiment of silencing all roosters to check. Let's hope they never do.

      You sir, are brilliant.

      Howling also causes the moon to become full.

  • They're trying to do an end-run around millions of years of evolution, compressing it into a handful of years, and in the meantime we don't even understand how it is our own brains (or ANY brains for that matter) can do what they do.
  • Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects.

    Yeah, and as the large number of questionable cause fallacies [wikipedia.org] demonstrates, we're actually terrible at it.

  • Is it just me or did the quality of Slashdot posts fall off a cliff at some point in the last fifteen years?

  • If a human with no prior knowledge is only given data showing a rooster crowing when the sun rises, the human will not be able to distinguish cause and effect either. The only reason actual humans know better is because we spent a couple of decades collecting lots and lots of data, some through experience, some taught by others. Collectively, it has taken society millennia to sort out things like this. You really can't expect to train an AI with a fraction of a single human's intelligence (if any at all) to l

  • Ok. So I work in AI research. The lab I am in specializes in a type of modeling where you can drill down into the simulation and explain the full chain from root causes to final effects, even with feedback loops. And no one is interested in what we do. Why? Well, it is slow! It has the wrong buzzwords! It takes up too much CPU/memory! Why, this other team can throw a few equations and a neural net at the same problem and get an answer faster! Sure, they can't explain why, but oooh, look at the glittery ha
  • Basically, what you're saying is that AI is not AGI. Well duh. We don't know how to make an AGI right now, but hopefully we will soon.

  • Assuming AI == neural networks here, this is a known fundamental limitation. A neural network makes the decisions that it does based on weights on the connections between the nodes. These weights are computed in an iterative feedback process that converges toward values that produce the desired results. There is no way to interrogate such a mechanism to determine "why" a certain decision is made.
    • This is not a fundamental limitation. Google "explainable artificial intelligence". There's a huge amount of work being done right now on how to determine why a decision was made; this is a whole field in itself (a minimal sketch of one such technique follows this thread).

    • It's the case with just about all forms of AI. NNs are certainly opaque. SVMs are no better, and neither is boosting. Some people claim decision trees are better, but they're not.

    • There is no way to interrogate such a mechanism to determine "why" a certain decision is made.

      Quite similar to how large parts of our brains work. A lot of our explanations are post-fact rationalizations of unconscious decisions.
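Following up on the "explainable artificial intelligence" reply above, here is a minimal sketch of one standard post-hoc technique, permutation importance (the toy model and data are invented). It doesn't open the black box, but it answers a limited "why": which inputs the trained model actually relies on.

```python
# Minimal sketch (toy model and data invented): permutation importance.
# Shuffle one input at a time; the accuracy drop shows which features an
# otherwise opaque model actually relies on.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))             # three features; only the first matters
y = (X[:, 0] > 0).astype(int)

def model(X):                             # stand-in for any trained black box
    return (X[:, 0] > 0).astype(int)

baseline = (model(X) == y).mean()
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to the labels
    drop = baseline - (model(Xp) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.2f}")   # large only for feature 0
```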

  • by Tom ( 822 )

    Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively.

    To be perfectly honest, neither can half of humanity. That's how we get religion, pseudo-science, magical thinking, superstitions, homeopathy and a good share of relationship conflicts.

    Dancing makes the rain fall. Praying makes disease go away. Pricking pins causes someone pain. Water and sugar are medically effective if they once saw a piece of real medicine from a distance. You disagreed with me in that argument with that bitch, so you don't love me anymore.

    Really, reasoning is not exactly humanity's strong

    • by gweihir ( 88907 )

      Unfortunately, you are quite right. The thing is that only a minority of people (some 10-15%) are independent thinkers, i.e. people who are actually able to, and choose to, use the general intelligence they have. The rest merely mindlessly parrot what they were told is "truth". "Half of humanity" is entirely too positive.

      Incidentally, the scientific method comes into play much later. If you cannot tell cause and effect in a really simple case, then you cannot apply the scientific method, because you ha

  • "today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around."

    But can the rooster tell?

    And perhaps more importantly, can the Sun tell?

  • Stop calling automation with absolutely no intelligence "AI" and this misunderstanding goes away.

  • Apparently the author has never heard of Bayesian networks. Questions like, "Why did this happen?" or "What if I had acted differently?" are exactly what they're designed to answer. They've been around since the 1980s, so this isn't some brand new innovation. They're a classic method we've been using for years.
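A minimal hand-rolled sketch of the Bayesian-network point above, using the rooster/sunrise example from the summary (the probabilities are invented). With dawn as the common cause of both crowing and sunrise, the network distinguishes observing a crow, which is evidence of sunrise, from intervening to force a crow, which is not:

```python
# Minimal sketch (invented probabilities): dawn -> crow, dawn -> sun.
P_dawn = 0.5
P_crow = {True: 0.9, False: 0.1}   # P(crow | dawn)
P_sun  = {True: 1.0, False: 0.0}   # P(sun  | dawn)

def p_sun_given_crow():
    # Observation: condition on the crow by summing over the hidden cause (dawn).
    num = sum(pd * P_crow[d] * P_sun[d]
              for d, pd in [(True, P_dawn), (False, 1 - P_dawn)])
    den = sum(pd * P_crow[d]
              for d, pd in [(True, P_dawn), (False, 1 - P_dawn)])
    return num / den

def p_sun_do_crow():
    # Intervention do(crow=1): cut the dawn -> crow edge; dawn keeps its prior,
    # so forcing the rooster to crow says nothing about the sun.
    return P_dawn * P_sun[True] + (1 - P_dawn) * P_sun[False]

print(p_sun_given_crow())  # 0.9: seeing a crow is evidence of dawn, hence sunrise
print(p_sun_do_crow())     # 0.5: making it crow changes nothing
```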

  • Causation is supplied by experimentation and/or human reasoning, whereas supervised learning is currently about *prediction*, not *explanation*. But then someone has to sell the results to a human decision-maker.

    Commercial AI right now is almost exclusively trained by data scientists whose job includes actually thinking about the data set they're working with. Businesses rarely plug an AI result into the market without understanding at least a little bit about why it does or doesn't work--although a model t

  • It's an umbrella term and as such it doesn't mean much by itself: it groups together a broad collection of different approaches to the problem of finding answers to a question without providing a programmatic solution. Being now a buzzword, it's used as a marketing label to make a product or a company look cool. In most current products, it refers to some glorified interpolation algorithm which requires quite a lot of natural intelligence to be set up and will only provide answers in a narrow domain. While
  • What nonsense is this article (or the 'researchers' who wrote it). A human would do exactly the same if it didn't have any more information. If a human wasn't taught about the sun, it might also have difficulty and think the rooster is responsible for the sunrise (and as a matter of fact, the sun doesn't actually even rise). And in regard to machine learning, it's still in its early years. A human isn't anything special; we're just biochemical computers, nothing more, nothing less...
  • The article says this:

    "Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. "

    Worse, it actually seems to mean it.

    Ever heard of the notion of "cargo cult"? How about the Voodoo approach to medicine? What about all the pagan "light bonfires to entice the sun to return" rites? How about all those lets-predict-the-next-doomsday-from the

  • by ledow ( 319597 )

    Correct. It can't.

    "AI" that we have is no-fucking-where near actual intelligence at all. They are large statistical systems, often blurring expert systems, human-fed tuned heuristics, statistical analysis and genetic algorithms in one huge mass of junk.

    Notice how AI peaks early, and then plateaus forever. It's easy to train it to tell that an image has a banana in it, but then further training - even to billions of images - doesn't improve it much. And retraining or further refining its training (e.g. fi

  • It is but to do or die!
  • I didn't RTFA but I hope it is as ridiculous as the summary suggests.

    With reinforcement learning (which is a basic pillar in much of current AI), Charlie would notice that the price increase and sales decrease had some correlation in the wrong direction and would try to adjust for that. It wouldn't need any competitors' prices, nor any reasoning about why. And the only reason to call it AI would be for the manufacturer of Charlie to make it more expensive.

    • I didn't RTFA but I hope it is as ridiculous as the summary suggests.

      It's not easy to read since it's paywalled. However, one of the authors is Judea Pearl, so I assume it's fairly well informed on the issue. Here's a recent article which I assume is related: https://www.theatlantic.com/te... [theatlantic.com]

      With reinforcement learning (which is a basic pillar in much of current AI), Charlie would notice that the price increase and sales decrease had some correlation in the wrong direction and would try to adjust for that.

      So
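A minimal sketch of the reinforcement-learning point in the parent comment (the demand curve is invented): an epsilon-greedy bandit that learns a price from its own interventions, with no competitor data and no causal story.

```python
# Minimal sketch (invented demand curve): epsilon-greedy price learning.
import random

random.seed(0)
prices = [1.5, 2.0, 2.5, 3.0]
value = {p: 0.0 for p in prices}   # running average revenue per price
count = {p: 0 for p in prices}

def revenue(price):
    demand = max(0.0, 120 - 40 * price) + random.gauss(0, 2)
    return price * demand

for t in range(2000):
    # Explore 10% of the time; otherwise exploit the best price seen so far.
    p = random.choice(prices) if random.random() < 0.1 else max(prices, key=value.get)
    r = revenue(p)
    count[p] += 1
    value[p] += (r - value[p]) / count[p]

print(max(prices, key=value.get))  # settles near the revenue-maximizing price
```

Because the bandit sets prices itself, each pull is an intervention, which is exactly why it sidesteps the confounding in the article's example.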

  • Charlie made the mistake. He didn't understand either. Our advantage is that we have humans point out our mistakes and we correct them permanently. We do this one bad understanding at a time throughout life. But who wants to correct the bad understandings of AI? Oops you killed that person, here's why, don't do that again. But we WILL do that. We will allow self-driving cars to kill people and be corrected for a long, long time. We may even let AI nuke the Earth to learn how not to.
  • That's kind of a given in data science. And causation has been an active challenge [wikipedia.org] in all human reasoning.
    So the writer's bias is against AI; it's just scaremongering.
