AI Technology

'Generative AI Is Still Just a Prediction Machine' (hbr.org) 86

AI tools remain prediction engines despite new capabilities, requiring both quality data and human judgment for successful deployment, according to a new analysis. While generative AI can now handle complex tasks like writing and coding, its fundamental nature as a prediction machine means organizations must understand its limitations and provide appropriate oversight, argue Ajay Agrawal (Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto's Rotman School of Management), Joshua Gans (Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School, and the chief economist at the Creative Destruction Lab), and Avi Goldfarb (Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School) in a piece published in Harvard Business Review. Poor data can lead to errors, while a lack of human judgment in deployment can result in strategic failures, particularly in high-stakes situations. An excerpt from the story: Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively, whether for work or entertainment. While video game players and photographers may not think about their computer as an arithmetic machine, successfully using a (pre-AI) computer requires an understanding that it strictly follows instructions. Imprecise instructions lead to incorrect results. Playing and winning at early computer games required an understanding of the underlying logic of the game.

[...] AI's evolution has mirrored this trajectory, with many early applications directly related to well-established prediction tasks and, more recently, AI reframing a wide number of applications as predictions. Thus, the higher value AI applications have moved from predicting loan defaults and machine breakdowns to a reframing of writing, drawing, and other tasks as prediction.

'Generative AI Is Still Just a Prediction Machine'

Comments Filter:
  • If you look into the science behind these models, they are just probability functions. Our brains do not work in this manner. How come a big mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms? Something is wrong with this.
    • by bill_mcgonigle ( 4333 ) * on Wednesday November 20, 2024 @11:01AM (#64959875) Homepage Journal

      How come a big mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?

      There probably are some similarities in pattern matching mechanisms for transformers to work so well, on a basic linear algebra matrix math level, but we seem to have wet room-temperature quantum communication happening on the microtubules inside our neurons (the crystal resonance data is just coming out now; tldr is "terahertz") and even modern qubit processors can't compete.

      We also have thousands to millions of synapse connections between each neuron. The cartoon neuron model is wrong - they are fuzzy like cotton balls, not smooth like a scorpion.

      You're asking, effectively, why ENIAC is huge and inefficient when an RPi0 is $5 and 2W.

      Very few vacuum tubes and relays! ;)

      • Very few vacuum tubes..

        We call them "glow fets" %^)

      • How come a big mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?

        Humans run on about 2100 calories per day. That's about 2.4 kWh, which averages out conveniently close to 100 watts.

        It's apparently a little hard to estimate what LLMs run at, but this article [theverge.com] suggests "Most tasks they tested use a small amount of energy, like 0.002 kWh to classify written samples and 0.047 kWh to generate text".

        So, dividing, for the power it takes to run a human, ChatGPT or equivalent could generate 51 texts. That's probably more text than even the usual slashdot commenter writes per day, so no, the power required for LLMs is similar to the power to run a human.
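
        A quick sanity check on that arithmetic, as a minimal Python sketch (it just redoes the unit conversion above; the 2100 kcal/day and 0.047 kWh-per-generation figures are the ones quoted, so treat the output as a rough estimate). It lands on ~52 generations, close to the 51 above, which used the rounded 2.4 kWh:

        ```
        # Rough energy comparison using the figures quoted above.
        KCAL_PER_DAY = 2100                 # typical human food intake
        KJ_PER_KCAL = 4.184                 # 1 kcal = 4.184 kJ
        LLM_KWH_PER_GENERATION = 0.047      # per the article cited above

        human_kwh_per_day = KCAL_PER_DAY * KJ_PER_KCAL / 3600               # 1 kWh = 3600 kJ -> ~2.44 kWh
        human_watts = human_kwh_per_day * 1000 / 24                         # roughly 100 W average draw
        texts_per_human_day = human_kwh_per_day / LLM_KWH_PER_GENERATION    # ~52

        print(f"{human_kwh_per_day:.2f} kWh/day, about {human_watts:.0f} W")
        print(f"about {texts_per_human_day:.0f} text generations per human-day of energy")
        ```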

        • Re: (Score:3, Informative)

          by Dan East ( 318230 )

          the power required for LLMs is similar to the power to run a human.

          That's a ridiculous statement.

          The human brain only uses about 350-500 calories a day. The amount of computing done by the brain daily is insane. Look at the visual and auditory processing alone. Comparing the processing done by a human brain in one day to the AI generation of 51 texts is an absurd comparison. Multiple orders of magnitude off. Try more along the lines of having AI process 500 megapixel images, at around 30 per second, for 12-14 hours straight. That's around 26 terabytes of raw image data pro

          • Correct, the human brain is processing all our senses, keeping our nervous system working without us thinking about it, learning in all senses, providing memory in all senses, and providing mechanical control over muscles. The human brain is very efficient in calorie usage. This AI crap seems to have gone horribly wrong with the classical computing paradigm.
          • The human brain requires a body to operate, so it would be unfair to eliminate that from the equation. I don't know what its efficiency is compared to an LLM; the calculation seems quite complicated (how many operations, and of what complexity, do we do per interval, for example?). But LLMs are a relatively new invention, whereas our brains/bodies have been evolving for a long time and are probably optimized for efficiency.

            • AI also has a body; it's the "digesting all human culture" part. I hear its energy use compares to a nation state or two.
          • Also the vast majority of brain power is used for autonomic functions and motor skills.

        • You are missing the point that it is just generating. The human power consumption includes learning. Add this to the mix and we'll see.

          > 0.047 kWh to generate text
          What text? It does not say. "I am" will be vastly different than the whole "Macbeth" play.

      • by mspohr ( 589790 )

        Probabilistic AI

        https://youtu.be/hJUHrrihzOQ?f... [youtu.be]

      • We compute differently too. A computer, even an AI, has dedicated sets of simplistic transistors to do simple arithmetic. And it does this extremely quickly, in nanoseconds, faster than a neuron fires. Human wet brains don't "compute" this way; they process through algorithms. For some stuff, like multiplication tables, it's just data lookup, not computation. For harder stuff humans use pen and paper; they do long division the long way. Maybe some do it in their heads but it's still very slow a

    • probably a probability ... maybe? ;)

    • by shanen ( 462549 )

      Nice FP question and the best answer I've read so far is A Thousand Brains by Jeff Hawkins.

      My short answer is that the human brain is a PoC for solutions around 35 W. And they can be mass produced with unskilled labor, too.

    • If you look into the science behind these models, they are just probability functions. Our brains do not work in this manner.

      I think we overestimate our brains. If you think about it carefully, any answer humans generate is also correct only with some probability. And, yes, it is a prediction in some way. You predict that you will be able to cross the road before you start crossing. You hope that you remember the math correctly when you solve something. When you help somebody find something, you hope that you remember where it is and predict that you will be able to help.

      In some sense what LLMs do to generate their answers is more s

      • It is provable that current LLMs are not as powerful as human brains. It shows up rather simply in tasks such as counting letters, or matching parentheses. This is basic Computational Theory.
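
        For a concrete sense of what's meant, here's a minimal sketch of the parenthesis-matching task in Python (just a running depth counter, since there's only one bracket type). It's trivial as an ordinary program precisely because the counter can stay exact over arbitrarily long input, which is the kind of thing a fixed, single-pass next-token predictor tends to get wrong:

        ```
        # Minimal parenthesis matcher: the classic "easy for a program" task referenced above.
        def balanced(s: str) -> bool:
            depth = 0
            for ch in s:
                if ch == '(':
                    depth += 1
                elif ch == ')':
                    depth -= 1
                    if depth < 0:        # a ')' with nothing open
                        return False
            return depth == 0            # everything opened was closed

        print(balanced("(()(()))"))   # True
        print(balanced("(()"))        # False
        ```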
    • The snag is that once you train AI to get all its power from eating a Big Mac, then it will soon learn that humans are full of calories also.

  • by Anonymous Coward
    Nobody, AFAIK, has claimed otherwise.
    • Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"

      I'm paraphrasing but enough of their audience must not turn it off like me.

      • by evanh ( 627108 )

        Even Slashdot is full of articles about how intelligent AI is.

        • by GoTeam ( 5042081 )

          Even Slashdot is full of articles about how intelligent AI is.

          I'm convinced it's part of a secret recruitment program to find and leverage either the least gullible or the most gullible people on the planet.

      • Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"

        I'm paraphrasing but enough of their audience must not turn it off like me.

        There will always be an audience that eats that up. I suspect they also consult horoscopes.

    • Nobody, AFAIK, has claimed otherwise.

      To the contrary. Except for a fraction of computer-literate people that actually understand what the tech does, everybody thinks otherwise. They refer to large language models as "artificial intelligence", oblivious to the fact that while it is artificial, it is not intelligent in any real sense of the word.

      • I see no problem with calling it artificial intelligence. The adjective is enough of a qualifier to distinguish it from other kinds of intelligence, or even exclude it from that category, if you want to see it that way.

        Artificial intelligence shows great promise at mimicking what we all understand to be real intelligence. To me, that's enough to place it in the category of intelligence.

      • noun: intelligence

        1. the ability to acquire and apply knowledge and skills

        LLMs are intelligent.
        You're trying to narrow the definition of intelligence to preclude them. Where that gets really funny is that I bet you can't even pin down a definition of intelligence that actually precludes them. You just circularly define it as "intelligent stuff that isn't an LLM"
        • by XXongo ( 3986865 )

          noun: intelligence
          1. the ability to acquire and apply knowledge and skills
          LLMs are intelligent.

          LLMs do not have knowledge in any useful sense of the word "knowledge." Did you actually read the article we're discussing [hbr.org]? Play with a LLM for a while. The fact that they don't actually know what they are languaging about can be very humorous. They are moving around tokens that have no meaning, other than how they fit together to make patterns.

          You're trying to narrow the definition of intelligence to preclude them.

          The opposite-- you're trying to narrow the definition of intelligence to include them. They are not intelligent.

          • LLMs do not have knowledge in any useful sense of the word "knowledge."

            noun: knowledge; plural noun: knowledges

            1. facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject.
            noun: understanding
            Wrong again.

            Did you actually read the article we're discussing [hbr.org]?

            Yes, it's pure fucking idiocy.
            The most basic laws of physics are nothing but probabilistic functions that define the evolution of reality, and you're trying to tell me that every thinking thing isn't a prediction machine?

            Play with a LLM for a while.

            I do daily.

            The fact that they don't actually know what they are languaging about can be very humorous.

            Sure, but I'm laughing at you in the same way... maybe y

        • by cowdung ( 702933 )

          Please tell me how LLMs are capable of acquiring knowledge after their initial training.

          Hint: they're not.

          Please tell me how LLMs are capable of acquiring skills after their initial training.

          Hint: they aren't.

          So no, they are not intelligent in the human sense. They don't even remember anything beyond what you send them in the context or what they had from their initial training. So if you ask them tomorrow what they think about yesterday's conversation? They have no clue.

          • Please tell me how LLMs are capable of acquiring knowledge after their initial training.

            1) not relevant.
            2) context window.
            Tell me you have no idea how an LLM works without telling me you have no idea how an LLM works.

            Please tell me how LLMs are capable of acquiring skills after their initial training.

            See above.

            So no, they are not intelligent in the human sense. They don't even remember anything beyond what you send them in the context or what they had from their initial training. So if you ask them tomorrow what they think about yesterday's conversation? They have no clue.

            Find me where it says, in the definition of "intelligence", that they must continue to acquire knowledge and skills.

            I'll wait.

            • by nazrhyn ( 906126 )
              I guess if "acquire knowledge" does not implicitly include any assumption that the acquired knowledge persists, you're right. My personal interpretation of that dictionary definition is (unsurprisingly) a more-human one. Acquired knowledge is expected to persist in some way. In humans, it does not persist perfectly, but it does persist well enough for humans to become experts at stuff.

              An LLM is pretty similar to a virtual machine that has been configured to reset to its initial snapshot after being turne
    • Nobody, AFAIK, has claimed otherwise.

      Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity. I get at least three conversations a day from bumbling morons telling me that AI is absolutely a must-have at all levels of every company or that company will get left behind. Half of me thinks the only thing they'll get left behind on is the back-end of the bubbl

      • I get at least three conversations a day from bumbling morons telling me that AI is absolutely a must-have at all levels of every company or that company will get left behind.

        Your company has lots of morons then. But they're not entirely wrong.

        If you run a business, you must think about how AI will affect it. You think about other factors, don't you?

        Imagine your company in a pre-internet age. What would have happened if you had ignored the rise of the internet? AI is a comparable game-changer.

          I get at least three conversations a day from bumbling morons telling me that AI is absolutely a must-have at all levels of every company or that company will get left behind.

          Your company has lots of morons then. But they're not entirely wrong.

          If you run a business, you must think about how AI will affect it. You think about other factors, don't you?

          Imagine your company in a pre-internet age. What would have happened if you had ignored the rise of the internet? AI is a comparable game-changer.

          That's what really kills me. We did ignore the rise of the internet. In the early days I had the company president tell me we could just print the internet out for people that wanted it. This was 1999-2000. It took nearly six years and someone *OUTSIDE* the company explaining it to him before he accepted it was an essential part of the business world.

          And while I do get that there may be uses for AI even as it exists today, training new folks on product information for example, the idea that it's going to sweep

      • Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity.

        Generally these people distinguish between what AI can currently do, and what it will be able to do in the future. The big predictions are usually phrased somehow to be in the future.

        • Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity.

          Generally these people distinguish between what AI can currently do, and what it will be able to do in the future. The big predictions are usually phrased somehow to be in the future.

          Yes, but phrased as if it's a *DEFINITE* future. Not phrased as if it *may* happen. This is the difference between practical minds and religious belief.

    • I beg to differ; just in the last few days:

      ChatGPT-4 Beat Doctors at Diagnosing Illness, Study Finds https://science.slashdot.org/s... [slashdot.org]
      AI As Good As Doctors At Checking X-Rays https://science.slashdot.org/s... [slashdot.org]

      And people wonder why people don't trust science.

  • by DaveTheDelirious ( 1195377 ) on Wednesday November 20, 2024 @10:43AM (#64959851)
    To summarize the HBR article:

    After investing trillions of dollars on hardware and snarfing up the entire Internet for training:

    (1) Garbage in, garbage out
    (2) It's basically autocorrect turned up to 11.
    (3) Your mileage may vary.
    • by GoTeam ( 5042081 )

      To summarize the HBR article: After investing trillions of dollars on hardware and snarfing up the entire Internet for training: (1) Garbage in, garbage out (2) It's basically autocorrect turned up to 11. (3) Your mileage may vary.

      Don't forget about all the wasted electricity!

    • While this is true, I think this way of looking at it trivializes the sophistication of that "auto-correct."

      It's kind of like saying, "So jet airline travel is still just a form of transportation?" Yes, that's true too, but it's not really just "walking on steroids."

      • While this is true, I think this way of looking at it trivializes the sophistication of that "auto-correct."

        It's kind of like saying, "So jet airline travel is still just a form of transportation?" Yes, that's true too, but it's not really just "walking on steroids."

        Good point. Use of the word "just" is misleading.
        I mean what is polite conversation at a dinner party, other than a group of humans all predicting what should be said next...?

  • by backslashdot ( 95548 ) on Wednesday November 20, 2024 @10:49AM (#64959857)

    Seeds are just a way for plants to reproduce. No fucking shit, Sherlock.

  • by DarkOx ( 621550 ) on Wednesday November 20, 2024 @11:16AM (#64959899) Journal

    This is the thing about this: we know how the models are built and trained. We have the source code to the statistics engine that implements attention and runs the model. We have the source to the interface and mechanisms doing the feed forward/back to the models. All that is understood.

    When people say they don't understand how the LLMs generate this or that, what they mean is that the model complexity is too large and the training-token relationships too opaque, not that it is some supernatural-type mystery.

    Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that if you somehow make the model big enough, and tie enough stuff into resource-augmented feed forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just 'when it's big enough' is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how 'big' and why.

    • by nightflameauto ( 6607976 ) on Wednesday November 20, 2024 @11:51AM (#64960019)

      Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that if you somehow make the model big enough, and tie enough stuff into resource-augmented feed forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just 'when it's big enough' is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how 'big' and why.

      The Sam Altmans of the world, the AI prophets as I call them, have bought into their own hype. They are also followers of the "greed makes good" philosophy, which purports that as long as you absorb enough of something, it will lead to better things. In this case, they've decided to absorb data and power, and in the process, money, at a rate never before imagined. And they all seem to be of the opinion that more is more and more is always better.

      I keep wondering, and have mentioned in the past, that I think in a lot of ways this AI boom is simply a full-born manifestation of the greed that has been a part of the tech-bro/Silly Valley culture all along, and that they finally found a way to manifest that greed on several planes of existence at once. If we want actual AI, we're going to need a fundamental shift in philosophy on how we go about it. No matter how many books you throw at a monkey, he'll never become a great writer. And no matter how much data you throw at a predictive engine, it will never become intelligent, even with the day-dream of unlimited power. We're not going to see a shift towards any other form of potential AI because the current generation of LLM data aggregation is sucking up all the resources, all the eyeballs, all the brains, and all the potential.

      • I consider them to be similar to scam victims. They want to believe it's true because they've already spent a ton of money on it. They just need to pay a few more "fees" and then they'll get the payout.

        • I consider them to be similar to scam victims. They want to believe it's true because they've already spent a ton of money on it. They just need to pay a few more "fees" and then they'll get the payout.

          Televangelist style. Just a few more dollars and you too could get into heaven!

    • [...] for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that if you somehow make the model big enough, and tie enough stuff into resource-augmented feed forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just 'when it's big enough' is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how 'big' and why.

      Well, as I see it, increasing the size and sophistication of LLMs or other AI models will result in incremental improvement in some proportion to the increase, but also could lead to a sudden dramatic change in the capability of the model, at least from our perspective of it.

      Your post reminded me of Catastrophe Theory [wikipedia.org] as a possible theoretical mechanism. However, I haven't looked at this topic for a while, so I'm not sure whether it's relevant.

    • Yeah. It is slightly better than a parrot - it is *contextually* probabilistic. There is quite a capture of patterns in the learning process, many such patterns that we humans missed, and quite an ability to convert (transform) input to some representation of meaning. The whole point is that it's not 'intelligent' in any meaning of the word. Nor will it ever be. But LLMs are helpful and may become part of future real AI.
      But agree - brute forcing it will not make it intelligent. It's missing quality components no

    • The author of the article is just discovering this reality.

    • Identify some theoretical mechanism for that; because just 'when its big enough' if magical thinking.

      Life, the universe, and everything.
      Literally.
      Life is an emergent quality of a ridiculously fucking complex system ruled by a simple set of probabilistic rules.
      Intelligence is just its latest emergent trick (on Earth, anyway)

      When people try to handwave away ANNs as "predictive machines" or "just a model", they ignore the 50 years of research done into human brains, of which the best developed models are Bayesian prediction models.

      Ultimately, what we really have here is the classical QM (heh) problem w

    • Good morning kind sir. I have been seeing you post on here for years. I am sincerely trying to be helpful here, but it may not feel like it.

      Your selection and ordering of words has me concerned for you. Your message and insight are perfectly good; however, there are signs that there may be something negative going on for you. Please go see a doctor or have a close friend/family member evaluate to see if you are functioning at 100%.

      Again, I know this sounds like it may be insulting, but it is not. It is genu

  • So are humans... probably. "Predictive processing" is one of the leading theories of brain function:

    https://en.wikipedia.org/wiki/Predictive_coding [wikipedia.org]
    • by radaos ( 540979 )
      Tried this on Copilot:

      Q.What is the lowest number whose value squared is greater than 10?

      A. The lowest number whose square is greater than 10 is 4.

      No, the answer is ~ -3.162278

      The LLM assumed 'number' meant 'integer'. It did not consider that negative numbers are less than positive ones.

      In other words, the LLM responded with the most likely predicted response from the average person.

      That may sometimes be acceptable for text. It is absolutely unacceptable for mathematics.

      • Excellent point. AI is still not great at the kind of abstract thinking that math demands.

        But consider your question. Without restricting the kind of "number" in question, there is no answer. Restrict it to positive integers, and the answer is 4. My guess is that the AI thought that was what you meant when you said "number."

        Now, if you allow "number" to mean any real number, then the answer is either negative infinity (if you allow negative numbers) or the next number that is just above sqrt(10) if you do

        • by radaos ( 540979 )
          Good point, my original query was: What is the lowest number whose value squared is between 10 and 20? I should have left it that way, which excludes infinity. Trying that now, it still gives the answer as 4. The LLM did not pick up the loophole in the question, which is what I would expect a genuinely useful AI to do.
          • by radaos ( 540979 )
            An answer of: approximately -4.472135 would satisfy this better specified query.
            • An answer of: approximately -4.472135 would satisfy this better specified query.

              Yes, if you accept approximate answers. Otherwise no, for similar reasons to those I described above: no matter what answer you give, I can always find one that's closer to -sqrt(20) and still above it. Gotta love real numbers. [wikipedia.org]
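
              A tiny numeric illustration of that "always one closer" point, as a Python sketch (assuming the 10-20 bounds are read as exclusive, so -sqrt(20) itself doesn't count):

              ```
              import math

              lower_bound = -math.sqrt(20)     # about -4.4721359...; its square is exactly 20, on the boundary

              candidate = -4.472135            # valid: 10 < candidate**2 < 20
              for _ in range(3):
                  better = (candidate + lower_bound) / 2     # halfway toward -sqrt(20): lower, and still valid
                  assert better < candidate and 10 < better**2 < 20
                  print(f"{candidate} is valid, but {better} is lower and also valid")
                  candidate = better
              ```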

        • But... computing is absolutely great at some abstract thinking that math demands. I.e., Mathematica, Macsyma. Sure, it's not "AI", but at the time Macsyma came out of an AI-oriented environment which focused on symbolic computing. It's not "generative AI" but it's on the spectrum of AI. They're doing algebra and calculus, manipulating strings of symbols, often using the same rules that school children were taught. The snag is that the inputs to these programs have to be in a mathematical format, not a con

          • I see what you're saying. But I think you'd agree that when you use Mathematica or Macsyma, the problem has been abstracted already by the human interacting with the program.

            I have used Mathematica for symbolic manipulation, and it is frickin' awesome at it. I also have no problem calling it AI, in a limited form.

      • To me that shows that it behaves like a normal human, but your answer is wrong too, since the answer is negative infinity; or, since infinity is not an actual number, there is no answer you can give, because I can always find a smaller number.

        This is the answer I got from Gemini (work blocks ChatGPT).

        Me: The lowest number whose square is greater than 10 is 4
        Gemini: The square root of 10 is 3.1622776601683795. [not really answering the question]

        Me: isn't -4 lower than 3.1622776601683795
        Gemini:

        Yes, you're absolutely right! While 4 is the smallest positive integer whose square is greater than 10, -4 is indeed smaller than 3.1622776601683795.

        However, when we're typically discussing the "lowest number" in this context, we're usually referring to the smallest positive number. So, 4 is the correct answer in that specific sense.

        It's a great observation and a good reminder to always consider negative numbers as well!

        That gives at least a very strong impression from the outside that it understood its mistake and why it made it, just like a human would. But it failed to get how I phrased my response in a subtle way so as to not actually give an

        • Exactly. People need to keep in mind when you ask it a question, you're getting a single pass from the ANN. It hasn't "thought about it".
          Give it some inner dialog (that's why you have that fabulous context window).

          Your average human is going to produce a shitty estimation on its first pass as well.

          I have found that you can help an ANN get much better at any task you give it by providing it some inner monologue.
          Perhaps we just need an LLM providing context for our LLMs... LLMception. Maybe that was the
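
          For what it's worth, the "inner monologue" trick usually just means two passes over the model: ask for the reasoning first, then ask again with that reasoning pasted back into the context. A minimal sketch, where ask() is a stand-in for whatever call returns the model's text for a prompt (the name and prompts are made up for illustration, not any particular vendor's API):

          ```
          def answer_with_monologue(ask, question: str) -> str:
              # Pass 1: have the model think out loud; its output becomes extra context.
              reasoning = ask("Think step by step about this, but don't give a final answer yet:\n" + question)
              # Pass 2: ask again with the monologue included, and request only the final answer.
              return ask("Question: " + question + "\n\nNotes so far:\n" + reasoning + "\n\nNow give only the final answer.")
          ```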
      • Just ask the AI what plants crave.

      • by allo ( 1728082 )

        As a human I would have assumed the same.
        If you want to do trick questions, say upfront that you're not asking for the most obvious interpretation. (And then don't expect an LLM to do floating-point arithmetic correctly.)

  • Crap Generator
    And yes, I'm a fan of AI research and predict that future AI will be useful, but today's AI simply generates crap
    It's kinda like a bullshitter who reads an article on a subject and claims to be an expert

    • It's kinda like a bullshitter who reads an article on a subject and claims to be an expert

      It's kinda like a person? Weird.
      Your words, not mine.

  • What matters is whether the machine can do a task better than the average human doing the task. By which means it does so is largely irrelevant. Whatever you call today's AIs, it's clear that they are closing in on fields long thought exclusive to humans. And quite fast. That's the important bit.

  • by ET3D ( 1169851 ) on Wednesday November 20, 2024 @01:00PM (#64960177)

    > Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively

    That's a ridiculous statement. Thinking of computers as arithmetic machines doesn't make any sense to most people, and doesn't help understand software in any way. The article goes on to stress that people need to understand that computers will follow instructions exactly, which is also not really something that relates in any way to using software. Using software is learning to do tasks in a specific way to achieve a specific result. It's the computer expecting the user to work in a specific way. The computer, on the other hand, isn't guaranteed to work in the specified way, because of bugs and oversights by the developers. An advanced user will try to overcome such problems by finding alternate ways to achieve the same results.

    Interestingly, neither this wrong assertion nor the title talking about generative AI being a prediction machine have anything to do with what the article is trying to say, which is that it's a useful tool but needs human judgement. I can imagine that the article writer didn't use that judgement when using AI to write the article.

  • Calling AI "prediction machines" is a vacuous statement, since all of intelligence can be looked at as prediction.

  • That is its fundamental nature.
