AI Technology

'Generative AI Is Still Just a Prediction Machine' (hbr.org)

AI tools remain prediction engines despite new capabilities, requiring both quality data and human judgment for successful deployment, according to a new analysis. While generative AI can now handle complex tasks like writing and coding, its fundamental nature as a prediction machine means organizations must understand its limitations and provide appropriate oversight, argue Ajay Agrawal (Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto's Rotman School of Management), Joshua Gans (Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School, and the chief economist at the Creative Destruction Lab), and Avi Goldfarb (Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School) in a piece published in Harvard Business Review. Poor data can lead to errors, while a lack of human judgment in deployment can result in strategic failures, particularly in high-stakes situations. An excerpt from the story: Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively, whether for work or entertainment. While video game players and photographers may not think about their computer as an arithmetic machine, successfully using a (pre-AI) computer requires an understanding that it strictly follows instructions. Imprecise instructions lead to incorrect results. Playing and winning at early computer games required an understanding of the underlying logic of the game.

[...] AI's evolution has mirrored this trajectory, with many early applications directly related to well-established prediction tasks and, more recently, AI reframing a wide number of applications as predictions. Thus, the higher value AI applications have moved from predicting loan defaults and machine breakdowns to a reframing of writing, drawing, and other tasks as prediction.
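The "prediction machine" framing is very literal: at each step, a generative model assigns probabilities to candidate next tokens and samples one. A minimal sketch of that loop, with a toy hand-written distribution standing in for the neural network (the vocabulary and probabilities here are invented for illustration):

```python
import random

# Toy "language model": a lookup table from the current token to a
# probability distribution over next tokens. A real LLM computes this
# distribution with a neural network, but the sampling loop is the same.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, max_tokens=10):
    tokens = [start]
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        choices, weights = zip(*probs.items())
        nxt = random.choices(choices, weights=weights)[0]  # sample the next token
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat"
```

Everything "generative" lives in where that distribution comes from; the generation step itself is prediction plus sampling.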


Comments Filter:
  • If you look into the science behind these models, they are just probability functions. Our brains do not work in this manner. How come a Big Mac can power our brains, but this approach, built on classical computing paradigms, needs giant amounts of power? Something is wrong with this.
    • by bill_mcgonigle ( 4333 ) * on Wednesday November 20, 2024 @11:01AM (#64959875) Homepage Journal

      How come a Big Mac can power our brains, but this approach, built on classical computing paradigms, needs giant amounts of power?

      There probably are some similarities in pattern-matching mechanisms for transformers to work so well, on a basic linear-algebra matrix-math level, but we seem to have wet, room-temperature quantum communication happening on the microtubules inside our neurons (the crystal resonance data is just coming out now; tl;dr is "terahertz"), and even modern qubit processors can't compete.

      We also have thousands to millions of synaptic connections per neuron. The cartoon neuron model is wrong - they are fuzzy like cotton balls, not smooth like a scorpion.

      You're asking, effectively, why ENIAC is huge and inefficient when an RPi0 is $5 and 2W.

      Very few vacuum tubes and relays! ;)

      • Very few vacuum tubes..

        We call them "glow fets" %^)

      • How come a Big Mac can power our brains, but this approach, built on classical computing paradigms, needs giant amounts of power?

        Humans run on about 2,100 kilocalories per day. That's 2.4 kWh, which averaged over 24 hours is conveniently close to 100 watts.

        It's apparently a little hard to estimate what LLMs run at, but this article [theverge.com] suggests "Most tasks they tested use a small amount of energy, like 0.002 kWh to classify written samples and 0.047 kWh to generate text".

        So, dividing: for the energy it takes to run a human for a day, ChatGPT or equivalent could generate about 51 texts. That's probably more text than even the usual Slashdot commenter writes per day, so no, the power requirements aren't as lopsided as they first appear.
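        A quick sanity check of that arithmetic (the 2,100 kcal/day and 0.047 kWh/text figures come from the comment and the cited Verge article; everything else is unit conversion):

        ```python
        # Human energy budget: food "calories" are kilocalories.
        KCAL_PER_DAY = 2100
        KJ_PER_KCAL = 4.184

        kwh_per_day = KCAL_PER_DAY * KJ_PER_KCAL / 3600  # 1 kWh = 3600 kJ
        avg_watts = kwh_per_day * 1000 / 24              # spread over 24 hours
        print(f"{kwh_per_day:.2f} kWh/day, about {avg_watts:.0f} W")  # 2.44 kWh/day, about 102 W

        # Energy per generated text, per the cited estimate.
        KWH_PER_TEXT = 0.047
        print(f"{kwh_per_day / KWH_PER_TEXT:.0f} texts per human-day of energy")  # about 52
        ```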

      • by mspohr ( 589790 )

        Probabilistic AI

        https://youtu.be/hJUHrrihzOQ?f... [youtu.be]

    • probably a probability ... maybe? ;)

    • by shanen ( 462549 )

      Nice FP question and the best answer I've read so far is A Thousand Brains by Jeff Hawkins.

      My short answer is that the human brain is a PoC for solutions running at around 35 W. And they can be mass-produced with unskilled labor, too.

  • by Anonymous Coward
    Nobody, AFAIK, has claimed otherwise.
    • Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"

      I'm paraphrasing, but enough of their audience must not turn it off the way I do.

      • by evanh ( 627108 )

        Even Slashdot is full of articles about how intelligent AI is.

        • by GoTeam ( 5042081 )

          Even Slashdot is full of articles about how intelligent AI is.

          I'm convinced it's part of a secret recruitment program to find and leverage either the least gullible or the most gullible people on the planet.

      • Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"

        I'm paraphrasing, but enough of their audience must not turn it off the way I do.

        There will always be an audience that eats that up. I suspect they also consult horoscopes.

    • Nobody, AFAIK, has claimed otherwise.

      To the contrary. Except for the fraction of computer-literate people who actually understand what the tech does, everybody thinks otherwise. They refer to large language models as "artificial intelligence", oblivious to the fact that while it is artificial, it is not intelligent in any real sense of the word.

    • Nobody, AFAIK, has claimed otherwise.

      Except for the AI prophets, and the salespeople who are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity. I get at least three conversations a day from bumbling morons telling me that AI is an absolute must-have at all levels of every company or that company will get left behind. Half of me thinks the only thing they'll get left behind on is the back end of the bubble.

  • To summarize the HBR article:

    After investing trillions of dollars on hardware and snarfing up the entire Internet for training:

    (1) Garbage in, garbage out
    (2) It's basically autocorrect turned up to 11.
    (3) Your mileage may vary.
    • by GoTeam ( 5042081 )

      To summarize the HBR article: After investing trillions of dollars on hardware and snarfing up the entire Internet for training: (1) Garbage in, garbage out (2) It's basically autocorrect turned up to 11. (3) Your mileage may vary.

      Don't forget about all the wasted electricity!

  • by backslashdot ( 95548 ) on Wednesday November 20, 2024 @10:49AM (#64959857)

    Seeds are just a way for plants to reproduce. No fucking shit, Sherlock.

  • by DarkOx ( 621550 ) on Wednesday November 20, 2024 @11:16AM (#64959899) Journal

    This is the thing about this: we know how the models are built and trained. We have the source code to the statistics engine that implements attention and runs the model. We have the source for the interface and the mechanisms doing the feed-forward/back to the models. All of that is understood.

    When people say they don't understand how the LLMs generate this or that, what they mean is that the model complexity is too large and the training-token relationships too opaque, not that it is some kind of supernatural mystery.

    Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow, by making the model big enough and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just "when it's big enough" is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how "big" and why.
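    For what it's worth, the "statistics engine that implements attention" really is just linear algebra at its core. A minimal NumPy sketch of standard scaled dot-product attention, softmax(QK^T / sqrt(d))V, with toy shapes invented for illustration (not any particular model's implementation):

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)       # how strongly each query matches each key
        weights = softmax(scores, axis=-1)  # each row is a probability distribution
        return weights @ V                  # weighted average of the value vectors

    # Toy self-attention over 3 tokens with 4-dimensional embeddings.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    print(attention(x, x, x).shape)  # (3, 4)
    ```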

    • Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow, by making the model big enough and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just "when it's big enough" is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how "big" and why.

      The Sam Altmans of the world, the AI prophets as I call them, have bought into their own hype. They are also followers of the "greed makes good" philosophy, which purports that as long as you absorb enough of something, it will lead to better things. In this case, they've decided to absorb data and power, and in the process, money, at a rate never before imagined. And they all seem to be of the opinion that more is more and more is always better.

      I keep wondering, and have mentioned in the past, that I think

  • So are humans... probably. "Predictive processing" is one of the leading theories of brain function:

    https://en.wikipedia.org/wiki/Predictive_coding [wikipedia.org]
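    On that theory, the core loop is simple: hold a prediction, compare it with the input, and update on the error. A minimal sketch of that idea (the signal and learning rate are invented for illustration; real predictive-coding models are hierarchical):

    ```python
    # Minimal predictive-processing loop: a running prediction is
    # repeatedly corrected by the prediction error it generates.
    signal = [1.0, 1.2, 0.9, 1.1, 3.0, 3.1, 2.9]  # invented observations
    prediction, learning_rate = 0.0, 0.5

    for observed in signal:
        error = observed - prediction        # prediction error ("surprise")
        prediction += learning_rate * error  # update prediction toward the input
        print(f"obs={observed:.1f}  pred={prediction:.2f}  err={error:+.2f}")
    ```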
  • Crap Generator
    And yes, I'm a fan of AI research and predict that future AI will be useful, but today's AI simply generates crap.
    It's kinda like a bullshitter who reads an article on a subject and claims to be an expert.
