'Generative AI Is Still Just a Prediction Machine' (hbr.org) 28
AI tools remain prediction engines despite new capabilities, requiring both quality data and human judgment for successful deployment, according to a new analysis. While generative AI can now handle complex tasks like writing and coding, its fundamental nature as a prediction machine means organizations must understand its limitations and provide appropriate oversight, argue Ajay Agrawal (Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto's Rotman School of Management), Joshua Gans (Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School, and chief economist at the Creative Destruction Lab), and Avi Goldfarb (Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School) in a piece published in Harvard Business Review. Poor data can lead to errors, while a lack of human judgment in deployment can result in strategic failures, particularly in high-stakes situations. An excerpt from the story: Thinking of computers as arithmetic machines is more important than most people intuitively grasp, because that understanding is fundamental to using computers effectively, whether for work or entertainment. While video game players and photographers may not think about their computer as an arithmetic machine, successfully using a (pre-AI) computer requires an understanding that it strictly follows instructions. Imprecise instructions lead to incorrect results. Playing and winning at early computer games required an understanding of the underlying logic of the game.
[...] AI's evolution has mirrored this trajectory, with many early applications directly related to well-established prediction tasks and, more recently, AI reframing a wide range of applications as predictions. Thus, the higher-value AI applications have moved from predicting loan defaults and machine breakdowns to a reframing of writing, drawing, and other tasks as prediction.
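To make "writing as prediction" concrete, here is a toy sketch (my own illustration, not the authors' code; the vocabulary and probabilities are invented) of the loop a language model runs: score candidate next tokens, sample one, repeat.
```
import random

# Toy illustration: generation is repeated next-token prediction.
# The "model" here is a hard-coded table of invented probabilities;
# a real LLM computes these scores with billions of parameters.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "market": {"crashed": 0.8, "ran": 0.2},
}

def generate(token, steps):
    """Sample a continuation one predicted token at a time."""
    out = [token]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if probs is None:  # no prediction available: stop
            break
        tokens, weights = zip(*probs.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return out

print(" ".join(generate("the", 2)))  # e.g. "the cat sat"
```
Everything a chatbot writes comes out of this same loop, just with a vastly better predictor behind it.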
Its a probably function (Score:2)
Re:Its a probably function (Score:4, Insightful)
```
How come a Big Mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?
```
There are probably some similarities in the pattern-matching mechanisms, at the basic linear-algebra matrix-math level, for transformers to work so well, but we seem to have wet, room-temperature quantum communication happening on the microtubules inside our neurons (the crystal resonance data is just coming out now; tl;dr is "terahertz"), and even modern qubit processors can't compete.
We also have thousands to millions of synaptic connections on each neuron. The cartoon neuron model is wrong - they are fuzzy like cotton balls, not smooth like a scorpion.
You're asking, effectively, why ENIAC is huge and inefficient when an RPi0 is $5 and 2W.
Very few vacuum tubes and relays! ;)
Re: (Score:2)
Very few vacuum tubes...
We call them "glow fets" %^)
How much power? [Re:Its a probably function] (Score:3)
```
How come a Big Mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?
```
Humans run on about 2,100 kilocalories per day. That's 2.4 kWh, conveniently close to a continuous 100 watts.
It's apparently a little hard to estimate what LLMs run at, but this article [theverge.com] suggests "Most tasks they tested use a small amount of energy, like 0.002 kWh to classify written samples and 0.047 kWh to generate text".
So, dividing: for the power it takes to run a human for a day, ChatGPT or equivalent could generate 51 texts. That's probably more text than even the usual slashdot commenter writes per day, so no, the power requirement per text isn't as dramatic as it sounds.
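If anyone wants to check the arithmetic, the whole back-of-envelope calculation fits in a few lines (same figures as above; the 0.047 kWh number is the one quoted from The Verge):
```
# Back-of-envelope check of the comparison above.
human_kwh_per_day = 2.4          # ~2,100 kcal/day converted to kWh
kwh_per_generated_text = 0.047   # figure quoted from The Verge

texts_per_human_day = human_kwh_per_day / kwh_per_generated_text
avg_human_watts = human_kwh_per_day * 1000 / 24

print(f"{avg_human_watts:.0f} W average")      # ~100 W
print(f"{texts_per_human_day:.0f} texts/day")  # ~51 texts
```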
Re: (Score:2)
Probabilistic AI
https://youtu.be/hJUHrrihzOQ?f... [youtu.be]
It's more likely (Score:2)
probably a probability ... maybe? ;)
Re: (Score:2)
Closest to the joke I was looking for? The story has lots of potential for funny...
Re: (Score:2)
Nice FP question, and the best answer I've read so far is A Thousand Brains by Jeff Hawkins.
My short answer is that the human brain is a PoC for solutions running at around 35W. And they can be mass-produced with unskilled labor, too.
Re: (Score:2)
Maybe they should use AI to figure out how to improve it.
After 10 years of doing that, we'll be watering our crops with Gatorade...
Re: (Score:2)
Maybe they should use AI to figure out how to improve it.
After 10 years of doing that, we'll be watering our crops with Gatorade...
I don't think AI would be that stupid. We might, though.
[Psst: don't let AI watch the movie.]
Arguing against a straw-man (Score:1)
Re: (Score:2)
Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"
I'm paraphrasing, but enough of their audience must not turn it off the way I do.
Re: (Score:2)
Even Slashdot is full of articles about how intelligent AI is.
Re: (Score:3)
Even Slashdot is full of articles about how intelligent AI is.
I'm convinced it's part of a secret recruitment program to find and leverage either the least gullible or the most gullible people on the planet.
Re: (Score:2)
Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"
I'm paraphrasing, but enough of their audience must not turn it off the way I do.
There will always be an audience that eats that up. I suspect they also consult horoscopes.
Re: (Score:3)
I don't know about you, but a few years ago, if you had told me that a prediction machine can be given a book it has never seen and answer questions about that book, I would have told you it's not a prediction machine.
And of course you would have been wrong.
It is, of course, not predicting anything about the book. It is predicting what a person answering questions about the book would answer.
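Mechanically, "predicting what a person answering questions would answer" is still just next-token prediction over a context that happens to include the book. A sketch, where predict_next_token is a hypothetical stand-in for a model's forward pass (no real API is being named):
```
# Sketch only: predict_next_token is a hypothetical stand-in for a
# real model call, passed in by the caller.
def answer_about_book(book_text, question, predict_next_token,
                      max_tokens=200):
    # The book the model "has never seen" simply becomes part of the
    # context it conditions its next-token predictions on.
    prompt = book_text + "\n\nQ: " + question + "\nA:"
    answer = []
    for _ in range(max_tokens):
        token = predict_next_token(prompt + "".join(answer))
        if token == "<eos>":  # model predicts the answer is finished
            break
        answer.append(token)
    return "".join(answer)
```
Nothing in the loop knows or cares that the context is a novel book; it only ever predicts the next token.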
Arguing against a ubiquitous misconception (Score:3)
Nobody, AFAIK, has claimed otherwise.
To the contrary. Except for a fraction of computer-literate people who actually understand what the tech does, everybody thinks otherwise. They refer to large language models as "artificial intelligence", oblivious to the fact that while it is artificial, it is not intelligent in any real sense of the word.
Re: (Score:2)
Nobody, AFAIK, has claimed otherwise.
Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity. I get at least three conversations a day from bumbling morons telling me that AI is an absolute must-have at all levels of every company or that company will get left behind. Half of me thinks the only thing they'll get left behind on is the back-end of the bubble.
Just so I understand, it's still just autocorrect? (Score:1)
After investing trillions of dollars in hardware and snarfing up the entire Internet for training:
(1) Garbage in, garbage out
(2) It's basically autocorrect turned up to 11.
(3) Your mileage may vary.
Re: (Score:2)
To summarize the HBR article: After investing trillions of dollars in hardware and snarfing up the entire Internet for training: (1) Garbage in, garbage out (2) It's basically autocorrect turned up to 11. (3) Your mileage may vary.
Don't forget about all the wasted electricity!
Seeds (Score:3)
Seeds are just a way for plants to reproduce. No fucking shit, Sherlock.
Why would it be anything else? (Score:5, Insightful)
This is the thing about this: we know how the models are built and trained. We have the source code to the statistics engine that implements attention and runs the model. We have the source for the interface and the mechanisms doing the feed-forward/feedback to the models. All of that is understood.
When people say they don't understand how the LLMs generate this or that, what they mean is that the model complexity is too large and the training-token relationships too opaque, not that it is some supernatural mystery.
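For anyone who hasn't actually looked at that source: the core of the "statistics engine" is a few lines of linear algebra. A minimal NumPy sketch of scaled dot-product attention (simplified: single head, no masking, no learned projections):
```
import numpy as np

def attention(Q, K, V):
    # Score how well each query matches each key, scaled so the
    # softmax stays well-behaved as dimensionality grows.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over keys (numerically stabilized).
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value rows.
    return w @ V

x = np.random.randn(3, 4)        # 3 tokens, 4-dim embeddings
print(attention(x, x, x).shape)  # (3, 4)
```
The mystery isn't in this math; it's in what billions of trained weights end up encoding.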
Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow making the model big enough, and tying enough stuff into resource-augmented feed-forward, will cause an intelligence to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just 'when it's big enough' is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how 'big' and why.
Re: (Score:2)
Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow making the model big enough, and tying enough stuff into resource-augmented feed-forward, will cause an intelligence to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just 'when it's big enough' is magical thinking. It remains magical thinking unless perhaps you can offer some specific ideas about just how 'big' and why.
The Sam Altmans of the world, the AI prophets as I call them, have bought into their own hype. They are also followers of the "greed makes good" philosophy, which purports that as long as you absorb enough of something, it will lead to better things. In this case, they've decided to absorb data and power, and in the process, money, at a rate never before imagined. And they all seem to be of the opinion that more is more and more is always better.
I keep wondering, and have mentioned in the past, that I think
So are humans (Score:2)
https://en.wikipedia.org/wiki/Predictive_coding [wikipedia.org]
I have a different description (Score:2)
Crap Generator
And yes, I'm a fan of AI research and predict that future AI will be useful, but today's AI simply generates crap.
It's kinda like a bullshitter who reads an article on a subject and claims to be an expert.