'Generative AI Is Still Just a Prediction Machine' (hbr.org)
AI tools remain prediction engines despite new capabilities, requiring both quality data and human judgment for successful deployment, according to a new analysis. While generative AI can now handle complex tasks like writing and coding, its fundamental nature as a prediction machine means organizations must understand its limitations and provide appropriate oversight, argue Ajay Agrawal (Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto's Rotman School of Management), Joshua Gans (Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School, and the chief economist at the Creative Destruction Lab), and Avi Goldfarb (Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School) in a piece published in Harvard Business Review. Poor data can lead to errors, while lack of human judgment in deployment can result in strategic failures, particularly in high-stakes situations. An excerpt from the story: Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively, whether for work or entertainment. While video game players and photographers may not think about their computer as an arithmetic machine, successfully using a (pre-AI) computer requires an understanding that it strictly follows instructions. Imprecise instructions lead to incorrect results. Playing and winning at early computer games required an understanding of the underlying logic of the game.
[...] AI's evolution has mirrored this trajectory, with many early applications directly related to well-established prediction tasks and, more recently, AI reframing a wide number of applications as predictions. Thus, the higher value AI applications have moved from predicting loan defaults and machine breakdowns to a reframing of writing, drawing, and other tasks as prediction.
Its a probably function (Score:1)
Re:Its a probably function (Score:4, Insightful)
How come a Big Mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?
There probably are some similarities in pattern-matching mechanisms for transformers to work so well, on a basic linear algebra matrix math level, but we seem to have wet room-temperature quantum communication happening on the microtubules inside our neurons (the crystal resonance data is just coming out now; tl;dr is "terahertz") and even modern qubit processors can't compete.
We also have thousands to millions of synapse connections on each neuron. The cartoon neuron model is wrong: they are fuzzy like cotton balls, not smooth like a scorpion.
You're asking, effectively, why ENIAC is huge and inefficient when an RPi0 is $5 and 2W.
Very few vacuum tubes and relays! ;)
Re: (Score:2)
Very few vacuum tubes...
We call them "glow fets" %^)
How much power? [Re:Its a probably function] (Score:3)
How come a Big Mac can power our brains, but we need giant power requirements to power this approach based on classical computing paradigms?
Humans run on about 2100 calories per day. That's 2.4 kWh per day, which works out to almost exactly 100 watts of continuous power.
It's apparently a little hard to estimate what LLMs run at, but this article [theverge.com] suggests "Most tasks they tested use a small amount of energy, like 0.002 kWh to classify written samples and 0.047 kWh to generate text".
So, dividing, for the power it takes to run a human, ChatGPT or equivalent could generate 51 texts. That's probably more text than even the usual slashdot commenter writes per day, so no, the power required for LLMs is similar to the power to run a human.
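For anyone who wants to check the arithmetic, here is a quick back-of-envelope sketch in Python using only the figures quoted above (the 0.047 kWh number is the Verge article's; treat everything as rough):
```
# Back-of-envelope check of the figures above (rough numbers, not
# measurements of any particular model).
calories_per_day = 2100                           # kcal, typical human intake
kwh_per_day = calories_per_day * 4184 / 3.6e6     # 1 kcal = 4184 J; 1 kWh = 3.6e6 J
print(f"{kwh_per_day:.2f} kWh/day")               # ~2.44 kWh
print(f"{kwh_per_day / 24 * 1000:.0f} W average") # ~102 W of continuous power

kwh_per_text = 0.047                              # quoted text-generation cost
print(f"{kwh_per_day / kwh_per_text:.0f} texts")  # ~52 texts per human-day of energy
```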
Re: How much power? [Re:Its a probably function] (Score:2)
Re:How much power? [Re:Its a probably function] (Score:4, Informative)
the power required for LLMs is similar to the power to run a human.
That's a ridiculous statement.
The human brain only uses about 350-500 calories a day. The amount of computing done by the brain daily is insane. Look at the visual and auditory processing alone. Comparing the processing done by a human brain in one day to the AI generation of 51 texts is an absurd comparison. Multiple orders of magnitude off. Try more along the lines of having AI process 500-megapixel images, at around 30 per second, for 12-14 hours straight. That's around 26 terabytes of raw image data processed daily (however, scientists estimate the amount of visual data processed by the brain daily is more like 100 gigabytes). And you're comparing that to 51 "texts" totaling about 5 kilobytes of data, lol.
What the human brain (and entire body for that matter) does on the amount of energy it requires is absolutely astounding. Which is why there are so many obese people in this era of food prosperity.
Re: (Score:2)
Re: (Score:3)
The human brain requires a body to operate, so it would be unfair to eliminate that from the equation. I don't know what its efficiency is compared to an LLM; the calculation seems quite complicated (how many operations, and of what complexity, do we do per interval, for example?). But LLMs are a relatively new invention, whereas our brains/bodies have been evolving for a long time and are probably optimized for efficiency.
Re: (Score:1)
Re: (Score:3)
Also the vast majority of brain power is used for autonomic functions and motor skills.
Re: (Score:2)
Which is why there are so many obese people in this era of food prosperity.
No. The reason there are so many obese people is that our food is engineered to make us fat. This is so those people can be pointed to while saying, "these people are obviously well paid, look at how FAT they are," all while severely underpaying them.
The obesity epidemic is literally a huge gaslighting project. (But there are truly obese people.)
Re: (Score:2)
You are missing the point that it is just generating. The human power consumption includes learning. Add this to the mix and we'll see.
> 0.047 kWh to generate text
What text? It does not say. "I am" will be vastly different from the whole "Macbeth" play.
Re: (Score:2)
Probabilistic AI
https://youtu.be/hJUHrrihzOQ?f... [youtu.be]
Re: (Score:3)
We compute differently too. A computer, even in AI, has dedicated sets of simplistic transistors to do simple arithmetic. And it does this extremely quickly, in nanoseconds, faster than a neuron fires. Human wet brains don't "compute" this way; they process through algorithms. For some stuff, like multiplication tables, it's just data lookup, not computation. For harder stuff humans use pen and paper; they do long division the long way. Maybe some do it in their heads but it's still very slow a
It's more likely (Score:2)
probably a probability ... maybe? ;)
Re: (Score:2)
Closest to the joke I was looking for? The story has lots of potential for funny...
Re: (Score:2)
Nice FP question and the best answer I've read so far is A Thousand Brains by Jeff Hawkins.
My short answer is that the human brain is a PoC for solutions at around 35 W. And they can be mass-produced with unskilled labor, too.
Re: (Score:2)
If you look into the science behind these models, they are just probably functions. Our brains do not work in this manner.
I think we overestimate our brains. If you think about it carefully, any answer humans generate is also correct only with some probability. And, yes, it is a prediction in some way. You predict that you will be able to cross the road before you start crossing. You hope that you remember the math correctly when you solve something. When you help somebody find something, you hope that you remember where it is and predict that you will be able to help.
In some sense what LLMs do to generate their answers is more s
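For the "probability function" framing, here is a minimal sketch of what generation looks like at the bottom: the model's entire output is a probability distribution over the next token, and generating text is just repeated sampling. The vocabulary and probabilities are made up for illustration:
```
import random

vocab = ["the", "cat", "sat", "mat", "."]

def toy_model(context):
    # A real LLM computes this distribution from the context with a
    # trained transformer; this stand-in returns fixed probabilities.
    return [0.1, 0.3, 0.25, 0.25, 0.1]

context = ["the", "cat"]
for _ in range(3):
    probs = toy_model(context)
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    context.append(next_token)

print(" ".join(context))  # e.g. "the cat sat mat ." -- probable, not "true"
```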
Re: (Score:1)
Re: (Score:2)
The snag is that once you train AI to get all its power from eating a Big Mac, then it will soon learn that humans are full of calories also.
Re: (Score:2)
Maybe they should use AI to figure out how to improve it.
After 10 years of doing that, we'll be watering our crops with Gatorade...
Re: (Score:2)
Maybe they should use AI to figure out how to improve it.
After 10 years of doing that, we'll be watering our crops with Gatorade...
I don't think AI would be that stupid. We might, though.
[Psst: don't let AI watch the movie.]
Re: (Score:3)
Arguing against a straw-man (Score:1)
Re: (Score:2)
Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"
I'm paraphrasing, but enough of their audience must not turn it off like I do.
Re: (Score:3)
Even Slashdot is full of articles about how intelligent AI is.
Re: (Score:3)
Even Slashdot is full of articles about how intelligent AI is.
I'm convinced it's part of a secret recruitment program to find and leverage either the least gullible or the most gullible people on the planet.
Re: (Score:2)
Oh, boy, you should hear the talk radio people say, "we asked ChatGPT and it agreed with our priors so that proves it!"
I'm paraphrasing, but enough of their audience must not turn it off like I do.
There will always be an audience that eats that up. I suspect they also consult horoscopes.
Re: (Score:3, Interesting)
I don't know about you, but a few years ago, if you had told me that a prediction machine can be given a book it has never seen and answer questions about that book, I would have told you it's not a prediction machine.
And of course you would have been wrong.
It is, of course, not predicting anything about the book. It is predicting what a person answering questions about the book would answer.
Arguing against a ubiquitous misconception (Score:2, Insightful)
Nobody, AFAIK, has claimed otherwise.
To the contrary. Except for a fraction of computer-literate people who actually understand what the tech does, everybody thinks otherwise. They refer to large language models as "artificial intelligence", oblivious to the fact that while it is artificial, it is not intelligent in any real sense of the word.
Re: (Score:3)
I see no problem with calling it artificial intelligence. The adjective is enough of a qualifier to distinguish it from other kinds of intelligence, or even exclude it from that category, if you want to see it that way.
Artificial intelligence shows great promise at mimicking what we all understand to be real intelligence. To me, that's enough to place it in the category of intelligence.
Re: (Score:3)
1. the ability to acquire and apply knowledge and skills
LLMs are intelligent.
You're trying to narrow the definition of intelligence to preclude them. Where that gets really funny is that I bet you can't even pin down a definition of intelligence that actually precludes them. You just circularly define it as "intelligent stuff that isn't an LLM".
Re: (Score:2)
noun: intelligence
1. the ability to acquire and apply knowledge and skills
LLMs are intelligent.
LLMs do not have knowledge in any useful sense of the word "knowledge." Did you actually read the article we're discussing [hbr.org]? Play with an LLM for a while. The fact that they don't actually know what they are languaging about can be very humorous. They are moving around tokens that have no meaning, other than how they fit together to make patterns.
You're trying to narrow the definition of intelligence to preclude them.
The opposite-- you're trying to narrow the definition of intelligence to include them. They are not intelligent.
Re: (Score:2)
LLMs do not have knowledge in any useful sense of the word "knowledge."
noun: knowledge; plural noun: knowledges
1. facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject.
noun: understanding
Wrong again.
Did you actually read the article we're discussing [hbr.org]?
Yes, it's pure fucking idiocy.
The most basic laws of physics are nothing but probabilistic functions that define the evolution of reality, and you're trying to tell me that every thinking thing isn't a prediction machine?
Play with an LLM for a while.
I do daily.
The fact that they don't actually know what they are languaging about can be very humorous.
Sure, but I'm laughing at you in the same way... maybe y
Re: (Score:2)
OK, bye.
Re: (Score:2)
I said no such thing. I said that both LLMs and humans are intelligent, and you know it.
What I suggested, is that your own personal intelligence is too dim to know what the word "intelligence" means.
Don't be fucking pathetic. Lose with some dignity.
Re: (Score:2)
Please tell me how LLMs are capable of acquiring knowledge after their initial training.
Hint: they're not.
Please tell me how LLMs are capable of acquiring skills after their initial training.
Hint: they aren't.
So no, they are not intelligent in the human sense. They don't even remember anything beyond what you send them in the context or what they had from their initial training. So if you ask them tomorrow what they think about yesterday's conversation? They have no clue.
Re: (Score:2)
Please tell me how LLMs are capable of acquiring knowledge after their initial training.
1) not relevant.
2) context window.
Tell me you have no idea how an LLM works without telling me you have no idea how an LLM works.
Please tell me how LLMs are capable of acquiring skills after their initial training.
See above.
So no, they are not intelligent in the human sense. They don't even remember anything beyond what you send them in the context or what they had from their initial training. So if you ask them tomorrow what they think about yesterday's conversation? They have no clue.
Find me where it says, in the definition of "intelligence", that they must continue to acquire knowledge and skills.
I'll wait.
Re: (Score:2)
An LLM is pretty similar to a virtual machine that has been configured to reset to its initial snapshot after being turned off. It remembers things while it's "on" (the context window) but reverts to its initial state (the training) when starting a new conversation. It is possible to add more to the training, but, in my opinion importantly, it cannot do that autonomously.
Re: (Score:2)
I guess if "acquire knowledge" does not implicitly include any assumption that the acquired knowledge persists, you're right. My personal interpretation of that dictionary definition is (unsurprisingly) a more-human one. Acquired knowledge is expected to persist in some way. In humans, it does not persist perfectly, but it does persist well enough for humans to become experts at stuff.
No, it's not a more human one. You are trying to twist it to exclude what's in front of your face.
Are people with brain damage no longer intelligent because they can't learn additional things, or have retrograde amnesia that resets every day?
An LLM is pretty similar to a virtual machine that has been configured to reset to its initial snapshot after being turned off. It remembers things while it's "on" (the context window) but reverts to its initial state (the training) when starting a new conversation. It is possible to add more to the training, but, in my opinion importantly, it cannot do that autonomously.
Yes- an LLM is an intelligence that doesn't remember.
This isn't a semantic argument- we're discussing the difference in the types of intelligence, and why it's inappropriate to try to mold that word into something that precludes LLMs.
If someone says "they're just li
Re: (Score:2)
1. The capacity for learning, which is maybe expressive of intelligence. An LLM can acquire knowledge during its context window, as can a person with retrograde amnesia. In a sen
Re: (Score:2)
I feel like 2) is your AGI vs AI, even though the literal meaning of AGI is... not entirely helpful.
It's not my goal to say there's no difference between LLMs as they are now, and living conscious beings, just to say that there's also not as great a difference as some people seem to aggressively assert.
An LLM lives its "life" for single inferences at a time, with nothing but the context window as temporary working memory.
It's clearly a greatly abridged form of intelligence.
Re: (Score:2)
Context windows are limited in size, they don't persist or change the weights of the model.
So no, LLMs don't acquire knowledge.
Also, they don't acquire skills. They may be capable of doing "one-shot" learning when given an example in their context window, but they won't keep any such skill. They don't change at all after a query.
So by the definition you provide they are not intelligent. In fact, they don't fulfill ANY requirement according to the definition YOU provided.
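The weights-vs-context distinction is easy to show in toy form: at inference time the weights are only read, whatever sits in the context disappears with it, and only a training step actually changes the model. A sketch, not any real LLM:
```
import numpy as np

W = np.eye(2)                      # stand-in for the trained weights

def infer(context_vec, W):
    return W @ context_vec         # weights are read, never written

before = W.copy()
infer(np.array([1.0, 2.0]), W)     # a whole "conversation" of inference...
assert np.array_equal(W, before)   # ...leaves W exactly as it was

gradient = np.ones_like(W)         # pretend gradient from some training loss
W -= 0.1 * gradient                # only this step changes what the model "knows"
```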
Re: (Score:3)
Nobody, AFAIK, has claimed otherwise.
Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity. I get at least three conversations a day from bumbling morons telling me that AI is an absolute must-have at all levels of every company or that company will get left behind. Half of me thinks the only thing they'll get left behind on is the back end of the bubble.
Re: (Score:2)
I get at least three conversations a day from bumbling morons telling me that AI is an absolute must-have at all levels of every company or that company will get left behind.
Your company has lots of morons then. But they're not entirely wrong.
If you run a business, you must think about how AI will affect it. You think about other factors, don't you?
Imagine your company in a pre-internet age. What would have happened if you had ignored the rise of the internet? AI is a comparable game-changer.
Re: (Score:3)
I get at least three conversations a day from bumbling morons telling me that AI is an absolute must-have at all levels of every company or that company will get left behind.
Your company has lots of morons then. But they're not entirely wrong.
If you run a business, you must think about how AI will affect it. You think about other factors, don't you?
Imagine your company in a pre-internet age. What would have happened if you had ignored the rise of the internet? AI is a comparable game-changer.
That's what really kills me. We did ignore the rise of the internet. In the early days I had the company president tell me we could just print the internet out for people that wanted it. This was 1999-2000. It took nearly six years and someone *OUTSIDE* the company explaining it to him before he accepted it was an essential part of the business world.
And while I do get that there may be uses for AI even as it exists today, training new folks on product information for example, the idea that it's going to sweep
Re: (Score:2)
Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity.
Generally these people distinguish between what AI can currently do, and what it will be able to do in the future. The big predictions are usually phrased somehow to be in the future.
Re: (Score:2)
Except for the AI prophets, and the salespeople that are becoming their disciples, and the managers who are becoming their followers, and the worshiping masses that have started to believe AI truly will save the universe from the plague of humanity.
Generally these people distinguish between what AI can currently do, and what it will be able to do in the future. The big predictions are usually phrased somehow to be in the future.
Yes, but phrased as if it's a *DEFINITE* future. Not phrased as if it *may* happen. This is the difference between practical minds and religious belief.
Re: (Score:2)
Re: (Score:2)
Not religious belief. It's marketing. Can't outright lie, but if you say it about the future, it's easy to be legally misleading.
Religion is just marketing on eleven with the guardrails missing. Which is damned close to what's happening with the current round of AI prophets and their "marketing."
Re: (Score:2)
I beg to differ; just in the last few days:
ChatGPT-4 Beat Doctors at Diagnosing Illness, Study Finds https://science.slashdot.org/s... [slashdot.org]
AI As Good As Doctors At Checking X-Rays https://science.slashdot.org/s... [slashdot.org]
And people wonder why people don't trust science.
Just so I understand, it's still just autocorrect? (Score:4, Insightful)
To summarize the HBR article: After investing trillions of dollars on hardware and snarfing up the entire Internet for training:
(1) Garbage in, garbage out
(2) It's basically autocorrect turned up to 11.
(3) Your mileage may vary.
Re: (Score:2)
To summarize the HBR article: After investing trillions of dollars on hardware and snarfing up the entire Internet for training: (1) Garbage in, garbage out (2) It's basically autocorrect turned up to 11. (3) Your mileage may vary.
Don't forget about all the wasted electricity!
Re: (Score:2)
While this is true, I think this way of looking at it trivializes the sophistication of that "auto-correct."
It's kind of like saying, "So jet airline travel is still just a form of transportation?" Yes, that's true too, but it's not really just "walking on steroids."
Re: (Score:1)
While this is true, I think this way of looking at it trivializes the sophistication of that "auto-correct."
It's kind of like saying, "So jet airline travel is still just a form of transportation?" Yes, that's true too, but it's not really just "walking on steroids."
Good point. Use of the word "just" is misleading.
I mean what is polite conversation at a dinner party, other than a group of humans all predicting what should be said next...?
Re: (Score:2)
Like Commander Data deploying his new smalltalk subroutine.
commander data practices smalltalk
Seeds (Score:3)
Seeds are just a way for plants to reproduce. No fucking shit, Sherlock.
Why would it be anything else? (Score:5, Insightful)
This is the thing about this: we know how the models are built and trained. We have the source code to the statistics engine that implements attention and runs the model. We have the source for the interface and the mechanisms doing the feed-forward/back to the models. All of that is understood.
When people say they don't understand how the LLMs generate this or that, what they mean is that the model complexity is too large and the trained token relationships too opaque, not that it is some supernatural-type mystery.
Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow, by making the model big enough and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just "when it's big enough" is magical thinking. It is magical thinking unless perhaps you can offer some specific ideas about just how "big" and why.
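To the point that the machinery itself is understood: the attention operation at the heart of a transformer is a few lines of linear algebra. A minimal numpy sketch of scaled dot-product attention, leaving out the learned projections and everything else around it:
```
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 tokens, 8-dim embeddings (toy sizes)
print(attention(x, x, x).shape)      # (4, 8): one attended vector per token
```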
Re:Why would it be anything else? (Score:5, Insightful)
Yet for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow, by making the model big enough and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just "when it's big enough" is magical thinking. It is magical thinking unless perhaps you can offer some specific ideas about just how "big" and why.
The Sam Altmans of the world, the AI prophets as I call them, have bought into their own hype. They are also followers of the "greed makes good" philosophy, which purports that as long as you absorb enough of something, it will lead to better things. In this case, they've decided to absorb data and power, and in the process, money, at a rate never before imagined. And they all seem to be of the opinion that more is more and more is always better.
I keep wondering, and have mentioned in the past, that I think in a lot of ways this AI boom is simply a full-blown manifestation of the greed that has been a part of the tech-bro/Silly Valley culture all along, and that they finally found a way to manifest that greed on several planes of existence at once. If we want actual AI, we're going to need a fundamental shift in philosophy on how we go about it. No matter how many books you throw at a monkey, he'll never become a great writer. And no matter how much data you throw at a predictive engine, it will never become intelligent, even with the daydream of unlimited power. We're not going to see a shift towards any other form of potential AI because the current generation of LLM data aggregation is sucking up all the resources, all the eyeballs, all the brains, and all the potential.
Re: (Score:2)
I consider them to be similar to scam victims. They want to believe it's true because they've already spent a ton of money on it. They just need to pay a few more "fees" and then they'll get the payout.
Re: (Score:2)
I consider them to be similar to scam victims. They want to believe it's true because they've already spent a ton of money on it. They just need to pay a few more "fees" and then they'll get the payout.
Televangelist style. Just a few more dollars and you too could get into heaven!
Re: (Score:2)
[...] for some reason a lot of people, right up to and including the Sam Altmans of the world, at least profess to believe that somehow, by making the model big enough and tying enough stuff into resource-augmented feed-forward, an intelligence is going to just spontaneously emerge. Well, my challenge to them would be: identify some theoretical mechanism for that, because just "when it's big enough" is magical thinking. It is magical thinking unless perhaps you can offer some specific ideas about just how "big" and why.
Well, as I see it, increasing the size and sophistication of LLMs or other AI models will result in incremental improvement in some proportion to the increase, but also could lead to a sudden dramatic change in the capability of the model, at least from our perspective of it.
Your post reminded me of Catastrophe Theory [wikipedia.org] as a possible theoretical mechanism. However, I haven't looked at this topic for a while, so I'm not sure whether it's relevant.
Re: (Score:2)
Yeah. It is slightly better than a parrot: it is *contextually* probabilistic. There is quite a capture of patterns in the learning process, many such patterns that we humans missed, and quite an ability to convert (transform) input to some representation of meaning. The whole point is that it's not 'intelligent' in any meaning of the word, nor will it ever be. But LLMs are helpful and may become part of future real AI.
But agreed: brute-forcing it will not make it intelligent. It's missing quality components no
Re: (Score:2)
The author of the article is just discovering this reality.
Re: (Score:2)
Identify some theoretical mechanism for that; because just 'when its big enough' if magical thinking.
Life, the universe, and everything.
Literally.
Life is an emergent quality of a ridiculously fucking complex system ruled by a simple set of probabilistic rules.
Intelligence is just its latest emergent trick (on Earth, anyway)
When people try to handwave away ANNs as "predictive machines" or "just a model", they ignore the 50 years of research done on human brains, of which the best-developed models are Bayesian prediction models.
Ultimately, what we really have here is the classical QM (heh) problem w
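For reference, the core move in those Bayesian prediction models of the brain is nothing more exotic than "posterior is proportional to likelihood times prior"; a toy update, with made-up numbers:
```
# Toy Bayesian update: belief about rain after seeing wet ground.
prior = {"rain": 0.3, "no_rain": 0.7}
likelihood = {"rain": 0.9, "no_rain": 0.2}   # P(wet ground | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnormalized.values())
posterior = {h: p / z for h, p in unnormalized.items()}

print(posterior)  # rain jumps from 0.3 to ~0.66 -- a prediction updated by evidence
```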
Re: (Score:2)
Good morning kind sir. I have been seeing you post on here for years. I am sincerely trying to be helpful here, but it may not feel like it.
Your selection and ordering of words has me concerned for you. Your message and insight are perfectly good; however, there are signs that there may be something negative going on for you. Please go see a doctor or have a close friend/family member evaluate you to see if you are functioning at 100%.
Again, I know this sounds like it may be insulting, but it is not. It is genu
So are humans (Score:2)
https://en.wikipedia.org/wiki/Predictive_coding [wikipedia.org]
Re: (Score:2)
Q.What is the lowest number whose value squared is greater than 10?
A. The lowest number whose square is greater than 10 is 4.
No, the answer is ~ -3.162278
The LLM assumed 'number' meant 'integer'. The LLM did not consider that negative numbers are less than positive ones.
In other words, the LLM responded with the most likely predicted response from the average person.
That may sometimes be acceptable for text. It is absolutely unacceptable for mathematics.
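The ambiguity is easy to make concrete; a small sketch showing what each reading of "number" gives (assuming the LLM read it as positive integers):
```
from math import sqrt

# Reading 1: positive integers. Smallest n with n**2 > 10.
print(next(n for n in range(1, 100) if n**2 > 10))   # 4 -- the LLM's answer

# Reading 2: all real numbers. Every x below -sqrt(10) satisfies x**2 > 10,
# and x - 100 satisfies it too, so there is no lowest such number.
x = -sqrt(10) - 1
for _ in range(5):
    print(f"{x:.3f}", x**2 > 10)   # always True, however far down we go
    x -= 100
```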
Re: (Score:3)
Excellent point. AI is still not great at the kind of abstract thinking that math demands.
But consider your question. Without restricting the kind of "number" in question, there is no answer. Restrict it to positive integers, and the answer is 4. My guess is that the AI thought that was what you meant when you said "number."
Now, if you allow "number" to mean any real number, then the answer is either negative infinity (if you allow negative numbers) or the next number that is just above sqrt(10) if you don't.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
An answer of approximately -4.472135 would satisfy this better-specified query.
Yes, if you accept approximate answers. Otherwise no, for similar reasons to those I described above: no matter what answer you give, I can always find one that's closer to -sqrt(20) and still above it. Gotta love real numbers. [wikipedia.org]
Re: (Score:2)
But... computing is absolutely great at some abstract thinking that math demands. I.e., Mathematica, Macsyma. Sure, it's not "AI", but at the time, Macsyma came out of an AI-oriented environment which focused on symbolic computing. It's not "generative AI" but it's on the spectrum of AI. They're doing algebra and calculus, manipulating strings of symbols, often using the same rules that school children were taught. The snag is that the inputs to these programs have to be in a mathematical format, not a con
Re: (Score:2)
I see what you're saying. But I think you'd agree that when you use Mathematica or Macsyma, the problem has been abstracted already by the human interacting with the program.
I have used Mathematica for symbolic manipulation, and it is frickin' awesome at it. I also have no problem calling it AI, in a limited form.
Re: (Score:2)
To me that shows that it behaves like a normal human, but your answer is wrong too: since the answer is negative infinity, or since infinity is not an actual number, there is no answer you can give, because I can always find a smaller number.
This is the answer I got from Gemini; work blocks ChatGPT.
Me: The lowest number whose square is greater than 10 is 4
Gemini: The square root of 10 is 3.1622776601683795. [not really answering the question]
Me: isn't -4 lower than 3.1622776601683795
Gemini:
Yes, you're absolutely right! While 4 is the smallest positive integer whose square is greater than 10, -4 is indeed smaller than 3.1622776601683795.
However, when we're typically discussing the "lowest number" in this context, we're usually referring to the smallest positive number. So, 4 is the correct answer in that specific sense.
It's a great observation and a good reminder to always consider negative numbers as well!
That gives at least a very strong impression from the outside that it understood its mistake and why it made it, just like a human would. But it failed to get how I phrased my response in a subtle way so as to not actually give an
Re: (Score:2)
Give it some inner dialog (that's why you have that fabulous context window).
Your average human is going to produce a shitty estimation on their first pass as well.
I have found that you can help an ANN get much better at any task you give it by providing it some inner monologue.
Perhaps we just need an LLM providing context for our LLMs... LLMception. Maybe that was the
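A sketch of that two-pass "inner monologue" pattern, with a hypothetical complete(prompt) standing in for whatever LLM call you actually use (the pattern is the point, not the API):
```
def answer_with_monologue(question, complete):
    # complete(prompt) is a hypothetical stand-in for your LLM call.
    # Pass 1: have the model think out loud into the context window.
    reasoning = complete(
        f"Question: {question}\n"
        "Work through this step by step. Do not give a final answer yet."
    )
    # Pass 2: the monologue is now context the model conditions on.
    return complete(
        f"Question: {question}\n"
        f"Reasoning so far: {reasoning}\n"
        "Now give only the final answer."
    )
```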
Re: (Score:2)
Just ask the AI what plants crave.
Re: (Score:2)
As a human I would have assumed the same.
If you want to ask trick questions, say upfront that you're not asking for the most obvious interpretation. (And then don't expect an LLM to do floating-point arithmetic correctly.)
I have a different description (Score:2)
Crap Generator
And yes, I'm a fan of AI research and predict that future AI will be useful, but today's AI simply generates crap.
It's kinda like a bullshitter who reads an article on a subject and claims to be an expert
Re: (Score:2)
It's kinda like a bullshitter who reads an article on a subject and claims to be an expert
It's kinda like a person? Weird.
Your words, not mine.
It doesn't matter. (Score:2)
What matters is whether the machine can do a task better than the average human doing the task. By which means is largely irrelevant. Whatever you call today's AIs, it's clear that they are closing in on fields long thought exclusive to humans. And quite fast. That's the important bit.
Re: (Score:2)
I guess we need to start calling hammers and nails intelligent too then.
Users don't need to think of it this way (Score:3)
> Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively
That's a ridiculous statement. Thinking of computers as arithmetic machines doesn't make any sense to most people and doesn't help them understand software in any way. The article goes on to stress that people need to understand that computers will follow instructions exactly, which is also not really something that relates in any way to using software. Using software is learning to do tasks in a specific way to achieve a specific result. It's the computer expecting the user to work in a specific way. The computer, on the other hand, isn't guaranteed to work in the specified way, because of bugs and oversights by the developers. An advanced user will try to overcome such problems by finding alternate ways to achieve the same results.
Interestingly, neither this wrong assertion nor the title's claim that generative AI is a prediction machine has anything to do with what the article is actually trying to say, which is that it's a useful tool but needs human judgment. I can imagine that the article writer didn't use that judgment when using AI to write the article.
prediction machine = vacuous (Score:1)
Calling AI "prediction machines" is a vacuous statement, since all of intelligence can be looked at as prediction.
Why "still"? (Score:2)
That is its fundamental nature.