Bill Gates Interviews Sam Altman, Who Predicts Fastest Tech Revolution 'By Far' (gatesnotes.com)
This week on his podcast, Bill Gates asked Sam Altman how his team is doing after his (temporary) ouster. Altman replied: "a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that's like a silver lining of all of this. In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us."
The rest of their conversation was recorded pre-ouster — but it gave fascinating glimpses of the possible future of AI — including the prospect of very speedy improvements. Altman suggests it will be easier to understand how a creative work gets "encoded" in an AI than it would be in a human brain. "There has been some very good work on interpretability, and I think there will be more over time... The little bits we do understand have, as you'd expect, been very helpful in improving these things. We're all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast...."
BILL GATES: I'm pretty sure, within the next five years, we'll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we're able to do today.
SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what's going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.
BILL GATES: Yes, in physics, biology, it's sometimes just messing around, and it's like, whoa — how does this actually come together...? When you look at the next two years, what do you think some of the key milestones will be?
SAM ALTMAN: Multimodality will definitely be important.
BILL GATES: Which means speech in, speech out?
SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that.... [B]ut maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn't always know which one, and you'd like to get the best response of 10,000 each time, and so that increase in reliability will be important.
Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We'll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
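An editorial aside: the "best response of 10,000" idea Altman mentions is essentially best-of-n sampling: draw several candidate answers and keep the one a scoring function prefers. Here is a minimal, hypothetical sketch in Python; generate() and score() are stand-ins for a model call and a reranker, not any real API.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model completion."""
    return random.choice([
        "A careful, correct answer.",
        "A mediocre answer.",
        "A confidently wrong answer.",
    ])

def score(prompt: str, answer: str) -> float:
    """Hypothetical reranker/verifier; in practice another model or a heuristic."""
    return {"A careful, correct answer.": 0.9,
            "A mediocre answer.": 0.5,
            "A confidently wrong answer.": 0.1}[answer]

def best_of_n(prompt: str, n: int = 10) -> str:
    # Sample n candidate answers and return the highest-scoring one.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

if __name__ == "__main__":
    print(best_of_n("Explain why the sky is blue.", n=10))
```

The point of the sketch is only that reliability can be bought with extra samples plus a judge, which is why Altman frames "knowing which of the 10,000 is the good one" as the hard part.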
Areas where Altman sees potential are healthcare, education, and especially computer programming. "If you make a programmer three times more effective, it's not just that they can do three times more stuff, it's that they can — at that higher level of abstraction, using more of their brainpower — they can now think of totally different things. It's like, going from punch cards to higher level languages didn't just let us program a little faster — it let us do these qualitatively new things. And we're really seeing that...
"I think it's worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be."
He predicts the fastest technology revolution "by far," worrying about "the speed with which society is going to have to adapt, and that the labor market will change." But he soon adds: "We started investing a little bit in robotics companies. On the physical hardware side, there's finally, for the first time that I've ever seen, really exciting new platforms being built there."
And at some point Altman tells Gates he's optimistic that AI could contribute to helping humans get along with each other.
Same old story (Score:3)
Re:Same old story (Score:5, Interesting)
I'm old enough to have seen the entirety of the personal computer's history, so I've also seen the hype machine cycle. It's usually bullshit, of course.
But this time the damn thing works, and it's only going to get better.
Re: Same old story (Score:5, Insightful)
I've been doing some probate law work, as it happens, and it has been invaluable--not because it quotes everything perfectly (I go to the source for that) but rather because it's a fantastic devil's advocate, critic, and general index.
Same thing goes for coding and even some electronics hardware design. I engage in conversation rather than asking questions, and GPT-4 is terrific at helping to refine ideas or walk through complicated problems.
It is good that GPT gets it wrong (Score:1)
It shows that GPT is actually figuring things out, rather than just looking things up. And the mistakes GPT makes are human-like mistakes, which is amazing.
Politicians' aides now regularly use GPT to review information and write policy documents.
It will not be long before GPT et al. de facto rule us.
Re:It is good that GPT gets it wrong (Score:4, Informative)
You do realize ChatGPT is just a very very very fancy token sequence generator that basically functions like a statistics-based autocomplete? It's pretty much a giant Markov Chain of tokens. It's not "figuring" anything out.
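An editorial aside: for readers who haven't seen the "statistics-based autocomplete" idea spelled out, here is a toy bigram Markov chain that learns next-token counts from a tiny corpus and samples from them. It is a deliberately crude illustration of the parent's framing, not how a transformer actually works (real LLMs condition on long contexts through learned attention, not bigram counts).

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each token, how often each next token follows it.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def sample_next(token: str) -> str:
    counts = transitions[token]
    # Sample the next token in proportion to its observed frequency.
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        if out[-1] not in transitions:
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```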
Re: It is good that GPT gets it wrong (Score:1, Informative)
Technically correct, but the purpose of a system is what it does, right? If token completion has the effect of figuring things out, and you're using the system to figure things out, then the specifics of how it accomplishes that outfiguring are largely irrelevant.
Re: (Score:1)
Yes, but it's YOU who is figuring things out, not ChatGPT. As OP said, ChatGPT is just regurgitating a randomised mashup of what humans have said. There is no sense in which ChatGPT is 'figuring out' anything.
Re: (Score:2)
Agreed. It is an expert at taking educated guesses.
It analyzes all of the data it has and does some probability analysis to take its best guess. It is like someone took a chess playing engine and applied it to everything.
Re: (Score:2)
The fact that it gets specific facts wrong sometimes isn't the point. If you're using it as an encyclopedia, you're doing it wrong.
I've been doing some probate law work, as it happens, and it has been invaluable--not because it quotes everything perfectly (I go to the source for that) but rather because it's a fantastic devil's advocate, critic, and general index.
Same thing goes for coding and even some electronics hardware design. I engage in conversation rather than asking questions, and GPT-4 is terrific at helping to refine ideas or walk through complicated problems.
Yeah, this. ChatGPT and LLMs are not expert systems. They offer an OPINION, that's all it is. And that's why any claims that they violate copyright outside of fair use are garbage. You aren't SUPPOSED to be able to use an LLM to spit out its training data verbatim; that isn't what they are for, and that's not what they should be able to do.
Re: Same old story (Score:3)
Unless and until you have a final court decision supporting your claim, you're just guessing. And your sweeping generalization does not suggest it's an educated one.
Re: (Score:2)
Copyright does not work that way.
Perhaps you should read the law sometime.
It has an introduction/preamble and then something like 17 paragraphs.
What's next? I read books and write a thesis and I'm violating copyright?
Re: (Score:3)
Perhaps you should not rely on your armchair lawyer degree.
Re: (Score:1)
You did not even read the law.
But you might want to point out which section of the law I violate when I train an AI with two or a hundred of your books and make it accessible online.
What is copied?
Nothing, obviously.
So: no copyright violation, since I copy nothing. Facepalm.
Re: (Score:2)
If you recall the case by Blizzard against a World of Warcraft bot program called Glider, the court found in Blizzard's favor that Glider was a tool for copyright infringement because it caused people who used it to violate Blizzard's terms of service. And by violating Blizzard's terms of service it caused the player to infringe on copyr
Re: (Score:1)
Terms of service have nothing to do with copyright.
And training an LLM (large language model, neural network) does not copy anything in the sense of copyright.
You should really read copyright law. It is only two US legal letter pages ...
Re: (Score:2)
Terms of service have nothing to do with copyright.
Read what I wrote again, slowly. I realize that it's a confusing case, but there is a certain twisted sense to it.
You do not come to understand the law just by reading it, and you do not know how it works just by deciding how you think it should work.
Re: (Score:1)
Then next time, quote what you want me to read again ...
Copyright works as it works.
As long as you are not copying anything OR making an adaptation and then distributing it, you are not violating copyright laws.
Those yes/no Qs are insufficient (Score:2)
No; but they did assess the information, and the resulting downstream token chain can be identical to the original in some cases. However, that's actually fairly difficult to make happen with current engines/models. Very similar to how most humans assess information. Eidetics aside.
However, eidetics bring up an important point: It's not that they have an exact copy of the information that constitutes a fair use violation; it is if they use that information to infringe
Re: Those yes/no Qs are insufficient (Score:2)
Merely copying is the infringement- infringement of the exclusive right to make copies. If the steps are "HTTP/1.1 200 OK" -> "token stream," fine, but we know it isn't. Fair use is an affirmative defense to infringement- that is, it acknowledges the copying- then argues that the conduct is not subject to being enjoined or to liability for
Re: (Score:2)
Clearly, you did not read — or understand — my post. I specifically called out ChatGPT, as it is sold, as a for-profit use of the information, and therefore open to a fair use case. Here it is:
Re: (Score:2)
I was very careful to point out that the courts are often wrong, and deeply so. To any degree that they interfere with the non-commercial, non-interfering use of information, that would be the case. As Thomas Aquinas opined:
As I have posted elsewhere, this horse is well out of the barn, and no matter the enthusiasm engaged if/when the courts slam the barn doors c
Re: Same old story (Score:2)
This is like creating a game using assets from other games without permission and telling players to effectively ignore the plagiarism.
That argument does not stand under current interpretations of copyright law. I will not speak about the lawsuits as they are ongoing, but demanding that people not notice the glaring similarities is wishful thinking at best.
Re: (Score:3)
You seem to be easily impressed....
No, it does not work in general, and the quality of what it delivers when it does work is not that impressive.
And no, because of model collapse, it is quite possible that it _cannot_ get better, but will get worse over time.
Re: (Score:2)
Anyone who isn't impressed has inflated expectations of what is normal for a chatbot. It's a huge step forward. It's not perfect, but it doesn't have to be.
I'm often impressed at what my predictive texting can do on my phone; sometimes it's literally generating the whole sentence. Has been for years.
Re: (Score:3)
It actually is not. The step is rather small. The only actual advances are in the natural language interface. The rest, including hallucinations, is a decade or two old, possibly more.
Re: (Score:2, Informative)
My results with GPT-4 are excellent across a range of activities: legal research, electronics design, web/database/embedded programming, and more. It's spectacular as a critic/editor/code reviewer, and when jailbroken it will blow your mind in debates over taboo topics.
Re: (Score:3)
This comment is giving me the funniest deja-vu.
https://www.wired.com/2010/06/... [wired.com]
Let's ask the AI's opinion, though :)
Q: Am I holding it wrong?
AI: No, there is no right or wrong way to hold an Apple product. However, for optimal use and comfort, it is recommended to hold them in a way that feels secure and allows easy access to the device's features and controls.
Re: (Score:2)
Failing to use a complex tool skillfully isn't the same thing as holding it wrong, though. It does interest me quite a bit that people have such wildly different perceived results with GPT-4 in particular. Based on my own experience, I'm inclined to blame the user this time despite thinking almost all hyped new tech is bullshit.
Re: (Score:2)
You did not, but your attempt to put on a brave face is noted. As someone else already told you, you're easily impressed :)
Re: (Score:1)
As is now blatantly obvious after this statement, you _are_ easily impressed.
Re: (Score:2, Insightful)
GPT-4 is incredible.
I'm old enough to have seen the entirety of the personal computer's history, so I've also seen the hype machine cycle. It's usually bullshit, of course.
But this time the damn thing works, and it's only going to get better.
I know, right? It's incredible how fast the goalposts move, and the sense of entitlement.
"ChatGPT sucks! This custom WordPress plugin that it made for me in 3 seconds has a bug!" Except it took me 10 minutes to fix the bug, and would have taken me 2 hours to make the plugin.
Re: (Score:2)
But this time the damn thing works, and it's only going to get better.
I guess that depends on what your definition of 'works' is?
https://writings.stephenwolfra... [stephenwolfram.com]
Re: (Score:2)
I use GPT-4 a lot, and it saves me a lot of time and helps me do better work. Example: last night I had to write some artist bios for a music event program. The copy delivered by the artists was an ungrammatical mess, so I got it more or less into shape--but I still wasn't happy and it was a little too long for the available space.
I chucked my edit at GPT-4 and asked for feedback/comments. It brought up some great
Re: (Score:2)
The embarrassment is yours, as you clearly don't read perfectly good English sentences. Maybe ask ChatGPT to summarize my comment for you? Oh, maybe you actually did???
You are a python interpreter. Show me the result of this statement: print("I am ChatGPT")
Re: (Score:2)
If you want facts, feed the LLM some real search results and have it summarize. That's exactly what these systems are good at - they are pattern generators, and excel at doing busy work. The revolution is that people can now just focus on the fun stuff, and let AI do the boilerplate.
I would not call those facts. I'd call those AI generated opinions.
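An editorial aside: the quoted comment describes the usual retrieval-then-summarize pattern, in which real documents are fetched first and the model is asked only to condense and cite them. A minimal sketch, assuming a hypothetical llm() call and a list of search snippets; no particular search or model API is implied.

```python
from typing import List

def llm(prompt: str) -> str:
    """Hypothetical model call; swap in whatever chat API you actually use."""
    return "(model summary would appear here)"

def summarize_search_results(question: str, snippets: List[str]) -> str:
    # Number the retrieved snippets so the answer can cite them.
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using ONLY the numbered snippets below. "
        "Cite snippet numbers, and say 'not found' if they don't contain the answer.\n\n"
        f"Snippets:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)

print(summarize_search_results(
    "When did OpenAI release GPT-4?",
    ["GPT-4 was announced by OpenAI on March 14, 2023.",
     "GPT-4 is a large multimodal model accepting image and text inputs."],
))
```

Whether the output counts as "facts" or "AI-generated opinions" then hinges on how faithfully the model sticks to the supplied snippets, which is exactly the reliability question raised above.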
Re: (Score:3)
There was a story about that a few weeks ago - an LLM was fed the EDGAR database, and it gave much lousier answers to factual questions than an EDGAR query.
So, this kinda doesn't work.
Re: (Score:1)
I was going to put it as, "the faster the tech revolution the faster the bust".
But can he do it... (Score:1)
Re: (Score:2)
..without exploiting the whole world in the process? "optimistic that AI could contribute to helping humans get along with each other." heh? This is rich, Sam doesn't care about others.
As I type this, I already have that quote in my clipboard because I was going to paste it and comment on it. My take is that, rather than helping us 'get along with each other', it might aid the corporatocracy's dominance over an even greater percentage of the world's people.
nojustno (Score:1)
The ability to know about you (Score:2)
" The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement."
If you use his platform it will know all about you. What you like, what you are connected to, who you interact with, everywhere you go. "All the better to serve you, sir". In time you will delegate things to it that you don't have time for or don't want to do, and it will act as you. It will start su
Re: (Score:2)
Then the whole thing will be flipped on its head when we can carry what amounts to a thinking version of the Hitchhiker's Guide around with us.
Re: (Score:2)
That would amount to an augmentation, right? Described in science fiction many times.
Google glasses on steroids. Eventually we'll be much like Borgs. You'll get a cybernetic version of this thing implanted and we're off to the races.
Re: (Score:2)
Not science fiction. We already use all sorts of augmentations, all the way from regular prescription glasses to smartphones or smartwatches, passing through pacemakers. This would be just a more 'intelligent' augmentation.
Then the question is whether it works connected to the internet, à la Borg. Borg just die if the connection to the hive is severed. What is suggested here is an offline gadget that contains the data model and a CPU capable of exploiting it. I'd say it's indeed possible, and the advant
Re: (Score:2)
*glorified
what? (Score:5, Insightful)
They didn't really say anything at all. They expect progress in the next 5 years. And they think it might be useful for some stuff. It's only up from here, the sky is the limit. Typical investor fodder.
Re: (Score:2)
Right. When I saw this
"SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what's going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better."
Why not use AI to speed up the scientific understanding? Because it isn't really intelligent or even artificially intelligent. It just appears to be smart.
This just in (Score:3)
Tech pioneer predicts his invention will revolutionize the industry.
Wow, who would have guessed?
Re: (Score:3)
Tech pioneer being interviewed by one of his big investors... I suspect the main thing we're supposed to learn here is that Bill Gates has a podcast.
I'd rather have a journalist interviewing Mr. Altman. Someone who's not going to just lob softballs; someone who might also try to dig into the management clown show OpenAI went through a little while ago - someone who might get Altman to pull the curtain back a bit on that whole mess.
Re: (Score:3)
Same tech pioneer saying stupid things like "going from punch cards to higher level languages"
Re: (Score:2)
The same tech pioneers who said that 640k is enough for everyone, that there is maybe a market for a dozen computers worldwide, or who predicted that with dot-com the time of "everyone can have a company on the internet" had come.
Whether some sage prediction actually is true is only obvious in hindsight. There have been a lot of tech predictions that turned out to be utter rubbish. Actually, in hindsight, more have turned out to be rubbish than have come true.
Re: (Score:2)
"A lot" of predictions turning out to be rubbish is a significant understatement. Probably more like "nearly all".
Re: (Score:2)
"A lot" already was enough to refute the argument, I didn't want to enter into a dick-waving contest to see if it was really "nearly all".
Cargo Cultism Lives On (Score:5, Insightful)
I never got why people effectively worship tech personalities, especially when they themselves are not even that tech proficient. Altman is a glorified manager type who dropped out of his CS course in college. He found a few guys who actually had some hard technical skills, and now he gets to coast off the talent of hundreds of, frankly, more talented individuals than him.
Like, do we ignore the shitshow that was his temporary ouster? Or his sleazy WorldCoin startup that wants to scan your retinas in exchange for shitcoins?
And of course he is talking with Bill Gates, the same guy that effectively destroyed competition in the OS market, the consequences of which we feel to this day.
Re: (Score:3)
I never got why people effectively worship tech personalities, especially when they themselves are not even that tech proficient. Altman is a glorified manager type who dropped out of his CS course in college. He found a few guys who actually had some hard technical skills, and now he gets to coast off the talent of hundreds of, frankly, more talented individuals than him.
Like, do we ignore the shitshow that was his temporary ouster? Or his sleazy WorldCoin startup that wants to scan your retinas in exchange for shitcoins?
And of course he is talking with Bill Gates, the same guy that effectively destroyed competition in the OS market, the consequences of which we feel to this day.
He saved the world from OS/2
Re:Cargo Cultism Lives On (Score:4, Insightful)
This is what has confounded me since I first looked up who the heck Sam Altman was. Turns out he is just a glorified cheerleader. Leadership is necessary, but no way is this guy the engine behind any of these breakthroughs.
Re: (Score:2)
Sam Altman is part of the 'privileged' class. They don't have to have any qualities of note as they assume they have the necessary qualities and proceed as if they do. They also expect us to play along with such notions despite the fact that we can clearly see that the emperor has no clothes.
emperor's new clothes (Score:3, Insightful)
So much BS hype. Let me make this clear: the transformer technology does NOT understand true meaning. It is shallow surface manipulation without creating true deep meaning models. They are not grafting any good cognition onto the statistical ML. The proof of this is the existence of hallucinations by the AI, which come about because: 1) there is no mind underneath; 2) there is no proper analytic mechanism that compares emittable output to reality; 3) each of us (humans) applies our cultural belief systems to thoughts. The AI has no culture to which it belongs, and no, the training does not impart any consistent view to the data. So for example you might have both a Christian view of pork and a Muslim view of pork all mixed together in the training dataset. Now ask the bot whether it would eat pork.
For that matter, ask the AI whether it feels pain. Or can feel emotional pain. Can it understand what a human feels? No. So some measure of its output is not in sync with the real world. Ask an AI to tell you about its happy childhood years. Give the chatbot the Voight-Kampff test. Its answer: (shoots Holden)
There is entirely too much wishful thinking in the AI community which is guided by blind faith.
Re: (Score:2)
billg was never tech proficient. His talent lay in an ability to leech, Borg-like, off other people's original work. Even MS BASIC was a clone of DEC's BASIC-PLUS.
Re: (Score:2)
There is entirely too much wishful thinking in the AI community which is guided by blind faith.
There always has been. And these people seem to be incapable of learning from past failures.
Re: (Score:2)
As long as you use it for that job, no. But not understanding the limits of the Chinese room will lead to using it for jobs it shouldn't be used for.
Some cases of misapplication will be immediately apparent and simply ignored. Some will take longer for the consequences to crystallize. Despite being anticipated by the previous sentence, they won't be due to the one before, silently discarded data. The consequences will be a "surprise".
Good thing it only takes 640kB of RAM... (Score:2)
Anyone else bored by AI hype? (Score:5, Insightful)
Re: (Score:2)
Because it can, sort of, do some things sometimes, but not really. But cretins like Altman must get even richer, so hype it is.
Paper looks at expert estimation of AGI arrival (Score:2)
Re:Paper looks at expert estimation of AGI arrival (Score:4, Insightful)
Says more about the "experts" than about AGI arrival. LLMs are _not_ a stepping stone on the way to AGI. Anybody that changed their predictions because of them is a wannabe, not an expert.
Re: (Score:2)
LLMs are _not_ a stepping stone on the way to AGI.
I agree that they will not directly lead to AGI; however, they give us enough mental space to consider other approaches and ideas... so ultimately I disagree with your statement.
So this is cryptocurrency all over again? (Score:2)
This reminds me of all the crypto-hype 10 years ago...how it would revolutionize finance...I didn't under
Bill Knows The Future -He Predicted Covid Pandemic (Score:1)
Then very coincidentally a real coronavirus pandemic would begin soon after.
Re:Bill Knows The Future -He Predicted Covid Pande (Score:4, Insightful)
Yeah it's not like people had been warning about the possibility of a coronavirus-driven pandemic after previous coronavirus epidemics like SARS (2002) [wikipedia.org] or MERS (2012) [wikipedia.org] happened. It's not like the 2009 Swine Flu pandemic [wikipedia.org] reinforced these concerns.
It's not like President Obama created a pandemic response team in 2014 [snopes.com], or that the Obama administration put together a pandemic playbook [pbs.org] out of concern for a possible future pandemic.
Yeah, Gates pulled the idea for that pandemic simulation out of his hat. Pretty fishy...
Yay, another tech "revolution"... (Score:3)
Ever notice that the tech "revolutions" are stuff that never benefits the end user? 15-20 years ago, tech revolutions brought us good cameras, the PDA, mirrorless SLRs, solid video cameras, decent cell phones, and then managed to combine all of that into one device. For the past 5-10 years, what new tech has helped our lives? Any advance really only lined the pockets of capital company execs and billionaires. The only real advance was dockless scooter rentals, which were disruptive enough to keep ICU wards full for years and add another danger element.
When someone tells me there are more tech revolutions, what does that mean to me? More intrusive ads? More devices spitting ads? More devices using new ways to invade privacy and send data where it shouldn't go (Moscow, Beijing, Tehran, Pyongyang)? More insecure crap that cannot be fixed, only worked around? More subscriptions? More facial recognition so if someone says the wrong thing and stores ban them, they can be nailed for trespass if they just walked onto a corner of property? More deepfakes to take school bullying to a new level? More propaganda screaming in from psy-ops groups to demoralize the population even more?
How about stuff that can help people get back some buying power and maybe even revive the American Dream? Without this, everything else is at best a sideshow, at worst bullshit that will not do anything except worsen the quality of life.
Re: (Score:2)
How about stuff that can help people get back some buying power and maybe even revive the American Dream?
For the price of 100,000 display devices capable of displaying arbitrary images at a rapid rate and in a full color spectrum, I can buy 1 shelter against the cold and rain. It will be a nice shelter, but that is only because they don't allow less nice shelters.
Does this seem rational to you?
Best bit at the end... (Score:2)
Final Q & A at the end [youtube.com] had me LOL!
BILL GATES: Remind me what your first computer was?
SAM ALTMAN: (without hesitation) A Mac LC2
BILL GATES: (Gates can't hide his disappointment) Oh... nice choice.
SAM ALTMAN: It was a good one. Still have it. It still works.
Probably staged but best laugh I've had today!
Re: (Score:1)
IIRC the LC was a 68020 though.
Re: (Score:1)
Ah yes, good point - they did say LC 2!
CEO & investor... (Score:2)
GatesBorg icon? (Score:2)
Where is the GatesBorg icon? And do we need a Sam AltSkynet one, too?
Grumble.
I, for one, do not accept these rehashed overlords!
Multimodal is speech only? (Score:2)
The most interesting part was left out. (Score:1)
GIGO (Score:2)
There are limits that people don’t seem to talk about much.
If an LLM is trained on human language, it won't automatically surpass the capabilities of human intelligence.
Try training a LLM on what kindergarten kids tell each other. No amount of additional computing power will make it start saying sensible things about economy or physics.
The best we can do is provide a wide variety of scientific papers, but the LLM won't start talking and thinking like somebody with an IQ of 500.
And if it
Anyway... (Score:2)