Bill Gates Predicts 'The Age of AI Has Begun' (gatesnotes.com) 221
Bill Gates calls the invention of AI "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," predicting "Entire industries will reorient around it" in an essay titled "The Age of AI Has Begun."
In my lifetime, I've seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows.... The second big surprise came just last year. I'd been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn't been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you'll have made a true breakthrough.
I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: "What do you say to a father with a sick child?" It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.
I knew I had just seen the most important advance in technology since the graphical user interface.
Some predictions from Gates:
- "Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."
- "Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."
- "I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you're losing interest, and understand what kind of motivation you respond to. It will give immediate feedback."
- "AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it's hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way."
- AI will "help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit. I expect that there will be a lot of innovation in this area.... AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment."
Rosie the Robot (Score:2)
I think Rosie is a long way off yet.
Re: (Score:2)
Rosey as it were.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
We'll have a Cyberdyne Systems Model 101 long before we get a sassy maid.
Re: (Score:2)
A glitchy and snarky homebot seems very realistic.
Re: (Score:3)
I bought my wife a robot vacuum cleaner for Christmas. We (or maybe just me) named her Rosie. Rosie is awesome. With a couple of dogs and cats she takes care of business: pet hair and other debris. When she's done, Rosie parks herself and we can empty her bin into the garbage.
Re: Rosie the Robot (Score:2)
Hush. As the BOFH would say, the preaching for ChatGPT and friends has been done in secret, and received as gospel: all that's left is for the board to learn the error of their ways, in 3 years, when they find that they've been taken for a ride (again), and reach for the shovel to bury yet another skeleton in an unmarked grave.
Meanwhile, we, as fellow BOFHs, should stop complaining about the "AI Revolution," and find a way to profit from the scam. I mean, really profit from it; I'm not talking about simply
Pointing and clicking (Score:5, Insightful)
Re:Pointing and clicking (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
You did switch to a smartphone for your job, didn't you?
Re:Pointing and clicking (Score:4, Insightful)
Then there's touch screens. An order of magnitude faster than a mouse.
Sure, as long as you are working on a cellphone, and only have to move your thumb. On anything bigger than a tablet (over about 10" in fact) a mouse or trackball is faster than using touch. Your hand also doesn't cover the display, so you don't have to move it away to see things, then move it back to touch things, ad infinitum.
If you only have to hit one control, a touch screen might be faster. After that, it probably isn't.
Re: (Score:3)
"Then there's touch screens. An order of magnitude faster than a mouse."
Unless, of course, you have to do your touch five times because the input is too imprecise for you to properly specify what it is you wanted to do. Or when you have to keep moving your hand away to see what's on the damn screen.
Re:Pointing and clicking (Score:5, Insightful)
It's easier to know where to go and click, sometimes, than it is to express that you want to adjust the settings for that thingamajig you last used two years ago and can't recall the name of.
Also consider the Quick Launch bar in Windows where you put a bunch of really frequently used programs. Do you really want to go back to, essentially, a command-line interface to run those instead of just clicking? Because make no mistake: having to type in commands 'in plain English' (and let's not get into translations and localizations and language preferences and dyslexia) is just a slower DOS prompt.
Re: (Score:3)
Can you imagine the horrible din in a cubicle farm when everyone is having to speak to make their computers do anything?!
Not to mention how people can't have the computer set up in a common room at home anymore because constantly talking at the computer disturbs someone else trying to concentrate?
Speaking is not and never will be superior to typing (or clicking) in all possible scenarios.
640kb (Score:5, Insightful)
... is definitely not enough for these large language models.
Btw, let's define AI first. Doing a glorified linear regression is *not* AI. Deep learning is a very impressive way to get to models that massively overfit data. And can mimic humans extremely well. Enter ChatGPT. However, this is *not* intelligence. ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.
DISCLAIMER: don't get me wrong, Chat GPT is impressive. Just not A(G)I. And Bill Gates should know better.
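The memorisation point above can be sketched in a few lines of Python. This is a toy stand-in, not how ChatGPT actually works: a "model" that stores every training example answers those perfectly, but has learnt no algorithm, so unseen inputs get nothing.

```python
# Toy sketch: memorisation is not understanding. The "model" below recalls
# every training pair perfectly but has no multiplication algorithm at all.
train = {(a, b): a * b for a in range(10) for b in range(10)}  # "saw it on the web"

def memorising_model(a, b):
    # Perfect on training data, useless off it; returns None for unseen pairs.
    return train.get((a, b))

print(memorising_model(4, 9))         # 36: seen during "training"
print(memorising_model(4321, 98765))  # None: 4-digit x 5-digit, never seen
```

A real LLM fails more gracefully than a lookup table, of course, but the underlying complaint is the same: recall of seen answers is not the same as carrying out the procedure.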
Re: (Score:2)
Re: (Score:2)
Re: 640kb (Score:2, Flamebait)
Re: 640kb (Score:2)
Still qualifies (Score:3)
Just as "artificial leather" is not real leather, so too "artificial intelligence" is not real intelligence.
The distinction I am making is purely semantic, but also completely relevant. AI is a very old and very broad term that includes a wide range of ways in which a computer can be made to do things that otherwise usually require intelligence to do. Like play chess. Or pass a biology test. The claim made by the coders is not that they have built an authentically intelligent machine. That's not what t
Re: (Score:3)
Re: (Score:2)
"AGI" is a different acronym than "AI." When someone says "AI" they aren't claiming "AGI."
You don't have to take my word for it. The simple, easy to understand, and in-popular-use definition of "Artificial Intelligence" is right here in the dictionary [merriam-webster.com]:
Notice the words "simulation" and "imitate." Much like simulated leather, or imitation leather
Re: (Score:2)
Re: (Score:2)
Or you should find faux intelligence acceptable and to the point.
Re: (Score:3)
You are right about the ever moving target. Fundamentally, there is no adequate definition of intelligence. For many, it basically boils down to "can do stuff machines can't", in which case (strong) artificial intelligence is impossible, not due to any limit of the machines, but by definition.
Because of the inability to define precisely what intelligence is, we can fall back on the "as if" test. Treat it as a black box, and if it behaves as if it is intelligent then for all practical purposes, it is intelli
Re:Still qualifies (Score:4, Insightful)
Re: (Score:2)
Yeah. The emerging world requires a higher level of intelligence than the world of yesteryear did. People who don't understand tech are going to be left behind. It is unfortunate but natural selection has never been known for its kindness.
Re:Still qualifies (Score:4, Interesting)
People over-estimate every new technology. Look at VR and how Facebook went all in on it, expecting everyone to move their lives into the Metaverse. Of course it didn't happen and never will.
Same is already happening with AI. When image generation from prompts first became available, people predicted the end of human creativity. Most artists would be out of a job within months. In reality we have already reached the point where people can spot AI art a mile off and due to saturation have lost interest in it. The real application for it is as an assistance tool, e.g. to generate a background for some human art, which then gets touched up.
Re: (Score:3)
The problem with calling anything other than AGI "AI" is that nothing less is actually intelligent. It's just a system that relates data to other data. It doesn't have sanity checking because it doesn't have a mind to be sane or insane. It's just an algorithm for shuffling data. It will produce an identical result every time given the same starting conditions, and if it didn't, it would be worse and not better.
Most of this stuff is just "machine learning" [algorithms] and the "learning" isn't meant to imply
Re: (Score:3)
Just as "artificial leather" is not real leather, so too "artificial intelligence" is not real intelligence.
The distinction I am making is purely semantic, but also completely relevant.
It seems that it is impossible for most people to understand, even for people who are otherwise technically minded, but AI is a class of algorithms. It has nothing at all to do with creating a machine that is intelligent.
Users expect that, though.
Semantic, btw, means "meaning." "It's just semantics" shouldn't actually minimize anything. What the words mean is actually important; otherwise you could just grunt or mew.
Re: (Score:3, Insightful)
Gotta agree. I can say with respect to medical insurance, Bill Gates is pretty much dead wrong. AI can't make medical claims easy if the people who approve the claims change the rules with no notice. Which they will, because claims management is a constant push and pull over who keeps the premium dollar, not a high school history paper.
I recently heard a VP at Geisinger say that their efforts to use current "AI" in charting had also not yielded meaningful efficiency gains.
That's because the humans in
Re: (Score:3)
LLMs could well save us software folks a lot of frustrating debugging time
I would seriously doubt that. LLMs don't have any capacity similar to 'understanding' or 'reasoning'. They can't analyze a problem.
for a doctor, an LLM trained mostly or entirely on medical literature is vastly more useful than one trained on the internets.
That's even more dangerous, as it gives you a false sense of security. LLMs will, as a natural consequence of their operation, say things which are false. You simply can't trust the output.
You might not remember this, but back in the 80's, expert systems were the hot thing in AI that was going to revolutionize medicine, and they wouldn't flat-out lie to you.
And unlike LLMs, humans can actually reason and intuit, so when faced with unfamiliar inputs they can in fact exercise judgment and common sense. LLMs cannot.
That's correct. T
Re: (Score:2)
It's already happening. A few days ago I had some absurdly long SQL queries with syntax errors. I asked ChatGPT to fix them, and it handed me the fixed queries right away, which certainly saved me time and a headache. I was trying to track down a weird website behavior, described it to ChatGPT and the first suggestion worked. In practical terms it can certainly analyze simple problems and e
Re: (Score:3)
I don't even know where to begin... The things you think it's doing are not the things that it is actually doing.
A few days ago I had some absurdly long SQL queries with syntax errors
Simple errors in syntax make sense at least, as it operates on probability.
In practical terms it can certainly analyze simple problems and even take action on them.
No, it can't. That's simply not how these kinds of programs work.
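Since, as this subthread argues, you have to be the sanity check yourself, one cheap mechanical check before trusting an LLM-repaired query is to hand it to a real SQL parser. A minimal sketch using SQLite's prepare step (the `users` table is a hypothetical stand-in; `EXPLAIN` compiles the statement without touching data):

```python
# Sketch: validate that a "fixed" SQL query at least parses, by asking
# SQLite to prepare it via EXPLAIN against a throwaway in-memory schema.
import sqlite3

def sql_syntax_ok(query: str) -> bool:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")  # stand-in schema
    try:
        conn.execute("EXPLAIN " + query)  # compiles (and rejects) the statement
        return True
    except sqlite3.OperationalError:
        return False
    finally:
        conn.close()

print(sql_syntax_ok("SELECT name FROM users WHERE id = 1"))   # True
print(sql_syntax_ok("SELEC name FROM users WHERE id = 1"))    # False
```

This only catches syntax and missing-object errors, not a query that is valid but means the wrong thing; for that, the human still has to read it.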
Re: (Score:2)
Gotta agree. I can say with respect to medical insurance, Bill Gates is pretty much dead wrong.
Currently, in the US, this is true. It is less true in a lot of other countries. And if we eventually adopt medicare-for-all or some other similar system, perhaps medical billing will become vastly simplified and not be closely connected to approval decisions.
The basic problem is the same for doctors as for drivers. The human is accountable
10 years ago this was still a (silly) debate, but it is not one now. Liability insurance is accountable, and it doesn't really matter that much who is required to buy it as long as somebody is buying it.
Re: (Score:3)
Doing a glorified linear regression is *not* AI.
Linear regression, glorified or otherwise, is absolutely AI. It's just not what you think the term 'AI' should mean.
Deep learning is a very impressive way to get to models that massively overfit data
Overfitting is a bad thing.
Enter Chat-GPT. However, this is *not* intelligence.
This is correct.
ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.
You've hit at the heart of it. There is nothing even remotely like 'understanding' or 'reasoning' in models like these. The output is certainly impressive looking, as long as you don't look too closely at it. It'll be interesting to revisit these threads once the hype dies down.
Re:640kb (Score:5, Informative)
Gates didn't make any claims about the current models being AGI, and neither has anyone else.
I believe you're falling into the same trap as many other critics. You're conflating the hyperbole and exaggerations used by marketers to ride the wave and sell their products, with the real news and developments that are being announced by engineers and researchers.
Btw, let's define AI first.
Let's!
Cambridge: [cambridge.org]
Intelligence noun
Merriam-Webster: [merriam-webster.com]
Intelligence noun
If you've spent much time with the models, you'd be aware that when they return a mistaken answer, you can point out the mistakes (without giving the answer) and the model will correct itself. If you trip it up with a logic puzzle, you can ask it to rethink and give you the answer again, this time explaining its logic step by step. Often it will then "realise" where it tripped up and give you the correct answer. Follow up with a similar logic puzzle and it will have "learnt" not to make the same mistake.
It also shows signs of having Theory of Mind: the ability to understand that what you believe to be reality might differ from actual reality, and to predict what you probably believe given your own mistaken conclusions.
It's a bit like interacting with a very well informed child.
ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.
Correct. Another example: "What are the fourth and sixth words of this sentence?". GPT-3.5 (ChatGPT) doesn't understand maths; it treats it linguistically.
GPT4 however, which has just been released to the public over the past few days, is multimodal. It knows how to calculate. It can take visual cues from uploaded images and other visual media and understand their context.
This isn't the hidden integration of differing models, which has previously been used to try to emulate such "intelligence". It's the same model.
I think what Gates is getting at is that the technology has suddenly seen an exponential jump and is advancing at an astounding rate. Predictions that were made only a few months ago of milestones we can expect for "some time over the next decade" have been blown away in matters of days and weeks.
ChatGPT was released barely 5 months ago, and it's already seeing widespread use amongst the general (non-technical) public.
Given the recent explosion in advancements and general adoption, I'm with Gates. We're on the cusp of a new age, one that could have a bigger impact on humanity than anything we've seen before.
For anyone that's interested in the topic, I highly recommend checking out some of the videos from AI Explained [youtube.com]. It's really mind-blowing (even though I thought I'd been following the topic closely for years now).
Re: (Score:2)
Come on mate. Not that discussion again.
Artificial intelligence has been fairly loosely defined for decades. In the 80s, AI was already a mash of statistics, signal processing, and operations research. And people had moved on from AGI.
Besides novelists and film makers, no one seriously talks of AI to mean strong AI. Everyone pretty much understands it in terms of sensing, modeling, deciding, acting.
Re: 640kb (Score:2)
Your view on this does raise the question: if ChatGPT is not "intelligence", then what is?
In the end, are we humans not merely extrapolating data that we've gathered via our senses?
What magic sauce is required to call behavior "intelligent"?
Re: (Score:2)
ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.
You've basically described how 95% of the population does maths.
All fine and good with one caveat (Score:3, Interesting)
I'll use and enjoy said technology as long as the AI behind it exists solely on my devices and 100% in my control only. If it goes to the cloud, then fuck no.
Re: (Score:2)
I am determined to learn from the dystopian fiction of my past (I Have No Mouth and I Must Scream [wjccschools.org] is a great example) and suggest that we work to create AI that likes us and does not take the suggestion to "Kill all Humans" as an order
Re: (Score:2)
Dystopia, here we come! (Score:5, Interesting)
Re: (Score:2)
The only thing worse than dealing with a faceless bureaucracy is dealing with a remorseless AI.
Re: (Score:2)
The only thing worse than dealing with a faceless bureaucracy is dealing with a remorseless AI.
Worse: Owned by an international corporate conglomerate who doesn't even bother with bureaucracy anymore, they repeat the AI's decision.
Re: (Score:2)
No offense but... (Score:2)
Re: (Score:2)
In this case, the prediction is shorthand for people in the future -- probably decades from now (assuming any survive) -- will look backwards and declare that (a) there was, or is, an "age of AI" and (b) it started by (and continued past) this point in time.
The 640Kb prophet (Score:2, Interesting)
he missed the prediction that people like him will get even richer and the gap between the haves and have-nots will become even larger.
But globally the poor are getting richer (Score:3)
Yes, I know it's hard to see from the USA, but the data is unambiguous; the application of lessons of economics means that the really poor are a lot less poor these days. There's no reason to think that won't continue
https://data.worldbank.org/ [worldbank.org]
Re: (Score:2)
What does the poor being less poor have to do with the gap between the poorest and the richest?
Re: (Score:2)
In international terms the gap is closing. The achievement of the Chinese in raising the living standards of hundreds of millions is outstanding and unprecedented. And much of the 'wealth' of the richest is in stock market values rather than anything real. Sadly there is no data for the world overall - at least not from the World Bank - but this table allows you to look at trends in individual countries.
https://data.worldbank.org/ind... [worldbank.org]
Re: (Score:2)
Failure to tax and redistribute the profits fairly is entirely a political choice. Don't blame the AI, if you put the AI in charge of the government it'd probably make a more optimal decision about redistribution to maximize economic potential.
first prediction (Score:2)
people use graphical interfaces because they don't want to type things. i'm not sure people (particularly older citizens) would like to type to the OS, and a lot of people don't like talking to their computers either.
I predict chatgpt will be integrated with windows in less than a year.
having said the above, imagine if you could give natural english instructions as a script.
for example "install latest nvidia driver and disable all optional components" or "change my dns to 8.8.8.8"
it could allow non te
Re: (Score:2)
I predict chatgpt will be integrated with windows in less than a year.
having said the above, imagine if you could give natural english instructions as a script.
for example "install latest nvidia driver and disable all optional components" or "change my dns to 8.8.8.8"
Given the output I've seen from ChatGPT, and how much effort you have to go to in order to get it to give accurate answers, I wouldn't trust the software to get either of those things right. Maybe someday, but not soon. There is no sanity checking, because there is no sanity, and that means you have to be the sanity check yourself.
Re: (Score:2)
Having it show what command it's about to run and ask for confirmation is still useful. There's a lot of us who haven't memorized every command line parameter of everything who still understand enough to recognize if it looks right or not.
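That show-then-confirm loop is simple to sketch. This is a hypothetical wrapper, not any existing tool; the prompt function is injectable so the behaviour can be exercised without a keyboard:

```python
# Sketch: an assistant-proposed shell command is shown to the user and only
# executed on an explicit "y". `ask` defaults to input() but can be swapped
# out (e.g. for tests or for a GUI prompt).
import subprocess

def run_with_confirmation(command: list[str], ask=input) -> bool:
    print("About to run:", " ".join(command))
    if ask("Proceed? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return False
    subprocess.run(command, check=True)  # raises if the command fails
    return True

run_with_confirmation(["echo", "hello"], ask=lambda _: "y")  # runs echo
run_with_confirmation(["rm", "-rf", "/tmp/x"], ask=lambda _: "n")  # skipped
```

The point stands from the comment above: you don't need to have memorized every flag to notice that a proposed command looks wrong before saying yes.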
Re: (Score:2)
You have two choices:
1. Specify your task completely and precisely enough for the AI to do it. You might even need something better than English for that. Uh oh, you've just become a programmer again.
2. You keep trying to describe what you want in a very long conversation with the A.I. After going through that arduous process a few times, you eventually learn to be more complete and precise to save time and trouble. Uh oh, you've just become a programmer again.
Re: (Score:2)
The bigger problem is that natural language tends to be ambiguous.
He's right. And here's my prediction (Score:5, Interesting)
All this marvellous technology will be in the hands of giant monopolistic concerns who will be so large and so entrenched in our society with their technology that the law will not apply to them anymore, and they will use AIs primarily to make as much money out of people as possible and morals be damned.
Oh wait, it's already the reality...
Ah - the spirit of Ned Ludd lives (Score:2)
Every time this is predicted, it never actually happens and in practice most of the poor do benefit.
Re: (Score:2)
Historically speaking, when wealth inequality gets this bad, certain people tend to ... er ... lose their heads.
The real question is why you're so hot to simp for the ultra wealthy?
Nah... (Score:2)
There's a plenteous supply of bread and the circuses offered by the Internet make the efforts of the Roman Empire look paltry.
Re: (Score:2)
I live in government-certified poverty, and like everyone else I have access to talk to a variety of AIs (too many varieties for monopolies to be possible) and use them to solve tasks for me. Of course like anything else it'll be more profitable and powerful for the rich because of the tasks they have for it, but unlike most technology which is only available to the rich for a long time, AI seems rather more egalitarian.
Paywall it. (Score:2)
I have access to talk to a variety of AIs {...}, but unlike most technology which is only available to the rich for a long time, AI seems rather more egalitarian.
They are making it freely available for now.
Wait until your workflow is 100% dependent on a few such "freely accessible AI tools", the companies providing access to them buy each other and consolidate, and you can expect everything to suddenly be paywalled and your wallet milked dry.
Unless there is more work on making it possible to self-host open-source tools (think Stable Diffusion, think the recently leaked large language model that people have been downsampling to run on Raspberry Pis, etc.)
And even
Andrew Yang not so silly now (Score:2)
1) You need to vote for UBI, it won't arrive on its own.
2) You need to vote for it before you are homeless - can't vote after you are homeless.
Re: (Score:2)
2) You need to vote for it before you are homeless - can't vote after you are homeless.
In California you explicitly can do so. In other states, maybe not.
No (Score:2)
2) No, because UBI doesn't guarantee a home
You'll lose your vote once you accept UBI, because it's a system that, over time, demands more and delivers less.
PS: AI isn't taking over and the world is not ending. It still takes an army of workers to build an iPhone...
No, the age of AI hasn't begun (Score:2)
COBOL anyone? (Score:5, Interesting)
They said the same thing with the introduction of COBOL.
The flaw with the idea is the same flaw the military tries to make their officers aware of with an exercise: the officer has to write a set of orders to carry out a mission. Those orders are then handed to a unit to carry out, that unit having been instructed to figure out every way they can sabotage the mission and cause it to fail while still following all the orders they were given. If you can't write a clear, unambiguous set of instructions for the computer to follow then it doesn't matter how you give them to the computer, things will go sideways when it doesn't do what you thought it was going to do.
Re: (Score:2)
Indeed. And that is why, except for really generic things, "natural language" does not cut it. Natural language only works somewhat well, if the target is an (A)GI that understands enough to ask intelligent questions when things were not clear. To be fair, many humans fail at this as well.
Re: (Score:2)
we had an exercise like that in a college english course - make something out of legos, then write instructions to tell someone else how to do it. I still don't see how they made what they did from our instructions, but that wasn't even malicious.
Re: (Score:2)
They said the same thing with the introduction of COBOL.
The flaw with the idea is the same flaw the military tries to make their officers aware of with an exercise: the officer has to write a set of orders to carry out a mission. Those orders are then handed to a unit to carry out, that unit having been instructed to figure out every way they can sabotage the mission and cause it to fail while still following all the orders they were given. If you can't write a clear, unambiguous set of instructions for the computer to follow then it doesn't matter how you give them to the computer, things will go sideways when it doesn't do what you thought it was going to do.
Yep.
If only we had a symbolic language we could use to exactly specify to the computer what we want! Oh ... wait.
Who cares what Bill Gates thinks about AI? (Score:2)
"Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."
Which makes Bill Gates one mediocre futurist. Why on earth would I be "writing" a request to my computer? And why in "plain" English? AI is progressing towards symbolically deciphering languages. Combine that with some rudimentary form of language interpretation, and I won't be typing out requests in English; I'll be verbally communicating my request to the computer, in whatever native language I was raised to speak.
If my AI controlled devices (Score:2)
suspicions: confirmed (Score:3)
As usual, he is wrong (Score:2)
The current hype is nowhere near the breakthrough that everybody without actual AI knowledge thinks it is. It is an incremental step. And not a large one. Any of those predictions will take several decades at the very least to happen.
Re: (Score:2)
Re: (Score:2)
It is an incremental step. And not a large one.
Very true.
Just don't think that we can get to where "those predictions" are by the slow accumulation of incremental improvements. No matter how good you get at making ladders, you're never going to reach the moon.
Predicts? Hows about "Opines." (Score:2)
Uhh.... ok? (Score:2)
What about the Internet, Bill? Or did you leave that out b/c you didn't see it coming and it bit you in the ass?
Also, wrt AI, uhmm, unless you've been living under a rock, file this under Captain Obvious.
wow! (Score:2)
STFU (Score:2)
He's just trying to sound timely. (Score:3)
AI is not what most people think it is; not even most of the people promoting and/or warning against it.
It's dangerous like a flood is dangerous. But it's only dangerous like a person is dangerous, if a dangerous person is the one unleashing the flood.
ah, Billy (Score:2)
The guy who missed the Internet and changed his book to make his predictions seem more on point?
Apparently, when you're old and rich, stating the obvious can get you headlines. Darn.
Meanwhile, anyone who's seen the previous cycles of AI knows that there is always hype, followed by overinflated expectations of the future; then the technology matures and people realize it's just another tool and doesn't magically solve all problems, then it becomes a standard tech thing and isn't even called "AI" anymore,
Factor prime numbers (Score:2)
"The obvious mathematical breakthrough would be development of an easy way to factor large prime numbers."
Bill Gates, The Road Ahead.
Did Bill Gates write this, or his personal agent? (Score:2)
Yeah ... about that (Score:4, Insightful)
"Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."
One issue with that is that the "personal agent" will be run by the same people who don't want me to read certain news stories, who don't want me to search for certain things, who don't want me to think certain thoughts, etc.
Re: (Score:2)
He was initially just in the right place at the right time and everything cascaded from there.
Re: (Score:3)
You really do not need a Black Swan to make the current Language-Model type of Artificial Idiocy hallucinate the most stupid nonsense.
Re: (Score:2)
neutered Windows on mobile by insisting that Windows CE use the start menu like the desktop
I had an HP 320LX back then. I can tell you that keeping the start menu was absolutely the right decision. You must not remember what a phenomenon Windows 95 was. There were people buying copies who didn't even have computers. The start button (and start menu) was a big part of that. People knew about it and knew how to use it.
It also made those palmtop toys feel like a 'real computer'. Seeing Word and that start button gave you a lot of confidence that it was going to work properly with your desktop.
Re: (Score:2)
Psychological factors aside, why do you think WinCE was "neutered" by keeping the start menu? What do you think they should have done instead and why would it have been better?
Palm used a bunch-of-icons launcher on the Pilot like all smartphones have now. This actually predates WinCE even having a start menu — WinCE didn't get the start menu until CE 2.0 in 1998, while the Palm Pilot is from 1997, and it was based on lessons learned when they made the software for the Tandy/Casio/GRiD Z-PDA-7000/Zoomer/GRiDPad 2390.
But even before that, there was the Newton, which first shipped in 1993 — and which also had a bunch-of-icons launcher.
The start menu was a brilliant
Re: (Score:3, Interesting)
The best indicator that something won't happen is to have Bill Gates predict it will.
Yep: first he was gonna kill Linux, then he was gonna kill Google, then he was gonna kill a bunch of diseases, which have just retreated to countries he can't get into because you can't get vaccinations from the Gates Foundation unless you adopt strong IP protection for big pharma.
Gates never did anything he said he was going to do, but he sure did make a lot of money not doing it.
Re: (Score:2)
Typical Microcrap mindset: Ignore everything that already exists, rediscover it badly and without understanding the point, then implement it in a screwed-up way.
Re: (Score:2)
The "arrival" of the age of AI depends very much on what you mean by AI. AGI is not here, and it is unclear whether it ever will be. Statistical classifiers fitted with some gadgets were already a thing back when I studied CS, some 35 years ago. The only breakthrough in the current hype is a written natural-language interface that works reasonably well for average people; the actual knowledge-retrieval process behind it is atrociously bad. AI that routinely "hallucinates" is about as useful as a human kno
Re: (Score:2)
I've been predicting yet another wave of the typical AI hype cycle. We'll see how Bill's prediction looks in 5-10 years, but I suspect it'll be 640k all over again.
Re: (Score:2)
He didn't miss the internet. He missed the Web. There is a difference. Didn't you read The Road Ahead?
Re: (Score:2)
He missed the internet. Do you recall having to buy a third-party stack to get a PC on the net because Gates/Microsoft didn't provide one?
That book was published in '95. I had already been on the net for 8 years by the time Bill discovered it and had someone write a book for him, but I wasn't on any Microsoft platforms. The first web browsers were made in the early '90s. Bill missed that too, but only as an add-on to missing the net.
Who needs more than 640k? Who needs a GUI interface to search and view the wo
Re: (Score:2)
He didn't miss the Internet. Remember The Road Ahead? What Bill missed was the Web. He believed that the future of the internet was applications, not unlike the hell that exists on Mobile today.
Re: (Score:2)
He didn't miss the Internet. Remember The Road Ahead? What Bill missed was the Web. He believed that the future of the internet was applications, not unlike the hell that exists on Mobile today.
It was wishful thinking; he was trying to make it happen because Windows was the king of applications, and still is. There's simply not as much software for any other OS as there is for Windows; that's an indisputable fact, and it's been true for decades. In an applications-for-everything world, Windows is king of the desktop. If they hadn't changed the APIs for making apps on WinCE three times in short succession, they might have been king of mobile as well. Developers! Developers! Developers! Whoops!