Bill Gates Predicts 'The Age of AI Has Begun' (gatesnotes.com) 221

Bill Gates calls the invention of AI "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," predicting "Entire industries will reorient around it" in an essay titled "The Age of AI Has Begun." From the essay: In my lifetime, I've seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows.... The second big surprise came just last year. I'd been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn't been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you'll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: "What do you say to a father with a sick child?" It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:
  • "Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."
  • "Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."
  • "I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you're losing interest, and understand what kind of motivation you respond to. It will give immediate feedback."
  • "AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it's hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way."
  • AI will "help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit. I expect that there will be a lot of innovation in this area.... AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment."


Comments Filter:
  • I think Rosie is a long way off yet.

    • Rosey as it were.

      https://www.youtube.com/watch?... [youtube.com]

    • We'll have a Cyberdyne Systems Model 101 long before we get a sassy maid.

    • by l810c ( 551591 )

I bought my wife a robot vacuum cleaner for Christmas. We (or maybe just me) named her Rosie. Rosie is awesome. With a couple of dogs and cats, she takes care of business: pet hair and other debris. Rosie will park herself, and we can empty her bin into the garbage.

    • Hush. As the BOFH would say, the preaching for ChatGPT and friends has been done in secret, and received as gospel: all that's left is for the board to learn the error of their ways, in 3 years, when they find that they've been taken for a ride (again), and reach for the shovel to bury yet another skeleton in an unmarked grave.

      Meanwhile, we, as fellow BOFHs, should stop complaining about the "AI Revolution," and find a way to profit from the scam. I mean, really profit from it; I'm not talking about simply

  • by b0s0z0ku ( 752509 ) on Sunday March 26, 2023 @06:09PM (#63401537)
    The reason why we point and click is that it's often faster to make a choice via pushing a button than by typing or speaking a request. Non-verbal methods of communication actually work better for a lot of applications.
    • by chas.williams ( 6256556 ) on Monday March 27, 2023 @06:52AM (#63402756)
      Please tell me your password. Speak clearly into the microphone for those around you.
  • 640kb (Score:5, Insightful)

    by e065c8515d206cb0e190 ( 1785896 ) on Sunday March 26, 2023 @06:19PM (#63401559)

    ... is definitely not enough for these large language models.

Btw, let's define AI first. Doing a glorified linear regression is *not* AI. Deep learning is a very impressive way to get to models that massively overfit data, and can mimic humans extremely well. Enter ChatGPT. However, this is *not* intelligence. ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.

DISCLAIMER: don't get me wrong, ChatGPT is impressive. Just not A(G)I. And Bill Gates should know better.
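The contrast being drawn here is worth making concrete: exact arbitrary-precision arithmetic is trivial for conventional code, while an LLM only predicts plausible digit sequences. A minimal Python sketch (the operands are arbitrary examples):

```python
# Conventional code computes big products exactly, every time;
# an LLM predicts likely tokens and can get long arithmetic wrong.
a = 4829          # a 4-digit number
b = 57163         # a 5-digit number

# Python integers are arbitrary precision, so this is always exact.
product = a * b
print(product)    # 276040127
```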

    • A good take on the current state of things: https://www.samharris.org/podc... [samharris.org]
    • Just as "artificial leather" is not real leather, so too "artificial intelligence" is not real intelligence.

      The distinction I am making is purely semantic, but also completely relevant. AI is a very old and very broad term that includes a wide range of ways in which a computer can be made to do things that otherwise usually require intelligence to do. Like play chess. Or pass a biology test. The claim made by the coders is not that they have built an authentically intelligent machine. That's not what t

      • I think the people who work towards AGI would disagree with you. They're trying to create something *really* intelligent, that is not human. Artificial meaning "not naturally grown human" in this context. Not "faking it" like "faux leather".
        • "AGI" is a different acronym than "AI." When someone says "AI" they aren't claiming "AGI."

          You don't have to take my word for it. The simple, easy to understand, and in-popular-use definition of "Artificial Intelligence" is right here in the dictionary [merriam-webster.com]:

          1 a branch of computer science dealing with the simulation of intelligent behavior in computers
          2 the capability of a machine to imitate intelligent human behavior

Notice the words "simulation" and "imitate." Much like simulated leather, or imitation leather.

Agreed re the distinctions you're making. At the risk of nitpicking, though, the general public only knows about one AI concept, and it encompasses AGI. And that same general public could be fooled into thinking that ChatGPT is an attempt at AGI (which they know as AI). Which is why I see statements like Bill Gates' as irresponsible, given how revolutionary AGI would be.
        • by edis ( 266347 )

          Or you should find faux intelligence acceptable and to the point.

      • You are right about the ever moving target. Fundamentally, there is no adequate definition of intelligence. For many, it basically boils down to "can do stuff machines can't", in which case (strong) artificial intelligence is impossible, not due to any limit of the machines, but by definition.

        Because of the inability to define precisely what intelligence is, we can fall back on the "as if" test. Treat it as a black box, and if it behaves as if it is intelligent then for all practical purposes, it is intelli

      • Re:Still qualifies (Score:4, Insightful)

        by unimind ( 743130 ) on Sunday March 26, 2023 @07:54PM (#63401813)
        I agree with this. The problem is, even if the marketers are not claiming this is "strong AI", they are not making any real distinction. And because the general population doesn't understand that there is one and tends to think of AI as synthetic intelligence, they will likely continue to try to interact with AI as if it were a synthetic human, expecting it should behave like some kind of more perfect version of themselves. And this all while the companies producing it control the data and profit from their delusion. For anyone who thinks "Big Tech" already has too much power / control over user data, if Gates' predictions hold true, it's going to get much worse before it gets better.
        • Yeah. The emerging world requires a higher level of intelligence than the world of yesteryear did. People who don't understand tech are going to be left behind. It is unfortunate but natural selection has never been known for its kindness.

        • Re:Still qualifies (Score:4, Interesting)

          by AmiMoJo ( 196126 ) on Monday March 27, 2023 @04:08AM (#63402621) Homepage Journal

          People over-estimate every new technology. Look at VR and how Facebook went all in on it, expecting everyone to move their lives into the Metaverse. Of course it didn't happen and never will.

          Same is already happening with AI. When image generation from prompts first became available, people predicted the end of human creativity. Most artists would be out of a job within months. In reality we have already reached the point where people can spot AI art a mile off and due to saturation have lost interest in it. The real application for it is as an assistance tool, e.g. to generate a background for some human art, which then gets touched up.

      • The problem with calling anything other than AGI "AI" is that nothing less is actually intelligent. It's just a system that relates data to other data. It doesn't have sanity checking because it doesn't have a mind to be sane or insane. It's just an algorithm for shuffling data. It will produce an identical result every time given the same starting conditions, and if it didn't, it would be worse and not better.

        Most of this stuff is just "machine learning" [algorithms] and the "learning" isn't meant to imply

      • Just as "artificial leather" is not real leather, so too "artificial intelligence" is not real intelligence.

        The distinction I am making is purely semantic, but also completely relevant.

It seems that it is impossible for most people to understand, even for people who are otherwise technically-minded, but AI is a class of algorithms. It has nothing at all to do with creating a machine that is intelligent.

        Users expect that, though.

"Semantic," btw, means "meaning." "It's just the meaning" shouldn't actually minimize anything. What the words mean is actually important; otherwise you can just grunt or mew.

    • Re: (Score:3, Insightful)

Gotta agree. With respect to medical insurance, Bill Gates is pretty much dead wrong. AI can't make medical claims easy if the people who approve the claims change the rules with no notice. Which they will, because claims management is a constant push and pull over who keeps the premium dollar, not a high school history paper.

      I recently heard a VP at Geisinger say that their efforts to use current "AI" in charting had also not yielded meaningful efficiency gains.
      That's because the humans in

      • by narcc ( 412956 )

        LLMs could well save us software folks a lot of frustrating debugging time

        I would seriously doubt that. LLMs don't have any capacity similar to 'understanding' or 'reasoning'. They can't analyze a problem.

        for a doctor, an LLM trained mostly or entirely on medical literature is vastly more useful than one trained on the internets.

That's even more dangerous, as it gives you a false sense of security. LLMs will, as a natural consequence of their operation, say things which are false. You simply can't trust the output.

        You might not remember this, but back in the 80's, expert systems were the hot thing in AI that was going to revolutionize medicine, and they wouldn't flat-out lie to you.

And unlike LLMs, humans can actually reason and intuit, so when faced with unfamiliar inputs they can in fact exercise judgment and common sense. LLMs cannot.

        That's correct. T

        • I would seriously doubt that. LLMs don't have any capacity similar to 'understanding' or 'reasoning'. They can't analyze a problem.

It's already happening. A few days ago I had some absurdly long SQL queries with syntax errors. I asked ChatGPT to fix them, and done: it handed me the fixed queries, which certainly saved me time and a headache. I was trying to track down a weird website behavior, described it to ChatGPT, and the first suggestion worked. In practical terms it can certainly analyze simple problems and e

          • by narcc ( 412956 )

            I don't even know where to begin... The things you think it's doing are not the things that it is actually doing.

            A few days ago I had some absurdly long SQL queries with syntax errors

            Simple errors in syntax make sense at least, as it operates on probability.

            In practical terms it can certainly analyze simple problems and even take action on them.

            No, it can't. That's simply not how these kinds of programs work.

      • Gotta agree. I can say with respect to medical insurance, Bill Gates is pretty much dead wrong.

        Currently, in the US, this is true. It is less true in a lot of other countries. And if we eventually adopt medicare-for-all or some other similar system, perhaps medical billing will become vastly simplified and not be closely connected to approval decisions.

        The basic problem is the same for doctors as for drivers. The human is accountable

        10 years ago this was still a (silly) debate, but it is not one now. Liability insurance is accountable, and it doesn't really matter that much who is required to buy it as long as somebody is buying it.

    • by narcc ( 412956 )

      Doing a glorified linear regression is *not* AI.

      Linear regression, glorified or otherwise, is absolutely AI. It's just not what you think the term 'AI' should mean.

      Deep learning is a very impressive way to get to models that massively overfit data

      Overfitting is a bad thing.

Enter ChatGPT. However, this is *not* intelligence.

      This is correct.

ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.

      You've hit at the heart of it. There is nothing even remotely like 'understanding' or 'reasoning' in models like these. The output is certainly impressive looking, as long as you don't look too closely at it. It'll be interesting to revisit these threads once the hype dies down.

    • Re:640kb (Score:5, Informative)

      by Anonymice ( 1400397 ) on Sunday March 26, 2023 @09:31PM (#63402007)

      Gates didn't make any claims about the current models being AGI, and neither has anyone else.
I believe you're falling into the same trap as many other critics. You're conflating the hyperbole and exaggerations used by marketers to ride the wave and sell their products with the real news and developments being announced by engineers and researchers.

      Btw, let's define AI first.

      Let's!

      Cambridge: [cambridge.org]
      Intelligence noun

      • The ability to learn, understand, and make judgments or have opinions that are based on reason.

      Merriam-Webster: [merriam-webster.com]
      Intelligence noun

      • The ability to learn or understand or to deal with new or trying situations
      • The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

If you've spent much time with the models, you'd be aware that when they return a mistaken answer, you can point out the mistakes (without giving the answer) and the model will correct itself. If you trip it up with a logic puzzle, you can ask it to rethink and try giving you the answer again, but this time explaining its logic step by step. Oftentimes it will then "realise" where it tripped up & give you the correct answer. Follow up with a similar logic puzzle and it will have "learnt" not to make the same mistake.
      It also shows signs of having Theory of Mind. The ability to understand that what you believe to be reality might differ from actual reality, and predict what you probably believe given your own mistaken conclusions.

      It's a bit like interacting with a very well informed child.

ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.

Correct. Another example: "What are the fourth and sixth words of this sentence?" GPT-3.5 (ChatGPT) doesn't understand maths; it treats it linguistically.
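For comparison, that word-index question is trivially deterministic for ordinary code, which is what makes it a good probe of whether the model is computing or pattern-matching. A quick Python illustration:

```python
# Ordinary code answers the word-index question exactly;
# a purely linguistic model has to guess from token patterns.
sentence = "What are the fourth and sixth words of this sentence?"
words = sentence.split()

# Indexing is zero-based, so the 4th and 6th words sit at 3 and 5.
print(words[3], words[5])  # fourth sixth
```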

      GPT4 however, which has just been released to the public over the past few days, is multimodal. It knows how to calculate. It can take visual cues from uploaded images and other visual media and understand their context.
      This isn't the hidden integration of differing models, which has previously been used to try to emulate such "intelligence". It's the same model.

      I think what Gates is getting at is that the technology has suddenly seen an exponential jump and is advancing at an astounding rate. Predictions that were made only a few months ago of milestones we can expect for "some time over the next decade" have been blown away in matters of days and weeks.
      ChatGPT was released barely 5 months ago, and it's already seeing widespread use amongst the general (non-technical) public.

      Given the recent explosion in advancements and general adoption, I'm with Gates. We're on the cusp of a new age, one that could have a bigger impact on humanity than anything we've seen before.

      For anyone that's interested in the topic, I highly recommend checking out some of the videos from AI Explained [youtube.com]. It's really mind-blowing (even though I thought I'd been following the topic closely for years now).

    • by godrik ( 1287354 )

      Come on mate. Not that discussion again.
Artificial intelligence has been fairly loosely defined for decades. In the 80s, AI was already a mash of statistics, signal processing, and operations research. And people had moved on from AGI.
Besides novelists and film makers, no one seriously talks of AI to mean strong AI. Everyone pretty much understands it in terms of sensing, modeling, deciding, acting.

Your view on this does raise the question: if ChatGPT is not "intelligence", then what is?

In the end, are we humans not merely extrapolating data that we've gathered via our senses?

What magic sauce is required to call behavior "intelligent"?

ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.

      You've basically described how 95% of the population does maths.

  • by rtkluttz ( 244325 ) on Sunday March 26, 2023 @06:23PM (#63401561) Homepage

    I'll use and enjoy said technology as long as the AI behind it exists solely on my devices and 100% in my control only. If it goes to the cloud, then fuck no.

  • by battingly ( 5065477 ) on Sunday March 26, 2023 @06:30PM (#63401583)
    The only thing worse than dealing with a faceless bureaucracy is dealing with a remorseless AI.
    • The only thing worse than dealing with a faceless bureaucracy is dealing with a remorseless AI.

      ...owned by a faceless bureaucracy

      • The only thing worse than dealing with a faceless bureaucracy is dealing with a remorseless AI.

        ...owned by a faceless bureaucracy

        Worse: Owned by an international corporate conglomerate who doesn't even bother with bureaucracy anymore, they repeat the AI's decision.

    • It's rather interesting that one of the envisioned applications is filling out insurance claims. You know that an AI is going to be processing those claims at the other end. Since there won't be any substantial burden associated with forms, you can expect these claims to grow in size and complexity to meet the available processing power.
How can you predict something that has already begun? That would be like saying "I predict I've gotten into my car." It's literally talking in the past tense about something that's observable. And half the stuff in the "predictions" already exists in some capacity, and has existed for at least a decade now. With that said, last I checked, the AI that reads scans for cancer is better at spotting them than humans, but also produces more false positives.
    • by Entrope ( 68843 )

      In this case, the prediction is shorthand for people in the future -- probably decades from now (assuming any survive) -- will look backwards and declare that (a) there was, or is, an "age of AI" and (b) it started by (and continued past) this point in time.

he missed the prediction that people like him will get even richer and the gap between the haves and have-nots will become even larger.

Yes, I know it's hard to see from the USA, but the data is unambiguous: the application of the lessons of economics means that the really poor are a lot less poor these days. There's no reason to think that won't continue.

      https://data.worldbank.org/ [worldbank.org]

      • What does the poor being less poor have to do with the gap between the poorest and the richest?

        • In international terms the gap is closing. The amazing achievement of the Chinese in raising the living standards of hundreds of millions is outstanding and unprecedented. And much of the 'wealth' of the richest is in stock market values rather than anything real. Sadly there is no data for the world overall - at least not from the world bank - but this table allows you to look at trends in individual countries.

          https://data.worldbank.org/ind... [worldbank.org]

    • Failure to tax and redistribute the profits fairly is entirely a political choice. Don't blame the AI, if you put the AI in charge of the government it'd probably make a more optimal decision about redistribution to maximize economic potential.

people use graphical interfaces because they don't want to type things. i'm not sure people (particularly older citizens) would like to type to the OS. and a lot of people don't like to talk to their computers either.

    I predict chatgpt will be integrated with windows in less than a year.

    having said the above, imagine if you could give natural english instructions as a script.

    for example "install latest nvidia driver and disable all optional components" or "change my dns to 8.8.8.8"

    it could allow non te

    • I predict chatgpt will be integrated with windows in less than a year.
      having said the above, imagine if you could give natural english instructions as a script.
      for example "install latest nvidia driver and disable all optional components" or "change my dns to 8.8.8.8"

      Given the output I've seen from ChatGPT, and how much effort you have to go to in order to get it to give accurate answers, I wouldn't trust the software to get either of those things right. Maybe someday, but not soon. There is no sanity checking, because there is no sanity, and that means you have to be the sanity check yourself.

      • Having it show what command it's about to run and ask for confirmation is still useful. There's a lot of us who haven't memorized every command line parameter of everything who still understand enough to recognize if it looks right or not.
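The confirm-before-run idea above can be sketched in a few lines. Here the `suggest_command` lookup is a hypothetical stand-in for an actual model call, and the `resolvectl` invocation is just an illustrative systemd-resolved command; the point is only that a human approves before anything executes:

```python
import subprocess

def suggest_command(request: str) -> str:
    """Hypothetical stand-in for an LLM that maps a plain-English
    request to a shell command. A real system would call a model."""
    canned = {
        "change my dns to 8.8.8.8": "resolvectl dns eth0 8.8.8.8",
    }
    return canned.get(request.strip().lower(), "")

def run_with_confirmation(request: str) -> bool:
    """Show the proposed command and require explicit approval,
    since the suggestion may be wrong."""
    cmd = suggest_command(request)
    if not cmd:
        print("No suggestion for that request.")
        return False
    answer = input(f"About to run: {cmd!r} -- proceed? [y/N] ")
    if answer.strip().lower() == "y":
        subprocess.run(cmd, shell=True, check=False)
        return True
    return False
```

Even this toy version captures the parent's point: you don't need to have memorized every flag to recognize whether a proposed command looks plausible before approving it.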

    • You have two choices:
      1. Specify your task completely and precisely enough for the AI to do it. You might even need something better than English for that. Uh oh, you've just become a programmer again.

      2. You keep trying to describe what you want in a very long conversation with the A.I. After going through that arduous process a few times, you eventually learn to be more complete and precise to save time and trouble. Uh oh, you've just become a programmer again.

  • by Rosco P. Coltrane ( 209368 ) on Sunday March 26, 2023 @06:59PM (#63401651)

    All this marvellous technology will be in the hands of giant monopolistic concerns who will be so large and so entrenched in our society with their technology that the law will not apply to them anymore, and they will use AIs primarily to make as much money out of people as possible and morals be damned.

    Oh wait, it's already the reality...

    • Every time this is predicted, it never actually happens and in practice most of the poor do benefit.

      • by narcc ( 412956 )

        Historically speaking, when wealth inequality gets this bad, certain people tend to ... er ... lose their heads.

        The real question is why you're so hot to simp for the ultra wealthy?

        • There's a plenteous supply of bread and the circuses offered by the Internet make the efforts of the Roman Empire look paltry.

    • I live in government-certified poverty, and like everyone else I have access to talk to a variety of AIs (too many varieties for monopolies to be possible) and use them to solve tasks for me. Of course like anything else it'll be more profitable and powerful for the rich because of the tasks they have for it, but unlike most technology which is only available to the rich for a long time, AI seems rather more egalitarian.

      • I have access to talk to a variety of AIs {...}, but unlike most technology which is only available to the rich for a long time, AI seems rather more egalitarian.

        They are making it freely available for now.
Wait until your workflow is 100% dependent on a few such "freely accessible AI tools", and the companies providing access to them buy each other and consolidate; then you can expect everything to suddenly be paywalled and your wallet milked dry.

Unless there is more work on making it possible to self-host open-source tools (think Stable Diffusion, think the recently leaked large language model that people have been downsampling to run on Raspberry Pis, etc.)
        And even

  • Not even one election cycle ago people were rolling their eyes at Yang and his platform of UBI. The media tried to ignore him - the same media now awed by AI. Remember:
    1) You need to vote for UBI, it won't arrive on its own.
    2) You need to vote for it before you are homeless - can't vote after you are homeless.
    • 2) You need to vote for it before you are homeless - can't vote after you are homeless.

      In California you explicitly can do so. In other states, maybe not.

    • 1) No, I don't need to vote for UB
      2) No, because UBI doesn't guarantee a home

You'll lose your vote once you accept UBI because it's a system that, over time, demands more and delivers less.

PS: AI isn't taking over and the world is not ending. It still takes an army of workers to build an iPhone...
The delusion of the age of AI has begun, though. It will be the errors that AI will always make that will make it a problem, and a destructive and dangerous one. There will always be a nagging uncertainty that comes from never really knowing whether the AI is making an error in any particular circumstance, because it will never be clear how it came to its decision.
  • COBOL anyone? (Score:5, Interesting)

    by Todd Knarr ( 15451 ) on Sunday March 26, 2023 @07:19PM (#63401703) Homepage

    Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English....

    They said the same thing with the introduction of COBOL.

    The flaw with the idea is the same flaw the military tries to make their officers aware of with an exercise: the officer has to write a set of orders to carry out a mission. Those orders are then handed to a unit to carry out, that unit having been instructed to figure out every way they can sabotage the mission and cause it to fail while still following all the orders they were given. If you can't write a clear, unambiguous set of instructions for the computer to follow then it doesn't matter how you give them to the computer, things will go sideways when it doesn't do what you thought it was going to do.

    • by gweihir ( 88907 )

Indeed. And that is why, except for really generic things, "natural language" does not cut it. Natural language only works somewhat well if the target is an (A)GI that understands enough to ask intelligent questions when things are not clear. To be fair, many humans fail at this as well.

    • by suutar ( 1860506 )

We had an exercise like that in a college English course: make something out of Legos, then write instructions telling someone else how to build it. I still don't see how they made what they did from our instructions, and that wasn't even malicious.

    • Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English....

      They said the same thing with the introduction of COBOL.

      The flaw with the idea is the same flaw the military tries to make their officers aware of with an exercise: the officer has to write a set of orders to carry out a mission. Those orders are then handed to a unit to carry out, that unit having been instructed to figure out every way they can sabotage the mission and cause it to fail while still following all the orders they were given. If you can't write a clear, unambiguous set of instructions for the computer to follow then it doesn't matter how you give them to the computer, things will go sideways when it doesn't do what you thought it was going to do.

      Yep.

      If only we had a symbolic language we could use to exactly specify to the computer what we want! Oh ... wait.

  • "Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."

Which makes Bill Gates one mediocre futurist. Why on earth would I be "writing" a request to my computer? And why in "plain" English? AI is progressing towards symbolically deciphering languages. Combine that with some rudimentary form of language interpretation, and I won't be typing out requests in English; I'll be verbally communicating my request to the computer, in whatever native language I was raised to speak.

  • Are configured and controlled and owned like a smartphone (controlled by Apple or Google) then NO! I refuse to have one
  • by Walt Dismal ( 534799 ) on Sunday March 26, 2023 @08:09PM (#63401839)
    Bill also added "And we'll know the AI is advanced, when it learns to cut off a competitor's air supply, establish monopolies, and cheat consumers!" He then began stuttering and asked someone to change his battery.
  • The current hype is by far not the breakthrough everybody without actual AI knowledge thinks it is. It is an incremental step. And not a large one. Any of those predictions will take several decades at the very least to happen.

    • There will never be true AI. There will however be idiots that connect these current "AI" models to infrastructure causing all sorts of chaos.
    • by narcc ( 412956 )

      It is an incremental step. And not a large one.

      Very true.

      Just don't think that we can get to where "those predictions" are by the slow accumulation of incremental improvements. No matter how good you get at making ladders, you're never going to reach the moon.

  • "Predicts" is a less than ideal descriptor for sentences in the past tense.
What about the Internet, Bill? Or did you leave that out b/c you didn't see it coming and it bit you in the ass?

    Also, wrt AI, uhmm, unless you've been living under a rock, file this under Captain Obvious.

Half a year after everybody else started hyperventilating about AI bots, this guy finally uses his predictive power. It has been so great in the past... It is just amazing how everybody listens to a dude with a big wallet. Especially if he makes predictions that are platitudes.
  • Christ Bill, haven't you done enough to screw the human species already? You've set technology back 25 years or more with your monopolistic bullshit. Now you want to be seen as some sort of prophet? Leave people alone already, you've done enough damage. Your dealings with Jeff have made you cringe-worthy too. Yuck.
  • by Eunomion ( 8640039 ) on Monday March 27, 2023 @02:10AM (#63402505)
    Sorry, Bill, but chiming in to parrot zeitgeist ain't that.

    AI is not what most people think it is; not even most of the people promoting and/or warning against it.

    It's dangerous like a flood is dangerous. But it's only dangerous like a person is dangerous, if a dangerous person is the one unleashing the flood.
  • by Tom ( 822 )

    The guy who missed the Internet and changed his book to make his predictions seem more on point?

    Apparently, when you're old and rich, stating the obvious can get you headlines. Darn.

Meanwhile, anyone who's seen the previous cycles of AI knows that there is always a hype, followed by overinflated expectations of the future, then the technology matures and people realize it's just another tool and doesn't magically solve all problems, then it becomes a standard tech thing and isn't even called "AI" anymore,

  • "The obvious mathematical breakthrough would be development of an easy way to factor large prime numbers."

    Bill Gates, The Road Ahead.

  • It looks a bit long. Perhaps Bill asked ChatGPT to write it for him. Or perhaps he has a team of personal agents. Who knows. I suppose a guy could be dead, and his AI personal agent could still pretend he's alive.
  • by cascadingstylesheet ( 140919 ) on Monday March 27, 2023 @05:51AM (#63402705) Journal

    "Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."

    One issue with that is that the "personal agent" will be run by the same people who don't want me to read certain news stories, who don't want me to search for certain things, who don't want me to think certain thoughts, etc.
