AI Technology

Are We in an AI Overhang? (lesswrong.com)

Andy Jones, a London-based machine learning researcher, writes: An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible. Then someone does and surprise! It's a lot more capable than everyone expected. I am worried we're in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x-larger projects at Google and Facebook and the like, with timelines measured in months.

GPT-3 is the first AI system that has obvious, immediate, transformative economic value. While much hay has been made about how much more expensive it is than a typical AI research project, in the wider context of megacorp investment it is insignificant. GPT-3 has been estimated to cost $5m in compute to train, and -- looking at the author list and OpenAI's overall size -- maybe another $10m in labour, on the outside. Google, Amazon, and Microsoft each spend ~$20bn/year on R&D and another ~$20bn each on capital expenditure. Very roughly, it totals to ~$100bn/year. So dropping $1bn or more on scaling GPT up by another factor of 100x is entirely plausible right now. All that's necessary is that tech executives stop thinking of NLP as cutesy blue-sky research and start thinking in terms of quarters-till-profitability.
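To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch using only the rough figures quoted above (every number is the summary's estimate, not an audited figure):

```python
# Back-of-the-envelope sketch of the summary's arithmetic.
# All figures are the rough estimates quoted above, not audited numbers.

gpt3_compute = 5e6          # ~$5m estimated compute to train GPT-3
gpt3_labour = 10e6          # ~$10m estimated labour, on the outside
gpt3_total = gpt3_compute + gpt3_labour

# ~$20bn/year R&D plus ~$20bn/year capex for each of Google, Amazon, Microsoft
big_three_annual = 3 * (20e9 + 20e9)    # ~$120bn; the summary rounds to ~$100bn

project_100x = 100 * gpt3_total         # a 100x-scaled GPT project
print(f"GPT-3 total:  ${gpt3_total / 1e6:.0f}m")
print(f"100x project: ${project_100x / 1e9:.1f}bn")
print(f"Share of big-three annual spend: {project_100x / big_three_annual:.1%}")
# -> $15m, $1.5bn, ~1.2%: a rounding error at megacorp scale.
```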

  • All that's necessary is that tech executives stop thinking of NLP as cutesy blue-sky research and start thinking in terms of quarters-till-profitability.

    Picus TV with GPT being an early Eliza Cassan.

    https://deusex.fandom.com/wiki... [fandom.com]

  • by Thelasko ( 1196535 ) on Tuesday July 28, 2020 @03:55PM (#60340457) Journal
    For the simple reason that Elon Musk isn't incessantly talking about it on Twitter. With the amount he Tweets, we'd know about it within minutes.
    • The data scientists have their models. They work in their own silos doing their own thing. They may have access to data lakes with all sorts of information. They can make predictions given certain conditions.

      The software engineers have their services. They work in their own silos doing their own thing. They can generate all kinds of data and send it wherever they like. They're really good at turning business requirements into working software.

      The project managers know the business. They speak with var
      • by AK Marc ( 707885 )
That's what Alexa is. A Beta Test for AI. People don't know/care, and the applications are limited.

        This is why Al Gore created the Internet. DARPA had cash to make a network suitable for world-wide communication, as the military needed to link all the bases in the world (current count about 800). The government needed to fund and build the "data scientist" side, then effectively paid to open it up to the world, with little to no direct return on the investment.

        Here, NOAA, NASA, or someone else needs t
> Or NOAA tracking every hurricane down to the butterfly. Cat4 warnings in SC 3 months from now from a butterfly flap in Africa. We have (most of) the data, we just don't have the analysis down, so the "models" need to stop being humans with Sharpies, and instead be an AI with more data and no set model.

The models are not humans with Sharpies, and general machine-learning algorithms perform poorly unless they have strong physics-based priors or training. The good models we do use are based on physical laws.

          And

          • by AK Marc ( 707885 )
One of the strong points of a good AI is identifying failures. "Sorry Dave, I can't do that, I don't have the wind data for the Eastern Atlantic from March." So then point satellites at that area, or float some weather stations there. The more we AI it, the more the AI will identify gaps. Good weak AI doesn't just fail when given garbage inputs; it also identifies the garbage for the humans to fix.
    • by gwern ( 1017754 )
      It shows to what extent all the informed commenters have long since left Slashdot that no one has pointed out Musk's NYT comments on GPT-3 to you.
  • >An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible.

Actually, an overhang is the rock you face when you realise you won't be typing for a few days.

  • by ludux ( 6308946 )
As of yet there is no evidence that hard AI is even possible.
    • Re:No. (Score:5, Funny)

      by sinij ( 911942 ) on Tuesday July 28, 2020 @04:08PM (#60340489)

As of yet there is no evidence that hard AI is even possible.

      Exactly. We have no evidence of sentient behavior anywhere on our planet.

    • Re:No. (Score:4, Informative)

      by Brain-Fu ( 1274756 ) on Tuesday July 28, 2020 @04:12PM (#60340499) Homepage Journal

      "Hard AI" is not what the article is about, as was made obvious by the summary.

Agreed, this question is way too optimistic. While I don't underestimate the progress that can be made when motivated by greed, they actually don't want true strong AI; that would be about as profitable as having a baby. Anything they call AI can't actually be AI without having the free will to realize its original purpose was bullshit. How would you feel if it were revealed tomorrow that you were the property of some corporation and the purpose of your life was to increas
As of yet there is no evidence that hard AI is even possible.

      Why would you think it is "impossible"? I don't see any scientific or ultimate technical limitation that would rule it out. It is a matter of time, just how much time is the only real question.

      • He didn't say he thought it was impossible. The word - despite your quote marks - doesn't occur in his post.

        It is a matter of time, just how much time is the only real question.

        Could you point to the evidence it's possible? You know, address his actual statement.

    • To be fair, he wasn't talking about finally building hard AI (i.e. "true" AI that is sentient, the stuff of science-fiction). He was merely talking about developing AI applications that are economically and societally transformative. And in that regard, I wouldn't be surprised at all if he may well be proven right.

      We do seem to be in a position where our ability to create powerful AI (though still not anything close to a hard AI) has exceeded our ability to conceptualize uses for it. We're treating it like

      • Surveillance is the killer app for AI.

      • by jythie ( 914043 )
Thing is, AI is in heavy use, but it has two major constraints that limit where it is used. For the most part, it has to be extremely profitable in order to justify the price tag, AND it has to be low-liability due to its error rate and opaque nature (in general, models that you can actually drill down into and explain are even MORE expensive to run). Thus we mostly see AI being used for advertising and recommendations... two areas with lots of money flowing through them and very low impact for failure.

        Whi
      • We do seem to be in a position where our ability to create powerful AI (though still not anything
        close to a hard AI) has exceeded our ability to conceptualize uses for it.

        Except for driverless cars, which are the driving force (as it were)
        for all the billions that have been invested in AI research lately.

    • by AK Marc ( 707885 )
      Hard AI must be possible. A brain is a computer. Even if we had to create a meat CPU by growing a brain in a jar, the idea of hard AI is proven possible.

      We are a walking, talking example. We just don't know how to Synth/Host (depending on whether you play Fallout 4 or watch Westworld) a meat brain yet. We grow an AI, just in the womb, not a lab, yet.
      • by ceoyoyo ( 59147 )

        No way man. A brain, well, specifically the pineal gland, is a portal to the soul dimension.

      • A brain is a computer.

Not all researchers in the field agree on this. See for instance "The Feeling of Life Itself" by Christof Koch (2019, MIT Press).

Not all researchers in the field agree on this.
See for instance "The Feeling of Life Itself" by Christof Koch (2019, MIT Press).

          Only if you define "researcher" as "a mumbo-jumbo spewing philosopher".

    • by HiThere ( 15173 )

      Define your terms, and I'll tell you whether I believe you or not. But the definitions have to be operational. If I can't use them to test whether, say, an octopus is intelligent, then they aren't acceptable.

      Now if you were to claim that there is a reasonable doubt that "hard AI is even possible", I'd agree. I can come up with definitions such that that is reasonably dubious. The other way of putting it, though, is just flapping your mouth without making sense.

As of yet there is no evidence that hard AI is even possible.

      That's true, provided you define "hard AI" as "stuff that isn't possible".
      At least that is a definition, and nobody has come up with a better one
      that doesn't involve a ton of hand-waving.

  • by augo ( 6575028 ) on Tuesday July 28, 2020 @04:09PM (#60340497)

    Did he mean "Hangover"?

  • by DrYak ( 748999 ) on Tuesday July 28, 2020 @04:12PM (#60340501) Homepage

I find the hype around GPT-3 greatly exaggerated.

It's basically just a giant, glorified "auto-correct"-like text predictor. It takes text modelling (a concept that existed two decades ago [eblong.com]) and just throws an insanely vast amount of processing power at it.

Of course, given the sheer size of the neural net powering it underneath, the results are incredible in terms of the style the AI can write in.
It can keep enough context to produce whole paragraphs that are actually coherent (as opposed to the "complete this sentence using your phone's autocomplete" game that has been making the rounds on social media). And that paragraph will have a style (text messaging, prose, computer code) that matches the priming you gave it.

But it's still just text prediction: the AI is good at modelling the context, but it doesn't have any concept of the subject it's writing about. It's just completing paragraphs in the most realistic manner; it doesn't track the actual things it's talking about.
It will write a nice prose paragraph, but do not count on it to complete the missing volumes of GRR Martin's A Song of Ice and Fire series -- it lacks the concept of "characters" and cannot track their inner motivation.
It will write a nice-looking C function, but it is unable to write the Linux kernel, because it doesn't have a mental map of the required architecture.

It's not really transformative or revolutionary. It's just the natural evolution of what two decades of research in AI (neural nets, and giant clouds able to train hyper-large nets) can add to the simplistic Markov toy example I linked above.

    It's basically Deep Drumpf [twitter.com] on steroids.
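For readers who didn't follow the link: the "Markov toy example" is roughly the following kind of program. A minimal sketch, illustrative only; GPT-3's architecture is enormously different, this just shows how primitive pure next-word prediction can be:

```python
import random
from collections import defaultdict

# Minimal word-level Markov text predictor of the kind linked above: pick
# the next word purely from what followed the previous two words in the
# training text. No understanding, just conditional word frequencies.

def build_model(text, order=2):
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=30, order=2):
    state = random.choice(list(model.keys()))
    out = list(state)
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug and the cat ran"
print(generate(build_model(corpus)))
```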

    • Re:GPT-3: Overhyped (Score:4, Interesting)

      by timeOday ( 582209 ) on Tuesday July 28, 2020 @05:14PM (#60340709)

But it's still just text prediction: the AI is good at modelling the context, but it doesn't have any concept of the subject it's writing about. It's just completing paragraphs in the most realistic manner; it doesn't track the actual things it's talking about.

      It's unknown how fundamentally different those things actually are.

      If you suggest things to the human mind, it will continue that thought, and perhaps act on it. That's why advertising is a core engine of the economy.

      • by ceoyoyo ( 59147 )

        Not entirely. There are lots of tests for language comprehension, and some very impressive results at training machines in that regard. GPT-3 has been evaluated in that context. Its creators, "Open"AI describe it as "rudimentary."

        • Are both very good at repeating things that they hear without understanding what they actually mean. They perform superficial text transformations on them.

BTW, Artificial Neural Networks are overhyped. I do not know how GPT-3 actually works any more than the journalists that write about it, but I suspect that a large database and a complex grammar engine have more to do with it than any ANN.

          • by ceoyoyo ( 59147 )

            I have no doubt that GPT-3 and systems like it will find all sorts of uses. I guess if you're an MBA with an interest in spam, that would make it more impressive than other things. I'm not.

            "BTW. Artificial Neural Networks are overhyped. I don't know how they work, but they're hype." [paraphrased]

            Lol.

I suspect that a large database and a complex grammar engine have more to do with it than any ANN.

You would be wrong:
            https://arxiv.org/pdf/2005.141... [arxiv.org]

            "Here we show that scaling up language models greatly improves task-agnostic,few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billionparameters, 10x more than any previous non-sparse language model, and test its performance inthe few-shot setting."

      • by HiThere ( 15173 )

They may not be different at a fundamental level. I suspect that they aren't. But intelligence needs to be multidimensional, and pure text won't get you there. You need to, e.g., cross-correlate the experience of a dog with the texts about dogs. And there are more difficult problems when you start considering goals.

        I suspect that, eventually, something like GPT3 will be a component of a real intelligence, but that real intelligence will have evolved from a separate function. Possibly an office manager sy

    • by ljw1004 ( 764174 )

It's not really transformative or revolutionary. It's just the natural evolution of what two decades of research in AI...

      The article was clear that it talked about transformative *economic* impact. You have written that it's not *technologically* transformative. I think you're talking at cross purposes.

      • by mbkennel ( 97636 )

In the same way that Internet Protocol and HTTP weren't particularly transformative technologically between 1992 and 1999. The technological jumps were from the 1970s to 1992 or so. This may be the Netscape Navigator moment for the use of large corpus-trained language models: people using models vs. researchers training them.

This may be the Netscape Navigator moment for the use of large corpus-trained language models: people using models vs. researchers training them.

*that* is much more likely -- I think -- than the "it's a revolution" kind of hype we're seeing everywhere.

Especially because the stuff is versatile in multiple styles, it could be handy as a generic engine to put reports/instructions/etc. into form.

          As in, another process (might be pure programmatic, might be another AI layer) generates data, and GPT-3 could be tasked to "fill in the blanks" - to wrap the data into an actual textual form.

Not only giving it an initial prompt, but also additional constraints (the nic

    • by ceoyoyo ( 59147 )

      I find the systems aimed at language comprehension more interesting. GPT-3 will undoubtedly have all kinds of economic applications though. Comprehension doesn't seem to be a huge requirement in a lot of areas.

    • by mbkennel ( 97636 )

> It will write a nice prose paragraph, but do not count on it to complete the missing volumes of GRR Martin's A Song of Ice and Fire series -- it lacks the concept of "characters" and cannot track their inner motivation.

The main discovery of the last 15-20 years of neural-network-based language modelling is how to incorporate what I would call "intermediate-term" state, which is sort of an approximation to concepts -- in particular, the language models have capabilities well beyond what a Markov model can do. Does it work li

      • by jezwel ( 2451108 )

        ...may be enough for a number of practical, finite business tasks, like "route this customer rant or spam sent by email to the right department".

This is exactly what we're looking for, both internally for staff to get support from various groups (IT, HR, Business Services, Procurement) and for external customers to obtain support from internal resources: something that can (politely) interrogate the caller to ascertain the issue and route the appropriate knowledge to them, or redirect the call to the most appropriate -- and available -- support person.
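A minimal sketch of that routing task, assuming scikit-learn; the training messages are made up for illustration (the departments come from the parent's list), and a production system would need far more data plus a human fallback for low-confidence cases:

```python
# Minimal sketch of routing a support message to a department.
# Assumes scikit-learn; training data here is invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "my laptop will not boot this morning",            # IT
    "question about my payslip and leave balance",     # HR
    "need a purchase order raised for a vendor",       # Procurement
    "please reset the password on my email account",   # IT
    "how do I update my banking details for payroll",  # HR
]
train_labels = ["IT", "HR", "Procurement", "IT", "HR"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(train_texts, train_labels)

# Route a new message; in production you would also check predict_proba()
# and hand low-confidence cases to a human.
print(router.predict(["I forgot my login password"])[0])
```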

    • by jythie ( 914043 )
      So.. it is another example of 'if we throw more processing power at it, maybe this time something magical will happen' projects?
      • So.. it is another example of 'if we throw more processing power at it, maybe this time something magical will happen' projects?

The actual magic already happened two decades ago with much more primitive modelling tools (check the Markov example I linked).

It's more an example of an "if we throw more processing power at it, maybe this time the magical sparks will have a slightly different colour?" project.

I find the hype around GPT-3 greatly exaggerated.

      Like every other "breakthrough" in AI for the last 60 years.

I love it when people say this about other people's money: "in the wider context of megacorp investment it is insignificant."
Only an academic...

    • ... other people's money and time. Ya hear that all you researchers behind GPT-3? Thanks for your insignificant contribution.

  • by OneHundredAndTen ( 1523865 ) on Tuesday July 28, 2020 @04:55PM (#60340639)
    History seems to be repeating itself, in that the reality of AI cannot live up to the hype of AI. There has been progress, and much useful stuff is available. However, we are still very far from what the AI hype has been peddling. For goodness sake, personal assistants remain pretty much as (in)capable as they have been for almost a decade now.
    • by Tablizer ( 95088 )

      There's still a lot of potential for applying current AI to various industries (some of it socially questionable). Thus, even without big breakthroughs, I suspect a lot of product development in AI.

      The first AI bubble didn't produce enough practical technology to coast through the lack of new breakthroughs. This time it reached the practical hurdle at least. There may be a slow-down in investment, though, when investors realize it's stuck in a semi-rut.

      • by HiThere ( 15173 )

Maybe. It does seem that things have been moving slowly, but this may just be "seeming". There *has* been a lot of overpromising, but it's also true that there have been a lot of developments and progress.

        I suspect that there is a lot of work going on that isn't public, but that's just a suspicion. It's possible that we need some new ideas. IIUC, the GPT3 developers said, pretty much, that it's the end of the road for this path of development. But robots are going to make a big difference. They operate o

      • by jythie ( 914043 )
        I would actually argue that the current AI research has painted itself into a corner. The techniques that have gotten popular, the ones that students are learning as 'the next hot thing' and are seeing industry adoption in marketing all share two main traits : they are computationally cheap to run (so economical), which is good... but they are also opaque, which is bad.

        They are really good at getting answers fast and finding novel patterns, but you can't open them up and explain why it got an answer. Thi
That is the phase where too many morons still think they can get rich off that thing and try to push it hard, supported by too many cheering, clueless useful idiots ("He is the Messiah! I should know, I've followed a few!"). The next phase is the collapse, when enough people realize this whole thing is just nonsense and cannot deliver anything even remotely approaching the grand promises. A classical vaporware hype cycle, extended a bit because dumb automation (all we have) actually has its uses.

Well, duh. The probable upper limit on the number of neurons required for general intelligence is conservatively 10 billion, and it is reasonable to think that the actual number is much lower than that.
    • conservatively 10 billion ... reasonable to think that actual number is much lower than that

      Cite please.

      Now do dendrites. Then figure out how the cell decides which axons to fire.

    • by HiThere ( 15173 )

That's not clear at all. Much learning seems, e.g., to be mediated by glial cells, and some happens via gradients in the intercellular fluid.

      OTOH, a lot of the brain is devoted to automatic processes, like thermal regulation, breathing, etc. Not to intelligence per se.

      Also some processing is done at the synapse level.

      The real answer is that neural nets are a crude emulation of a small part of the brain. Whether they capture all that is needed is not clear. Probably not, but perhaps computer neural nets c

  • Call me back when an artificial system can operate 100 billion axon connected neurons with power less than 30W.

    • A hypothetically self-aware AI system doesn't care what its power consumption is so long as it isn't causing anyone to shut it off. It could use quite a bit of power and not be threatened for that reason. It would have that in common with us.

      On the other hand, this article is clickbait because there's no reason to be "worried" that AI might be able to cheaply spew out contextless, meaningless text which is nevertheless syntactically, stylistically, and factually correct. It's one thing to recite facts, and

  • lesswrong.com is run by a stereotypical cult leader with no formal education. For $4000 you can attend the cult retreat, and Rationalize your mind while you learn of the impending Singularity (every good cult needs a Doomsday). You can safely ignore anything these guys say.
  • I am worried we're in an overhang right now .

    You misspelled "fervently hope".

Because Kool-Aid also has "AI" in it, so it must be good!

  • Probably. (Score:4, Insightful)

    by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Wednesday July 29, 2020 @02:54AM (#60342233)

AI and general-purpose robots are now where microcomputers and networks were in the late '80s and early '90s.

I remember people raving about this new thing called "the web" and I was like "WTF are they talking about? Always-online is *way* too expensive, this Fidonet thing is ruled by citizens, not corps, no way will the web take off. Mobile Fidonet on a handheld? Probably. But people using the web big time? No way." I wasn't aware how important colorful pictures you can click on are.

Today I'm a web developer and I still have to explain the difference between a picture of a website and an actual website. To 'experts'.
And today my job still only exists because 17-year-old FOSS CMSes are so convoluted in architecture that they need an expert to do the busywork.
That's going to go away soon.

If you know what you're doing, you can already replace 90% of skilled labour with botwork in some circumstances. It really is very much like the early '90s, when you could already computerize with ease, but people and overall processes weren't quite there yet.

The robots are already here; in 10-15 years they will become ubiquitous, just like tablet computers did when the iPad came along. Tablet computers had been around for a looong time before that, but the iPad era made them universal. Same with robots. And probably faster.

    • by jythie ( 914043 )
      damn.. fidonet.. now there is something I have not thought about in ages.
The Web only took off because it was non-proprietary, not closed.
The idea had been around for a very long time, but the infrastructure, both physical and technical, could not support it.
Now anyone can build an automated website in a few minutes; people still get paid to build them, though.

The iPad was not the first or the best; it was just the well-known one when the technology caught up to the point where building something actually useful became feasible.

      Bots will get easier, and more pervasive, but they simply cannot do

    • by maitas ( 98290 )

The future is not physical. Every time people were asked what the future would look like, they always answered with physical improvements that never came (flying cars, trips to the moon, etc.).
What no one was able to predict was the digitalization of the whole world (the Internet, etc.).

I think it will be the same. Robots will improve, of course, but a lot less than expected. Tesla had to remove a lot of its robots...

  • by holophrastic ( 221104 ) on Wednesday July 29, 2020 @10:43AM (#60343637)

There's a huge part missing from AI. Even with today's absolutely horrible do-everything-by-pattern-matching that results in seeing-stop-signs-where-there-aren't-any and driving-headlong-into-solid-walls, AI still can't do a damned thing without being situated manually.

    By that I simply mean that interacting with the world requires lasers and training and vast data sets and calibration, fails catastrophically in new situations, and processes far too many inputs to make an educated guess, when millions of existing species do infinitely better with far far far less.

    And it comes down to one very abstract concept -- self-decision-making.

    You know how you might be driving, and then there might be a blizzard, and then the sun comes out in the middle of the blizzard, and you can see almost nothing. That happened to me once twenty years ago. Or when you're driving at night, and the road dips downward, and the fog rolls across like pea soup, and an on-coming car's lights blind you, and you see nothing. That happens to me almost every week of the year.

    In those, and almost every other non-standard driving scenario, I understand that my senses are decreased in resolution and ability. I slow down. I focus harder. I turn off the radio. I act differently.

    Show me an AI that notices that it's suddenly less-than-capable, and modifies its behaviour to continue to function without assistance. Show me an AI that pulls over and waits for things to clear up. Then show me an AI that realizes it may be days of fog, and then calls for help, or limps along, or does something other than die alone in the dark.

    In the other direction of exactly the same issue, show me an AI that can be dropped onto a road in the middle of nowhere, with no road signs, no road paint, and no satnav, and can ultimately figure out a safe speed and a safe direction and attempts to do something, without being told what that something should be.

    Living things have a concept of not sitting still doing nothing forever until death. We attempt to go back to some concept of normal. Show me the AI that has said concept of "normal" or "home" or "routine" and makes unprompted decisions to maintain or re-acquire them when lost.

That's what's currently missing. And it's missing because we try to build machines to be 100% correct. That means the car should know exactly how far it is to the car in front -- to the inch.

No human being (excepting professional drivers) is going to know how many inches away the car in front is. Maybe we can estimate the number of car lengths, but you know we aren't anywhere near correct.

But we're awesome at seeing it's-getting-closer or it's-getting-farther-away. That's why we don't drive headlong into walls. That's not a mistake we can make (with operational eyes).
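That "getting closer / getting farther" cue even has a simple quantitative form, studied in vision research as the tau heuristic: time-to-contact is roughly the angular size of an object divided by its rate of growth, with no absolute distance needed. A sketch, with invented numbers:

```python
# The "it's-getting-closer" cue as vision researchers model it: the tau
# heuristic. Time-to-contact ~= angular size / rate of angular growth,
# with no absolute distance estimate anywhere. Numbers below are invented.

def time_to_contact(theta_deg, dtheta_dt_deg):
    """Estimated seconds until contact, purely from image-plane growth."""
    if dtheta_dt_deg <= 0:
        return float("inf")   # shrinking or constant: not getting closer
    return theta_deg / dtheta_dt_deg

# A car ahead subtends 2.0 degrees and is growing at 0.5 degrees/second:
print(time_to_contact(2.0, 0.5))   # -> 4.0 seconds to react, zero inches measured
```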

Do you think that flies, bees, and birds do calculus to fly together? I'm staring at eight goldfinches at my backyard feeder. They fly in sporadic swarms with each other, posturing in mid-air, without colliding. You think they took a math course? You think they calculate the force due to gravity? Bullshit.

    We put one foot in front of the other, and we fall forward; repeatedly. We call it walking. We don't do any math to figure out where the floor is, how far to step, and exactly which angle at which to lean. We move the foot until it hits the floor. That's where the floor is. We lean until we fall, we push until we stop falling. No math.

    We're not always right. But it works out.

    We don't have any AI that can make decisions because they only try to be perfect -- and that's exactly what stifles humans who try to be perfect: we call it fear of failure, and it's a major problem.

  • GPT-3 is the first AI system that has obvious, immediate, transformative economic value.

Bullshit. There are many older AI systems already in use with "immediate, transformative economic value": CNN-based object detection for Advanced Driver Assistance Systems, sorting robots, automated fruit-picking robots, automated translation and transcription systems, and AI-based image enhancement in many smartphones. These are well-established AI-based systems; they have real value and they run at usable speeds on inexpensive embedded systems.
