Are We in an AI Overhang? (lesswrong.com) 85
Andy Jones, a London-based machine learning researcher, writes: An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible. Then someone does and surprise! It's a lot more capable than everyone expected. I am worried we're in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x-larger projects at Google and Facebook and the like, with timelines measured in months.
GPT-3 is the first AI system that has obvious, immediate, transformative economic value. While much hay has been made about how much more expensive it is than a typical AI research project, in the wider context of megacorp investment it is insignificant. GPT-3 has been estimated to cost $5m in compute to train, and -- looking at the author list and OpenAI's overall size -- maybe another $10m in labour, on the outside. Google, Amazon, and Microsoft each spend ~$20bn/year on R&D and another ~$20bn each on capital expenditure. Very roughly, that totals ~$100bn/year. So dropping $1bn or more on scaling GPT up by another factor of 100x is entirely plausible right now. All that's necessary is that tech executives stop thinking of NLP as cutesy blue-sky research and start thinking in terms of quarters-till-profitability.
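Taking the post's own numbers at face value (they are the author's estimates, not audited figures), the "insignificant" claim is easy to sanity-check:

```python
# Back-of-the-envelope check of the post's estimates (unverified figures
# quoted from the post, not real accounting data).
gpt3_compute = 5e6           # estimated compute cost to train, USD
gpt3_labour = 10e6           # estimated labour cost, USD
gpt3_total = gpt3_compute + gpt3_labour

# Three megacorps, each ~$20bn/yr R&D plus ~$20bn/yr capital expenditure.
big_tech_spend = 3 * (20e9 + 20e9)   # ~$120bn/yr; "very roughly ~$100bn"

# GPT-3's cost as a share of that combined annual budget.
share = gpt3_total / big_tech_spend
print(f"GPT-3 cost: ${gpt3_total / 1e6:.0f}m ({share:.4%} of combined spend)")

# A 100x scale-up would still be roughly 1% of one year's combined spend.
print(f"100x project: ${100 * gpt3_total / 1e9:.1f}bn")
```

On these figures a 100x project costs ~$1.5bn, which is the scale of a single data center build-out, so the post's "entirely plausible" reading checks out arithmetically.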
Deus Ex future. (Score:2)
All that's necessary is that tech executives stop thinking of NLP as cutesy blue-sky research and start thinking in terms of quarters-till-profitability.
Picus TV with GPT being an early Eliza Cassan.
https://deusex.fandom.com/wiki... [fandom.com]
No (Score:3)
The answer is somewhere in the middle (Score:2)
The software engineers have their services. They work in their own silos doing their own thing. They can generate all kinds of data and send it wherever they like. They're really good at turning business requirements into working software.
The project managers know the business. They speak with var
Re: (Score:2)
This is why Al Gore created the Internet. DARPA had cash to make a network suitable for world-wide communication, as the military needed to link all the bases in the world (current count about 800). The government needed to fund and build the "data scientist" side, then effectively paid to open it up to the world, with little to no direct return on the investment.
Here, NOAA, NASA, or someone else needs t
specifically re weather (Score:2)
> Or NOAA tracking every hurricane down to the butterfly. Cat4 warnings in SC 3 months from now from a butterfly flap in Africa. We have (most of) the data, we just don't have the analysis down, so the "models" need to stop being humans with Sharpies, but an AI with more data and no set model.
The models are not humans with Sharpies, and general machine-learning algorithms perform poorly unless they have strong physics-based priors or training. The good models we do use are based on physical laws.
And
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
No... (Score:2)
>An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible.
Actually, an overhang is the rock you face when you realise you won't be typing for a few days.
No. (Score:2)
Re:No. (Score:5, Funny)
As of yet there is no evidence that hard AI is even possible.
Exactly. We have no evidence of sentient behavior anywhere on our planet.
Re:No. (Score:4, Funny)
If I had mod points they would be yours. I'd go with insightful rather than funny though.
Re: (Score:2)
Re:No. (Score:4, Informative)
"Hard AI" is not what the article is about, as was made obvious by the summary.
Re: No. (Score:2)
Re: (Score:2)
As of yet there is no evidence that hard AI is even possible.
Why would you think it is "impossible"? I don't see any scientific or fundamental technical limitation that would rule it out. It is a matter of time; how much time is the only real question.
Re: (Score:2)
Could you point to the evidence it's possible? You know, address his actual statement.
Re: (Score:3)
To be fair, he wasn't talking about finally building hard AI (i.e. "true" AI that is sentient, the stuff of science-fiction). He was merely talking about developing AI applications that are economically and societally transformative. And in that regard, I wouldn't be surprised at all if he may well be proven right.
We do seem to be in a position where our ability to create powerful AI (though still not anything close to a hard AI) has exceeded our ability to conceptualize uses for it. We're treating it like
Re: (Score:2)
Surveillance is the killer app for AI.
Re: (Score:2)
Whi
Re: (Score:2)
We do seem to be in a position where our ability to create powerful AI (though still not anything
close to a hard AI) has exceeded our ability to conceptualize uses for it.
Except for driverless cars, which are the driving force (as it were)
for all the billions that have been invested in AI research lately.
Re: (Score:2)
We are a walking, talking example. We just don't know how to Synth/Host (depending on whether you play Fallout 4 or watch Westworld) a meat brain yet. We grow an AI, just in the womb, not a lab, yet.
Re: (Score:2)
No way man. A brain, well, specifically the pineal gland, is a portal to the soul dimension.
Re: (Score:2)
A brain is a computer.
Not all researchers in the field agree on this. See for instance "The Feeling of Life Itself" by Christof Koch (2019, MIT Press).
Re: (Score:2)
Not all researchers in the field agree on this.
See for instance "The Feeling of Life Itself" by Christof Koch (2019, MIT Press).
Only if you define "researcher" as "a mumbo-jumbo spewing philosopher".
Re: (Score:2)
Define your terms, and I'll tell you whether I believe you or not. But the definitions have to be operational. If I can't use them to test whether, say, an octopus is intelligent, then they aren't acceptable.
Now if you were to claim that there is a reasonable doubt that "hard AI is even possible", I'd agree. I can come up with definitions such that that is reasonably dubious. The other way of putting it, though, is just flapping your mouth without making sense.
Re: (Score:2)
As of yet there is no evidence that hard AI is even possible.
That's true, provided you define "hard AI" as "stuff that isn't possible".
At least that is a definition, and nobody has come up with a better one
that doesn't involve a ton of hand-waving.
Re: (Score:2)
What? Um, there are a lot of "AI systems" that have all those things, and they have existed since the 1970s.
It is essentially impossible to be in an "AI overhang." The moment some "AI" problem is solved it ceases to be "AI."
GPT-3 will find applications and each time it does it will be given a catchy name and taken for granted as yet another mechanical solution to some previously unfathomable problem. It will not, however, be thought of as "AI."
Because no solved problem ever is.
Is this a mistype? (Score:4, Funny)
Did he mean "Hangover"?
GPT-3: Overhyped (Score:4)
I find the hype around GPT-3 greatly exaggerated.
It's basically just a giant glorified "auto-correct"-like text predictor. It takes text modelling (a concept that existed two decades ago [eblong.com]) and just throws an insanely vast amount of processing power at it.
Of course, given the sheer size of the neural net powering it underneath, the results are incredible in terms of the styles the AI can write in.
It can keep enough context to write whole paragraphs that are actually coherent (as opposed to the "complete this sentence using your phone's autocomplete" game that has been making the rounds on social media). And that paragraph will have a style (text messaging, prose, computer code) that matches the priming you gave it.
But it's still just text prediction. The AI is good at modelling the context, but doesn't have any concept of the subject it's writing about; it's just completing paragraphs in the most realistic manner, without tracking the actual things it's talking about.
It will write a nice prose paragraph, but do not count on it to complete the missing volumes of GRR Martin's Song of Ice and Fire series - it lacks the concept of "characters" and of tracking their inner motivations.
It will write a nice-looking C function, but is unable to write the Linux kernel, because it doesn't have the mental map of the architecture that would be needed.
It's not really transformative or revolutionary. It's just the natural evolution of what two decades of research in AI (neural nets, and giant clouds able to train hyper-large nets) can add to the simplistic Markov toy example I linked above.
It's basically Deep Drumpf [twitter.com] on steroids.
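For anyone who hasn't seen the linked Markov example, here is a minimal sketch of that older text-modelling approach the parent is comparing GPT-3 to: a word-level Markov chain. This is purely illustrative of the two-decades-old technique; it is not how GPT-3 works internally.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break  # dead end: this context never continued in the corpus
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(generate(chain, length=10, seed=42))
```

The chain only ever sees one or two words of context, which is exactly the "autocomplete game" limitation the parent describes; GPT-3's headline difference is how much more context it can condition on.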
Re:GPT-3: Overhyped (Score:4, Interesting)
It's unknown how fundamentally different those things actually are.
If you suggest things to the human mind, it will continue that thought, and perhaps act on it. That's why advertising is a core engine of the economy.
Re: (Score:2)
Not entirely. There are lots of tests for language comprehension, and some very impressive results at training machines in that regard. GPT-3 has been evaluated in that context. Its creators, "Open"AI, describe it as "rudimentary."
Journalists and Managers (Score:2)
Are both very good at repeating things that they hear without understanding what they actually mean. They perform superficial text transformations on them.
BTW, Artificial Neural Networks are overhyped. I do not know how GPT-3 actually works any more than the journalists who write about it, but I suspect that a large database and a complex grammar engine have more to do with it than any ANN.
Re: (Score:2)
I have no doubt that GPT-3 and systems like it will find all sorts of uses. I guess if you're an MBA with an interest in spam, that would make it more impressive than other things. I'm not.
"BTW. Artificial Neural Networks are overhyped. I don't know how they work, but they're hype." [paraphrased]
Lol.
Re: (Score:2)
I suspect that a large database and a complex grammar engine has more to do with it than any ANN.
You would be wrong,
https://arxiv.org/pdf/2005.141... [arxiv.org]
"Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting."
Re: (Score:2)
They may not be different at a fundamental level. I suspect that they aren't. But intelligence needs to be multidimensional, and pure text won't get you there. You need to, e.g., cross-correlate the experience of a dog with the texts about dogs. And there are more difficult problems when you start considering goals.
I suspect that, eventually, something like GPT3 will be a component of a real intelligence, but that real intelligence will have evolved from a separate function. Possibly an office manager sy
Re: (Score:2)
It's not really transformative or revolutionary. It's just the natural evolution of what two decades of research in AI...
The article was clear that it talked about transformative *economic* impact. You have written that it's not *technologically* transformative. I think you're talking at cross purposes.
Re: (Score:2)
In the same way that Internet Protocol and HTTP weren't particularly transformative technologically between 1992 and 1999. The technological jumps were from the 1970s to 1992 or so. This may be the Netscape Navigator moment for large corpus-trained language models: people using models vs. researchers training them.
Netscape Navigator moment (Score:2)
This may be the Netscape Navigator moment for use of large corpus-trained language models. People using models vs researchers training them.
*that* is much more likely - I think - than the "it's a revolution" kind of hype we're seeing everywhere.
Especially because the stuff is versatile across multiple styles, it could be handy as a generic engine to put reports/instructions/etc. into form.
As in, another process (might be purely programmatic, might be another AI layer) generates data, and GPT-3 could be tasked to "fill in the blanks" - to wrap the data into an actual textual form.
Not only giving it an initial prompt, but also additional constraints (the nic
Re: (Score:2)
I find the systems aimed at language comprehension more interesting. GPT-3 will undoubtedly have all kinds of economic applications though. Comprehension doesn't seem to be a huge requirement in a lot of areas.
Re: (Score:2)
> It will write a nice prose paragraph, but do not count on it to complete the missing volumes of GRR Martin's Songs book serie - it lacks the concept of "caracters" and tracking ehri inner motivation.
The main discovery of the last 15-20 years of neural-network-based language modeling is how to incorporate what I would call "intermediate-term" state, which is sort of an approximation to concepts -- in particular, the language models have capabilities well beyond what a Markov model can do. Does it work li
Re: (Score:2)
...may be enough for a number of practical, finite business tasks, like "route this customer rant or spam sent by email to the right department".
This is exactly what we're looking for, both internally for staff to get support from various groups (IT,HR,Business Services, Procurement) and for external customers to obtain support from internal resources. Something that can (politely) interrogate the caller to ascertain the issue and route the appropriate knowledge to them or redirect the call to the most appropriate - and available - support person.
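For a finite routing task like that, even a crude keyword scorer gets you a first pass without anything GPT-sized. The department names and keywords below are made up purely for illustration, not any real product's taxonomy:

```python
# Crude first-pass support router. Departments and keywords are
# hypothetical, chosen only to illustrate the shape of the task.
DEPARTMENTS = {
    "IT": {"password", "login", "laptop", "vpn", "email"},
    "HR": {"payroll", "leave", "benefits", "hiring"},
    "Procurement": {"invoice", "purchase", "vendor", "order"},
}

def route(message: str) -> str:
    """Score each department by keyword hits; unmatched mail goes to a human queue."""
    tokens = set(message.lower().split())
    scores = {dept: len(keywords & tokens) for dept, keywords in DEPARTMENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Helpdesk"

print(route("I forgot my VPN password"))        # -> IT
print(route("Question about my leave balance")) # -> HR
print(route("something else entirely"))         # -> Helpdesk
```

Where a language model would earn its keep is exactly the part this sketch can't do: politely interrogating the caller in free text before deciding where to send them.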
Re: (Score:2)
Magic long time ago (Score:2)
So... it is another example of "if we throw more processing power at it, maybe this time something magical will happen" projects?
The actual magic already happened two decades ago, with much more primitive modeling tools (check the Markov example I linked).
It's more an "if we throw more processing power at it, maybe this time the magical sparks will have a slightly different colour?" kind of project.
Re: (Score:2)
I find the hype around GPT-3 greatly exaggerated.
Like every other "breakthrough" in AI for the last 60 years.
insignificant cost (Score:1)
I love it when people say this about other people's money: "in the wider context of megacorp investment it is insignificant."
Only an academic...
Re: (Score:1)
... other people's money and time. Ya hear that all you researchers behind GPT-3? Thanks for your insignificant contribution.
Re: (Score:3, Informative)
There is a name for this "not AI" comment: The AI effect. Basically, whatever can be done with a machine is automatically considered "not AI", because it's no longer magical, just engineering.
https://en.wikipedia.org/wiki/AI_effect
> I cannot think, nor actually be truly creative.
I'm very sorry about your lack of thought and creativity.
Re: (Score:2)
Call it what it is. It's the "my mind is magic effect."
It's probably the same root cognitive bias as "god of the gaps."
Re: (Score:3)
Re: (Score:2)
So if you insist a duck is a bicycle, and I say "no, that's a duck" you're claiming that I'll never be satisfied even if you do show up with a duck?
That's pretty silly.
Re: (Score:2)
Actually, what you're talking about is a simple variation of the No True Scotsman...but that's not what's going on here.
Except, we DON'T have an AI that CAN pass a Turing Test. That's been the simple standard for a long, long time.
Simply redefining AI to be anything that does something complicated is sophistry.
"My calculator has AI" /facepalm.
"No, it really doesn't"
"Oh, I see this is the AI effect because what it does now is just engineering"
Re: (Score:2)
You've hit the nail on the head. After decades of complete failure to come up with anything remotely resembling AI, companies simply redefined the term "AI" to mean "something done on the computer that the average marketing droid doesn't understand", now instead of nobody being able to say they have AI, EVERYONE can say they have AI. Another fabulous win for marketing, another major loss for language and general understanding.
And this is what makes the question in the headline so difficult to answer, becaus
Re: (Score:2)
Re: (Score:2)
AI has basically been defined by the Turing test. Nothing comes even remotely close to passing it. And honestly, that's way too low a bar as it is.
Re: (Score:2)
Exactly. Even the stuff replacing humans is not intelligent. It just shows that some things do not actually require intelligence, but can be brute-forced. Interesting for things like chess and Go, but not all that surprising, as they have very strict rules. A lot of other tasks traditionally done by humans turn out to not actually require intelligence either these days, or only require it in special situations. A lot of the work mid-level clerks do in banks or insurance companies falls under that. Driving in man
Re: (Score:2)
I suspect driving will turn out to have to be done the way we do it, which is fairly complicated. But complicated computers are available, and they don't have to be as complicated as we are to do a "better job" at driving, as long as they are programmed to be conservative... and we define it as not getting into accidents and driving efficiently.
Re: (Score:2)
I love the game of Go, but this verges on mystical claptrap.
Every moment you have to place a stone, the game offers you a finite set of choices, each of those has a finite number of responses by the opponent, which in turn present a finite number of choices.
It's a conceptually simple but practically massive heuristic to evaluate this staggering number of choices at each step for suitability - each stone placed maximizing scoring opportunities for later turns.
Human intuition of a go master is brilliant, but
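That "finite choices at every step" framing is literally how classical game-tree search works. A toy minimax over a trivial made-up game (nothing Go-scale; just to show the shape of the recursion the parent is describing):

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Score a position by recursing over every legal move to a fixed depth."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options)
    return max(scores) if maximizing else min(scores)

# Toy game (invented for illustration): the state is a number, a move
# adds 1 or 2, and the evaluation favors even numbers.
moves = lambda n: [n + 1, n + 2]
evaluate = lambda n: 1 if n % 2 == 0 else -1
print(minimax(0, depth=3, maximizing=True, moves=moves, evaluate=evaluate))  # -> 1
```

The catch, of course, is that Go's branching factor makes exhaustive recursion like this hopeless at depth, which is why modern engines replace the brute-force tree walk with learned evaluation and sampling.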
The onset of an AI winter, more likely (Score:4, Interesting)
Re: (Score:1)
There's still a lot of potential for applying current AI to various industries (some of it socially questionable). Thus, even without big breakthroughs, I suspect a lot of product development in AI.
The first AI bubble didn't produce enough practical technology to coast through the lack of new breakthroughs. This time it reached the practical hurdle at least. There may be a slow-down in investment, though, when investors realize it's stuck in a semi-rut.
Re: (Score:2)
Maybe. It does seem that things have been moving slowly, but that may just be how it seems. There *has* been a lot of overpromising, but it's also true that there have been a lot of developments and progress.
I suspect that there is a lot of work going on that isn't public, but that's just a suspicion. It's possible that we need some new ideas. IIUC, the GPT3 developers said, pretty much, that it's the end of the road for this path of development. But robots are going to make a big difference. They operate o
Re: (Score:2)
They are really good at getting answers fast and finding novel patterns, but you can't open them up and see why they got an answer. Thi
Nope. We are at "peak AI bullshit" (Score:2)
That is the phase where too many morons still think they can get rich off that thing and try to push it hard, supported by too many cheering clueless useful idiots ("He is the Messiah! I should know, I have followed a few!"). Next phase is the collapse, when enough people realize this whole thing is just nonsense and cannot deliver anything even remotely approaching the grand promises. Classical vaporware hype cycle, extended a bit because dumb automation (all we have) actually has its uses.
Oke dokie...... (Score:1)
Re: (Score:2)
Cite please.
Now do dendrites. Then figure out how the cell decides which axons to fire.
Re: (Score:2)
That's not clear at all. Much learning seems, e.g., to be mediated by glia cells, and some happens via gradients in the intercellular fluid.
OTOH, a lot of the brain is devoted to automatic processes, like thermal regulation, breathing, etc. Not to intelligence per se.
Also some processing is done at the synapse level.
The real answer is that neural nets are a crude emulation of a small part of the brain. Whether they capture all that is needed is not clear. Probably not, but perhaps computer neural nets c
Long way (Score:2)
Call me back when an artificial system can operate 100 billion axon-connected neurons on less than 30W of power.
Re: (Score:2)
A hypothetically self-aware AI system doesn't care what its power consumption is so long as it isn't causing anyone to shut it off. It could use quite a bit of power and not be threatened for that reason. It would have that in common with us.
On the other hand, this article is clickbait because there's no reason to be "worried" that AI might be able to cheaply spew out contextless, meaningless text which is nevertheless syntactically, stylistically, and factually correct. It's one thing to recite facts, and
*beep beep* Cult Detected (Score:2)
Um... (Score:2)
You misspelled "fervently hope".
Drink the CoolAid with me buddy! (Score:2)
Because CoolAid also has Ai in it so it must be good!
Probably. (Score:4, Insightful)
AI and General Purpose Robots are now where microcomputers and networks were in the late '80s and early '90s.
I remember people raving about this new thing called "the web" and I was like "WTF are they talking about? Always online is *way* too expensive, this Fidonet thing is ruled by citizens, not corps, no way will the web take off. Mobile Fidonet on a handheld? Probably. But people using the web big time? No way." I wasn't aware how important colorful pictures you can click on are.
Today I'm a web developer and still have to explain the difference of a picture of a website and an actual website. To 'experts'.
And today my job only still exists, because 17 year old FOSS CMSes are so convoluted in architecture they need an expert to do the busywork.
That's going to go away soon.
If you know what you're doing, you can already replace 90% of skilled labour with botwork in some circumstances. It actually is very much like the early '90s, when you could already computerize with ease, but people and overall processes weren't quite there yet.
The robots are there already, in 10-15 years they will become ubiquitous, just like tablet computers became when iPad came along. Tablet computers have been around for a looong time before that, but the era of iPad made them universal. Same with robots. And probably faster.
Re: (Score:2)
Re: (Score:2)
The Web only took off because it was non-proprietary, not closed
The idea had been around for a very long time, but the infrastructure could not support it, both physical and technical
Now anyone can build an automated website in a few minutes, people still get paid to build them though
the iPad was not the first or the best, it just was the well known one when the technology caught up to making it feasible to build something actually useful
Bots will get easier, and more pervasive, but they simply cannot do
Re: (Score:2)
The future is not physical. Every time people were asked what the future would look like, they always answered with physical improvements that never came (flying cars, trips to the moon, etc.).
What no one was able to predict was the digitalization of the whole world (the Internet, etc.).
I think it will be the same. Robots will improve, of course, but a lot less than expected. Tesla had to remove a lot of its robots...
AI doesn't make decisions (Score:3)
There's a huge part missing from AI. Even with today's absolutely horrible do-everything-by-pattern-matching that results in seeing-stop-signs-where-there-aren't-any and driving-headlong-into-solid-walls, AI still can't do a damned thing without being situated manually.
By that I simply mean that interacting with the world requires lasers and training and vast data sets and calibration, fails catastrophically in new situations, and processes far too many inputs to make an educated guess, when millions of existing species do infinitely better with far far far less.
And it comes down to one very abstract concept -- self-decision-making.
You know how you might be driving, and then there might be a blizzard, and then the sun comes out in the middle of the blizzard, and you can see almost nothing. That happened to me once twenty years ago. Or when you're driving at night, and the road dips downward, and the fog rolls across like pea soup, and an on-coming car's lights blind you, and you see nothing. That happens to me almost every week of the year.
In those, and almost every other non-standard driving scenario, I understand that my senses are decreased in resolution and ability. I slow down. I focus harder. I turn off the radio. I act differently.
Show me an AI that notices that it's suddenly less-than-capable, and modifies its behaviour to continue to function without assistance. Show me an AI that pulls over and waits for things to clear up. Then show me an AI that realizes it may be days of fog, and then calls for help, or limps along, or does something other than die alone in the dark.
In the other direction of exactly the same issue, show me an AI that can be dropped onto a road in the middle of nowhere, with no road signs, no road paint, and no satnav, and can ultimately figure out a safe speed and a safe direction and attempts to do something, without being told what that something should be.
Living things have a concept of not sitting still doing nothing forever until death. We attempt to go back to some concept of normal. Show me the AI that has said concept of "normal" or "home" or "routine" and makes unprompted decisions to maintain or re-acquire them when lost.
That's what's currently missing. And it's missing because we try to build machines to be 100% correct. That means the car should know exactly how far to the next car in-front -- to the inch.
No human being (excepting professional drivers) is going to know how many inches away is the car in-front. Maybe we can estimate number of car lengths, but you know we aren't anywhere near correct.
But we're awesome at seeing that it's-getting-closer or it's-getting-farther-away. That's why we don't drive head-long into walls. That's not a mistake that we can make (with operational eyes).
Do you think that flies, bees, and birds do calculus to fly together? I'm staring at eight goldfinches at my backyard feeder. They fly in sporadic swarms with each-other, posturing in mid-air, without colliding. You think they took a math course? You think they calculate the force-due-to-gravity? Bull shit.
We put one foot in front of the other, and we fall forward; repeatedly. We call it walking. We don't do any math to figure out where the floor is, how far to step, and exactly which angle at which to lean. We move the foot until it hits the floor. That's where the floor is. We lean until we fall, we push until we stop falling. No math.
We're not always right. But it works out.
We don't have any AI that can make decisions because they only try to be perfect -- and that's exactly what stifles humans who try to be perfect: we call it fear of failure, and it's a major problem.
Many AI real world systems already exist (Score:2)
GPT-3 is the first AI system that has obvious, immediate, transformative economic value.
Bullshit. There are many older AI systems already in use with "immediate, transformative economic value": CNN-based object detection for Advanced Driver Assistance Systems, robots for sorting things, automated fruit-picking robots, automated translation and transcription systems, and AI-based image enhancement in many smartphones. These are well-established AI-based systems; they have real value and they run at usable speeds on inexpensive embedded systems.