Linus Torvalds on 'Hilarious' AI Hype (zdnet.com) 42
Linus Torvalds, discussing the AI hype, in a conversation with Dirk Hohndel, Verizon's Head of the Open Source Program Office: Torvalds snarked, "It's hilarious to watch. Maybe I'll be replaced by an AI model!" As for Hohndel, he thinks most AI today is "autocorrect on steroids." Torvalds summed up his attitude as, "Let's wait 10 years and see where it actually goes before we make all these crazy announcements."
That's not to say the two men don't think AI will be helpful in the future. Indeed, Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently.
Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warns, of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, 'Beautiful science in, beautiful science out.'"
Failure mode (Score:2)
Something that has to be worked on, mostly from a training standpoint, is the way LLMs fail at programming.
It is not very graceful to produce code that looks exactly like code that runs, but doesn't.
The mistakes made by the NN should be easy to spot, or pointed out by the system itself, such as by being capable of testing the code in action and, if it can't make it work well enough, using pseudo-code to explain what needs to be done at that spot.
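The fallback idea above can be sketched as a small harness (purely illustrative; the function names and test cases here are made up, not from any real tool): execute the model's candidate code against known input/expected pairs and only accept it if it actually works, so code that merely *looks* runnable gets caught.

```python
def check_candidate(code: str, func_name: str, cases) -> bool:
    """Execute candidate code and verify it against (args, expected) pairs.

    Returns False for code that fails to run or gives wrong answers,
    which is exactly the failure the parent wants flagged.
    """
    namespace = {}
    try:
        exec(code, namespace)              # compile and load the candidate
        func = namespace[func_name]
        return all(func(*args) == expected for args, expected in cases)
    except Exception:
        return False                       # plausible-looking but broken code lands here

# A candidate that looks fine but has an off-by-one, and a correct one:
bad = "def double(x):\n    return x + x + 1\n"
good = "def double(x):\n    return x + x\n"
cases = [((2,), 4), ((0,), 0)]
```

With `cases` as the oracle, `check_candidate(good, "double", cases)` accepts and `check_candidate(bad, "double", cases)` rejects; a real system would then ask the model to retry or emit pseudo-code instead.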
Re: (Score:3)
It takes decades to train a human, and we make mistakes constantly. I believe AI will get the job done as soon as we work out better ongoing feedback mechanisms and give the models the equivalent of emotions and instincts.
But we really should be working on the hardware first. The amount of power required to build these things is ludicrous when you compare them to the capacity and power requirements of the human brain.
Re: (Score:2)
You need to set your expectation right. From personal experience, current LLMs can't always generate complete and accurate code on the first iteration. But they are improving.
What they give you right now is a substantial boost in productivity. For instance, they will:
- produce code that is mostly accurate; you will need to go through it to make it fully functional, but it is way better than coding from scratch.
- analyze your code and give you helpful suggestions on how to improve it.
- spot bugs in your code.
Pretty on point... (Score:5, Interesting)
It's certainly categorically new and will have some applications, but there have been some rather persistent "oddities" that seem to limit the potential. Meanwhile, some impossibly large amounts of money are being thrown around as if the age of artificial super intelligence is now just a few months away.
Fully expect one of a few ways the scenario ends poorly for the big spenders:
-Turns out that our current approaches, with any vaguely feasible amount of resources, will not provide a qualitative experience significantly better than Copilot/ChatGPT today. It suddenly became an amazing demo from humble, hilarious beginnings, but has kind of plateaued despite the massive spend, so this scenario wouldn't surprise me.
-A breakthrough will happen that gets to "magic" but with a totally different sort of compute resource than folks have been pouring money into, making all the spending to date pointless.
-The "ASI" breakthrough happens and completely upends the way the economy works and renders all the big spending moot.
Re: (Score:3)
Re: (Score:3)
The trend is not toward mixtures of experts, which people liked in the 1990s. The trend is to let the data magically solve the applied-math modelling problem. There will be a reckoning, but it likely won't be in your or my lifetime.
Re: (Score:2, Interesting)
> as if the age of the artificial super intelligence is now a few months away
I think people are too obsessed with AGI. We don't actually need AGI for AI to change the world radically. But I think the best guess is between 2028 and 2040.
> making all the spending to date pointless.
This scenario became impossible when Google released the results from AlphaFold 2. Even if that were the only thing AI research ever managed to create, it would still be enough to make all the spending worth of
"Flying colors" describes a failure condition. (Score:2, Interesting)
It seems to me that one of the following must be true. Either true, human-level artificial intelligence will be achieved
I doubt the last case is true, unless civilization destroys itself somehow before it happens. In the other cases, well, there's really no excuse for not planning for how we're going to safely integrate artificial intelligence into our civilization with a minimum of h
Re: (Score:3)
Surviving with automation is actually quite simple. Governments need to buy all of the companies themselves. After that, a government has two options:
- Take the profits from products and distribute the money to people without jobs
- Give out free products, like food, to the people.
Linus has become the old man shaking his fist (Score:3, Interesting)
GPT-4/5/6 might not replace him as a kernel architect, but it sure as hell is (and will increasingly be) making a ton of people in a lot of industries waaay more productive. There isn't an infinite supply of work, so a lot of jobs will go away--never to return.
And no, this isn't some millennial/Zoomer potshot: I'm two years older than Linus
Re:Linus has become the old man shaking his fist (Score:5, Insightful)
No, he gets it. There's way too much BS associated with this right now. "Autocorrect on steroids" is spot on.
Re:Linus has become the old man shaking his fist (Score:4, Insightful)
They're both right.
LLMs are garbage, completely incapable of replacing people, basically autocorrect on steroids. Lots of jobs will be lost. "Productivity," as measured by the amount of bullshit generated, will increase exponentially.
And the enshittification of everything will continue unabated.
Re:Linus has become the old man shaking his fist (Score:4, Interesting)
I thought it was bullshit too until GPT-4 came out and I used it for some non-trivial tasks.
Re: Linus has become the old man shaking his fist (Score:2)
Re: Linus has become the old man shaking his fist (Score:4, Insightful)
And remember right now is the LEAST capable this technology will ever be
No, right now is the MOST capable these statistical context models will ever be. After this, their input data is corrupted by their own output stream.
Re: (Score:2)
...and their PR departments. It didn't take long for AI systems to start offering people some good old-fashioned politically correct ethnic diversity!
Re: (Score:2)
> debugging, code review, or data extraction
I am working on my own program, in order to make money. Tell me why I would upload my code to a 3rd party that will then use it/make it available to the world?
Re: (Score:2)
People want to take from the public domain and not give back to it.
Re: Linus has become the old man shaking his fist (Score:1)
Re: Linus has become the old man shaking his fist (Score:1)
Re: (Score:3)
I think people misunderstand the power of LLMs.
LLMs are not good at giving you answers to anything, because of hallucinations. That is what most people try to use them for, and that is why they feel LLMs are not good for anything.
LLMs are good at modifying the input you give them. For example, you can give one a book and ask it to write a summary of it. Or you can give it all the medical research papers in the world and ask it to create a summary of research papers related to X, or you can give it some code and ask it
Re: (Score:2)
Re: (Score:2)
You talk about summaries, but you don't define what a summary *is*, and you have no way to measure how good a purported LLM-produced summary is relative to some standard. You're just engaging in wishful thinking about its usefulness at this point in time.
In actual fact, a summary, however reasonably defined, has certain properties such as describing the content of some text in fewer words than the original. That makes it a kind of lossy compression algorithm. The problem i
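The one property the parent does concede, that a summary describes the original in fewer words, is at least trivially measurable. A minimal sketch (the function name is made up for illustration, and this says nothing about summary *quality*, only length):

```python
def compression_ratio(original: str, summary: str) -> float:
    """Words in the summary divided by words in the original.

    A purported summary should come in well below 1.0; anything near
    or above 1.0 is not compressing at all.
    """
    return len(summary.split()) / len(original.split())

# A ten-word "original" reduced to a five-word "summary":
ratio = compression_ratio(
    "the quick brown fox jumps over the lazy sleeping dog",
    "a fox jumps over dogs",
)
```

Of course, the hard part the parent is pointing at, whether the shorter text faithfully preserves the content, is exactly what a ratio like this cannot capture.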
Re: (Score:3)
Once again, and a little louder: it doesn't have to be AGI to cause huge disruptions. My experience and the experience of many others is that current LLMs in the GPT-4 class can be huge productivity enhancers. Connect the dots.
Re: Linus has become the old man shaking his fist (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:3)
Won't this AI code result in hacky code? Easy to make, very difficult to maintain. Especially without AI?
Right now, I think LLMs are not nearly efficient enough to run on relatively modest hardware. When I was playing around with TaskWeaver some 6 months ago, I couldn't help but enjoy the things I could do with it. Today I made a new TaskWeaver setup: its own code-base appears 3 times as large, and it won't work without Docker anymore.
Docker on Windows... that is a special kind of hell, so I set up a fresh
Re: (Score:2)
> Won't this AI code result in hacky code? Easy to make, very difficult to maintain. Especially without AI?
As Torvalds said, humans have no problem creating bad code without AI, so there is nothing new here.
Re: (Score:3)
Linus totally misses the point, which is kind of unusual.
GPT-4/5/6 might not replace him as a kernel architect, but it sure as hell is (and will increasingly be) making a ton of people in a lot of industries waaay more productive. There isn't an infinite supply of work, so a lot of jobs will go away--never to return.
And no, this isn't some millennial/Zoomer potshot: I'm two years older than Linus ...
I'm not sure your intuition is correct. Sure the supply of work isn't infinite but it does increase when productivity goes up.
Look at a website designer, in the early 90s you were writing HTML and CSS by hand, drawing icons with crappy editors, etc, etc.
Now, you've got crazy libraries and full-fledged website builders, I'm guessing a modern web designer is MUCH more productive.
The result? There's waaay more website designer jobs out there, that's partially because the Internet is bigger, but also because yo
Re: (Score:2)
Pattern of change (Score:5, Interesting)
> he thinks most AI today is "autocorrect on steroids."
That is a step up, Jetsons or not.
A historian on NPR noticed that past technical booms have a common pattern: investors over-estimate the short-term impact but under-estimate the medium and longer-term impact.
Most of the first batch of prominent Dot-Com co's folded, but the next generation of co's completely changed commerce, social interaction, and news.
The first railroad co's overbuilt track and most failed or had to merge to survive. But the vast network of tracks eventually got heavily used and revolutionized commerce.
If AI follows that pattern, most initial ideas and companies will flop, but will plant the seeds for the next generation to thrive on.
However, that may be survivor bias. There was an AI boom in the late 80's that failed to make a significant dent. We don't know if the current batch will similarly stall for a few decades.
Embed an LLM in Linux (Score:1)
Re: (Score:3)
You are probably talking about WarpAI.
It is currently available in free and paid versions. It installs and runs its own terminal application inside Linux. It didn't run when I tried to install it on a Linux server edition without a GUI. You may have more luck with that, but I don't expect it. However, if you run Linux with a GUI together with WarpAI, the concept of the 'Linux terminal' becomes a lot less daunting. Just tell it what you wish to accomplish and it will either do it for you and/or create the scripts fo
Re: (Score:2)
> Well an LLM interface could help with that - have it translate user commands (optional voice interface) into actions in Linux
User: Computer, remove my files... ... in the trash bin.
Computer: Done!
User:
Re: (Score:2)
What whims are you referring to? It is not like he has changed the language of the kernel every year.
The Hype (Score:3, Insightful)
The hype was inserted when somebody cleverly but callously put “Intelligence” in the brand name Artificial Intelligence. Nobody can even describe what regular, natural intelligence is, let alone describe it sufficiently to distinguish between natural and artificial intelligence. If they just claimed that it's a significant beneficial improvement over existing algorithms, that would be fine, but claiming we can create a conscious intelligence - or that we are even moving in that direction - has not been justified. I say take the word “intelligence” out of the name, calm down, and make the best of this advancement.
Re: (Score:2)
The hype was inserted when somebody cleverly but callously put “Intelligence” in the brand name Artificial Intelligence. Nobody can even describe what regular, natural intelligence is, let alone describe it sufficiently to distinguish between natural and artificial intelligence. If they just claimed that it's a significant beneficial improvement over existing algorithms, that would be fine, but claiming we can create a conscious intelligence - or that we are even moving in that direction - has not been justified. I say take the word “intelligence” out of the name, calm down, and make the best of this advancement.
No, the word "intelligence" is not the problem. The problem is the ambiguity of the word "artificial". Many people read it as in "artificial light", which basically means REAL light from MAN-MADE light sources. How it should be read is as in "artificial smile" - i.e. the intelligence not being genuine and real - it is just faking it.
For A Hammer Everything Is A Nail (Score:2)
Machine Learning is not within the grasp of someone who has only done systems programming his whole life.
Re: For A Hammer Everything Is A Nail (Score:1)