2015 Radio Interview Frames AI As 'High-Level Algebra' (doomlaser.com) 56
Longtime Slashdot reader MrFreak shares a public radio interview from 2015 discussing artificial intelligence as inference over abstract inputs, along with scaling limits, automation, and governance models in which for-profit engines are constrained by nonprofit oversight: Recorded months before OpenAI was founded, the conversation treats intelligence as math plus incentives rather than something mystical, touching on architectural bottlenecks, why "reasoning" may not simply emerge from brute force, labor displacement, and institutional design for advanced AI systems. Many of the themes align closely with current debates around large language models and AI governance.
The recording was revisited following recent remarks by Sergey Brin at Stanford, where he acknowledged that despite Google's early work on Transformers, institutional hesitation and incentive structures limited how aggressively the technology was pursued. The interview provides an earlier, first-principles perspective on how abstraction, scaling, and organizational design might interact once AI systems begin to compound.
Re: (Score:2)
Great! (Score:2)
Recorded months before OpenAI was founded, the conversation treats intelligence as math plus incentives rather than something mystical
Great, maybe these people can add their insight to Wikipedia [wikipedia.org], since clearly no one thought of AI that way before 2015.
In fact, if we're going to be reductionist, we could say that incentives are also just math so put me on Wikipedia too, since no one has ever thought of that either!
Re: (Score:2)
Great, maybe these people can add their insight to Wikipedia [wikipedia.org], since clearly no one thought of AI that way before 2015.
Even for your sarcastic joke, why bother? The AI article for wikipedia already has this description, including links to the papers from the 50s and 60s which describe how AI algorithms work in this way.
A Series of Tubes (Score:2)
Anyone know how the internet was framed back in 2006?
Also (Score:2)
Re: Also (Score:1)
Re: (Score:2)
Knowing most people, we would help. It's better to bootlick and stick around than be the target.
okay (Score:1)
"Mystical"? (Score:4, Interesting)
Re: (Score:1)
Not understanding doesn't actually make it mystical, but when you pile lack of understanding upon more of the same, you can get to basically the same place in the end. After enough steps of not understanding you might as well be looking at magic, because you don't understand even the concepts you need to understand the idea you're thinking about.
Some people know how LLMs work, that's how we got them. But a person can't reasonably understand every detail of how an LLM of any significant complexity produced a
Re: (Score:2)
Re: (Score:2)
Further, your logic is so fucking boneheaded.
It goes, as such:
Since we understand the fundamental parts of a computer, we understand that it is bound by rules and thus not mystical. Good. Same page.
But then you seem to imply that your brain follows an entirely different set of logic rules:
Our own brains may or may not be mystical as we don't know enough about the brain to create one, as we have created computers.
Au contraire, my low-IQ friend.
We don't know enough about the brain in the same way that we don't know enough about an LLM.
The metabehavior of th
Re: (Score:2)
Re: (Score:2)
You are way overstating the amount they know about the brain.
No, I am not.
They have no idea what role any given hormone plays in shaping our thoughts.
You're conflating signal with metabehavior.
A hormone is merely a signal into a neuron.
Thoughts are a behavior of the system as a whole.
They don't even really know what dopamine is for.
See above.
You're going to continue to make this mistake.
You have 2 paths forward, from here.
To continue to be willfully wrong, or to accept that your argument is substrate independent.
We understand as much about the brain as we understand about a "computer".
They follow very similar sets of basic rules.
It is only the system as a whole that is confoundin
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
That's like asking how an EM field affects the input to a transistor.
You are trying to graft the behaviors of the system at scale to its smallest components. Neurons do not think. Dopamine has no special significance to them other than to alter their action potential.
At the core, your brain is a simple threshold logic machine. One that is almost too fucking complicated to really imagine.
Even our largest LLMs are nowhere close to the connectivity of your brain. They're 2 orders of mag
Re: (Score:2)
Re: (Score:2)
Please tell me your evidence that dopamine does not shape thoughts and that hormones do not affect human thinking.
I literally did not say that. You're so stuck in your preconceptions that your eyes refuse to read what's right in front of them.
I'll quote myself: "Dopamine has an effect on thought, because it is a signal into the neural net that affects its large scale behavior."
That's dogma over data, and it's doing you no favors. You're going to be wrong about a lot of shit in this world with your perceptive abilities so crippled.
The problem with this discussion is your conflation of neurons and the brain.
Your arg
Re: (Score:2)
Re: (Score:2)
Now, my turn: Why is it that you believe that animate objects are capable of mystical behavior?
Re: (Score:2)
Re: (Score:2)
You can't rule out that there is a flying spaghetti monster in the sky operating your brain via puppet strings. That doesn't mean that it's rational to consider the possibility. You are, whether you mean to or not, ascribing magical properties to neurons and chemicals when trying to draw the distinction you are drawing between computers and brains. This is a simple immutable fact: If the brain follows the known laws of physics, and there is absolutely no good reason to think that
Re: (Score:2)
Re: (Score:2)
There is, indeed, nothing fucking magical about any of this.
When you have fucking cosmically large logic networks, you can only describe the shit they can pull off as "emergent", because you have no hope in hell of ever fully reverse engineering the system.
The only pointed criticism of your otherwise pretty intelligent OP, was of this:
Our own brains may or may not be mystical as we don't know enough about the brain to create one
Our brains aren't mystical. If they are, they're the first thing in the entire universe we have encountered that is, and some day, we may ne
Only if... (Score:2)
...you don't care about the correctness of your calculations.
Re: (Score:2)
Re: (Score:2)
Well, luckily maths is like Shakira's hips - they don't lie. What AI-advocates want, and what AI-advocates can have, are two separate things.
See also Grokking - aka "Getting It" (Score:1)
Re: (Score:2)
Nonsense. You can program a neural net to add and multiply better than a human. You can specifically teach one to add and multiply about as well as a human.
LLMs typically don't because they're not trained to do so. AI is about making machines that can do things humans do well, not things machines do well.
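If anyone doubts the "program" part, here's a trivial sketch (mine, not anything from the article or the GP): exact addition is literally a single linear neuron.

import numpy as np

# Toy illustration: a one-neuron "network" with weights [1, 1] and bias 0
# adds two numbers exactly. Multiplication takes a bit more machinery
# (e.g. a small net trained in log space), but the point stands.
w = np.array([1.0, 1.0])
b = 0.0

def add(x, y):
    return float(w @ np.array([x, y]) + b)

print(add(43312.0, 13453.0))   # 56765.0 -- exact, no training required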
Re: (Score:2)
The catch is, it needs to be a "thinking" LLM.
For a non-thinking LLM, there's a fundamental limit to how well it can calculate, since its context is its only state.
Expecting an LLM to produce the answer to a complex calculation without showing its work is absurd.
This is just the latest, "LLMs can't tell you how many 'r's are in strawberry" (an also completely incorrect claim, for any v
Re: (Score:2)
I don't see why. Minsky made famous the fact that you can trivially encode any logic gate except XOR into a one-layer perceptron. In the interest of selling books he coyly failed to note that you can encode XOR into a two-layer one. With just NAND or NOR you can build a standard arithmetic unit. The "calculator" the OP mentioned *is* a neural network.
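For the curious, the two-layer XOR encoding is small enough to write out by hand (toy threshold units with hand-picked weights, not a trained net):

def step(x):
    # Heaviside threshold unit: fires iff the weighted input is positive.
    return 1 if x > 0 else 0

def xor(a, b):
    # Hidden layer: an OR perceptron and a NAND perceptron.
    h_or = step(a + b - 0.5)
    h_nand = step(1.5 - a - b)
    # Output layer: AND of the two hidden units yields XOR.
    return step(h_or + h_nand - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0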
The hard part is getting the system to recognize that some dipshit is asking
Re: (Score:2)
I don't see why. Minsky made famous the fact that you can trivially encode any logic gate except XOR into a one-layer perceptron. In the interest of selling books he coyly failed to note that you can encode XOR into a two-layer one. With just NAND or NOR you can build a standard arithmetic unit. The "calculator" the OP mentioned *is* a neural network.
There's no question that you can encode an ALU in a chain of MLPs. That's not what an LLM is, though.
The hard part is getting the system to recognize that some dipshit is asking it to do some arithmetic, extract the relevant operands, and pass it off. That's an actual AI problem and it's something LLMs are pretty good at.
Pass it off to what? An ALU that you imagine formed inside of the transformer layers?
It is pretty absurd to expect a thing specifically designed and trained to interpret language to be good at arithmetic and to figure out its own algorithms for doing it. Humans, at least the vast majority of them, who get a lot more training than just reading books, can intuitively recognize numbers up to about four. Everything else is spatial pattern recognition (e.g. arrangement of dots on a die) or counting. We can't intuitively do arithmetic much at all; we use memorization and laboriously learned algorithms, usually involving external aids like pencil and paper, much like reasoning LLMs combine language models with other types of model, scratch memory, and other tools.
Bingo.
However, they're just fine at doing it. They just need to be able to show their work.
If you ask an LLM to produce an answer to 43312 x 13453 in one-shot, it will hallucinate the answer. It produces tokens. There isn't an ALU in there.
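Spelling out the kind of intermediate "work" that means (a toy partial-products breakdown of that exact multiplication, same trick as pencil-and-paper long multiplication):

a, b = 43312, 13453
# Break the second operand into place values and accumulate partial products,
# the way a person (or a model emitting its scratch work token by token) would.
partials = [a * 10000, a * 3000, a * 400, a * 50, a * 3]
print(partials)                  # [433120000, 129936000, 17324800, 2165600, 129936]
print(sum(partials))             # 582676336
print(sum(partials) == a * b)    # True -- the decomposition checks out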
It needs to be able to generate enough tokens to solve it like a person would in t
Re: (Score:2)
An LLM is absolutely a chain of MLPs. It's 60-70% flat out MLPs and the remaining bit, the transformers, are mathematically equivalent to MLPs with a few extra bits interspersed.
You could probably arrange something like that if you trained it with that in mind. It would be dumb though. You pass it off to a regular ALU, which
Re: (Score:2)
An LLM is absolutely a chain of MLPs. It's 60-70% flat out MLPs and the remaining bit, the transformers, are mathematically equivalent to MLPs with a few extra bits interspersed.
Dude, are you being deliberately obtuse?
Of course they're a fucking chain of MLPs.
What they are not are ALUs.
There is no reasonable way for an ALU to come into being in the way that they're trained. None.
That's why I said LLMs are not that. You could absolutely train an NN to be an ALU, with ease.
You could probably arrange something like that if you trained it with that in mind. It would be dumb though. You pass it off to a regular ALU, which is what your reasoning models are doing. The OP I replied to claimed AI, which is not LLMs, could not add or multiply. There is a long and stupid history of claiming that universal approximators simply cannot approximate some function.
No, that is not what reasoning models are doing.
They have learned to solve the problem like a human would, not like an ALU. That's why it takes so many tokens to do it. They must show their work.
There's no qu
Re: (Score:2)
Sigh.
Re: (Score:2)
What constitutes a "thinking" LLM ?
Re: (Score:2)
Linear algebra, yes. (Score:2)
A complete fail to understand (Score:1)
Algebra is in the same basket as logic and set theory (with some limitations). It delivers absolutely correct results, but has very large and deep search spaces, and hence a strong tendency to run into state-space explosion and expensive computations.
LLMs are statistics, which is an entirely different basket: results that are only probably or approximately true, very flat search spaces that give fast computations, and hallucinations.
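A toy way to see the two baskets (my sketch, nothing to do with LLM internals): exact logical search is always right but enumerates exponentially many states, while a statistical guess is cheap, flat, and only probably informative.

import itertools, random

# Tiny CNF formula over variables 1..3: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [(1, 2), (-1, 3), (-2, -3)]

def satisfied(assignment):
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

n = 3
# Exact (the logic/algebra basket): enumerate all 2^n assignments.
# Always correct, but the cost explodes as n grows.
exact = any(satisfied(dict(zip(range(1, n + 1), bits)))
            for bits in itertools.product([False, True], repeat=n))

# Statistical (the other basket): sample a few random assignments.
# Fast, but the answer is only probably right and can simply be wrong.
sampled = any(satisfied({v: random.random() < 0.5 for v in range(1, n + 1)})
              for _ in range(5))

print(exact, sampled)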
Or in other words: Algebra delivers reliably, but the cost may be large and oft
Re: (Score:2)
Statistics is based on algebras, known as sigma algebras.
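For anyone who hasn't met the term, the standard textbook definition (not from the thread): a collection F of subsets of a sample space Omega is a sigma-algebra when

\Omega \in \mathcal{F}, \qquad
A \in \mathcal{F} \;\Rightarrow\; \Omega \setminus A \in \mathcal{F}, \qquad
A_1, A_2, \ldots \in \mathcal{F} \;\Rightarrow\; \bigcup_{n \ge 1} A_n \in \mathcal{F}

and a probability measure is then just a countably additive map P from F to [0, 1] with P(Omega) = 1, which is where the "statistics is built on algebras" quip comes from.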
Re: (Score:2)
LLM inference is fundamentally linear algebra.
Beyond that, the model can learn symbolic algebraic reasoning, just like a human can.
The substrate of that reasoning is not relevant in the slightest.
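To make "fundamentally linear algebra" concrete, here's a toy single-head forward step (random weights, no causal mask or layer norm, purely illustrative rather than any real model's code):

import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 8                              # 4 tokens, 8-dim embeddings
x = rng.normal(size=(T, d))              # stand-in token embeddings

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Attention: three projections, a score matrix, a weighted sum -- all matmuls.
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(d)) @ v

# Feed-forward block: two more matmuls around a ReLU.
out = np.maximum(attn @ W1, 0.0) @ W2
print(out.shape)                         # (4, 8): one "layer" of pure linear algebra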
Re: (Score:2)
I believe you may be confusing statistical inference with logical inference. LLMs use linear algebra to do the former. The latter is the province of propositional and predicate logic (expert systems, theorem provers, etc., for example, Prolog).
Re: (Score:2)
I believe you may be confusing statistical inference with logical inference.
No... I'm not.
LLMs use linear algebra to do the former.
Correct... I implied nothing else.
FTA:
In that sense, LLMs feel less like a revolution and more like a delayed convergence—linear algebra finally getting enough data, enough depth, and enough money to show its teeth.
The latter is the province of propositional and predicate logic (expert systems, theorem provers, etc., for example, Prolog).
And LLMs, as it turns out, since they can also do linear algebra token-by-token.
Re: (Score:2)
So we will be having the solution to P ?= NP directly, then ? :) Sweet Jesus, maybe so...
Re: (Score:2)
I mean sure, it's possible. LLMs have proven able to formulate proofs. But there are still far more humans that are smarter than these things, and they still haven't gotten it.
I'm confused about whether you're trying to imply that proving P ?= NP is the threshold for demonstrating the ability to logically infer, and thus implying that humans are incapable of logic
Re: (Score:2)
My last remark was a bit of whimsy -- I would find it amusing if an unresolved great question in computational complexity was resolved by a computer. I wasn't trying to imply anything.
As for how long it's going to take, who knows ? Fermat's Last Theorem took 358 years, and lots of very bright bulbs took a swing at it, including Gauss and Euler (although both found a bit of traction).
Re: (Score:2)
Re: (Score:2)
Can't. My anger is incalculable.
Reductionist (Score:1)
A book is an ordered collection of letters.
You can't deny that. Still, somehow, books are more than just their letters.
If the answers are wrong, just stir... (Score:2)
ignoramuses (Score:2)
Ignoramuses will be ignoramuses. Language is algebra. Of course, the average person who thinks they know everything doesn't know what [abstract] algebra actually is. Maybe they need to read some Noam Chomsky before he became political.
It's Not Algebra... (Score:3)