2015 Radio Interview Frames AI As 'High-Level Algebra' (doomlaser.com) 56

Longtime Slashdot reader MrFreak shares a public radio interview from 2015 discussing artificial intelligence as inference over abstract inputs, along with scaling limits, automation, and governance models in which for-profit engines are constrained by nonprofit oversight: Recorded months before OpenAI was founded, the conversation treats intelligence as math plus incentives rather than something mystical, touching on architectural bottlenecks, why "reasoning" may not simply emerge from brute force, labor displacement, and institutional design for advanced AI systems. Many of the themes align closely with current debates around large language models and AI governance.

The recording was revisited following recent remarks by Sergey Brin at Stanford, where he acknowledged that despite Google's early work on Transformers, institutional hesitation and incentive structures limited how aggressively the technology was pursued. The interview provides an earlier, first-principles perspective on how abstraction, scaling, and organizational design might interact once AI systems begin to compound.


  • Recorded months before OpenAI was founded, the conversation treats intelligence as math plus incentives rather than something mystical

    Great, maybe these people can add their insight to Wikipedia [wikipedia.org], since clearly no one thought of AI that way before 2015.

    In fact, if we're going to be reductionist, we could say that incentives are also just math so put me on Wikipedia too, since no one has ever thought of that either!

    • Great, maybe these people can add their insight to Wikipedia [wikipedia.org], since clearly no one thought of AI that way before 2015.

Even for your sarcastic joke, why bother? The AI article on Wikipedia already has this description, including links to the papers from the '50s and '60s which describe how AI algorithms work in this way.

  • Anyone know how the internet was framed back in 2006?

  • This is basically how the HHGTTG handled AI for some of the robots: just a reward sensor, then let the robot figure out the problem itself.
  • by Anonymous Coward
    That may be true, but I can ask about accounting treatment and get a well-reasoned multi-page complex answer instantly. I can ask for computer code to be written and literally days of work will be done in seconds. I can feed my broken code in, describe the error and it will find the bug. I can transform famous photos into photorealistic photos of muppets which would probably cost millions to do with puppeteers and set builders. It's like saying HTTP is not much different from FTP so the world wide web is in
  • "Mystical"? (Score:4, Interesting)

    by fluffernutter ( 1411889 ) on Wednesday December 24, 2025 @07:29AM (#65879309)
    Not understanding something does not mean it is mystical. Few people know how the H.265 codex works, but we all know it is not 'mystical' because it runs on a computer system and we know that the computer system has rules and the video is shown based on those rules. By extension, fewer people know how LLMs work. Maybe no one knows how LLMs work. But it still holds true that it is running on the same computer systems that are bound by rules so therefore it is not mystical. Our own brains may or may not be mystical as we don't know enough about the brain to create one, as we have created computers.
    • Not understanding doesn't actually make it mystical, but when you pile lack of understanding upon more of the same, you can get to basically the same place in the end. After enough steps of not understanding you might as well be looking at magic, because you don't understand even the concepts you need to understand the idea you're thinking about.

      Some people know how LLMs work, that's how we got them. But a person can't reasonably understand every detail of how an LLM of any significant complexity produced a

    • Even less know the difference between a codex and a codec, apparently.

      Further, your logic is so fucking boneheaded.
      It goes, as such:
      Since we understand the fundamental parts of a computer, we understand that it is bound by rules and thus not mystical. Good. Same page.
      But then you seem to imply that your brain follows an entirely different set of logic rules:

      Our own brains may or may not be mystical as we don't know enough about the brain to create one, as we have created computers.

      Au contraire, my low-IQ friend.
      We don't know enough about the brain in the same way that we don't know enough about an LLM.
      The metabehavior of th

      • You are way overstating the amount they know about the brain. They have no idea what role any given hormone plays in shaping our thoughts. They don't even really know what dopamine is for. They have no idea how to cure someone whose brain is misfiring. There is no cure for schizophrenia, PTSD or bipolar disorder like replacing a video card. If we understood the brain as well as we understand computers, we would build brains instead of computers.
        • You are way overstating the amount they know about the brain.

          No, I am not.

          They have no idea what role any given hormone plays in shaping our thoughts.

          You're conflating signal with metabehavior.
          A hormone is merely a signal into a neuron.
          Thoughts are a behavior of the system as a whole.

          They don't even really know what dopamine is for.

          See above.

          You're going to continue to make this mistake.

          You have 2 paths forward, from here.
          To continue to be willfully wrong, or to accept that your argument is substrate independent.
          We understand as much about the brain as we understand about a "computer".
          They fallow very similar sets of basic rules.
          It is only the system as a whole that is confoundin

          • s/fallow/follow/;
          • Ok so how does dopamine affect the signal? You can't know how they work without knowing all the influences. Maybe I'm wrong in a world where you can produce the answer. How many hormones are there that influence firing of neurons? It seems like you want to say that none of this has any impact on our thoughts, but they know it does. They just don't know how.
            • It does not matter.
              That's like asking how an EM field affects the input to a transistor.

              You are trying to graft the behaviors of the system at scale to its smallest components. Neurons do not think. Dopamine has no special significance to them other than to alter their action potential.
              At the core, your brain is a simple threshold logic machine. One that is almost too fucking complicated to really imagine.
              Even our largest LLMs are nowhere close to the connectivity of your brain. They're 2 orders of mag
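The "simple threshold logic machine" framing above can be sketched in a few lines of Python. This is a toy illustration, not anything from the interview; the weights, threshold, and `modulation` parameter are all made up for the example:

```python
# Toy threshold unit (a la McCulloch-Pitts): fires iff the weighted
# sum of its inputs crosses a threshold.
def threshold_unit(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A modulatory signal (the dopamine analogy in the thread) does not
# "think"; it merely shifts the effective threshold of the unit.
def modulated_unit(inputs, weights, threshold, modulation=0.0):
    return threshold_unit(inputs, weights, threshold - modulation)

# Two inputs wired as an AND gate:
print(threshold_unit([1, 1], [0.6, 0.6], 1.0))  # 1
print(threshold_unit([1, 0], [0.6, 0.6], 1.0))  # 0
# With some extra "modulation", the same unit fires on a single input:
print(modulated_unit([1, 0], [0.6, 0.6], 1.0, modulation=0.5))  # 1
```

The point of the sketch: nothing in the unit is mystical, yet its behavior depends on signals that only alter thresholds, which matches the "signal vs. metabehavior" distinction being argued here.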
                • Please tell me your evidence that dopamine does not shape thoughts and that affects of hormones do not affect human thinking. It seems to me that they affect human thinking a great deal. Van Gogh's bipolar disorder drove his art. People kill due to schizophrenia, etc.
                • Please tell me your evidence that dopamine does not shape thoughts and that affects of hormones do not affect human thinking.

                  I literally did not say that. You're so stuck in your preconceptions that your eyes refuse to read what's right in front of them.
                  I'll quote myself: "Dopamine has an effect on thought, because it is a signal into the neural net that affects its large scale behavior."
                  That's dogma over data, and it's doing you no favors. You're going to be wrong about a lot of shit in this world with your perceptive abilities so crippled.

                  The problem with this discussion is your conflation of neurons and the brain.
                  Your arg

                  • So let's change the discussion. If you feel computers are capable of mystical behavior, what other inanimate objects are? Can a travel alarm clock or a calculator be mystical?
                    • I have claimed no such mystical behavior. I have claimed emergent behavior.
                      Now, my turn: Why is it that you believe that animate objects are capable of mystical behavior?
                    • I don't. I just can't rule out anything that hasn't been disproven.
                    • Then that is why you fail.
                      You can't rule out that there is a flying spaghetti monster in the sky operating your brain via puppet strings. That doesn't mean that it's rational to consider the possibility. You are, whether you mean to or not, ascribing magical properties to neurons and chemicals when trying to draw the distinction you are drawing between computers and brains. This is a simple immutable fact: If the brain follows the known laws of physics, and there is absolutely no good reason to think that
                    • Ok so the brain isn't mystical either. Doesn't mean AI is
                    • Here, you and I finally agree.
                      There is, indeed, nothing fucking magical about any of this.

                      When you have fucking cosmically large logic networks, you can only describe the shit they can pull off as "emergent", because you have no hope in hell of ever fully reverse engineering the system.

                      The only pointed criticism of your otherwise pretty intelligent OP, was of this:

                      Our own brains may or may not be mystical as we don't know enough about the brain to create one

                      Our brains aren't mystical. If they are, they're the first thing in the entire universe we have encountered that is, and some day, we may ne

  • ...you don't care about the correctness of your calculations.

    • Ouch. You hit the nail on the head. But, you do realize that AI-advocates want to make LLMs the STANDARD of correctness. So any human-derived solution to a problem that deviates from the AI-solution is -- by definition -- just wrong. Financially HUGE AI-companies might convince dystopian governments that they (company) are "too big to fail" . Thus the gub'mnt gun-barrel will defend LLM "truthfulness" while suppressing non-machine analysis. Analogous to USA bailouts of banks, ca
      • Well, luckily maths is like Shakira's hips - they don't lie. What AI-advocates want, and what AI-advocates can have, are two separate things.

  • Add a random jitter to the LLM and we're at AGI. Human brains are not as special as we think. Grokking aka "Getting It" https://en.wikipedia.org/wiki/... [wikipedia.org]
  • I don't know if you would call that high-level, though.
  • Algebra is in the same basket as logic and set theory (with some limitations). It delivers absolutely correct results, but has very large and deep search spaces and hence a strong tendency to run into state-space explosion and expensive computations.

    LLMs are statistics, which is an entirely different basket. It comes with results that are only probably or approximately true, and with very flat search spaces, giving fast computations, but also hallucinations.

    Or in other words: Algebra delivers reliably, but the cost may be large and oft
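The exact-but-expensive vs. fast-but-fallible contrast can be sketched with a toy problem. The problem (`x + y == 10`), the sample count, and the seed are all invented for illustration:

```python
import itertools
import random

# Exact ("algebraic") approach: exhaustively solve x + y == 10 over 0..9.
# Guaranteed correct, but the search space grows combinatorially with
# the number of variables.
exact = [(x, y) for x, y in itertools.product(range(10), repeat=2)
         if x + y == 10]
print(len(exact))  # 9 exact solutions

# Statistical approach: sample candidates and keep the best guess seen.
# Fast and flat, but the answer is only approximately right, and may be
# wrong outright (the "hallucination" failure mode).
random.seed(0)
samples = [(random.randrange(10), random.randrange(10)) for _ in range(20)]
approx = min(samples, key=lambda p: abs(p[0] + p[1] - 10))
print(approx)  # close to summing to 10, but not guaranteed
```

Scaling the exact search to three variables already means 1000 candidates instead of 100, while the sampling cost stays whatever you budget for it.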

    • Statistics is based on algebras, known as sigma algebras.

    • Idiocy.
      LLM inference is fundamentally linear algebra.

      Beyond that, the model can learn symbolic algebraic reasoning, just like a human can.
      The substrate of that reasoning is not relevant in the slightest.
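For what it's worth, the "LLM inference is fundamentally linear algebra" claim reduces to something like the sketch below: hidden state times weight matrix, then softmax into a distribution over tokens. The tiny matrix, vector, and vocabulary size are made up for illustration:

```python
import math

# Matrix-vector product: the core operation of LLM inference.
def matvec(matrix, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

# Softmax turns raw logits into a probability distribution over tokens;
# this is where the "statistics" come out of the algebra.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "unembedding" matrix mapping a 2-d hidden state to a 3-token vocab.
W = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5]]
hidden = [2.0, 0.0]

probs = softmax(matvec(W, hidden))
print(max(range(len(probs)), key=probs.__getitem__))  # token 0 is most likely
```

Real models stack many such layers with nonlinearities in between, but each step is still matrix arithmetic; the output distribution is then sampled token by token.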
      • I believe you may be confusing statistical inference with logical inference. LLMs use linear algebra to do the former. The latter is the province of propositional and predicate logic (expert systems, theorem provers, etc., for example, Prolog).

        • I believe you may be confusing statistical inference with logical inference.

          No... I'm not.

          LLMs use linear algebra to do the former.

          Correct... I implied nothing else.

          FTA:
          In that sense, LLMs feel less like a revolution and more like a delayed convergence—linear algebra finally getting enough data, enough depth, and enough money to show its teeth.

          The latter is the province of propositional and predicate logic (expert systems, theorem provers, etc., for example, Prolog).

          And LLMs, as it turns out, since they can also do linear algebra token-by-token.

          • So we will be having the solution to P ?= NP directly, then ? :) Sweet Jesus, maybe so...

            • If humans have been unable to provide a proof for P ?= NP, why is it you think a language model that has learned human logic would be able to?

              I mean sure, it's possible. LLMs have proven able to formulate proofs. But there are still far more humans that are smarter than these things, and they still haven't gotten it.

              I'm confused about whether you're trying to imply that proving P ?= NP is the threshold for demonstrating the ability to logically infer, and thus implying that humans are incapable of logic
              • My last remark was a bit of whimsy -- I would find it amusing if an unresolved great question in computational complexity was resolved by a computer. I wasn't trying to imply anything.

                As for how long it's going to take, who knows ? Fermat's Last Theorem took 358 years, and lots of very bright bulbs took a swing at it, including Gauss and Euler (although both found a bit of traction).

  • by Anonymous Coward

    A book is an ordered collection of letters.

    You can't deny that. Still somehow books are more than just their letters.

  • Obligatory XKCD cartoon [xkcd.com] (circa May 2017).
  • Ignoramuses will be ignoramuses. Language is algebra. Of course, the average person who thinks they know everything doesn't know what [abstract] algebra actually is. Maybe they need to read some Noam Chomsky before he became political.

  • by Thelasko ( 1196535 ) on Wednesday December 24, 2025 @04:34PM (#65880371) Journal
    It's not Algebra, it's statistics. [wikipedia.org]
