
Richard Stallman Critiques AI, Connected Cars, Smartphones, and DRM (youtube.com)

Richard Stallman spoke Friday at Atlanta's Georgia Institute of Technology, continuing his activism for free software while also addressing today's new technologies.

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't understand really what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

"So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them."

"By the way, as far as I can tell, none of them is free software."

When it comes to today's cars, Stallman says they contain "malicious functionalities... Cars should not be connected. They should not upload anything." (He adds that "I am hoping to find a skilled mechanic to work with me in a project to make disconnected cars.")

And later Stallman calls the smartphone "an Orwellian tracking and surveillance device," saying he refuses to own one. (An advantage of free software is that it allows the removal of malicious functionalities.)

Stallman spoke for about 53 minutes — but then answered questions for nearly 90 minutes longer. Here are some of the highlights...


Stallman's talk Friday covered everything from dark patterns, censorship, and back doors to subscriptions and remote updates (and remote bricking). "Insecurity that users can't fix is one of the consequences of non-free software."
  • He called age-verification laws "unjust surveillance... imposed by laws. There's been no campaign to require a compromise so that you can prove you're an adult without their knowing who you are."
  • To a question about gaming hardware, Stallman said "I don't pay much attention to games. They're almost all proprietary software, so let them end up in the trash." (Though he added later that "There are some free games implemented in free software, which is nice if you've got time to play a game.")
  • He uses a ThinkPad that's "old enough that in this model, you can shut off [Intel's] Management Engine permanently and never let it run again."
  • Richard Stallman's preferred free distro is Trisquel. But when it comes to mobile device operating systems, "Basically, they're all bad... I don't want the tracking that mobile devices do, so I don't want to have one."
  • What does he think of the push to rewrite GNU's coreutils in Rust? "I think it shows that some people hate copyright."

    And what does he think of Rust in general? "I'm in favor of all languages as long as they're implemented in free software." (His one issue is "the trademark conditions on Rust don't make it concretely clear what changes you have to make to be allowed to distribute a modified version... I sure wish they would clarify this.")
  • Has he had run-ins with U.S. intelligence agencies trying to foist back doors or surveillance into GNU? "Uh, no... I would suspect that if they wanted to do that, they wouldn't approach me."
  • Stallman also had suggestions for educators. "Universities should teach students to do reverse engineering. It should help them focus on the projects that will remove obstacles to winning freedom." And later he recommended universities try educational programs where students contribute to free software programs.
  • He also pointed out that today's web sites have what he calls "a sucker clause" typically saying "we may change these terms and conditions. If you continue to use the site, you will be held to have accepted the new conditions, even if you have not seen them." Stallman believes this should be illegal. (He says one medical site added updated terms saying users agreed not to sue over mishandling of personal data, but instead to resolve it in arbitration.)

And someone actually asked Stallman: Should I use Emacs over Vi? Stallman answered... "As long as it's a free version of Vi, then there's no moral reason one way or the other.

"But Emac loves every user and will be sad if you don't love it back."

At the end of the event, Stallman auctioned off an autographed hardcover copy of Free Software, Free Society: Selected Essays of Richard M. Stallman, an Emacs manual -- and a gnu-shaped stuffed animal (which went for $260). But towards the end, 72-year-old Stallman was asked "What do you hope to accomplish in the next 10 or 20 years with this movement..." His answer?

"Well, sad to say, I probably won't see what happens in the next 10 years. I'm getting old. I may still be alive in 10 years. But I wouldn't count on that. It's not going to be up to me for that long."

He also said, more generally, that "we don't have enough control over the situation to be able to plan for specific goals.

"We can plan for what direction to push in."

  • Speaking of cars (Score:4, Interesting)

    by quonset ( 4839537 ) on Sunday January 25, 2026 @03:51PM (#65948374)

    Directly below this story is one about Congress failing to repeal the kill switch mandate for cars. Maybe the guy is on to something.

    • by evanh ( 627108 )

      I got myself a converted car for commuting. No fanciness at all. Not even power-steering, LOL.

The battery is lithium iron phosphate and fits under the bonnet, thankfully. On the down side it only has 60 km range.

      • So a golf cart?

      • On the down side it only has 60 km range.

        So, 37.28 freedom units? That's only about 7 miles more range than my e-scooter.

        ...and I can take my scooter on the train.

        • On the down side it only has 60 km range.

          So, 37.28 freedom units? That's only about 7 miles more range than my e-scooter.

          ...and I can take my scooter on the train.

How much fun is that scooter with 3 bags of groceries and/or in the rain/snow? I had a motorcycle for a while and it sucked in all those cases. Granted, I wasn't allowed to take it on the train, but it would have fit. :-)

    • Directly below this story is one about Congress failing to repeal the kill switch mandate for cars. Maybe the guy is on to something.

      He's been screaming from the hilltops warning about pretty much every locked-down, consumer-hostile trend the tech industry has been on, even going as far back as when these ideas were just proposals. The thing is, the vast majority of consumers don't see it as a problem and buy the stuff anyway.

      It's like McDonald's - everybody knows it's unhealthy and really doesn't even taste all that good, but somehow the business is still incredibly successful.

      • It's called the market. People make their own trade-offs and buy accordingly. Most people are not coders nor do they have coder-friends, so giving them the source code of everything they buy is not of much use to them. Which is why they buy Windows laptops or Macs, which are easy to use OOTB, and have companies they can go to if things don't work

        I do agree that it is disgraceful, the trends that Apple, Microsoft and Google are going through, doing things like almost bricking old Apple devices, infestin

    • Directly below this story is one about Congress failing to repeal the kill switch mandate for cars. Maybe the guy is on to something.

      He is on the Left. Maybe he could pick up the phone and tell all his Democrat friends to repeal the kill switch mandate. It could then be a grand bipartisan move, just like the one to open up the Epstein files

Also, why does he need to find a skilled mechanic to work on a disconnected car? That's how cars used to be until just a few years ago. I still drive a 2014 model, and it's pretty disconnected. Real issue is that as a communist, he doesn't have the first clue about how companies work, so that ev

  • by FeelGood314 ( 2516288 ) on Sunday January 25, 2026 @03:52PM (#65948382)
    Who always seems to be right.
    • by allo ( 1728082 ) on Sunday January 25, 2026 @04:09PM (#65948416)

And he is the only one who is consistent about it. Most people here know about enshittification, know why ToS are unfair to users and hugely in favor of companies, and know that most proprietary software includes anti-features working against the users. Many here are old enough to remember when virus scanners flagged "adware", and know that nearly every app contains not only ad frameworks but also tracking frameworks and sometimes other ways to monetize your device.

Still, most choose convenience and just accept it the way it is. You may have a custom ROM for Android, but did you uninstall the Play Store? And if you did, are you still avoiding any non-free APK? What about your Google account? Dropbox? Some of the Meta social media? Windows? Apple? I don't know of any paranoid nerd but Stallman who 100% stands by his word and manages to avoid all these things.

      • It would help to know how much he was paid to give the speech.

It would help if these "personality X made a comment about topic Y during a speech" stories included how much the person was paid to give the speech.

        " Meanwhile, speaking is my main source of income, and the extra price of a business class ticket would be a lot more useful for me if I can spend it on something else" from http://xahlee.info/emacs/misc/... [xahlee.info]

        Asking for the same reason why nearly every memoir by someone exiting a political organization, news organization,

        • by martin-boundary ( 547041 ) on Sunday January 25, 2026 @06:52PM (#65948688)
          He's Stallman. If you don't know anything about him, you must have lived under a rock for the last 40 years. Please hand over your geek card to the attendant at the door, and walk two doors to the left. The hermit support group has already started.
        • I'm pretty sure it's those woke foundations, like Mozilla or Gnome, which would be happy to pay him. Since they nowadays blow their money on Leftist causes, rather than on their core projects. The Mozilla Foundation is no longer about browsers, while the Gnome foundation is more about making Gnome like Windows, and sabotaging X11 and non-systemd systems. I'm sure a lot of these FOSS foundations would be happy to toss cash Stallman's way
    • Re: (Score:3, Interesting)

      by hdyoung ( 5182939 )
      Repeat this after me, 5 times, please. “Outside of their field of expertise, scientists and engineers are no better than laypeople.”

      You are now a better person.

      Stallman’s published writing on the topic of CSAM suggests that, no, he’s most *definitely* not always right.

      I respect Stallman for a bunch of reasons. He’s probably got a solid 15 or 20 IQ points on me, and his views are worth considering. But that doesn’t mean that he’s always right. His extrem
    • "Pretend Intelligence" is what the word "Artificial Intelligence" already means.

    • He predicted enshittification. Guess he needed to make up a word for it. He'd still be dismissed as a nut or Luddite or communist etc. because people who are way way ahead of the curve just go over people's heads and upset them. The emotional memory and impressions remain a long time and it takes a long time for the slow people to catch up and realize the truth. They have to be reminded of the predictions and that they ignored them; maybe people respect the man for being ahead of his time but they usually R

    • by mjwx ( 966435 )

      Who always seems to be right.

      He's only the paranoid crackpot to those who don't listen.

      He's not the nutbar talking about vaccine hurties or fake birds, he's actually addressing a serious crisis in western society that we seem to be sleepwalking into because it's not affecting the average person just yet.

  • A smartphone is a handheld ARM-based computer with crappy controls and an unusual network adapter, a cellular modem. All but a few of them come with shitty closed OSes, but the same is true of desktop computers. He should try a Purism Librem 5 for example.

    There is nothing inherent to a smartphone that makes it an Orwellian surveillance and tracking device any more than there is anything inherent to a desktop computer that makes it an instrument of Microsoft's control over users.

  • by VaccinesCauseAdults ( 7114361 ) on Sunday January 25, 2026 @04:18PM (#65948426)
    "Large language models" are "artificial intelligence" in much the same way that "Linux" is "on the desktop".
    • by evanh ( 627108 ) on Sunday January 25, 2026 @05:06PM (#65948502)

      Linux has many real desktops, both in options and uses. Linux does desktops really well. If Windoze weenies choose to hide in the sand then that's on them.

      Whereas LLMs can't do any real intelligence at all.

      • by jd ( 1658 )

        Depends. If "do" is interpreted as per the Carry On films, then possibly it does.

        If "do" is interpreted to mean "can actually be deemed even vaguely like", then you're absolutely right.

I'm aware of this. We have thousands of Linux workstations running (mostly) RHEL 9 Rocky, both as dedicated developer and artist machines and in site farms for distributed compute. What I meant was, it still isn't the Year Of Linux On The Desktop, because only some tiny percentage of machines in the world with local users are running Linux.
      • by mjwx ( 966435 )

        Whereas LLMs can't do any real intelligence at all.

        To be fair, that also describes a lot of carbon based language models. Hence I use the artificial ones to deal with the natural ones so I don't have to.

  • Though you could say it's rather apt.

  • by cliffjumper222 ( 229876 ) on Sunday January 25, 2026 @04:29PM (#65948444)

    "And they also make mistakes without batting a virtual eyelash. So you can't trust anything that they generate." - woah! That's damning! Thank goodness that doesn't apply to humans!

Seriously, anyone who is down on this technology needs to hand in their nerd/geek card and go and start shouting at clouds. That goes for Slashdot readers too. The handwringing over this technology by middle-aged men is just sad. This tech is transformative, dirty, gritty, and messed up. It's like the Internet used to be. If you can't handle it, then be honest: your world view is outdated and you've become your parents.

    • Re:Untrustworthy (Score:5, Insightful)

      by fleeped ( 1945926 ) on Sunday January 25, 2026 @04:48PM (#65948474)

The whole point is that if we want accuracy, we use machines. If I want fiction and unreliability, I'll ask humans. If I want to calculate an integral, I don't want an RNG-infused answer that might or might not be wrong.

The technology is really cool; it's not AGI, but it has its place and uses. Still, there are a few issues: 1) it's pay-to-win: the ones with the deepest pockets can make the best product; 2) privacy issues: all your data gets siphoned and sold to the highest bidder; 3) you deskill yourself by using it, as you're delegating cognitive work to something else; 4) it's all built on illegally hoovering up humanity's data without paying anything back.

      So, it's not like we're grumpy because we're old people. There are serious issues.

    • by evanh ( 627108 )

      It doesn't apply to humans at all though. We choose to lie, LLMs don't have a clue.

      • It doesn't apply to humans at all though. We choose to lie, LLMs don't have a clue.

        That's not true at all.

        People often recall [wikipedia.org] things differently from the way they happened. Human memory is malleable, and can lose accuracy [wikipedia.org] when presented with misleading information post facto, or else people make up things to fill in gaps in memory, and sincerely believe those false memories to be true. They don't choose to lie; it's the way the human brain works. Some of those hallucinations can be shared by large groups of people - see the Mandela effect [wikipedia.org].

Yeah! All the cool kids are down for losing their jobs, effing up the environment and polluting the internet with a load of useless dross so bazillionaires can make more money! You're so hip you're square, daddy-o!

    • by leptons ( 891340 )
      Slop is slop, whether machine or human generated. And unfortunately, I can always count on the machine to generate slop. That can't be said for all humans, because at least I can fire a human that gives me slop, but the machine is what it is - if you trust the machine more than a human, then you are a fool.
  • by SoftwareArtist ( 1472499 ) on Sunday January 25, 2026 @05:20PM (#65948520)

    I challenge anyone to come up with a definition of intelligence that 1) is clearly satisfied by humans, 2) is clearly not satisfied by current AI, and 3) isn't totally contrived (like defining intelligence as "the ability of humans to...").

    Here are some common definitions of intelligence.

    "The ability to take in information and make decisions based on it." Computers do that.

    "The ability to solve problems." Computers do that.

    "The ability to predict the consequences of your actions." Computers do that.

    "The ability to understand complex subjects." Humans often speak or reason about things without really understanding them. For examples, see the comments thread on any slashdot story. Besides, "understand" is another word that doesn't have a clear definition.

    If you believe AI isn't intelligent then how do you define intelligence? Are you absolutely certain AI doesn't satisfy your definition? Are you absolutely certain that humans do? And if you answered yes to both, is it a contrived definition that doesn't match how most people use the word?

    • by evanh ( 627108 )

      Computers are programmed to make decisions and predict.

      Humans use computers to solve problems.

      • by merde ( 464783 )

        Humans use computers to solve problems.

        Computers use humans to solve problems, eg. when a drive fails in a RAID array, the computer asks a human to change it.

    • by Brain-Fu ( 1274756 ) on Sunday January 25, 2026 @05:52PM (#65948580) Homepage Journal

      When people are speaking "naturally" (which is to say, without scientific or philosophical precision), the word "intelligence" is a vague and sloppy word that rolls in all kinds of mental abilities, including and in particular "conscious experience."

      When people say "the LLM doesn't actually understand anything" for the most part they are using the word "understand" to mean "hold the concepts in conscious awareness." For them, intelligence requires consciousness (or just "life") which they expect computers don't have.

So, they will continue to argue that computers aren't intelligent in the "important" sense of the word, no matter how much one might drill them for precision. And this is the primary reason why the definition of "artificial intelligence" remains a perpetual moving target. As has been said before, "artificial intelligence is whatever computers can't do yet." People just jump from one random thing that humans can do, that computers can't, to another, like a skier bouncing from one mogul to the next, because these are just demonstrations of the "core thing," which is nothing more than their own sense of being a conscious being (which they still think computers aren't).

      • I was updating a program that uses c-ares to use a newer version, some of the functions that were being used were deprecated.

        Now I've never used c-ares, this was in a program that I do use and I wanted to bring up to date as it had been neglected for a long while.

        One of the replacements gave me a pointer to an opaque type. I needed to know how to get the data out of the opaque type, somewhere there would be helper functions to do exactly this.

        So I asked chatGPT, immediately it said ptr->value. Obviously,

        • Thank you for the very clear and concise example of precisely what I was talking about.

You zoomed right in on a very specific thing that humans normally do that the computer didn't do in this case, and then declared that this is exactly why computers aren't intelligent. The irony is you were talking about a computer that can write code; people have been saying for decades that computers aren't intelligent precisely because they can't write code! Furthermore, the example you gave only sometimes happens; most o

      • And this is the primary reason why the definition of "artificial intelligence" remains a perpetual moving target.

        It's not a moving target at all. The term was coined in 1955, and its accepted definition hasn't changed the slightest bit in the 70 years since. It comes from the proposal for the Dartmouth Summer Research Project on Artificial Intelligence [stanford.edu]. Here is how they defined it.

        For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.

        That is still exactly what practitioners in the field take it to mean. If you come across other definitions, that's people making up their own definition because they don't know how the field has defined it for 70 years.

        Two important poi

    • by jd ( 1658 )

      I use a definition I've extrapolated from the Turing Test:

      In maths, if f(x)=g(x) for all x, then f=g

      However, for humans, f(x) isn't a razor-thin line, it's a band that follows a normal distribution, so we have to modify this a little. AIs also don't produce an absolute result but a band that, again, follows a normal distribution. (Nobody demands identical neuron firings between humans for the same stimulus, or even by the same human for the same stimulus.)

      If the band for f(x) predominantly lies within 1 std
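A rough formalization of what the comment above seems to be sketching before it cuts off (my reading, not the commenter's exact formulation): treat responses to a stimulus as distributions rather than single values, and call the machine equivalent when its typical output lands inside the human band.

```latex
% Human responses to stimulus x: mean \mu_H(x), spread \sigma_H(x).
% Machine responses to stimulus x: mean \mu_M(x).
% By analogy with  f(x) = g(x) \;\forall x \implies f = g:
\[
  \bigl|\mu_M(x) - \mu_H(x)\bigr| \le \sigma_H(x)
  \quad \text{for (almost) all } x
  \;\Longrightarrow\; M \approx H .
\]
```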

      • Comment removed based on user account deletion
        • by jd ( 1658 )

If you want to find anything that is unverifiable or unfalsifiable about Dr John H Conway's theorem, you are welcome to do so and publish a rebuttal paper. If you don't, then I will accept that as evidence that you didn't bother to check the theorem, or did and found that your claim doesn't hold water.

          QM works just fine with Relativity. There are two areas which give any problems, gravity and time. However, Quantum Field Theory requires neither to exist, and therefore there is no particular groun

    • by Misagon ( 1135 )

      It is not about human intelligence, but about different types of technology within the field of "artificial intelligence".

      In classic "Artificial Intelligence" before neural networks became mainstream, the term used to stand for when a problem was solved exactly because it was based on carefully crafted rules, put in the system by experts.
      Now it means technology that is a black box by design which probably is correct, most of the time, hopefully, knock-on-wood, and based on random posts on the Internet from

  • I'm not criticizing the guy, he's often right, but he's not exactly forward-minded, that's what I'm trying to say.
  • Sad (Score:5, Interesting)

    by Elektroschock ( 659467 ) on Sunday January 25, 2026 @07:06PM (#65948708)

It is sad that Richard Stallman got banned by an online mob after a legally sound defence argument for the deceased Prof. Marvin Minsky, a scholar visiting Epstein, while the best friend of Jeffrey Epstein ruins the country and asks for a piece of ice.

  • Good old RMS (Score:5, Insightful)

    by hadleyburg ( 823868 ) on Sunday January 25, 2026 @07:24PM (#65948744)

    He is a principled man.

    That doesn't mean he is always right, or that he always turns out to be right, but he is willing to go to some lengths to practice what he preaches.
    Most of us don't do that. We might feel that our personal data should be private to ourselves, but are willing to compromise on that when the principle just becomes too difficult to maintain. That's ok, but I think it's also appropriate to have an admiration for someone like RMS who sticks to his principles through thick and thin.

    He's a bit like Pete Seeger in the folk music world, or one of the prophets of old. He might seem out of date. He might be made fun of. But there is something admirable about a principled man.

  • ...If only people could understand him...

Richard is a hero far too advanced for his time. The huge majority of people simply do not, or probably cannot, understand the whole potential of what he is saying. The so-called "Artificial Intelligence" buzzword is the latest evidence of this sad reality.

If humankind succeeds in not destroying itself, maybe in a few hundred years Richard M. Stallman will be remembered as the pioneer of humankind's evolution, and people then might say "If we only could have unders
