
Linux Finally Starts Removing Support for Intel's 37-Year-Old i486 Processor (phoronix.com)

"It's finally time," writes Phoronix — since "no known Linux distribution vendors are still shipping with i486 CPU support."

"A patch queued into one of the development branches ahead of the upcoming Linux 7.1 merge window is set to finally begin the process of phasing out and ultimately removing Intel 486 CPU support from the Linux kernel."

More details from XDA-Developers: Authored by Ingo Molnar, the change, titled "x86/cpu: Remove M486/M486SX/ELAN support," begins dismantling Linux's built-in support for the i486, which was first released back in 1989. As the changelog notes, even Linus is keen to cut ties with the architecture: "In the x86 architecture we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very very few people are using with modern kernels. This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things. As Linus recently remarked: 'I really get the feeling that it's time to leave i486 support behind. There's zero real reason for anybody to waste one second of development effort on this kind of issue'..."

If you're one of the rare few still keeping the decades-old CPU alive, your best bet will be to grab an LTS Linux distro that keeps an older kernel around for a few more years.
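If you're curious whether your own kernel was even built with the options the patch removes, you can look for CONFIG_M486 in the kernel's build config. A minimal Python sketch, assuming the usual distro conventions for where the config lives (your system may differ):

```python
import gzip
import pathlib
import platform

def kernel_has_486_support():
    """Look for CONFIG_M486 (one of the options the queued patch removes)
    in the running kernel's build config.  Returns True/False, or None when
    no config file is exposed.  The paths below are common distro
    conventions and may not exist on every system."""
    text = None
    boot_cfg = pathlib.Path(f"/boot/config-{platform.release()}")
    proc_cfg = pathlib.Path("/proc/config.gz")
    if boot_cfg.exists():
        text = boot_cfg.read_text()
    elif proc_cfg.exists():  # only present with CONFIG_IKCONFIG_PROC
        text = gzip.decompress(proc_cfg.read_bytes()).decode()
    if text is None:
        return None
    return "CONFIG_M486=y" in text
```

On any x86-64 distro kernel this should return False, since the 486 options only exist on 32-bit x86 builds.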

  • by GPLHost-Thomas ( 1330431 ) on Monday April 06, 2026 @07:40AM (#66079270)
Oh... How will Hubble do, since it runs a 486 DX4? :)
    • by iabervon ( 1971 ) on Monday April 06, 2026 @09:23AM (#66079408) Homepage Journal

      It uses VRTX, reportedly. Linux wasn't suitable as a real-time OS when the Hubble was designed, or really even when the Hubble got the 486 installed in 2009.

      • by thegarbz ( 1787294 ) on Monday April 06, 2026 @11:02AM (#66079594)

        Linux isn't suitable as a real-time OS now either, strictly speaking. In fact, one of the top hits from a search on Linux RTOS is a NASA paper (from a comparatively recent 2019) discussing the performance of Linux with every RTOS-relevant kernel feature set to its most favorable position. Their conclusion was... well, you will probably hit your event deadline if you throw fast enough hardware at it, but it is still nothing like a true RTOS.

        • Worst case, if they need true realtime functionality, there is QNX, which predates Linux and is still going. Not sure if it supports 32 bit, but I wouldn't be surprised.

        • You have no idea what you are talking about [redhat.com]. (As usual)

          2019 was 7 years ago, and the Linux kernel has had hard realtime capability for a very long time. Of course, the kernel doesn't live in a vacuum, so using a properly configured kernel, using an embedded distribution, and properly tuning it is necessary for best results, as discussed in the linked document. I don't even have to read the NASA paper you didn't bother to link to in order to guarantee that you didn't read and understand it, then genuinely
          • 2019 was 7 years ago, and the Linux kernel has had hard realtime capability for a very long time.

            Congrats, you proved my point well. All of Linux's "realtime capability" (in quotes since the kernel still does not achieve true real-time response; it can't, it's fundamentally not designed for it) is old. There have been no meaningful developments in the past 6 years in this space. Which means that my reference is still quite valid.

            As is yours by the way. You countered the point I made that NASA did a detailed analysis in 2019 that determined it is not a true RTOS, with some Red Hat post from less than a year

            • I'm a real-time embedded Linux engineer. You are, well ... We all already know what you are. Off you go now little troll.
              • Well both can be true right?
                Real time capabilities but not fundamentally real time. Though for 99.99% of the applications it would not matter as long as your hardware is strong enough.
                I can see that for space it would become an issue.
                • There are two categories of realtime: soft and hard. Realtime is complicated, and no OS can guarantee hard realtime if the hardware is not up to the task (excuse the pun). For example, if you are running an OS written in assembly language on an Intel 8051 microcontroller clocked at 10 MHz, you cannot handle events that could easily achieve hard realtime on a modern system, because the task-switching overhead alone precludes such capability, even if your application is just an infinite loop.
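The soft/hard distinction above is usually demonstrated by measuring wakeup jitter: request a fixed sleep interval and record how much later than requested the OS actually returns control. A small illustrative Python sketch (the function name is mine; Python's own overhead inflates the absolute numbers, but the unbounded worst case is the point):

```python
import time

def worst_wakeup_overshoot(period_ns=1_000_000, iterations=200):
    """Repeatedly sleep for `period_ns` and record the worst-case
    overshoot: how much later than requested we were actually woken.
    On a general-purpose kernel this number has no guaranteed upper
    bound; a hard RTOS specifies (and enforces) one."""
    worst = 0
    for _ in range(iterations):
        start = time.monotonic_ns()
        time.sleep(period_ns / 1e9)
        overshoot = (time.monotonic_ns() - start) - period_ns
        worst = max(worst, overshoot)
    return worst
```

Run it on an idle desktop and then under heavy load; the average stays small, but the worst case is what separates "usually fast enough" from hard realtime.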
      • It uses VRTX, reportedly. Linux wasn't suitable as a real-time OS when the Hubble was designed, or really even when the Hubble got the 486 installed in 2009.

        Why was VRTX chosen? Because of the embedded environment or because of real time needs?

        If the decision was about the embedded environment and real time was not required, then Linux would be a viable solution today; embedded Linux is a fine choice for non-real-time needs.

  • by Snotnose ( 212196 ) on Monday April 06, 2026 @08:20AM (#66079300)
    We were I/O bound with a saturated network. Our boss bought a Pentium to see if it would help. He didn't understand what "I/O bound" meant.

    Went to a Comdex and saw this newfangled thing called a network switch. 3-4 of those in our system solved our problems.
  • by jddj ( 1085169 ) on Monday April 06, 2026 @08:50AM (#66079330) Journal

    Somebody thought it'd be a great idea to remove full 10Mb Ethernet support from two recently-purchased routers I tried at home (bought the second after the first didn't work).

    Turns out this would've cost me my venerable and much-loved Roku Soundbridge M500 and M2000 network music players, which are working just fine, thanks.

    I had to buy a cheap switch to put between them to straighten this out. Waste of money.

    I understand that 486 support takes up needed and scarce dev resources, and it seems reasonable to me to remove it.

    But I wonder what hidden breakage (like my case) happens as a result of making these "reasonable" decisions.

    • by Burdell ( 228580 )

      My $DAYJOB's data center switch upgrade got switches that have 48x 10G SFP+ ports plus 8x 100G QSFP+ ports. When we installed them, we realized that some really old Dell Poweredge servers try to drop to 100M when using shared DRAC (with the dedicated DRAC port also being 10/100M only), and the switches don't support 100M. We also had to look at a bunch of rack PDUs to find options that were 1G rather than 10/100M.

      100M uses less power than 1G, so I guess that's why Dell did that (sounded like a good idea to

      • My (admittedly anecdotal, from the totally unscientific sample of random stuff I've had reason to work on) impression is that some 'shared' BMC ports had oddities related to network controller sideband interface speeds, since NC-SI is what the BMC depends on if the NIC is on someone else's PCIe root. It's not like the BMC actually needs a faster link for much (normal management traffic probably doesn't fill 10 Mb and mounting virtual media may be literally once-in-a-lifetime) so the actual speed of the NC
        • by Burdell ( 228580 )

          Again, it's about power saving. Idle 1000BASE-T draws noticeably more power than idle 100BASE-TX (IIRC the drop from 100BASE-TX to 10BASE-T is not as significant). There are Energy Star ratings and EU rules about how much power an "off" and "standby" device can draw, so dropping to a lower NIC speed helps reach those levels.

          There was a proposal for a "low power idle" mode extension of 1000BASE-T, not sure if that went anywhere or got implemented.

      • by Zarhan ( 415465 )

        I also like the idea that if you stick a copper SFP into a modern "merchant silicon" switch (Broadcom Jericho), you may or may not get working autonegotiation. And copper devices like to have that on by default, even if it's technically not needed for Gigabit Ethernet. And then there might be half-assed support where autonegotiation only works for clause 37, but not clause 73 (or vice versa, I don't remember which). Anyway, it meant that hooking up a Juniper SRX to a Cisco NCS 57C3 (or anything else using Jeri

    • by Pizza ( 87623 ) on Monday April 06, 2026 @09:21AM (#66079398) Homepage Journal

      It's not that "they removed 10Mbps support" so much as the underlying Ethernet hardware (MAC and/or PHY) used by that new router simply doesn't support it. It might not even support 100Mbps; once you cross into the multi-gigabit world, sub-1Gbps support is the exception rather than the rule. Why? Because it would require a more complex (i.e. expensive) design that would be utilized by almost nobody.

      • Sure. I understand well that things change.

        However, it'd be nice if the box had a big yellow burst that said: "Now Without 10Mb/s Support That Has Worked Until Now On The Same Connector Ever Since We Came Up With It!"

        Sucks to get it home and find that this one is borked, too.

        • Why did it need a 'big yellow burst' when it would have plainly advertised itself as a 100/1000 instead of a 10/100/1000?
          • Well, because neither router says what you suggest on the package. Each mentions 2.5Gb Ethernet ports, but neither says what other standards the ports support.

            Neither does the switch I used to fix the situation: it advertises 1Gb Ethernet ports.

            The "big yellow burst" (yeah, of course I was being a little facetious) could let you know "this won't support the minimum speed, in the way everything else with this connector has for decades".

            As it stands, nothing on the box will answer that question before you buy

    • Using WoL (wake-on-LAN) also tends to use 10 Mbit Ethernet. There isn't an LED light combination for that speed on my 2.5 Gbps NetGear switch (so the port lights are all off), but otherwise it still works.
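For reference, WoL is trivially simple at the packet level: the "magic packet" is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times, typically broadcast as UDP. A minimal Python sketch (function names are mine; port 9 is just the common convention, not a requirement of the format):

```python
import socket

def build_magic_packet(mac):
    """A WoL magic packet is 6 bytes of 0xFF followed by the target
    MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet as a UDP datagram; the sleeping NIC
    pattern-matches the payload regardless of port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

Because the NIC matches the payload while the host is asleep, the link often renegotiates down to a low speed in standby, which is why WoL and 10/100M support tend to travel together.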

    • The answer is rather simple: don’t upgrade the kernel to a newer version. At this point, any kernel features added would not benefit a 486. Existing and working code is unaffected.
    • by tlhIngan ( 30335 )

      Actually, many consumer gigabit Ethernet switches lack 10Mbps support these days. They are 100/1000baseT only.

      Business and enterprise switches though I've found (including Cisco ones, which you can find dirt cheap used) still are 10/100/1000Mbps. Even newer business and enterprise class switches retain support.

      Of course, once you step into 10Gbps Ethernet, you have to be careful because many devices are 10Gbps only, while some do support 1/10Gbps. 2.5Gbps support is iffy unless it's specified, which is annoying

      • Actually, many consumer gigabit Ethernet switches lack 10Mbps support these days. They are 100/1000baseT only.

        Business and enterprise switches though I've found (including Cisco ones, which you can find dirt cheap used) still are 10/100/1000Mbps. Even newer business and enterprise class switches retain support.

        I have an HP Aruba 3810M, 48-port PoE; 40 of the ports are 1G and 8 are "SmartRate" 10G. It's considered "obsolete" and nearing EoL. It does support 10M on the 1G ports, but 100M is the lowest you can set from the browser; you have to get into the CLI to force 10M. It does not support 10M on the 10G ports, and while the spec sheet doesn't show it, the option to lock to 100M is in both places, so I assume it actually works. I might have to go play around with this later with some 10/100M cameras I have.

        Of course, once you step into 10Gbps Ethernet, you have to be careful because many only are 10Gbps only, while some do support 1/10Gbps. 2.5Gbps support is iffy unless it's specified which is annoying since many things have 2.5Gbps ports.

        I've on

    • A modern Ethernet PHY already has to support multiple different encoding schemes: MLT-3 for 100BASE-TX and PAM-5 for 1000BASE-T (and higher-order PAM for faster speeds like 2.5 or 10 gig). Eliminating the Manchester encoding for 10BASE-T is a logical way to save on cost and complexity, especially since most companies probably don't have a lot of legacy 10 Mbit-only gear to test with.
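To illustrate why 10BASE-T is genuinely separate circuitry: Manchester coding spends two line symbols per bit to guarantee a mid-bit transition (which is what makes the link self-clocking), a completely different scheme from the MLT-3/PAM line codes above. A toy Python sketch of the IEEE 802.3 convention (1 is low-then-high, 0 is high-then-low):

```python
def manchester_encode(bits):
    """IEEE 802.3 Manchester coding: each bit becomes two half-cells
    with a guaranteed transition in the middle of the bit period.
    1 -> low-then-high, 0 -> high-then-low."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(symbols):
    """Invert the encoding; a pair without a transition is a coding
    violation (used on the wire only for delimiters/idle)."""
    bits = []
    for first, second in zip(symbols[::2], symbols[1::2]):
        if first == second:
            raise ValueError("missing mid-bit transition (coding violation)")
        bits.append(1 if (first, second) == (0, 1) else 0)
    return bits
```

The 2x symbol overhead is why 10 Mbit Ethernet needs 20 MBaud on the wire, and why a PHY built around multi-level PAM signaling can't reuse much of anything to support it.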

  • 386 and 486 CPUs were out of my reach back in the day. I used a Commodore 64 for most of my school life and a word processor / typewriter when I needed something that could print text better than the ancient thermal printer I had (no joke, I had an Okidata thermal printer for my Commodore. Quality was good, but when you eventually couldn't get the regular paper you had to buy the rolls, and I had to jury-rig a feeder)

    A lot of people complain about the Sega 32x and Atari Jaguar ports of Doom but it was mind-blowing to be able to play Doom on $300 worth of hardware in 1995. If you wanted a computer that could match the performance of even a Sega 32x you were dropping at least $1,500. Literally five times the price.

    But I do remember when prices came down after I got back into computers after a bit. I picked up a 486 DX 100 for about $150, then went to a computer shop asking for a VESA local bus video card for it, and the guy just pulled one out of a junk pile and gave it to me. I remember going home, booting up Primal Rage and X-Men: Children of the Atom on that thing, and being blown away by what that computer could do. Terminal Velocity too.
  • My first Linux installation was Red Hat 3.0.3 on a 16MHz 386/SX system in mid-1995. For those of you without an AARP card, that's a 32-bit CPU with a 16-bit bus, which Intel released to cannibalize the market for the 286, which did not have a memory management unit. That meant no swapping: if you ran out of RAM, it was game over.

    I think the 486/25 that replaced the 386/SX arrived in ... 1996 ... and it had an astonishing *eight megabytes* of memory. I had kept a one megabyte LIM/EMS 4.0 physical memory card from my 286 when I got the 386/SX, and that actually mattered with Windows 3.x. I put it in the 486, but given that vast eight megabyte expanse of dram it didn't last long.

    Then in late 1997 my employer went bankrupt and as part of the dissolution I brought home the dual Pentium 133 system with 32 megabytes of ram. I remember all my IRC friends were so jealous of that monster ...

  • Intel's 486, and CPUs before it, did not include a unique ID. A per-unit hardwired serial number only arrived later, with the Pentium III's Processor Serial Number, and was disabled soon after over privacy concerns.
    But I guess using a 486 today is pretty unique anyway, so it doesn't really matter. :)
  • by Necron69 ( 35644 ) <jscott.farrow@NOsPaM.gmail.com> on Monday April 06, 2026 @10:30AM (#66079546)

    I was there, three thousand years ago...

    Ok, well it was only ~35 years ago, but I well remember cobbling together installable floppy images from Usenet to get Linux running on my 486DX with a bunch of GNU utilities. This took many hours of downloads and preparation over a dial-up connection, but this was the only way to install because even SLS hadn't come out with a coherent Linux distribution yet.

    My 486 system had a whopping 4MB of RAM with a 200MB hard drive (my first). I massively overpaid for it and charged it all on my shiny new Circuit City credit card while I was still in college.

    At my student job, I had an awesome, monochrome DECstation 3100 running Ultrix 3.1, so the thought of being able to run UNIX at home was just awesome.

    Those were the days. :)

  • There are low-end third-party SoC CPUs [wikipedia.org] out there such as those made by DM&P which are not necessarily fully Pentium compatible and those CPUs are still being made and used. The DM&P chips are particularly popular with retrogamers because you can install an ISA card or a PCI card (depending on the model) and thus use period-correct sound cards or video cards. Whichever chips are no longer supported by Linux will probably no longer be used in new embedded designs going forward and thus will likely has
    • Yeah, VIA made a similar not-quite-i586 clone fairly recently too.

      I have an old embedded box with one of those, with SATA 6Gbps ports on it, that I thought I would use for zeroing out old hard drives.

      I tried Puppy, DSL, SystemRescueCD, and a bunch of others and none would finish boot. FreeDOS is fine.

      It's either e-waste or I need to dig out an InfoMagic CD from the attic to get Red Hat 9 or whatever. I probably need to look up when the jump from 3 to 6 happened in SATA land.

      But Linus is correct that actual dist

  • What's the point of compiling a recent Linux kernel for an architecture that doesn't support enough RAM to boot it?

  • This is dumb, they should have already entirely dropped support for IA-32 by now. The AMD Athlon 64 was released in 2003, so the patents for amd64 have already expired, and you can source Skylake era Xeons all day long on eBay for less than $5 per processor.

  • I installed it 30+ years ago because I had learned UNIX 35+ years ago. It was a lot cheaper than a UNIX license and was fun to learn. I also installed Slackware but preferred the Red Hat pkg mngr. :-) Man, I'm old as the hills.
