Carmack: World Could Run on Older Hardware if Software Optimization Was Priority

Gaming pioneer John Carmack believes we're not nearly as dependent on cutting-edge silicon as most assume -- we just lack the economic incentive to prove it. Responding to a "CPU apocalypse" thought experiment on X, the id Software founder and former Oculus CTO suggested that software inefficiency, not hardware limitations, is our greatest vulnerability. "More of the world than many might imagine could run on outdated hardware if software optimization was truly a priority," Carmack wrote, arguing that market pressures would drive dramatic efficiency improvements if new chips stopped arriving.

His solution? "Rebuild all the interpreted microservice based products into monolithic native codebases!" -- essentially abandoning modern development patterns for the more efficient approaches of earlier computing eras. The veteran programmer noted that such changes would come with significant tradeoffs: "Innovative new products would get much rarer without super cheap and scalable compute."

Comments Filter:
  • He's correct (Score:4, Insightful)

    by BytePusher ( 209961 ) on Tuesday May 13, 2025 @10:08AM (#65373119) Homepage
The more CPU and memory, the more bloat. Younger software engineers don't feel the need to optimize... I was once one of them, and I thought it was impressive but absurd to write code in assembly (I still do, and I'm not wrong). However, at this point we'll be vibe coding our way down the slippery slope of irreversible climate change.
    • by AvitarX ( 172628 )

      I don't think Carmack is suggesting that there's any world where we do efficiency for efficiency's sake though.

He seems to be implying the driving force is innovation, and that innovation is easier with microservices: easier development at the cost of more hardware.

If hardware stopped getting better and cheaper we'd slow down development. He doesn't even seem to be suggesting (from the summary) that we're making the wrong choice, just observing the impacts of rapid hardware improvement, and that if it ended we'd still have runway t

      • by Bert64 ( 520050 )

        If hardware stopped getting better and cheaper we'd slow down development.

        No he's saying that if hardware stopped getting faster and cheaper we'd focus more on optimization to make better use of the existing hardware.

        Now if you look at platforms like the amiga and atari this was exactly what happened - new hardware stopped being made, so while there was still an active developer community there was a lot of focus on getting the most out of the existing hardware.

        When doom was open sourced and ported to the amiga it was unplayable at first, and got a lot faster over time. Contrast

Are click and fade when dragging windows really innovative? Is most UI innovative? Are buckets of emojis innovative? I'm still waiting for instant-on portable devices rather than boot-up status bars. That's been 20-plus years coming. I care far more about memory-safe programming, stability/security of my devices, and maintainability than I care about BS features/upgrades.

Here's an argument for efficiency with a "save the world" sting in the tail.

4% of greenhouse gases are caused by computing. It's not 50% or whatever, but it still counts; lots of 4%s add up.

So if we absolutely spitball this and say 50% of our CPU energy budget is being eaten by inefficient code, we could reduce CO2 by 2% by writing efficient code.

      Javascript is killing the world, basically.

      • by Z00L00K ( 682162 )

A major culprit when it comes to inefficient code is delivery deadlines that must be met. If it works, it's not broken and can be delivered.

Performance issues are something for maintenance to fix. But maintenance is always underbudgeted and can only fix the worst issues.

That's why we got Windows 11, which is a lot slower than Windows 10.

        • It's not always possible to fix performance issues later during the maintenance phase. Many perf problems stem from poor design decisions. Same with security. Even if you can fix some, it's much more expensive to do it later than before release.

    • PowerPanel (Score:5, Informative)

      by JBMcB ( 73720 ) on Tuesday May 13, 2025 @10:25AM (#65373155)

I have Cyberlink's PowerPanel installed on my Windows machine to monitor my UPS. Diving into its install directory you can find:

      • A java runtime environment (90MB)
      • A python install with around 200MB of packages, including PyQT, numpy,
      • A few dozen MB of MSVCRT and MFC libraries
      • NodeJS

This is for a pop-up application that does basic UPS monitoring. Back when I had an APC, PowerChute was about 40MB for the entire client, most of which was driver DLLs for all of their various UPSes.

At worst this app should be written in C++/Qt or WX, and should take up about 50MB. Instead the entire frontend is written in JavaScript, running on Node, and I'm assuming they are using Python to talk to the UPS itself and do network stuff, as Twisted and requests are in there.

      Yeah this isn't exactly performance related, but it's indicative of how development is done these days. 400MB of code to draw a window on the screen to show what your UPS is doing, and some basic networking stuff.

Will apcupsd talk to it? I run it on my Windows box with an old APC.

It's not the size of the code. It's the tiny little parts of that code that are most used and least efficient. You might see significant performance improvements with a little custom inline assembly code, which doesn't effectively change the size at all.

        However...

        "At worst this app should be written in C++/Qt or WX, and should take up about 50MB. " If you wrote this in assembly language, it would probably be under 1 MB. If you care about memory. But 1MB of assembly code is a beast.

      • I think you meant Cyberpower, not Cyberlink. There may be other less bloated software you could use. I have some APC with Home assistant and the Network UPS tools add-on. That'll run on a Raspberry Pi. Still may be bloated, though, as HA stuff is mostly Python.

    • "I was once one of them and I thought it was impressive, but absurd to write code in assembly(I still do and I'm not wrong)."

      You're situationally wrong. In the most limited cases, or for optimization, it is sometimes necessary and it's absurd to try not to.

    • when we have really good C compilers?

      The effort should go into algorithms and code profiling. And reliance on libraries.

      • by gweihir ( 88907 )

Sometimes you need assembly. A modern C compiler makes embedding a little assembly easy. GCC, for example, even allows you to do it MC68000 style instead of the deranged Intel assembly syntax. Usually you do not need more than a few lines, but any real systemd or driver coder and some others should be able to do it.

      • Right, compilers are pretty damn good these days and we've learned a lot of tricks from people who write assembly. However, compilers are not AI and will invariably do things that are less than optimal in some cases. It's not really feasible to write large codebases in assembly, nor are the gains generally worth the effort. My point is simply that when I was first learning to code in the 90s the guys writing assembly were mocking the kids writing C and C++ for writing bloated software with tons of unnecessa
Libraries can be written in assembly, too. Did a fair bit of that in the early 1990s for DOS games. My static libraries ended up linked into all the company's games.

        Even later on in my career in the 2000s, I have worked on crypto code in assembly for various CPUs. The entire security stack was not written in assembly, though. It was mostly C.
        This was all user land stuff, not kernel.

      • when we have really good C compilers?

        The effort should go into algorithms and code profiling. And reliance on libraries.

        Algorithms are always first. Code profiling tells you where assembly could be useful, you don't write assembly without profiling.

        Also assembly is still popular. SIMD is assembly language programming, even if using compiler intrinsics.

    • by gweihir ( 88907 )

Assembly is special-purpose only and (except very early) always has been. But a modern C compiler is not hard to use, code is portable and the binaries are blazingly fast if the coder knows what they are doing. The problem with many other languages is that they needlessly increase the active memory working set and that decreases cache efficiency. That can cost you a factor of 10x, 100x or even more in performance.

I have definitely seen massive effects from caching when changing just a few lines of C code: a 50% cut in performance. The reality is that when you are writing portable code that has to run on many CPUs with various amounts and types of cache, you don't really have the luxury to do all this work. Only if you have a fairly controlled hardware/software environment, typically embedded.

    • Yep, my 10+ year old desktop still handles most of the indie games I like to play, even fairly new releases. Starfield is a very pretty PowerPoint presentation, but I don't need that level of graphics for most games I play. Sure my laptop with an NVIDIA 4070 card is the one I use to play when I don't want to go to the office and when I want high res screenshots for various artwork, but I don't feel like I'm missing out much when I use my "ancient" machine.

      I will upgrade eventually, probably when there is

      • Yep, my 10+ year old desktop still handles most of the indie games I like to play, even fairly new releases.

Same here. It's on its 4th GPU. It's also had plenty of RAM, and even that was doubled a few years ago.

    • Younger software engineers don't feel the need to optimize

Not just the younger ones. I rarely optimize like I used to. I do a lot in Python today, because it has good GPU integration via PyTorch, good libraries, and finding programmers who know their way around Python is much easier than for C++. Time was I'd write cunning algorithms to minimize the amount of processing then split it up over the available CPUs (2, for example). Now I write dumbshit code and if it's slow, fling it to the GPU and watch it go fast.

      I miss do

    • My current CPU (5800x3D) has 6x more cache (96MB) than my first Pentium system had RAM (16MB), and that Pentium was no slouch, a lot of them at the time had 8MB of RAM.

It's not just AI-generated code: Python is absolutely atrocious and is one of the most popular languages right now.

I have one app that management insisted I use. It's written in Python and slow as molasses. The developer goes on and on about "how well it scales" and came in and set us up on a Kubernetes cluster where we had like 8 VMs serving a simple app that shouldn't take more than a single machine to run.

      You don't have to code in things in raw assembly, but so many things are stacking on top of ineffic

    • He's correct about the lack of optimization, yes. But I don't agree with him on returning to monolithic architecture. Monolithic architecture makes code maintenance more difficult, without solving the optimization problem. I'm old enough to remember when everything was monolithic, and it was un-optimized back then too! Not only that, but it was harder to optimize because you had to consider so many interdependent components.

    • by Luthair ( 847766 )
I think it's more late-stage capitalism -- companies only care about pushing out features; they don't care about supporting them or doing it right for the long term.
    • It isn't about age... it is about demands. If a developer doesn't get deliverables in, even if they are resource hogs, they get replaced. Developers are viewed as a disposable, fungible quantity right now, so code is just done to make the Scrum master happy. Security? Who cares, if it is a show stopper, there are many layers between a customer that got hacked and the dev, so there are far fewer consequences having an obvious security hole as opposed to not getting stuff in on time.

      Agile and Scrum are of

  • What are LLMs... (Score:5, Insightful)

    by Chris Mattern ( 191822 ) on Tuesday May 13, 2025 @10:19AM (#65373137)

    ...but "throw raw data and computing power at the problem until you can't see it anymore"?

  • No, they would be as rare/frequent as they are now. The problem is that any and every startup would like you to believe that their product is new, innovative and worth investing in. The truth is, that most of the time it's just the same pancake with a slightly different seasoning. Hardly anything is innovative these days.

  • As a general guiding principle it would probably be enough to direct optimization effort at older code, which by definition has survived the surface churn of innovation. Obviously I mean older code that is widely used, which will tend to be the stuff towards the bottom of your stack.

    To some extent this is surely happening - I bet there's more optimization work done in kernels and operating systems than in apps. But even within apps it will be happening - your UI might be tweaked weekly, but those query opti

    • Maybe older code that has survived surface churn is that way because it pre-dates the Bloatware Age of assembling applications by linking to tons of libraries?

      And optimizing it may break something that doesn't need fixing?

      • by pr0nbot ( 313417 )

I work on a code base that is, in parts, 40 years old. There is nothing more satisfying than digging into some bizarre lockup or slowness, discovering it's because of assumptions or constraints from the 80s that no longer hold, ripping out or rejigging a load of code and seeing the problem go away not just in one application but in all the applications. Yes of course you need to maintain code without breaking things, and the damage footprint is higher the more dependents the code has, but there are well-est

Complaints about software bloat have been around since the invention of the integrated circuit. The more computing power at your disposal, the less incentive there is to optimize code. People were complaining about how bloated Windows XP was, but it looks lean and mean compared to today's operating systems.

    However, I think the impact of software bloat has become most acute in the gaming realm. The problem is that gaming graphics haven't changed noticeably in a decade now, yet the costs and capabilities requ

  • He's absolutely correct.
    There's nothing faster than a skillfully written, monolithic 'C' codebase with just a wee bit of assembly thrown in for those CPU-bound routines. Twitter could run on 1/4 the servers they use currently; I could do everything I normally do on my computer at home or work on a 10 year old Athlon II or Core i5 CPU.

    BUT, there aren't enough programmers in the world capable of skillfully writing and skillfully maintaining all the services, features, games, entertainment, surveillance, etc

    • There's nothing faster than a skillfully written, monolithic 'C' codebase with just a wee bit of assembly thrown in for those CPU-bound routines.

How monolithic? Does the author of every little utility need to implement their own GUI custom-tailored for the needs of that application? Or are we going to be generous and allow them to statically link in the GUI that comes 'close enough' - uh oh, that sounds like compromise sneaking in.

      Beyond GUIs the broader issue is code re-use. The more you can re-use, t

How monolithic? Does the author of every little utility need to implement their own GUI custom-tailored for the needs of that application?

        The core functionality of every little utility should be separate from any GUI. So that the GUI could be replaced platform to platform, and the core functionality remain portable unmodified code. So the code could be used in a server or embedded environment where there is no GUI.

There is nothing inherent in monolithic code that makes it more optimizable. I'd argue it's harder to optimize, because there are so many interdependencies.

I miss the byte-counting assembler days of the 70's and 80's. RAM was measured in bytes or kilobytes and ROM space was almost as precious. Code was printed on paper, and computer monitors were heavy glass boat-anchors. You had to configure dip switches to get devices to talk to each other (slowly). Documentation was a bookshelf, not a web site. Persistent storage required rusty iron - ok, that's still the case, but now it can be solid state too. We configured IRQ's and I/O addresses. Sometimes we had to p

Yep. And many of us who grew up in that era can still write decent, compact, and fast code - to do some things. Unfortunately, there are too many jobs today that require a wide range of CS ability. You may be very good at X, but the project needs X, Y, and Z. If Y and Z can be provided by a bloat-filled library and get the project out the door faster, that's what people do. They don't want to hire an expert on Y and Z as well.
The harder you push for software optimization and the more you work towards assembly, the more the universe of talent and knowledge transfer diminishes. The further you push software optimization, the closer you reach 0 humans available. Modern software development techniques help scale the universe of available developers at the cost of hardware.
  • He is right, there's at least an order of magnitude of performance wasted in most systems, and I guess a lot more in some.

    The issue is that in many cases the hardware is cheaper than the engineering time.

    • "The issue is that in many cases the hardware is cheaper than the engineering time."

I imagine this is one of those conventional-wisdom things that's typically not based on reality. Sure, hardware is cheap. But most companies don't buy hardware anymore. They rent cloud computing.

      At even a somewhat modest scale, it can be pretty easy to save $50k or $75k there by addressing bottlenecks. Hit a couple of those each year and you've paid for an engineer.

    • by Bert64 ( 520050 )

      The issue is that in many cases the hardware is cheaper than the engineering time.

      In the short term perhaps, but generally code is written to perform repetitive tasks. That is the code will be written once, but it will execute thousands or even millions of times. That results not just in extra time, but extra power usage and greater hardware costs.

https://permacomputing.net/ [permacomputing.net] is also part of the 'answer' to energy usage, hence climate. I'm old; my first mainframe jobs were 96K memory, tape drives + 10MB (washing machine size!) disk drives that managed large companies. More secretaries and clerks, which may not be a bad thing: entry-level work.
  • rushed deadlines need to go as well

    • rushed deadlines need to go as well

      As an employee, I totally agree. As a small business owner, I realize that is a pipe dream. That would require the entire world to agree, which is never going to happen. As long as businesses have the freedom to compete with each other, the first out the door with a working product gets the customers.

Carmack has experience with efficient, low(er) level coding and hence has an intuition about what a computer can actually do. (I do too.) Today, it all is layers upon layers, emulations, interpreted things, inefficiencies that compilers cannot fix, computing wasted on things not needed (for example, why graphically render a table in a browser when an HTML 2.0 native table would be perfectly fine), and so on and on.

    The other thing is that "innovative new products" are rarely innovative and often only new in

  • by Larry_Dillon ( 20347 ) <dillon@larry.gmail@com> on Tuesday May 13, 2025 @10:45AM (#65373209) Homepage

As long as developers keep getting the fastest computers, with much more resources than the average consumer PC, programs will be bloated. If you gave all programmers i3s with 8GB of RAM, you would see a massive reduction in code bloat and a huge increase in efficiency -- but at the cost of developer time. They are simply not feeling the pain that normal people with average or older computers feel.

It is possible to squeeze out a lot of efficiency without resorting to this level of optimization. Just rethinking how much CPU your code uses can save a lot. Rework your code to be efficient. It will get faster as a result. Do this at every level, from compiler development on up.

Learning to code tightly is a difficult task. Maybe bring back some coding for machines like the Commodore 64. I remember reworking code just to save 4 bytes. I needed just that much more space for the program to run. We don't need to ge

I have a gigabit internet connection, but some JavaScript-heavy sites, even with a content blocker, are slower than the dial-up era. Also Firefox thinks it's funny to use over 10 gigabytes of RAM with less than 10 tabs open.
Here, like a lot of places, when someone says tariff, or Taiwan, or supply chain, there are big fear reactions: oh noes, our entire economy will grind to a halt if we don't have chips on the very latest process at bargain prices.

    Nope not even a little. The first thing we can do is simply stretch the hardware refresh cycle. Maybe the games won't be happy but those computers on the desks of wall street offices down to the ones running the shop floor don't suddenly quit working because they are 2, 5

Things have certainly changed a lot. My at-home computer is an 8-core machine, 6 years old, and I don't have the slightest inclination to update it. I am guessing I can get another 10 years out of it.
For doing a mesh for vehicle CFD, I need something with more memory. I would not even consider building another computer unless it had at least 256GB RAM, but that may not be enough. It comes down to mesh resolution. Currently, at 64GB I can learn something from a 10mm mesh, but I need to get it down to 5mm, which would be 256GB. I want 2.5mm, which would be pretty good, but that may be 1TB of RAM. I am living on a fixed budget. Sigh.
  • Every AI warning we ever had was right.
    • I don't think actual GPUs are much more expensive. There are products being sold that can be used as GPUs, but are actually Tensor or CUDA processors that can also work for gaming. Those are very expensive (like $thousands). But the last generation cards actually perform better on many games, because they were designed for gaming. Those cards have generally risen with inflation and can be acquired for several $hundred.

  • Which is more expensive in the long run, one day of human time or 1hr of electricity? This seems to ignore that we are mostly optimizing for people's time instead of the hardware. We have vastly more software than existed in the past. This isn't because that many more people write software but it is easier to write, maintain and distribute than ever before. Assembly and C might give great efficiency but it is terrible to manage and debug versus new languages. This is true even for veterans of those langu
    • Most of the software written at places I've worked at is intended to be used at massive scale. That adds up quickly if you use cloud computing. So, yeah, engineers are still cheap when they're working on large scale projects.

  • It seems reasonable enough to suspect that requiring hardcore optimization would raise the barrier to entry vs. being able to just rapid prototype something on top of a giant heap of abstraction layers; but I'm a little puzzled by the implication that we are in an age of innovative new products.

    If anything, that is what is so disappointing about the bloat. It would be one thing if we lived in an age of exciting prototypes made possible by exceptionally quick time to minimum viable product; but most of th
  • "Nobody should ever need more than 640k of RAM."

Once everyone found it easier to just add more memory, the idea of efficiency started to go away... knowing the more efficient way to divide by 2, knowing how to avoid unnecessary program-pointer jumps (if the first character of the string does not match, why run strncmp()?) when you have a large data set... these are things taught by experience. One of the best efficiency teachers would be to write for an embedded system with very limited memory. You lea

  • by reanjr ( 588767 ) on Tuesday May 13, 2025 @11:06AM (#65373285) Homepage

    It's amazing how fast software runs when someone actually knows what they're doing.

I remember having a discussion with coworkers over selecting languages for command line tools. It's amazing to me that anyone would even think to suggest using Java to write a command line tool. The startup time for Java is atrocious, and it's completely inappropriate to use it to write a tool that might get run in a bash loop.

Simply thinking for a few moments and choosing something like C or even Bash can make a huge difference in performance with essentially no extra effort (for simple enough tools).

  • Old school programmers managed to use cleverness to get performance out of minimal hardware
In the 80s, I wrote image processing routines in assembly on a 4MHz 8088

  • We need smarter compilers so humans can write maintainable microservices or whatever and the compiler can build a deployable efficient monolithic binary.

    That's the point of having a Universal Turing Machine.

    Our concept of compilers is still from the 70's with incremental improvements. Perhaps Go has taken one step in this direction more than others on the journey of a hundred miles. SSA trees and such were once a hot topic of research but not everybody is distracted with sexy AI topics.

    John is correct on th

  • This is obvious when you play an id tech game, they look amazing on lower end hardware.

  • by Synonymous Homonym ( 1901660 ) on Tuesday May 13, 2025 @11:22AM (#65373313)

    Hardware is more energy efficient than ever. And Software could make better use of it. In the trilemma of "good-cheap-fast, pick any two", good is always the one that doesn't get picked. That also means that correct, safe, secure, and resilient are at best afterthoughts.

    Software optimisation is not the problem. Modern languages, compiled and interpreted, are doing a great job at optimising.

    Monolithic designs are also not the answer; they are part of the problem. Intuitively one might expect that one big silo hiding all the complexity would be easier to optimise, but that is not the case: The complexity doesn't go away, and hiding the combinatorial explosion means you lose insight and maintainability.

    Microservices are also not the answer. You want small, specialised tools that each do one thing, and do it well; and you want to build systems from those inherently parallel, scaleable tools. You don't want the overhead of long-running processes communicating over the network. You want small, short-lived processes running in parallel, managed by the operating system (or a VM efficiently using the IPC primitives of the OS). You want short, fast scripts using highly optimised specialised commands.

    These are lessons that had to be learned over and over again. Artisans have learned them (good craftsmanship), mechanical engineers have learned them (respect the humble screw), electrical engineers have learned them (sockets and breadboards), and programmers have learned them (Unix philosophy).

    For all the cruft in software, hardware also has room for improvement: Currently it is more cost effective to kluge together packages (and not in a modular way) than to design an elegant machine. We get processors that implement many ISAs in one and still stall for ALUs, while it requires legislation to make the battery replaceable.

But hardware has mass and volume, software has not. (Well, in a theoretical physics sense, it does, but practically that's immeasurable.) So software expands to fill all available space and saturate every processor cycle; because empty RAM is wasted, and idling CPUs are just space heaters. And what used to be accomplished in a few kilobytes now takes gigabytes, which take longer to load than the old solutions took to compute. What used to be a few lines of text is now still text, but spread over several files using different structuring syntaxes, compressed, indexed, and served by a daemon using no structured query language.

Because if things were easy to read and easy to edit and easy to process, it wouldn't feel technical. It would feel like anyone could do it; it would feel like magic. At the same time, no thought is wasted on grokking the magic, because the IDE, the language server, and the LLM tell you how to work around the limitations that prevent you from making mistakes, saving you the trouble of understanding the theory.

    Anyone can add complexity, and anyone will. Keeping things simple is the real skill.

    And we don't keep things simple by ignoring complexity, one way or another.

    But modular designs make innovation easier.

  • Windows 95 ended the era of doing what you could with existing hardware and demanded new hardware to do the same work as before. I've been shouting this from the rooftops ever since. All people do is call me crazy and get me arrested for trespassing.

  • No mention of the private corporate sector making optimization a "you do that on your own time and we'll make ourselves rich" effort?
  • I have only upgraded hardware when it began to show defects, and bought 4-5 year old business equipment then.

    It helps that I run Devuan Linux with a bare openbox desktop and just the few programs like Pale Moon and Libreoffice, which limits the need for hardware resources.
    My (offline) gaming laptop is now 7 or 8 years old, running Windows 7 and the few older games that interest me, worst offender Cities Skylines (not II) because added mods and assets cause so much bloat that 32GB RAM is no luxury.

  • Back in 1986, I was working writing Commodore 64 programs for a Biology teacher at a Catholic high school. Technically, the C-64 was well past "outdated" by then, but the teacher didn't really want to have to learn new equipment. Admittedly finding an old Petspeed compiler to compile his old BASIC software helped immensely, but beyond that I was able to optimize his code to make it run multiple times faster. We had those machines doing things that made the IBM-supporters at the school board jealous.

    Okay, AI

  • It's a tragedy of the commons: Everyone wants to hire trained coders, no one wants to train them. Either AI will pan out and it will become a nothing-burger with LLMs shrinking your code to 10% of the size capable of running on 1/20th the cycles or businesses are going to have to train coders I mean obtain more work visas to get coders from India.
