
Carmack: World Could Run on Older Hardware if Software Optimization Was Priority

Gaming pioneer John Carmack believes we're not nearly as dependent on cutting-edge silicon as most assume -- we just lack the economic incentive to prove it. Responding to a "CPU apocalypse" thought experiment on X, the id Software founder and former Oculus CTO suggested that software inefficiency, not hardware limitations, is our greatest vulnerability. "More of the world than many might imagine could run on outdated hardware if software optimization was truly a priority," Carmack wrote, arguing that market pressures would drive dramatic efficiency improvements if new chips stopped arriving.

His solution? "Rebuild all the interpreted microservice based products into monolithic native codebases!" -- essentially abandoning modern development patterns for the more efficient approaches of earlier computing eras. The veteran programmer noted that such changes would come with significant tradeoffs: "Innovative new products would get much rarer without super cheap and scalable compute."

  • He's correct (Score:4, Insightful)

    by BytePusher ( 209961 ) on Tuesday May 13, 2025 @10:08AM (#65373119) Homepage
    The more CPU and memory, the more bloat. Younger software engineers don't feel the need to optimize... I was once one of them, and I thought it was impressive but absurd to write code in assembly (I still do, and I'm not wrong). However, at this point we're vibe coding our way down the slippery slope of irreversible climate change.
    • by AvitarX ( 172628 )

      I don't think Carmack is suggesting that there's any world where we do efficiency for efficiency's sake, though.

      He seems to be implying that the driving force is innovation, and that innovation is easier with microservices: easier development at the cost of more hardware.

      If hardware stopped getting better and cheaper we'd slow down development. He doesn't even seem to be suggesting (from the summary) that we're making the wrong choice, just observing the impacts of rapid hardware improvement, and that if it ended we'd still have runway t

      • Re:He's correct (Score:5, Informative)

        by Bert64 ( 520050 ) <{moc.eeznerif.todhsals} {ta} {treb}> on Tuesday May 13, 2025 @11:44AM (#65373377) Homepage

        If hardware stopped getting better and cheaper we'd slow down development.

        No, he's saying that if hardware stopped getting faster and cheaper we'd focus more on optimization to make better use of the existing hardware.

        Now if you look at platforms like the Amiga and Atari, this is exactly what happened: new hardware stopped being made, so while there was still an active developer community, there was a lot of focus on getting the most out of the existing hardware.

        When Doom was open sourced and ported to the Amiga it was unplayable at first, and got a lot faster over time. Contrast that with more modern development: if a game is too slow they'll just delay the release until hardware catches up.

      • Are click and fade when dragging windows really innovative? Is most UI innovative? Are buckets of emojis innovative? I'm still waiting for instant-on portable devices rather than boot-up status bars. That's been 20-plus years coming. I care far more about memory-safe programming, stability/security of my devices, and maintainability than I care about BS features/upgrades

        • >>That's been 20-plus years coming.

          Kinda like unicode support on Slashdot.

        • by AvitarX ( 172628 )

          Well, the inefficient microservices that Carmack mentions in the summary do add to safety, correct?

          My phone is instant on, in the sense that it reliably stays on for weeks at a time.

          Microservices are more maintainable too.

          It sounds like the innovation of microservices at the cost of efficiency is something that will eventually be lost (according to Carmack) if hardware stops improving.

    • Re:He's correct (Score:5, Interesting)

      by sg_oneill ( 159032 ) on Tuesday May 13, 2025 @10:23AM (#65373153)

      Here's an argument for efficiency with a "save the world" sting in the tail.

      4% of greenhouse gases are caused by computing. It's not 50% or whatever, but it still counts; lots of 4%s add up.

      So if we absolutely spitball this and say 50% of our CPU energy budget is being eaten by inefficient code, we could reduce CO2 by 2% by writing efficient code.

      Javascript is killing the world, basically.

      • Re:He's correct (Score:5, Insightful)

        by Z00L00K ( 682162 ) on Tuesday May 13, 2025 @10:33AM (#65373177) Homepage Journal

        A major culprit when it comes to inefficient code is that there are delivery deadlines that must be met. If it works, it's not broken and can be delivered.

        Performance issues are something for maintenance to fix. But maintenance is always underbudgeted and can only fix the worst issues.

        That's why we got Windows 11, which is a lot slower than Windows 10.

        • Re: He's correct (Score:5, Insightful)

          by madbrain ( 11432 ) on Tuesday May 13, 2025 @11:36AM (#65373345) Homepage Journal

          It's not always possible to fix performance issues later during the maintenance phase. Many perf problems stem from poor design decisions. Same with security. Even if you can fix some, it's much more expensive to do it later than before release.

          • by Z00L00K ( 682162 )

            Try to convince the project managers of that.

            • by dbialac ( 320955 )

              Try to convince the CEOs and VPs of that.

              FTFY. The project manager's deadlines were set at much higher levels within the company.

          • I would make the claim more strongly: performance issues due to poor design and algorithm choice cannot be fixed by maintenance (rewriting the codebase is not an option). Only making the correct choices early can deliver efficient software.

            This also highlights a pitfall with the agile adage about always trying the simplest solution first because you can always fix it later. Maybe OK for UI implementations, but dangerous to rely on when implementing core functionality.

        • by dbialac ( 320955 )
          Microsoft actually focused on efficiency when they brought out Windows 7, hence it ran a lot faster. Some of the gains in boot times came simply because they didn't try to load every driver or background process all at once. Everything that loaded was the same code.
    • PowerPanel (Score:5, Informative)

      by JBMcB ( 73720 ) on Tuesday May 13, 2025 @10:25AM (#65373155)

      I have Cyberlink's PowerPanel installed on my Windows machine to monitor my UPS. Diving into its install directory you can find:

      • A Java runtime environment (90MB)
      • A Python install with around 200MB of packages, including PyQt, numpy, ...
      • A few dozen MB of MSVCRT and MFC libraries
      • NodeJS

      This is for a pop-up application that does basic UPS monitoring. Back when I had an APC, PowerChute was about 40MB for the entire client, most of which was driver DLLs for all of their various UPSes.

      At worst this app should be written in C++/Qt or WX, and should take up about 50MB. Instead the entire frontend is written in JavaScript, running on Node, and I'm assuming they are using Python to talk to the UPS itself and do the network stuff, as Twisted and requests are in there.

      Yeah, this isn't exactly performance related, but it's indicative of how development is done these days: 400MB of code to draw a window on the screen to show what your UPS is doing, and some basic networking stuff.

      • Will apcupsd talk to it? I run it on my Windows box with an old APC.

      • I think you meant CyberPower, not Cyberlink. There may be other, less bloated software you could use. I have some APC units with Home Assistant and the Network UPS Tools add-on. That'll run on a Raspberry Pi. Still may be bloated, though, as HA stuff is mostly Python.

      • by tlhIngan ( 30335 )

        A lot of it is probably to be cross-platform. In the past, you probably wrote the code for one platform and that's it - Windows, for example.

        Nowadays Macs and other platforms are popular so you need to either develop an app dozens of times and try to keep them in sync (feature parity) or you use various cross platform libraries and then write the code once and it works across multiple OSes.

        This is especially tricky if you have something like macOS that has multiple architectures to go along with it, and var

      • I have Cyberlink's PowerPanel installed on my Windows machine to monitor my UPS ... 400MB of code to draw a window on the screen to show what your UPS is doing, and some basic networking stuff.

        The Linux version of the PowerPanel install package is less than 5MB, with a 16MB pwrstatd, and maybe uses some libs. The interface is terminal only, but the lights don't dim when it loads.

        As a side note, the battery has six seconds of run time after two years of very light use.

    • Why write assembly when we have really good C compilers?

      The effort should go into algorithms and code profiling. And reliance on libraries.

      • by gweihir ( 88907 )

        Sometimes you need assembly. A modern C compiler makes embedding a little assembly easy. GCC, for example, even allows you to do it MC68000 style instead of the deranged Intel assembly syntax. Usually you do not need more than a few lines, but any real systems or driver coder, and some others, should be able to do it.
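
        For readers who haven't tried it, here is a minimal sketch of GCC's extended inline assembly on x86-64 (AT&T syntax, GCC's default; the function and the instruction chosen are purely illustrative):

          #include <stdint.h>

          /* Add two integers with a single instruction via GCC extended asm. */
          static inline uint64_t add_u64(uint64_t a, uint64_t b)
          {
              __asm__ ("addq %1, %0"   /* a += b */
                       : "+r" (a)      /* output: read/write register */
                       : "r" (b)       /* input: any register */
                       : "cc");        /* clobbers: condition codes */
              return a;
          }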

      • Right, compilers are pretty damn good these days and we've learned a lot of tricks from people who write assembly. However, compilers are not AI and will invariably do things that are less than optimal in some cases. It's not really feasible to write large codebases in assembly, nor are the gains generally worth the effort. My point is simply that when I was first learning to code in the 90s the guys writing assembly were mocking the kids writing C and C++ for writing bloated software with tons of unnecessa
      • Libraries can be written in assembly, too. Did a fair bit of that in the early 1990s for DOS games. My static libraries ended up linked into all the company's games.

        Even later on in my career in the 2000s, I have worked on crypto code in assembly for various CPUs. The entire security stack was not written in assembly, though. It was mostly C.
        This was all user land stuff, not kernel.

      • Why write assembly when we have really good C compilers?

        The effort should go into algorithms and code profiling. And reliance on libraries.

        Algorithms are always first. Code profiling tells you where assembly could be useful; you don't write assembly without profiling.

        Also assembly is still popular. SIMD is assembly language programming, even if using compiler intrinsics.
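
        As a concrete illustration, a sketch of SIMD via intrinsics in C, assuming SSE on x86 (the function name is made up, and n is assumed to be a multiple of 4):

          #include <stddef.h>
          #include <xmmintrin.h>  /* SSE intrinsics */

          /* Element-wise float add, four lanes per instruction. */
          void vec_add(float *dst, const float *a, const float *b, size_t n)
          {
              for (size_t i = 0; i < n; i += 4) {
                  __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats */
                  __m128 vb = _mm_loadu_ps(b + i);
                  _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); /* 4 adds at once */
              }
          }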

      • A few years ago I ported some legacy device firmware from its ancient Sun-based development environment to gcc (68k cross-compiler) and Linux. Most of the code compiled reasonably as-is. Some of it required a bit of hand-holding, like telling gcc that I really did need to store four characters one at a time rather than a single long when talking to a dual-port RAM interface.
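
        That kind of fix usually comes down to the volatile qualifier. A hedged sketch of the idea (the address and names here are hypothetical, not from the actual firmware):

          #include <stdint.h>

          /* Byte-wide dual-port RAM window at a made-up address. */
          #define DPRAM ((volatile uint8_t *)0x00F00000)

          /* volatile forces four separate byte stores; without it the
             compiler may merge them into one 32-bit store, which a
             byte-wide interface can't accept. */
          static void dpram_write_tag(const char tag[4])
          {
              for (int i = 0; i < 4; i++)
                  DPRAM[i] = (uint8_t)tag[i];
          }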

        Some of the low-level OS code did in fact require assembly. So be it.

        ...laura

    • by gweihir ( 88907 )

      Assembly is special-purpose only and (except very early) always has been. But a modern C compiler is not hard to use, code is portable, and the binaries are blazingly fast if the coder knows what they are doing. The problem with many other languages is that they needlessly increase the active memory working set, and that decreases cache efficiency. That can cost you a factor of 10x, 100x or even more in performance.

      • I have definitely seen massive effects from caching when changing just a few lines of C code: a 50% cut in perf. The reality is that when you are writing portable code that has to run on many CPUs with various amounts and types of cache, you don't really have the luxury to do all this work. Only if you have a fairly controlled hardware/software environment, typically embedded.

        • by gweihir ( 88907 )

          Well, yes. But the thing is, in C it is easy to keep the memory working set small (relatively easy, that is), while in, for example, Java or Python it is exceptionally hard.
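
          A small C sketch of what keeping the working set small buys you (the layout and the 64-byte cache line are illustrative assumptions):

            #include <stddef.h>

            /* Hot field interleaved with cold data: 64 bytes per element,
               so every access drags in a mostly useless cache line. */
            struct particle { float x; float cold[15]; };

            float sum_interleaved(const struct particle *p, size_t n)
            {
                float s = 0;
                for (size_t i = 0; i < n; i++)
                    s += p[i].x;   /* one new cache line per element */
                return s;
            }

            /* Hot field packed contiguously: 16 useful floats per line. */
            float sum_packed(const float *x, size_t n)
            {
                float s = 0;
                for (size_t i = 0; i < n; i++)
                    s += x[i];
                return s;
            }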

    • Yep, my 10+ year old desktop still handles most of the indie games I like to play, even fairly new releases. Starfield is a very pretty PowerPoint presentation, but I don't need that level of graphics for most games I play. Sure my laptop with an NVIDIA 4070 card is the one I use to play when I don't want to go to the office and when I want high res screenshots for various artwork, but I don't feel like I'm missing out much when I use my "ancient" machine.

      I will upgrade eventually, probably when there is

      • Yep, my 10+ year old desktop still handles most of the indie games I like to play, even fairly new releases.

        Same here. It's on its 4th GPU. It's also had plenty of RAM, and even that was doubled a few years ago.

    • Younger software engineers don't feel the need to optimize

      Not just the younger ones. I rarely optimize like I used to. I do a lot in Python today, because it has good GPU integration via PyTorch and good libraries, and finding programmers who know their way around Python is much easier than for C++. Time was I'd write cunning algorithms to minimize the amount of processing, then split it up over the available CPUs (2, for example). Now I write dumbshit code and if it's slow, fling it to the GPU and watch it go fast.

      I miss do

    • My current CPU (5800X3D) has 6x more cache (96MB) than my first Pentium system had RAM (16MB), and that Pentium was no slouch; a lot of machines at the time had 8MB of RAM.

    • It's not just AI-generated code: Python is absolutely atrocious and is one of the most popular languages right now.

      I have one app that management insisted I use. It's written in Python and slow as molasses. The developer goes on and on about "how well it scales" and came in and set us up on a Kubernetes cluster where we had like 8 VMs serving a simple app that shouldn't take more than a single machine to run.

      You don't have to code in things in raw assembly, but so many things are stacking on top of ineffic

    • He's correct about the lack of optimization, yes. But I don't agree with him on returning to monolithic architecture. Monolithic architecture makes code maintenance more difficult, without solving the optimization problem. I'm old enough to remember when everything was monolithic, and it was un-optimized back then too! Not only that, but it was harder to optimize because you had to consider so many interdependent components.

    • by Luthair ( 847766 )
      I think it's more late-stage capitalism - companies only care about pushing out features; they don't care about supporting them or doing it right for the long term.
      • I think it's more late-stage capitalism - companies only care about pushing out features; they don't care about supporting them or doing it right for the long term.

        In any stage of capitalism, selling new (usually) generates more profits than maintaining old. Not really a head-scratcher. Thankfully, not everyone keeps to that strategy. One of the trending issues is that often the "new" isn't really that much better or more advanced than the old, and it's often not even better made.

  • It isn't about age... it is about demands. If a developer doesn't get deliverables in, even if they are resource hogs, they get replaced. Developers are viewed as a disposable, fungible quantity right now, so code is just done to make the Scrum master happy. Security? Who cares; if it is a show stopper, there are many layers between a customer that got hacked and the dev, so there are far fewer consequences to having an obvious security hole as opposed to not getting stuff in on time.

      Agile and Scrum are of

  • What are LLMs... (Score:5, Insightful)

    by Chris Mattern ( 191822 ) on Tuesday May 13, 2025 @10:19AM (#65373137)

    ...but "throw raw data and computing power at the problem until you can't see it anymore"?

  • No, they would be as rare/frequent as they are now. The problem is that any and every startup would like you to believe that their product is new, innovative, and worth investing in. The truth is that most of the time it's just the same pancake with slightly different seasoning. Hardly anything is innovative these days.

  • As a general guiding principle it would probably be enough to direct optimization effort at older code, which by definition has survived the surface churn of innovation. Obviously I mean older code that is widely used, which will tend to be the stuff towards the bottom of your stack.

    To some extent this is surely happening - I bet there's more optimization work done in kernels and operating systems than in apps. But even within apps it will be happening - your UI might be tweaked weekly, but those query opti

    • Maybe older code that has survived surface churn is that way because it pre-dates the Bloatware Age of assembling applications by linking to tons of libraries?

      And optimizing it may break something that doesn't need fixing?

      • by pr0nbot ( 313417 )

        I work on a code base that is, in parts, 40 years old. There is nothing more satisfying than digging into some bizarre lockup or slowness, discovering it's because of assumptions or constraints from the 80s that no longer hold, ripping out or rejigging a load of code, and seeing the problem go away not just in one application but in all the applications. Yes, of course you need to maintain code without breaking things, and the damage footprint is higher the more dependents the code has, but there are well-est

  • Complaints about software bloat have been around since the invention of the integrated circuit. The more computing power at your disposal, the less incentive there is to optimize code. People were complaining about how bloated Windows XP was, but it looks lean and mean compared to today's operating systems.

    However, I think the impact of software bloat has become most acute in the gaming realm. The problem is that gaming graphics haven't changed noticeably in a decade now, yet the costs and capabilities requ

  • He's absolutely correct.
    There's nothing faster than a skillfully written, monolithic 'C' codebase with just a wee bit of assembly thrown in for those CPU-bound routines. Twitter could run on 1/4 the servers they use currently; I could do everything I normally do on my computer at home or work on a 10 year old Athlon II or Core i5 CPU.

    BUT, there aren't enough programmers in the world capable of skillfully writing and skillfully maintaining all the services, features, games, entertainment, surveillance, etc

    • There's nothing faster than a skillfully written, monolithic 'C' codebase with just a wee bit of assembly thrown in for those CPU-bound routines.

      How monolithic? Does the author of every little utility need to implement their own GUI custom-tailored for the needs of that application? Or are we going to be generous and allow them to statically link in the GUI that comes 'close enough' - uh oh, that sounds like compromise sneaking in.

      Beyond GUIs the broader issue is code re-use. The more you can re-use, t

      • How monolithic? Does the author of every little utility need to implement their own GUI custom-tailored for the needs of that application?

        The core functionality of every little utility should be separate from any GUI. So that the GUI could be replaced platform to platform, and the core functionality remain portable unmodified code. So the code could be used in a server or embedded environment where there is no GUI.
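
        In C terms that split might look something like this (all names hypothetical, loosely themed on the UPS example upthread):

          /* core.h -- portable core logic; no GUI includes anywhere. */
          typedef struct { double input_volts; double load_pct; } ups_status;
          int ups_poll(ups_status *out);   /* returns 0 on success */

          /* core.c -- stub implementation, just for illustration. */
          int ups_poll(ups_status *out)
          {
              out->input_volts = 230.0;
              out->load_pct = 42.0;
              return 0;
          }

          /* main_cli.c -- one thin frontend; a GUI or a headless daemon
             would be another, linking the same core unmodified. */
          #include <stdio.h>

          int main(void)
          {
              ups_status s;
              if (ups_poll(&s) != 0)
                  return 1;
              printf("%.1f V, %.0f%% load\n", s.input_volts, s.load_pct);
              return 0;
          }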

    • There is nothing inherent in monolithic code that makes it more optimizable. I'd argue it's harder to optimize, because there are so many interdependencies.

    • No, I don’t think he’s suggesting rewriting everything. As with anything else, it’s the old 80/20 or 90/10 rule - rewrite the small amount of bloatware code where the computer spends most of its time into more efficient code (preferably non-interpreted). That doesn’t mean rewriting the entire application, either - just the part that’s slowing things down; such places almost always exist in any large application. This isn’t even anything new; we had plenty of interpretive
  • I miss the byte-counting assembler days of the 70's and 80's. RAM was measured in bytes or kilobytes and ROM space was almost as precious. Code was printed on paper, and computer monitors were heavy glass boat-anchors. You had to configure DIP switches to get devices to talk to each other (slowly). Documentation was a bookshelf, not a web site. Persistent storage required rusty iron - ok, that's still the case, but now it can be solid state too. We configured IRQs and I/O addresses. Sometimes we had to p

    • Yep. And many of us who grew up in that era can still write decent, compact, and fast code - to do some things. Unfortunately, there are too many jobs today that require a wide range of CS ability. You may be very good at X, but the project needs X, Y, and Z. If Y and Z can be provided by a bloat-filled library and get the project out the door faster, that's what people do. They don't want to hire an expert on Y and Z as well.
  • The harder you push for software optimization, the more you work towards assembly, and the smaller the universe of talent and knowledge transfer becomes. The further you push software optimization, the closer you get to 0 humans available. Modern software development techniques help scale the universe of available developers at the cost of hardware.
  • He is right, there's at least an order of magnitude of performance wasted in most systems, and I guess a lot more in some.

    The issue is that in many cases the hardware is cheaper than the engineering time.

    • "The issue is that in many cases the hardware is cheaper than the engineering time."

      I imagine this is one of those conventional wisdom things that's not based on reality, typically. Sure hardware is cheap. But most companies don't buy hardware anymore. They rent cloud computing.

      At even a somewhat modest scale, it can be pretty easy to save $50k or $75k there by addressing bottlenecks. Hit a couple of those each year and you've paid for an engineer.

    • by Bert64 ( 520050 )

      The issue is that in many cases the hardware is cheaper than the engineering time.

      In the short term perhaps, but generally code is written to perform repetitive tasks: the code is written once, but it executes thousands or even millions of times. That results not just in extra time, but in extra power usage and greater hardware costs.

  • rushed deadlines need to go as well

    • rushed deadlines need to go as well

      As an employee, I totally agree. As a small business owner, I realize that is a pipe dream. That would require the entire world to agree, which is never going to happen. As long as businesses have the freedom to compete with each other, the first out the door with a working product gets the customers.

  • Carmack has experience with efficient, low(er)-level coding and hence has an intuition about what a computer can actually do. (I do too.) Today, it is all layers upon layers: emulations, interpreted things, inefficiencies that compilers cannot fix, computing wasted on things not needed (for example, why graphically render a table in a browser when an HTML 2.0 native table would be perfectly fine), and so on and on.

    The other thing is that "innovative new products" are rarely innovative and often only new in

  • by Larry_Dillon ( 20347 ) <dillon@larry.gmail@com> on Tuesday May 13, 2025 @10:45AM (#65373209) Homepage

    As long as developers keep getting the fastest computers, with far more resources than the average consumer PC, programs will be bloated. If you gave all programmers i3s with 8GB of RAM, you would see a massive reduction in code bloat and a huge increase in efficiency -- but at the cost of developer time. They are simply not feeling the pain that normal people with average or older computers feel.

    • There are legitimate reasons for developers to have powerful computers. Development systems / IDEs, compilers, database engines, and so on take a LOT of horsepower. If developers had to use all those tools on anemic machines, they'd have to spend most of their time just waiting for compiles to finish.

      A better solution would be to have real-world simulation modes, kind of like how browsers let you simulate the screen of a mobile device, even though you have a desktop browser.

  • It is possible to squeeze out a lot of efficiency without resorting to this level of optimization. Just rethinking how much CPU your code uses can save a lot. Rework your code to be efficient. It will get faster as a result. Do this at every level, from compiler development on up.

    Learning to code tightly is a difficult task. Maybe bring back some coding for machines like the Commodore 64. I remember reworking code just to save 4 bytes. I needed just that much more space for the program to run. We don't need to ge

    • Most bloated code is not the result of lack of optimization, but plain, ordinary incompetence.

      As an example, my company chose NHibernate as its mechanism for interacting with databases. The developers who chose it didn't really understand SQL, so NHibernate allowed them to think like developers, instead of like SQL engineers. And it showed. They wrote NHibernate code that would, say, get a list of all employees, then use a for-each loop to iterate over the list, getting each employee's additional informatio

  • I have a gigabit internet connection, but some JavaScript-heavy sites, even with a content blocker, are slower than the dial-up era. Also, Firefox thinks it's funny to use over 10 gigabytes of RAM with less than 10 tabs open.
  • Here, like a lot of places, when someone says tariff, or Taiwan, or supply chain, there are big fear reactions: oh noes, our entire economy will grind to a halt if we don't have chips on the very latest process at bargain prices.

    Nope, not even a little. The first thing we can do is simply stretch the hardware refresh cycle. Maybe the gamers won't be happy, but those computers on the desks of Wall Street offices down to the ones running the shop floor don't suddenly quit working because they are 2, 5

    • Things have certainly changed a lot. My at-home computer is an 8-core machine, 6 years old, and I don't have the slightest inclination to update it. I am guessing I can get another 10 years out of it.
  • For doing a mesh on a vehicle CFD, I need something with more memory. I would not even consider building another computer unless it had at least 256GB RAM, but that may not be enough. It comes down to mesh resolution. Currently, at 64GB I can learn something from a 10mm mesh, but I need to get it down to 5mm, which would be 256GB. I want 2.5mm, which would be pretty good, but that may be 1TB of RAM. I am living on a fixed budget. Sigh.
  • Every AI warning we ever had was right.
    • I don't think actual GPUs are much more expensive. There are products being sold that can be used as GPUs but are actually Tensor or CUDA processors that can also work for gaming. Those are very expensive (like $thousands). But the last-generation cards actually perform better on many games, because they were designed for gaming. Those cards have generally risen with inflation and can be acquired for several $hundred.

  • It seems reasonable enough to suspect that requiring hardcore optimization would raise the barrier to entry vs. being able to just rapid prototype something on top of a giant heap of abstraction layers; but I'm a little puzzled by the implication that we are in an age of innovative new products.

    If anything, that is what is so disappointing about the bloat. It would be one thing if we lived in an age of exciting prototypes made possible by exceptionally quick time to minimum viable product; but most of th
  • "Nobody should ever need more than 640k of RAM."

    Once everyone found it easier to just add more memory, the idea of efficiency started to go away... knowing the more efficient way to divide by 2, knowing how to avoid unnecessary program pointer jumps (if the first character of the string does not match, why run strncmp() when you have a large data set)... these are things taught by experience. One of the best efficiency teachers would be to write for an embedded system with very limited memory. You lea
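
    The strncmp() trick above, sketched in C (the helper name is made up; the early reject skips the library call for the vast majority of non-matches in a large data set):

      #include <string.h>

      static int has_prefix(const char *s, const char *prefix, size_t n)
      {
          if (s[0] != prefix[0])
              return 0;                       /* cheap early reject */
          return strncmp(s, prefix, n) == 0;  /* full compare only if needed */
      }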

  • by reanjr ( 588767 ) on Tuesday May 13, 2025 @11:06AM (#65373285) Homepage

    It's amazing how fast software runs when someone actually knows what they're doing.

    I remember having a discussion with coworkers over selecting languages for command line tools. It's amazing to me that anyone would even think to suggest using Java to write a command line tool. The startup time for Java is atrocious, and it's completely inappropriate for a tool that might get run in a bash loop.

    Simply thinking for a few moments and choosing something like C or even Bash can make a huge difference in performance with essentially no extra effort (for simple enough tools).

    • But software doesn't feel any faster. Running Altium Designer still takes a good 10 seconds to load; it took that long 20 years ago. Going even further back, it took the DOS version of AutoCAD the same amount of time to load on my 25MHz 386. 40 years ago you could have a dozen people banging out code on a PDP-11/70 with a few megabytes of RAM. These days a browser tab takes a gigabyte and several CPU cores.

      • The software I write feels blazingly fast. When you take a website using Webpack bundling and break it up into hundreds of tiny files, your performance goes through the roof. Instead of downloading a separate 1.2 MiB bundle of JS with each new app page, you get 75 cache hits and 5 downloads totalling 100 KiB. Bundling is based on completely flawed assumptions.

  • Old-school programmers managed to use cleverness to get performance out of minimal hardware.
    In the 80s, I wrote image processing routines in assembly on a 4MHz 8088.

  • by bill_mcgonigle ( 4333 ) * on Tuesday May 13, 2025 @11:16AM (#65373305) Homepage Journal

    We need smarter compilers so humans can write maintainable microservices or whatever and the compiler can build a deployable efficient monolithic binary.

    That's the point of having a Universal Turing Machine.

    Our concept of compilers is still from the 70's, with incremental improvements. Perhaps Go has taken one step in this direction more than others on the journey of a hundred miles. SSA trees and such were once a hot topic of research, but now everybody is distracted with sexy AI topics.

    John is correct on the end goal but unrealistic on the path.

    Complexity is always preserved but can be shifted.

    It's possible transformers could be used to find such paths rather than traditional compiler strategies but retraining is too expensive for small changes at this point.

    Maybe some young punk computer scientists will buck the AI trend in the near future since that's what everybody is doing.

    • Compilers are quite a bit better than they were 20 years ago. The level of optimization you can get out of LLVM is impressive, including automatically applying SIMD operations to ordinary C/C++ code without having to use compiler intrinsics.
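
      For instance, a plain loop like this standard saxpy-style example (not from the thread) is a textbook candidate; clang at -O2 turns on the loop vectorizer and will typically emit SIMD loads, multiplies, and stores with no intrinsics in the source:

        /* y[i] += a * x[i]; restrict promises the arrays don't alias,
           which helps the vectorizer prove the transformation is safe. */
        void saxpy(float *restrict y, const float *restrict x, float a, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] += a * x[i];
        }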

      For the most part, writing a concise program that solves one problem can be quite fast. And it doesn't really matter if you wrote it in C++ or Java or Cython.

      But in this day and age, when we make a large project, we pull in hundreds of libraries or packages. And there is not a lot of

  • This is obvious when you play an id Tech game; they look amazing on lower-end hardware.

  • by Synonymous Homonym ( 1901660 ) on Tuesday May 13, 2025 @11:22AM (#65373313)

    Hardware is more energy efficient than ever, and software could make better use of it. In the trilemma of "good-cheap-fast, pick any two", good is always the one that doesn't get picked. That also means that correct, safe, secure, and resilient are at best afterthoughts.

    Software optimisation is not the problem. Modern languages, compiled and interpreted, are doing a great job at optimising.

    Monolithic designs are also not the answer; they are part of the problem. Intuitively one might expect that one big silo hiding all the complexity would be easier to optimise, but that is not the case: The complexity doesn't go away, and hiding the combinatorial explosion means you lose insight and maintainability.

    Microservices are also not the answer. You want small, specialised tools that each do one thing, and do it well; and you want to build systems from those inherently parallel, scaleable tools. You don't want the overhead of long-running processes communicating over the network. You want small, short-lived processes running in parallel, managed by the operating system (or a VM efficiently using the IPC primitives of the OS). You want short, fast scripts using highly optimised specialised commands.

    These are lessons that had to be learned over and over again. Artisans have learned them (good craftsmanship), mechanical engineers have learned them (respect the humble screw), electrical engineers have learned them (sockets and breadboards), and programmers have learned them (Unix philosophy).

    For all the cruft in software, hardware also has room for improvement: Currently it is more cost effective to kluge together packages (and not in a modular way) than to design an elegant machine. We get processors that implement many ISAs in one and still stall for ALUs, while it requires legislation to make the battery replaceable.

    But hardware has mass and volume, software has not. (Well, in a theoretical physics sense it does, but practically that's immeasurable.) So software expands to fill all available space and saturate every processor cycle, because empty RAM is wasted and idling CPUs are just space heaters. And what used to be accomplished in a few kilobytes now takes gigabytes, which take longer to load than the old solutions took to compute. What used to be a few lines of text is now still text, but spread over several files using different structuring syntaxes, compressed, indexed, and served by a daemon using no structured query language.

    Because if things were easy to read and easy to edit and easy to process, it wouldn't feel technical. It would feel like anyone could do it; it would feel like magic. At the same time, no thought is wasted on grokking the magic, because the IDE, the language server, and the LLM tell you how to work around the limitations that prevent you from making mistakes, saving you the trouble of understanding the theory.

    Anyone can add complexity, and anyone will. Keeping things simple is the real skill.

    And we don't keep things simple by ignoring complexity, one way or another.

    But modular designs make innovation easier.

  • Windows 95 ended the era of doing what you could with existing hardware and demanded new hardware to do the same work as before. I've been shouting this from the rooftops ever since. All people do is call me crazy and get me arrested for trespassing.

    • by evanh ( 627108 )

      MS-DOS programs were renowned for bloat in their time. All the layers hiding the underlying mess, as well as the featureless OS, were major factors. Win95-native programs were really quite compact and fast compared to what came next.

      The bloat curve is damn continuous.

  • I have only upgraded hardware when it began to show defects, and then bought 4-5 year old business equipment.

    It helps that I run Devuan Linux with a bare Openbox desktop and just a few programs like Pale Moon and LibreOffice, which limits the need for hardware resources.
    My (offline) gaming laptop is now 7 or 8 years old, running Windows 7 and the few older games that interest me; the worst offender is Cities: Skylines (not II), because added mods and assets cause so much bloat that 32GB of RAM is no luxury.

  • Back in 1986, I was working writing Commodore 64 programs for a Biology teacher at a Catholic high school. Technically, the C-64 was well past "outdated" by then, but the teacher didn't really want to have to learn new equipment. Admittedly finding an old Petspeed compiler to compile his old BASIC software helped immensely, but beyond that I was able to optimize his code to make it run multiple times faster. We had those machines doing things that made the IBM-supporters at the school board jealous.

    Okay, AI

  • You want a clipping of Godzilla's toenail. At least 80% of OO programmers will call the objects they know... and you get Godzilla standing in front of you, with a frame around part of one toenail.

  • by Somervillain ( 4719341 ) on Tuesday May 13, 2025 @12:55PM (#65373633)
    I've seen this in business development. There are many times a simple Java DIY solution is 100x faster than some Spring framework that barely accomplishes the same thing...and engineered for far more complex use cases than yours...and every manager tells you to stop fucking around and use a standard....and if you persuade your manager that your solution is a better fit, they'll just promote you and some ambitious engineer will rewrite your shit from 200 lines of simple code to 1000 lines of framework declaration.

    I thought the promise of AI would be taking working code and making it work more efficiently, but to my surprise, the reverse happened. Now shitty engineers rely on AI to write their code and it's always bloated and half the time it doesn't even work. So...until this generative AI fad passes...or until they fundamentally rewrite how AIs code, this will only get worse. ...it doesn't help that the default programming language in most examples is Python.
  • It used to be just "Work expands so as to fill the time available for its completion". Now it's also "Program bloat expands so as to consume the processing and memory available for its execution".

  • Innovative new products would get much rarer without super cheap and scalable compute.

    In addition to being a resource hog, what companies label "innovative new products" are all too often harder to understand, less convenient to use, less configurable, and more privacy-invasive than their predecessors. They're also more likely to be rentware or SaaS. So AFAIC, the ability to continue using older hardware is just one of several benefits of saying "NO" to bloated throw-away flavour-of-the-month software.

  • Hardware is increasing in capacity at an insane rate, so of course software efficiency takes a back seat. Under most circumstances, no one notices that it takes slightly longer for unoptimized code to run.

    Once we plateau (again), we'll see a greater push for optimization. These things happen in cycles.

  • You just have to code like a real professional software engineer https://github.com/EnterpriseQ... [github.com]!
  • Personally, I think that any employee who wants to write code in a non-safe language such as C that will run as part of a company's products or systems should get approval on sheepskin parchment written in the CTO's blood with a phoenix feather pen and sealed with the recently departed pope's wax seal.

    Obviously I am partially jesting: there are some cases where the low level languages are the only available or viable option (embedded code, device drivers, some cases with extreme optimization needs) but o

"It doesn't much signify whom one marries for one is sure to find out next morning it was someone else." -- Rogers

Working...