Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com)

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
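
As a quick sanity check of the arithmetic, and the kind of dependency-free Python program the quote alludes to, here is a minimal sketch (the port and handler are arbitrary placeholders, not anything from the original post):

    # Back-of-the-envelope check of the "0.2 requests per second" figure.
    SECONDS_PER_MONTH = 30 * 24 * 3600
    print(500_000 / SECONDS_PER_MONTH)  # ~0.19 requests/second on average

    # A tiny, dependency-free server in the spirit of the 1990s workflow the
    # post describes: starts in milliseconds, logs to stderr immediately.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Hello).serve_forever()

Even this single-threaded stdlib server comfortably clears 0.2 requests per second on modest hardware, which is the point the post is making.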
  • Let's not forget those small sites that have been slashdotted to no end.
    I'd say, if scaling costs you dollars instead of pennies, there's no point in not doing it.
    However, if scaling costs you thousands instead of dollars, you're better off not doing it.

    Yes, most things don't scale and never need to. Until they do. And then you're fucked and scramble to make them scalable, at costs exceeding the overhead you'd have paid if you had done it up front.

    • by Baron_Yam ( 643147 ) on Sunday July 28, 2024 @03:08PM (#64661956)

      Meh. I've been running a site for a few decades now and even today it's a single VM on a 25 meg line. It's useful, but if the site address managed to get on Slashdot and enough people visited it to kill it?

      Big deal. If it was an important site to keep live 24/7, it'd already be able to handle the load. Some stuff isn't worth dollars when pennies are more than enough.

      • I don't think slashdot gets enough traffic anymore to be the threat it once was.
        • I don't think slashdot gets enough traffic anymore to be the threat it once was.

          I'm sure it doesn't. Between the shadow banning, Cloudflare shit, not allowing new user signups, duplicate articles, loss of what was once a fun Slashdot culture, random IP bans, all sorts of other bullshit, and the lack of Cowboy Neal options, I think Slashdot can be considered "CPR in progress."

    • by F.Ultra ( 1673484 ) on Sunday July 28, 2024 @05:04PM (#64662204)
      If you are hit with a true DDoS then it doesn't matter how much your solution will scale, the attackers will still have access to far more bandwidth and "scale" than you ever will.
  • The Rule (Score:4, Interesting)

    by The Cat ( 19816 ) on Sunday July 28, 2024 @02:44PM (#64661880)

    Ever see Apollo 13? You know the scene where John says "we have to turn everything off. Got to get it down to 12 amps."

    Even if that hadn't been the crucial element that allowed the spacecraft to maintain power until re-entry, it was generally a good idea for a wide variety of reasons.

    But that didn't stop the little bald-headed shit from shouting "can't run a vacuum cleaner on twelve amps John!"

    The little shit is an example of the guy in every planning session, every meeting and every evaluation who shouts down the voice of reason. He comes up with a zinger and disrupts all reasonable conversation in order to a) get attention and b) get promoted/save his job.

    Starting in about 1998, managers stopped listening to John and started listening to the little shit. The results started about six years later and speak for themselves.

    • Re:The Rule (Score:5, Funny)

      by Tablizer ( 95088 ) on Sunday July 28, 2024 @03:23PM (#64662006) Journal

      > didn't stop the little bald-headed shit from shouting

      Look, we got a domeaphobic and compactaphobic here.

    • "can't run a vacuum cleaner on twelve amps John!"
      Yes you can. Move to Europe, and discover the power of 230V.

      • Don't need to move anywhere, just import a vacuum. The European vacuums are limited to 900 Watt. That's less than 8 Amps even in the States.

      • "can't run a vacuum cleaner on twelve amps John!" Yes you can. Move to Europe, and discover the power of 230V.

        One of these days here in 'Murrica, we'll get big volts too!

        • Like most Americans, I have 240v in the house. It's just that we don't need to use it except for the biggest loads like a stove. I think it's a better approach. Smaller voltages in general running through the home, with the ability to use double in the few devices that really need it.

          • by stooo ( 2202012 )

            Very wasteful in copper.

            Like most Americans, I have 240v in the house. It's just that we don't need to use it except for the biggest loads like a stove. I think it's a better approach. Smaller voltages in general running through the home, with the ability to use double in the few devices that really need it.

            Yes, I have the stove, dryer, hot tub spa, and my linear amplifiers running off 240 volts. And I prefer to not have that anywhere but where I need it. Getting zapped with 120 volts @60 Hz isn't all that pleasant, but it isn't as fatal as 240 volts can be.

            And yes, of course a person can be electrocuted by 120 Volts. Under the proper conditions a 9 volt battery can kill you. But those conditions seldom exist. As voltage goes up, so do the chances of having a really bad day. My 240 Volt lines can deliver

    • Please stop. I know you wanted to say something intelligent and try and back it up with some cultural trivia. You could have easily used a Dilbert cartoon.

      The guy who said "can't run a vacuum cleaner on twelve amps" was an engineer, not a middle manager. He was pointing out the difficulty of getting the module to run on 12 amps, when it wasn't designed to do that. Which is why the engineers had to design a procedure to shut down and start up the module, because it had never been done before.

      So no, th
  • by LindleyF ( 9395567 ) on Sunday July 28, 2024 @02:47PM (#64661890)
    It's addictive. He has some very interesting insights. https://apenwarr.ca/log/ [apenwarr.ca]
  • by BrendaEM ( 871664 ) on Sunday July 28, 2024 @02:48PM (#64661894) Homepage
    We live in a world where computer users are all beta-testers; why write well today what we can patch tomorrow? All of our privacy has been given away. The powers that be are trying to destroy the personal computer, where the user owns their software. We weren't meant to leave the world of mainframes, where people could not own their own computer--just to be hooked up and on the meter to a commercial server. The WWW is not what was intended by the elder W3C; web pages were meant to be written by any human, using a text editor, without Javascript, to be universally viewed, on any device.

    The fact that Wikipedia thinks that not wasting your screen is normal--is a sign that something is wrong.
  • by Big Hairy Gorilla ( 9839972 ) on Sunday July 28, 2024 @02:50PM (#64661900)
    We had feature complete software somewhere around 15 years ago, if not earlier.
    Everything since then has been bloat for data gathering, and make sure to re-arrange and hide functions to give the impression of progress... that is supposed to be the reason to "upgrade".

    Open source software doesn't have the imperative to pretend to improve. I use chunky open source email programs like Sylpheed and Seamonkey.. stuff that looks dated, from the mid-2000s... It's fucking great. High contrast, easy to read, highly functional, no data gathering.

    Some people say Windows XP was the pinnacle... No?
    • by AmiMoJo ( 196126 )

      The main reason people like containers is not to scale, it's because they just work without needing to install stuff and fix dependencies/config.

      It's not just web either. Embedded and small-system Linux developers love them because they can do reproducible builds that work a decade later when they need to make some small change.

        • The main reason people like containers is not to scale, it's because they just work without needing to install stuff and fix dependencies/config.

        I think there is a lot of truth in that but it's another symptom of a malaise affecting a large part of the industry: importing the most trivial things from external dependencies and trying to glue everything together instead of having decent standard libraries available by default and being willing to write a little bit of code now and then. Popular "dynamic" languages like Python and particularly JavaScript have caused this syndrome to spread like a plague. The culture of dynamically linking libraries tha
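
        As a small illustration of the standard-library point, here is the sort of trivial call that often gets pulled in as an external dependency even though the default toolbox already covers it (a sketch only; the URL is a placeholder):

          # Fetch and parse a small JSON document using only Python's standard
          # library; no third-party HTTP or JSON package required.
          import json
          from urllib.request import urlopen

          with urlopen("https://example.com/api/status.json", timeout=10) as resp:  # placeholder URL
              data = json.load(resp)

          print(data)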

        • I do think we're starting to see some welcome resistance to these trends. "Old-fashioned" self-contained compiled executables that can be installed by... copying that one file to the computer you want and running it... are becoming popular again in some quarters.

          This has both upsides and downsides. On the one hand, it sidesteps Dependency Hell and streamlines "installation" enormously. On the other hand, it can cause bloat, both on disk and in RAM. Sometimes it's also slower than the same software using dependencies, though I'm not sure why.

          I wouldn't want that pendulum to swing hard to one extreme or the other again - having it swaying a bit somewhere near the middle seems more flexible and useful.

          One terrific thing about the self-contained approach is that it all

          • by Anonymous Brave Guy ( 457657 ) on Sunday July 28, 2024 @10:57PM (#64662826)

            On the other hand, it can cause bloat, both on disk and in RAM.

            The thing is, I expect that the opposite is almost always true now. The bloat used to happen sometimes because many applications would all make use of a common library that would only need to be stored and loaded once if shared, whereas each statically linked program carried its own copy. But in practice libraries are often larger now and any given program often uses only a very small proportion of the code in them. Statically linking only the parts actually used and using a modern optimising tool chain could easily end up with smaller total executable size both on disk and in RAM, even if several different programs that are currently running all happen to use the same handful of syscall wrappers and language standard library functions. And in any case the executable code part of a modern application is often so small compared to application data and the resources of the system that it's running on that even a bit of duplication wouldn't cause any harm.

            There's still a valid argument about avoiding duplication in genuinely resource-constrained environments like embedded systems. At least in those environments you typically know exactly what you're putting in there and can make an informed decision about the trade-offs.

            There's also still a valid argument about applying timely security updates where you only have to replace a commonly used but vulnerable library once system-wide if it's shared instead of updating everything that depends on it individually if it's statically linked every time. (Then again, there's a valid counterpoint that bugs or vulnerabilities in a bad shared library have a larger blast radius and can't necessarily be predicted or controlled by the developers of the software relying on that library the way they can with static dependencies they determine absolutely and can test as they see fit.)

            So I agree that flexibility is potentially useful here, but I still think there is a lot to be said for having a single, self-contained executable file that can run on any platform with a compatible hardware architecture and interfaces for system services. Both of these often last on the order of decades while remaining stable or backward compatible. It seems to be increasingly the case that we're lucky if the runtime environments for all these highly dynamic systems last for more than a few months.

            • Insightful and informative - thanks. The last paragraph prompted a thought that it should also be easier to optimize code when it doesn't have to adhere to the generalist design philosophy that might be necessary when using unaltered 'off-the-shelf' libraries.

              OTOH, using customized code risks introducing vulnerabilities that could be addressed at a single point when using shared libraries. And that comes right back to your 'blast radius' argument. So, as you pointed out, it's a complex situation. Lots of tr

          • by Sique ( 173459 )

            Sometimes it's also slower than the same software using dependencies, though I'm not sure why.

            Mostly it's a caching issue. If the library you are dynamically linking to is already loaded in memory or even in the processor cache, because some other binary is using it already, you will get better performance than if you have to reload them exclusively for your task from the hard disk each time you start your binary.

        • > copying that one file to the computer you want and running it

          Yeah. That Go executable that takes 70 MB for "Hello World."

          • You aimed a couple of orders of magnitude high on that one. And that's the price for statically linking the Go runtime. It's not as if it's going to scale proportionately with larger applications.

            Compared to a Python runtime/virtualenv, which really can run to tens of megabytes unless you heavily manually optimise it (been there, done that), or hundreds of megabytes of junk in a node_modules directory for the 753 transitive dependencies you accidentally installed, a single executable on that scale for a com

            • Oh, really? [github.com]

              The point isn't that it's impossible to make a small binary; it very much is (try ANSI C with one of the mini-libc versions around the 'net). You can have a static site generator in a few dozen kB.

              The point is that nobody bothers.

              And this isn't a question of containers or not, or of interpreter or not.

              Containers, as they are, are pretty disk-efficient. They're meant to solve the problem of dependency encapsulation, and they do a terrific job of that.

              Whatever junk people put on top, it's on th

              • Fair point, Go does have form for being "heavy" in this way even in larger applications. Though I think it's also fair to point out that things can be done to significantly reduce the bloat in practice and part of the problem with Go is that those things aren't the default behaviour or sometimes widely known in the community. Other relatively static and pre-compiled languages like C, C++ and Rust don't tend to suffer from the same problem to anything like the same degree and do produce reasonably compact ex

      • reproducible builds that work a decade later

        without needing to install stuff and fix dependencies/config.

        That's not something that you need a container or a VM for. Windows has had this for decades now.

        What you need is a stable userspace API, or failing that, an automated means to install every DLL simultaneously, and resolve which one is needed at runtime. Windows has the latter in the form of the Side by Side assembly cache. (That %WINDIR%/WinSxS directory.)

        Linux on the other hand, tries to resolve all dependencies at install time using complicated package managers that cannot do their jobs because of sa

        • by caseih ( 160668 )

          Linux shared libraries have been versioned from the beginning, and multiple versions can and do live side-by-side without any complicated SxS mechanism. SxS exists because DLLs and the loading mechanism for DLLs do not have intrinsic versioning.

        • Well, supposedly, I started it, so... might as well finish it off too :-)

          No. Dynamically linked bullshit like MS peddles causes a dependency of the worst sort.

          Freezing the code environment in an opensource format virtual machine is far better because it has no dependencies, and no cost other than brains.

          I have a few Virtualboxes from 20 years ago, both server (linux) and desktop WinXP with a code development environment, that are admittedly dependent on Vbox, at this time, but at least my business and code
        • by AmiMoJo ( 196126 )

          Yeah, unfortunately we are stuck with Linux for some of our older projects, and for our automated build infrastructure.

          Even LTS versions of Linux tend to be broken long before the LTS period ends, e.g. they only support ancient versions of OpenSSH that can't connect to modern VPNs due to not supporting any secure encryption schemes.

      • by gl4ss ( 559668 )

        The junky overhead makes it so that you'll end up with the same problems with containers.

    • I still use Eudora Pro 7.0 daily, albeit with heavily updated TLS
    • Some people say Windows XP was the pinnacle... No?
       
      Only with hindsight. Everyone forgets what a turd XP was until SP2. One of the most piece-of-shit versions of Windows ever released, until that service pack.

  • "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

    He doesn't think that is a far more significant issue than the burden of the tools used? Wonder why? Because he's selling an alternative tool?

    As to the original question, is modern software development mostly junky overhead? No, no it's not, because most modern software development is embedded software just as it always has been.

  • Nobody wanted to wait for their code to compile, so they started using interpreted languages. Then they wanted the "oh I'm not working because my program is compiling" excuse back so they invented container building.

    I've never figured out why you'd develop that way. If it winds up needing a container when it's finished, shove it in one. It should take less than an afternoon.

    • by cen1 ( 2915315 )
      You can use it for a dev runtime but that doesn't mean you are building images with every line of code change.. for an interpreted language you can just volume mount to the exact version of insert-your-interpreted-language-here.. and yes, most people just do it for the final deploy.
      • by ceoyoyo ( 59147 )

        Doesn't sound like that approach would provide enough time to have a sword fight or "have an inspirational hallway interaction with coworkers."

  • by Casandro ( 751346 ) on Sunday July 28, 2024 @03:08PM (#64661960)

    You know how people say that any task will take long enough to fill the allotted time for it? Same goes for complexity. Things grow more and more complex until they are about to collapse.
    Many advances in software development allow us to hide complexity in a variety of ways. Sometimes we get something like the "Unibus" or "UNIX" or "von Neumann computers" which actually eliminate complexity in a meaningful way, but usually it's just hiding it, making it seem easier to manage. In theory those advances could have given us the ability to handle more complex problems, or to handle the problems programmers are facing in more efficient ways. Some of the phenomena of "10x programmers" or "10% programmers" (essentially the same thing viewed from 2 perspectives) can be attributed to programmers either limiting the complexity of what they are doing to become more efficient, or doing the opposite.

    Software development teams still fail at making simple database applications. Something that is a completely solved problem. Back in 1995 you could use Delphi 1 to create such an application for you, just by clicking. No typing necessary!

  • by physicsphairy ( 720718 ) on Sunday July 28, 2024 @03:09PM (#64661972)

    I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep.

    No idea what they were trying to host, but that is a beautiful dream land where 500k requests are all evenly spaced at 0.2 req/sec. How does your phone do if a fifth of them happen at the same time? Are you going to be able to test your changes and know they will work when deployed? Can you test what happens when some other piece fails? Roll back easily with no downtime if you make a mistake? Upgrade your phone with no downtime? How graceful is it going to be if you have accidentally introduced something that leads to a memory leak or connections not being released? How are you maintaining and auditing security? Can you provide limited access to someone else who needs to work on it? How are you hosting backups? What if this application is doing something more intensive (such as loading and processing huge images)?

    I'm sure this guy can write a simple web page and get 98% uptime on a simple server. And it can be kind of nice to ditch all the heavy abstraction that goes into a modern scalable app. There are probably a lot of cases where Kubernetes is adding more effort than it's worth (and these days you have other, more managed solutions). But if you have paying customers, especially when big clients are using your software for their own customers, the problems you face and need to solve are multiplied far, far beyond your server having enough power to pass a load test on a typical day of the week.
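
    To put the burstiness question in rough numbers, a small sketch (the "fifth of the month's traffic in one hour or one minute" scenarios below are assumptions for illustration, not measurements):

      # What 500,000 page views/month looks like under a few traffic shapes.
      MONTHLY = 500_000

      average = MONTHLY / (30 * 24 * 3600)   # evenly spread over the month
      peak_hour = 0.20 * MONTHLY / 3600      # a fifth of the traffic in one hour
      peak_minute = 0.20 * MONTHLY / 60      # ...or compressed into one minute

      print(f"average:     {average:.2f} req/s")     # ~0.19
      print(f"peak hour:   {peak_hour:.0f} req/s")   # ~28
      print(f"peak minute: {peak_minute:.0f} req/s") # ~1667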

    • by darkain ( 749283 ) on Sunday July 28, 2024 @03:25PM (#64662012) Homepage

      I was doing every single thing you listed in your questions long LONG before Kubs was even a thing. These all existed. We could do them, and quite easily in fact.

      Kubs has actually made several of those tasks MORE difficult in recent times.

      • by cen1 ( 2915315 )
        Yes, you could, with a LOT of manual effort and reinventing wheels. Now you can just write a few hundred lines of yaml and get automatic autoscaling, rolling updates, A/B updates, HA.. The real effort is having the sysadmin capacity and expertise to run your own k8s cluster, which is why cloud vendors make so much $$ selling you a managed one.
        • Now you can just write a few hundred lines of yaml and get automatic autoscaling, rolling updates, A/B updates, HA.. The real effort is having the sysadmin capacity and expertise to run your own k8s cluster,

          The first sentence here seems to contradict the second sentence. If you need sysadmin capacity and expertise, then they are doing more than writing a few hundred lines of yaml.

      • by narcc ( 412956 )

        Indeed. Not only has K8 made those things more difficult, it's also made them more expensive!

        Let's be real here: all K8 is doing for most users is adding overhead, cost, and complexity. This is a consistent theme in software and why modern development has become "mostly junky overhead".

        Just for fun: Here's what Stack Overflow's infrastructure looked like in 2016. [nickcraver.com]

      • Yeah, Slashdot (this site) was getting a way higher number of hits on some ancient (from our current perspective) servers back in its early days a couple of decades ago. And this site is dynamic - not some static site.

        There was (and probably still is) a slashdot effect on sites linked to by slashdot.

      • It has also made many of these tasks far easier.

        For example, if I want to host several different websites, possibly with different backend languages, it makes that easier - even if it's only on a single host. You would have to manually configure a web server to reverse proxy to the individual sites' directories or backend processes, figure out certificate provisioning for all of the domain names, update DNS records, deal with updates/deployments, and so on. That's not even considering things you'd need to b

    • by Tom ( 822 )

      No idea what they were trying to host, but that is a beautiful dream land where 500k requests are all evenly spaced at 0.2 req/sec.

      Even if the peaks are two orders of magnitude (100x) larger, that's 20 req/sec. That'll get you a bored yawn from any non-braindead webserver running on anything more powerful than a microwave.

  • by ArchieBunker ( 132337 ) on Sunday July 28, 2024 @03:14PM (#64661978)

    AltaVista (remember them?) used to serve tens of millions of hits a day on machines less powerful than today's typical developer machine.

    https://groups.google.com/g/co... [google.com]

    You can hardly toggle an IO pin on an arduino today without pulling in a library from some random repository.

    • by cen1 ( 2915315 )
      Because you usually don't want to just toggle a pin but create an actual useful product. So you could either spend weeks or months reinventing the wheel and end up creating your own pin toggling library, or use something that exists, has contributions from people smarter than you, and is used by thousands of people. You have to pick your battles when using a library vs rolling your own, and this is no different than it was back in the day.
    • Re:Agreed (Score:4, Funny)

      by narcc ( 412956 ) on Sunday July 28, 2024 @07:56PM (#64662494) Journal

      Sure, that pin toggling library might have 12 additional dependencies, each with their own dependencies, etc., etc., but our build tools handle all that silently! Hang on a sec, I need to ask ChatGPT why I can't get my tiny 1.2GB blinkin' light project onto my Arduino... The problem can't be my code. I cobbled together only the most popular libraries, so they must be highly optimized!

  • Yes YES yes YES yes!

    The ratio of caring about fads and me-too-ism versus parsimony in stacks is about 100 to 1.

    And the idea that one-stack-fits-all has got to go. Internal CRUD is not the same as web-scale e-commerce, and each shouldn't be watered down to cater to the other. Niche-focused tools/stacks are overall better (if road- and time-tested).

  • 0.2 Req / Sec? (Score:5, Interesting)

    by darkain ( 749283 ) on Sunday July 28, 2024 @03:21PM (#64661994) Homepage

    I was running over 400 req/sec on a 4-core/socket, 2 socket Xeon server in 2009 with PHP + MySQL. No CDN. No page caching. Every single page fetched live data from the database which contained hundreds of millions of records.

    Seriously, someone is bragging about 0.2 req/sec? I have a dynamic web environment right now running on an original Raspberry Pi (single-core, 32-bit, 700MHz, 256MB RAM) that would blow that out of the water.... oh yeah, and that RAM is also unified memory, so it's shared with the GPU, so it really has even less.

    Shit today that people are making is just too goddamn wasteful.

    It's why when I mentor devs, I tell them to run their servers AND their clients on Raspberry Pis. It'll make them feel the performance impact of their applications, both in response time and in how many client-side resources they consume.

    • Raspberry pies are too fast for that really.

    • by narcc ( 412956 )

      when I mentor devs, I tell them to run their servers AND their clients on Raspberry Pis

      That's good advice.

      It's amazing how little so many developers these days get out of hardware we couldn't even imagine.

  • by kopecn ( 1962014 )
    Yes it is
  • by Dracos ( 107777 ) on Sunday July 28, 2024 @03:29PM (#64662018)

    Everyone with less than 10 years of (web) development experience seems convinced that every shitty little PWA with no real ideas or features, that gets 100 views a month, needs every single bit of enterprise-level infrastructure.

    It's a weird adaptation of Prosperity Gospel to software. "We're all just temporarily embarrassed startups."

    Meanwhile, these developers are severely lacking in fundamental skills and sense of perspective. They live in bubbles of the tech they use. Except, most self-styled "full stack developers" (which, if they don't name a language, invariably means Javascript... ugh) still manage to know exactly one thing about PHP (it sucks) or Python (it's slooooooow), without having any actual exposure to either.

    This situation is not sustainable. Eventually the industry will realize these "developers" aren't employable. They themselves will never realize that their portfolios full of "finished" "apps" benefit no one.

    Do the so-called development bootcamps literally, actually serve Kool-Aid to their marks?

    • by gweihir ( 88907 )

      Yep. Pretty much. A thoroughly pathetic situation and indeed completely unsustainable. We need actual engineering by competent people in this space, not amateur-level technicians creating flashy toys.

    • by Junta ( 36770 )

      You are right about their "one thing" and the amusing thing is that one thing almost always also applies to their chosen technology. For example, Python is slow, but so is Javascript, and in either case it usually doesn't matter because almost all the applications aren't compute intensive anyway.

      This did happen before; around the dot-com bomb, the industry was full of gold-rush-seeking new college hires who paved the world in some pretty gnarly Java.

    • Do the so-called development bootcamps literally, actually serve Kool-Aid to their marks?

      Probably not.

    • I have no dog in this hunt, as a simple home hobbyist. Nobody seems to have mentioned the "corporate" aspect of software that scales. A startup -- whose job it is to sell itself, not supply current demand -- cannot be seen as a "niche" player from day one. If the web software it uses can only support 2 hits/sec then VCs and huge companies are not going to be impressed. Where is the upside? The possibility of a home run reduces to that of a bunt.
  • This is a blogpost by a CEO. In other words, it's an ad.

    Are any editors' palms getting greased for this Slashvertising?

    On a positive note, it is more informative than a lot of the "news" here, so good job Tailscale shill.

    • by Junta ( 36770 )

      It might be an ad for name dropping their particular product, but the premise is pretty genericized and they aren't pitching some product to get out of that mess. Seems to be a tangential rant that may describe how they approached designing their solution, but the generic gripe applies across the industry to things that have nothing to do with the named product.

    • yeah this

      While the guy might even have some interesting points, everything dovetails neatly into a sales pitch. Yuck.

  • by gweihir ( 88907 )

    Because modern software developers are mostly incompetent. A majority of actually competent developers would never tolerate framework upon framework, dodgy code repositories and "managers" chasing fad after fad.

  • by TheNameOfNick ( 7286618 ) on Sunday July 28, 2024 @04:16PM (#64662112)

    500,000 page views per month. But that’s 0.2 requests per second

    Page views do not equal requests. "Modern" web pages load a bunch of individual URLs, in addition to the external javascript libraries, fonts and other resources, and many of those requests are for resources that are dynamically created by interpreted scripts. I'm sure you have a nice phone, and with some optimization it could do what you claim, but "web scale" isn't about 20,000 page views per day. What do you do when a single server isn't enough anymore? The web is still mostly bloat, web designers are still crazy gluttons, but there is a point where you actually need a distributed server arrangement, even if just to get closer to geographically diverse clients. Anyway, this hilarious almost-10-year-old presentation is getting more relevant year after year: https://idlewords.com/talks/we... [idlewords.com]

    • In the 90s, people were doing dynamically loaded content with many different resources in Perl. These days, setting up a load balancer in front of a pair (or more) of servers is basically trivial.
  • by Junta ( 36770 ) on Sunday July 28, 2024 @04:26PM (#64662124)

    The industry is eternally susceptible to "habits of effective people" and launches headfirst into buzzword-first engineering/promises. Most of the time, they are looking to solve a problem they will never have, using tech that they don't actually understand.

    Has come up a number of times in my work this year.

    There was a stated problem that a certain team acted all melodramatic about, claiming how difficult it was to achieve. I thought it should be very easy, so I did a proof of concept on a singular 10 year old desktop over my home internet connection that successfully handled the entire problem space with room to spare. So they decided they would start trying to deploy it but couldn't figure out how to 'kubernetes' it up (despite, again, a single instance scaling way more than they would ever need, and kubernetes ultimately being superfluous, but workable; they just lacked the competence), so they said they won't do it and reported to executives that it still is an unsolvable problem.

    In another scenario, there is an issue that demands a little scale-out, but my team implemented it without kubernetes (kubernetes adds more complexity that doesn't really help in this scenario). We have a history of very large scale, but another team convinced execs that because we don't use kubernetes, we can't scale as well as that other team could, and to give them the mission and funding for a really critical customer that my team would normally handle. Turned out they couldn't make their "solution" scale to even 5% of our previous proven capacity and royally pissed off the customer. Now the executives are saying that we *must* not tell them about our usual strategy, because we would look even *worse* if we had the "better" solution and withheld it from them, so we have to make that other project work for them. Note this team previously lost us a very big customer, so if they lose this one, I anticipate the business can only get snowed so much before they act sanely.

    Happened with OpenStack and is happening with Kubernetes: the projects never got "easy" to use, and way too many novices are trying to use them and need bailing out left and right because they end up in a scenario they cannot debug. A large amount of my time is digging other teams out of the mess they made by debugging their solutions in Kubernetes or similarly overly "architected" schemes. I *can* debug it and understand the why and how of what's going on, but I bemoan that it's completely superfluous in all these little projects and wasting my time where they could have just done it a more straightforward way.

    But some article in the tech media comes along and reaffirms the perception that the only hope to scale is Kubernetes, and *every* little niche problem domain will need to scale to that point.

    My "this is overcomplicated crap" sense is triggered by:
    -Let's use Kubernetes
    -You need to add more message brokers
    -We can't make a dockerfile to install a simple piece of software from a yum repository, you'll need to do it for us
    -Here's a chart showing the 8 instances every single request must traverse to be serviced
    -The customer will need a gateway appliance to connect their on-premise to our cloud hosted solution

    • Same here. I've been developing for almost 20 years now, and in recent years I've noticed a worrying increase in the unnecessary complexity of many projects, and where I've noticed this complexity it has been caused by sheer incompetence on the part of the developers involved.

      And why do I say incompetence? Because they go after the latest fad in programming, without caring whether it fits the particular problem they want to solve, purely because it's already done and so they don't have to think too hard about
      • As an industry, we've given up on elegance.

        Not that programmers can't produce it, but that they can't even recognize it.
      • by Junta ( 36770 )

        What's worse, what they lack in competence they usually make up for in lip service to convincing their bosses that the latest framework will solve all the problems in the universe

        Fully agree, but to add, the bosses see two proposed realities:
        -You actually need to have and retain competence
        -This buzzword allows your programmers to be fungible, low-skilled, lowest-bidder hires who still get the results you need

        Further add that it can often take *years* for the "jig to be up" with the second option, given how a lot of these scenarios play out. Currently the darling child of my org is a software division that goes *hard* on that second philosophy and they've managed to rationalize every failure for o

    • -You need to add more message brokers

      This one is worse than just complexity, it's practically a guarantee that you have mysterious race conditions, bottlenecks, and synchronization errors all over the place. Modern software developers somehow never learned how to do asynchronous programming.

  • The things he rails about are precisely why CrowdStrike's outage was able to happen.

    But these kinds of habits are commonplace, if not ubiquitous, within the industry. Engineers (and I hesitate to call them that, because...) rely on these automated build/test suites to make sure their code works, but don't take the time to write basic good code or engineer the code, instead writing sloppy and incomplete code to 'just get it done'.

    Agile et al has been a cancer.

    • CrowdStrike happened because they did NOT follow good deployment practices. Their issue affected on-prem and cloud systems equally. Their system downloaded updated configuration files *without* a new install, just like all antivirus software does. The update that brought everything to its knees was a "content data file" download. https://www.crowdstrike.com/fa... [crowdstrike.com] It had nothing to do with containerization or Kubernetes or Docker or any other such mechanisms.

      • by stooo ( 2202012 )

        >> CrowdStrike happened because they did NOT follow good deployment practices.
        Like anybody basing their product or infrastructure on M.S. stuff can possibly follow any good deployment practices???

        • Windows has nothing to do with it. CrowdStrike's sensor has crashed Linux too.

          https://www.theregister.com/20... [theregister.com]

          And yes, good deployment (and bad) practices are possible with any OS. Things like blue green deployments https://www.redhat.com/en/topi... [redhat.com], are not dependent on OS-specific features. CrowdStrike's model, on the other hand, made it impossible for customers to use blue green deployments, also regardless of OS.

  • by Somervillain ( 4719341 ) on Sunday July 28, 2024 @09:45PM (#64662702)
    This is a HUGE problem with HUGE ecological consequences. The problem is that programming "experts" tell contradictory stories and the folks writing the checks have to make sense of a bunch of Aspie nerds shouting at each other.

    Some POS decides that Python is the only way they can be productive...because that's the only language they ever bothered to learn...then they get creative and write a bunch of libraries in Python that do useful things...now we all need to use Python because of TIOBE or some lib or whatever.

    Same story with node.js....some ignorant POS that shouldn't be a professional software engineer goes around saying "Oh, you're using Java on the backend?....what is this 2002??...don't you know JavaScript is 'the language of the web'"...and between that and having 2000 frameworks, the app runs slow AF and doesn't scale and now you have to buy 10x the cloud instances...but hey, your app is written "in the language of the web!!!"...so that has to be good for something, right?

    Hell, one can argue the same is true about Java, my language of choice. I learned Java because that's what everyone hiring was using. A lower level language with fewer frameworks would definitely be more efficient.

    Look at your "business logic" someday. I will wager that for 90% of working programmers, you're just doing basic CRUD. You use fancy terms like "business logic" or in the old days "middleware" that sound cool, but all you do is take input from a form, validate it, and save it somewhere...usually a fancy relational database you don't need. How many people have apps like nearly every one I've worked on, where they normalize data into 10 tables, but only query on a few columns in a tiny subset of tables?...I had one app...to save a customer's data, 5kb, it would require 200 SQL statements...because NORMALIZATION...why?...ummm..."best practices?"....this is across 50+ tables...but only 5 of the main tables were ever queried by anything other than the ORM reassembling things. So...we're wasting a TON of energy and making our customers sit through delays and buying more cloud instances...why?...to say we're normalized...the app is slower. We get no benefit. But no one questions it. If I were to write the app intelligently normalization-wise, by saving the JSON from the user and using columns in a single table for indexing....instead of breaking it into tiny relational pieces across 50 tables, the codebase would be 1/10th of the size, 100x faster, and better in every way...
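
    A minimal sketch of that single-table, JSON-plus-indexed-columns approach, using SQLite purely for illustration (the table and column names are invented):

      import json
      import sqlite3

      # Store the raw JSON document; index only the columns that are actually queried.
      conn = sqlite3.connect(":memory:")
      conn.execute("""
          CREATE TABLE customer_docs (
              id          INTEGER PRIMARY KEY,
              customer_id TEXT NOT NULL,
              created_at  TEXT NOT NULL,
              doc         TEXT NOT NULL  -- the JSON payload straight from the form
          )
      """)
      conn.execute("CREATE INDEX idx_docs_customer ON customer_docs (customer_id)")

      # One INSERT instead of a couple hundred normalized ones.
      payload = {"name": "Ada", "address": {"city": "London"}, "orders": [{"sku": "X1", "qty": 2}]}
      conn.execute(
          "INSERT INTO customer_docs (customer_id, created_at, doc) VALUES (?, datetime('now'), ?)",
          ("cust-42", json.dumps(payload)),
      )

      # Reads hit the indexed column; the application unpacks the JSON.
      row = conn.execute(
          "SELECT doc FROM customer_docs WHERE customer_id = ?", ("cust-42",)
      ).fetchone()
      print(json.loads(row[0])["address"]["city"])  # -> London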

    ...then another engineer would look at it, say I am a total fucking moron (which well...is often true, but not in this one instance....)...and I would get fired. No matter how objectively better my idea is, it would get rejected by nearly every employer and my peers would say I am clueless and don't know how to design a relational database structure...I do...I know how to do it REALLY well...so well, I can tell it's pointless for the majority of applications I've worked on...

    ...but if you're a business owner and not a SQL Expert...how do you tell who is right?

    Same applies for productivity. If you think Python is a much more productive language than Java, you've never bothered to learn Java correctly. If you did, you'd know there's not much difference. Sure...there's pros and cons in every language, but Python is objectively slower and has a lot of complications a compiled language avoids entirely.

    That said...if you're not a programmer, but a business owner...how do you determine who is right? Am I right in saying that Python or JavaScript isn't really that much more productive than Java, the python/JS programmers just never bothered to learn another language and "when all you have is a hammer, everything looks like a nail"....or are they right that it's much more productive and I am just an old guy who thinks he knows more than he actually does? (I happen to work with all 3 languages all the time)

    So I get why business owners listen to the experts wh
  • by LostMyBeaver ( 1226054 ) on Monday July 29, 2024 @12:17AM (#64662926)
    We ran hundreds of bank branches, tens of thousands of ATMs, and processed millions of transactions a day using an NCR mainframe with the approximate CPU capacity of a 10MHz 80286 throughout 1998. We had extremely high uptimes. I believe I only experienced one service outage in the two years I was there, and it was short.

    Every developer wrote optimized code and the computer and OS were precisely tailored for On-Line Transaction Processing. In modern terms, this is serverless programming.

    I've run many tests using MongoDB or CouchDB on 3 Orange Pi 5 Plus boards and a similar transaction cluster with a simple nginx load balancer configured as failover. The OLTP subscribed to a branch on a GitHub project and pulled on change detection. It then AoT compiled. The languages supported were "anything", as the tool compiled based on a project file which contained build info and routing info.

    The results were, I could run a bank 100 times the size on about $2000 of equipment with high availability. And operations would be pretty straight forward.

    It would require a proper database administrator and that the programmers actually understood Big-O (meaning how to reduce processing complexity).

    Most modern systems are overkill. We don't educate kids to write transaction systems anymore, even though most compute tasks are transactional. The result is generally big, ugly and unruly systems.

    P.S. Most of the mainframe code was Neat-3, basically assembly. My code was primarily TypeScript or WASM. JavaScript compilers and VMs are probably the most efficient runtimes ever in history, especially when you consider their additional ability to sandbox.
  • by Keruo ( 771880 ) on Monday July 29, 2024 @03:05AM (#64663056)

    If we analyze what tailscale actually is, you'll realize it's just IPv6 with extra steps to hide some of the commonly considered complex parts.

  • Yeah, pretty much. (Score:5, Insightful)

    by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Monday July 29, 2024 @03:36AM (#64663088)

    Disclaimer: Senior Webdev here.

    I've been doing professional non-trivial web development for 24 years now and most of what I see going on these days is a hideously bloated, largely pointless, superfluous mess. Mind you, there is some merit to the way the youngsters do things these days: de-normalisation, re-introducing JS to the server (JS was a server-side PL once already in the mid-90s, only no one remembers that anymore), JSON instead of XML, etc. All of that is good and has seeped into web development in general and has moved it forward.

    However, I observe a massive problem that plagues most projects these days: most of them are dynamic and have huge build chains for no good reason. Containers aren't used for redundancy and moving workloads but for every developer using their own massive stack. Deciders don't have a clue what's going on and just want something colorful to click on, yesterday. Every dev is doing their own thing and they couldn't care less as long as their docker containers run. Massive Gitlab setups with 5000+ features and functions when only a handful of features are needed. Set up by people who can't even change their upstream repo, let alone configure git hooks correctly. The Dilbert quota is very often through the roof.

    Many projects I see today are literally unmaintainable in their current state or have deteriorated so far that they pose a serious security risk, with absolutely no one having any host or system involved under control or even being aware of the risks involved.

    There are some modern web setups that use SOA correctly, spread workloads and have admins, devops and devs who know what they are doing and making the best of modern "cloud-computing", but that's maybe 10%. Most of what's happening in modern web projects appears to be bullshit work, likely to eventually be consolidated away. Which is why I'm not too keen on things like container virtualization. That's one of those technologies that are used for mostly the wrong reasons.

    As someone who has experience and still stays on top of current web-dev fads, my job these days very often is damage control, even if most people aren't aware of that.

    So, yeah, a lot of software-development at least in the web is bloated and flaky.

    • All of these products that seem to be wantonly thrown into every project all have fancy professionally written marketing pages tailored to appeal to middle managers. Cargo-cultists and frauds regurgitate this marketing material to sound knowledgeable with the bonus of usually offloading all extra, and much existing complexity onto somebody else. That's how they get raises and "tech lead" positions while doing nothing but copying and pasting Javascript from stack overflow.

  • by cas2000 ( 148703 ) on Monday July 29, 2024 @03:57AM (#64663126)

    It's hard to take any blog or corporation or individual seriously when their web site is yet another abomination with light-grey text on a white background.

    It's as if they're deliberately trying to make their site difficult to read - but it's more likely that they're just clueless and don't understand that the point of communication is to actually communicate.

  • by Tom ( 822 ) on Monday July 29, 2024 @04:11AM (#64663144) Homepage Journal

    People don't understand big numbers. So when you say "500,000" it sounds like a lot.

    But per month? Pfft. That's just over 16,000 per day. I worked at a dot-com company in the early 2000s that served that many requests per minute. At that time, we were industry-leading. But that's over 20 years ago.

    But that doesn't matter. 500,000 still sounds like a lot. And managers, being the simple beings they are, are easily impressed by big numbers. Just like big words. Like how we went straight to "hyperscalers", because anything less than "hyper" apparently isn't good enough anymore.

    But we engineers and IT people are also blind, just in different places. We think scaling is about size. To a lot of decision makers, scaling is about costs. It makes a huge difference - accounting wise - whether you buy a ton of hardware or pay a monthly fee. It's different cost categories, feeding different KPIs. And you can rant to your shareholders about flexibility and dynamic dingdong.

    In the end, it's all about convincing people that the product you're selling is what they should buy. And cloud marketing has done a great job. Combined with large companies like Mickeysoft going all cloud for their own reasons (subscription fees instead of one-time purchases), it creates the impression that cloud is a movement and everyone is doing it.

  • And it was decreed: "Thou shalt not reinvent the wheel." Thus, from this commandment, every humble two-wheeled barrow became burdened with countless wheels, each dependent upon myriad others. Need a barrow to ferry the wheat? Behold, one that not only carries the wheat but also counts lunar cycles and is ready to be transformed into a merchant ship.

    Thus, the realm of technology became a tangled web of wheels upon wheels, all bound by the edict of best practices.

  • I wonder how much of this is driven by developers wanting to play with new technologies and add buzzwords to their resume / LinkedIn profile?

  • Besides the obvious, which is the elimination of traditional IT staff by using 'cloud', there is the thought that developers are interchangeable, fungible resources. If all you have to use is building blocks to make applications, you don't need real developers. You just have assemblers of apps through the process of combining these building blocks. So to summarize, what drives management to continue to allow this expansion of continuous layering of frameworks and whatnot is the hope that at some point they want to hire
  • by whitroth ( 9367 )

    For over 20 years, I've been complaining about bloat. That's heavily due to OOP. I want a clipping of Godzilla's toenail, and you give me all of Godzilla, with a window frame around his toenail.

    Sure, you can write good code in any language. 80% of the code written, like the stuff I've seen everywhere from tiny 20-person companies to major telcos, is *crap*. They don't know how to get the lower level module, so they take the big one.

    Am I saying most of you write bloated crap? Yes.
