
Over 100 Public Software Companies Getting 'Squeezed' by AI, Study Finds (businessinsider.com)

Over 100 mid-market software companies are caught in a dangerous "squeeze" between AI-native startups and tech giants, according to a new AlixPartners study released Monday. The consulting firm warns many face "threats to their survival over the next 24 months" as generative AI fundamentally reshapes enterprise software.

The squeeze reflects a dramatic shift: AI agents are evolving from mere assistants into applications themselves, potentially rendering traditional SaaS architecture obsolete. The share of high-growth companies in the sector plummeted from 57% in 2023 to 39% in 2024, with further decline expected. Customer stickiness is also deteriorating, with median net dollar retention falling from 120% in 2021 to 108% in Q3 2024.
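For context, net dollar retention measures how much recurring revenue an existing customer cohort is generating a year later, with expansion included and churn subtracted; 108% means existing customers are still growing their spend, just barely. A minimal sketch of the standard calculation (the figures below are made up, not from the study):

    # Net dollar retention (NDR): revenue kept from an existing customer
    # cohort after one year. Illustrative numbers only.
    def net_dollar_retention(start_arr: float, expansion: float,
                             contraction: float, churn: float) -> float:
        return (start_arr + expansion - contraction - churn) / start_arr

    # A $10M cohort that adds $1.5M in expansion and loses $0.7M to churn:
    print(net_dollar_retention(10e6, 1.5e6, 0.0, 0.7e6))  # 1.08, i.e. 108%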

Comments:
  • How does the public know whether this is a real phenomena or companies merely blaming their slack or recession on AI because they can?

    Bots still make way too many dumb mistakes.

    • Grammar correction: should be "phenomenon" and not "phenomena".

    • Re:Bake or Fake (Score:4, Insightful)

      by fuzzyfuzzyfungus ( 1223518 ) on Monday April 21, 2025 @12:33PM (#65320833) Journal
      I suspect that at least some of it is unrelated.

      One of the reasons why so many vendors embraced 'AI' so desperately is that, just before the 'boom' really took off, things were not looking so good in the 'SaaS' world. VCs and stockholders wanted infinite growth forever, obviously; but in 2023-ish, if I'm remembering correctly, the average number of SaaS products in service per enterprise declined after something like a decade of growth, and individual companies in the area were seeing other metrics (customer acquisition and churn rates, margins, revenue growth, etc.) looking a bit worrying, without anyone having an obvious idea of what to do about it. The readily available appetite for pure 'cloud' we-do-server-operations-and-capex-so-you-can-do-opex-and-flexibility had more or less been satisfied. Mutually beneficial improvements (efficiency from customized CPUs, virtualization-focused DPUs/management processors like Amazon's 'Nitro' stuff) were hard work mostly available to the hyperscalers, while just turning the screws on customers was easy but unpopular, and potentially dangerous if it triggered a review of whether your product was actually worth it. And the software niches where customers actually liked SaaS better than on-prem or fixed licensing had been fairly quickly mined out, while just Adobe-ing on-prem software into 'SaaS' by pure contractual force was not popular.

      The hope, futile in many cases (but easy to indulge, because LLMs are extremely good at producing mediocre responses to badly structured input, so unbelievably simplistic and lazy 'integrations' can be knocked together even more quickly than something like custom reports where someone needs to know the SQL structure), was that Glorious AI, only available in The Cloud As A Service because it's way too expensive for little people to train and operate and we need OpenAI's Trust and Safety to save us from synthetic kiddie porn and terminators, would be a shot of sweet, sweet "supercycle" growth for a trend that seemed to be fading.

      As we know, it didn't go that way. Customers were mostly apathetic about the 'me-too' integrations; the integrations were also relatively expensive to actually operate (which makes them very bad 'filler' for increasing the apparent value of a bundle); and the very speed and ease with which everyone was able to go from OpenAI API key to occasionally functional 'feature' made them exquisitely 'moatless', so anyone who was previously having a hard time justifying their product's cost or a customer's seat count ended up exactly where they started.

      I suspect that there are some 'SaaS' guys who were actively harmed: whether 'classical' ones that got out-bullshitted and cut by herd animals in management rushing to allocate resources to a bold AI strategy; or 'SaaS' stuff whose whole deal had been 'we plug into the data you don't understand and, um, totally do insights', which was now in the crosshairs of a technology that's nearly perfect if you care more about the ease of ramming in data you don't understand than about what comes out; and probably more than a few who sabotaged themselves by taking their eye off whatever it was they were actually good at to focus on the shiny new thing instead. But at least some of this can be understood in light of the fact that, within the industry, "AI" was something they hoped would be a savior from conditions that already suggested market saturation and limited growth opportunities.

      Not a direct analog to something like the '3D TV' hype train that literally all the consumer AV people jumped on like terrified lemmings a while back, since this 'AI', whatever its faults, does appear to be its own thing, not an internally developed piece of SaaShole copium; but it is similar in the sense of an industry collectively freaking out about the end of a period of relatively easy and natural growth that it could find no real way to extend.
    • How does the public know whether this is a real phenomena or companies merely blaming their slack or recession on AI because they can?

      Bots still make way too many dumb mistakes.

      I'll let you in on a little secret: the companies don't give a fuck about the results as long as they can save money. Let's say they buy a customer service bot from Googs and it is utterly worthless. Why does that matter to the company when they already have your money? You can't go to the government for help anymore, as mango mussolini and his dorks systematically dismantled anything that would help you. It's a rigged system, a scam. Funnel all the money to the already wealthy and you get left with eith

      • > companies don't give a fuck about the results as long as they can save money. Let's say they buy a customer service bot from Googs and it is utterly worthless.

        One way to interpret this is similar to the offshore-to-India fad where companies did indeed get software made relatively cheaply, but the maintenance costs kept skyrocketing, as duck-wire needed chicken-tape to get it to work. Such companies eventually brought those projects back inhouse, and consultants made boat-loads of money fixing the offs

        • by Tablizer ( 95088 )

          For the record, I'm not saying E. Indians are "bad coders", only that the coding shops were rewarded for delivering initial features, not for making the software maintenance-friendly. So the offshorers got what they paid for and only what they paid for. The bosses didn't understand the importance of long-term maintenance so didn't measure/reward for it.

          • The bosses didn't understand the importance of long-term maintenance so didn't measure/reward for it.

            Or they understood it wasn't worth the cost. Corporations have a single value, the bottom line, and measure decisions against that value. There are certainly employees who have other values, but applying those values in ways that hurt the bottom line is a violation of their fiduciary responsibility to the company.

            • Fiduciary responsibility does not, and has never meant, that the bottom line must continue to go up.
              I have to imagine that anyone who believes that dumbass meme has never, never worked in a corporation where they were actually exposed to the C suite.
              • Fiduciary responsibility does not, and has never meant, that the bottom line must continue to go up.

                It means they have a responsibility to act based on the company's best interests, not based on their own interests or values. Corporations' interests are solely financial, and they define their success financially. To the extent they consider other values, it's ultimately because they think those values will add to their financial success, i.e. the bottom line.

                You aren't going to find many successful CEOs who have walked into their board and said "This is going to cost us a lot of money with no payback, but i

                • It means they have a responsibility to act based on the company's best interests, not based on their own interests or values.

                  Correct. Which is not what you said, at all.

                  Corporations' interests are solely financial, and they define their success financially.

                  That's simply incorrect.

                  To the extent they consider other values, it's ultimately because they think those values will add to their financial success, i.e. the bottom line.

                  Incorrect.

                  Fiduciary responsibility in the US is guided by the Business Judgement Rule.
                  This means, essentially, that as long as the board is acting in good faith, a court will not presume to have better business judgement than the board member, regardless of the outcome.

                  The Interests of the Corporation are not solely value-based. They can also be longevity-based. Stability-based.
                  To quote Clarence Francis, President of General Foods in

                  • They can also be longevity-based. Stability-based.

                    Which is not of any financial value?

                    We would serve the company’s interests badly by shifting the fruits of the enterprise too heavily toward any one of those groups.

                    Again what are the "interests" not served? The answer is ultimately the company's financial interest.

                    The payback may be nothing other than consumer sentiment, or good will earned toward policymakers.

                    They couldn't care less about consumer sentiment that doesn't translate into customer purchases. Likewise, the good will of policymakers needs to translate into policies that serve the company's financial interests. Corporate donations to candidates are investments, not charity. It's not as if they just want to be liked.

                    While that may or may not be true, that is due to the financial interests of boards and executives, and has nothing to do with fiduciary responsibility.

                    It is if they consider the corporation's interest as its

                    • Which is not of any financial value?

                      Not necessarily an increase of it, no.

                      Again what are the "interests" not served? The answer is ultimately the company's financial interest.

                      You're now trying to rewrite your claim of increasing the bottom line as "interest".
                      I have demonstrated that not only is this not historical, but it's not a legal interpretation of fiduciary responsibility, either.

                      They couldn't care less about consumer sentiment that doesn't translate into customer purchases. Likewise, the good will of policymakers needs to translate into policies that serve the company's financial interests. Corporate donations to candidates are investments, not charity. It's not as if they just want to be liked.

                      Above is literally proof that this standpoint is incorrect.

                      It is if they consider the corporation's interest as its financial success. In fact, if they acted in their own financial interests to the detriment of the company's, they would be violating their fiduciary responsibility to the company.

                      It would indeed, but nobody said that. Rather, they've done their best to align the interests of the company with theirs, which is enrichment. This is, as demonstrated above, a relatively new aspect of a

                    • they've done their best to align the interests of the company with theirs, which is enrichment.

                      Yes, in both cases. And the point was that corporations are interested in one value, their enrichment. They are essentially sociopaths. They have no human values beyond avarice. You apparently agree and I will accept your semantic quibbles about my calling that "the bottom line".

                      Not necessarily an increase of it, no.

                      What difference does it make whether it is protecting current financial value or increasing it? The answer is none.

                    • What difference does it make whether it is protecting current financial value or increasing it? The answer is none.

                      Oh, I'd say it can make every difference in the world.

                      Yes, in both cases. And the point was that corporations are interested in one value, their enrichment. They are essentially sociopaths. They have no human values beyond avarice. You apparently agree and I will accept your semantic quibbles about my calling that "the bottom line".

                      A corporation can be interested in one value, their enrichment, but that is not part of their fiduciary duty.
                      Corporations are not inherently sociopathic. They're composed of people, and those people have the widest latitude with regard to what they consider "the interests of the corporation".
                      This is why I told you about the Business Judgement Rule.
                      A judge will refuse to hear your argument for why what a CEO did was not in the interests of the company- on

                    • The entire reason I initially said:

                      I have to imagine that anyone who believes that dumbass meme has never, never worked in a corporation where they were actually exposed to the C suite.

                      Is because I regularly attend board meetings (I answer to the board in my corporation), and I can assure you decisions are made all the time just because they're considered morally right.
                      There's no concern about what it'll do for our bottom line, as long as it doesn't sink us or hamper long-term plans, which are (believe it or not) not just "MAKE DOLLA BILLS!".

                      A sense of corporate self-preservation does not equate to some kind of sociopathic need to hoover up every doll

                    • A judge will refuse to hear your argument

                      I doubt many judges are reading this and I am not making or interested in legalistic arguments.

                      Corporations are not inherently sociopathic. They're composed of people,

                      I thought you were arguing legalism, so they are people. The fact is that collectively the decisions in corporations are made based on the financial interests of the corporation. To the extent someone knowingly makes a decision to the financial detriment of the company, they are not fulfilling their responsibility to the company. They can't say I took all the money and gave it to the poor because it was the Christia

                    • A sense of corporate self-preservation does not equate to some kind of sociopathic need to hoover up every dollar that's possible.

                      Tell me about the decisions your board made to the detriment of the company's long-term financial health because they were morally right.

                      There are quite a few analogues between corporations and people

                      There are also important differences, even if the Supreme Court justices argue otherwise. In fact, the only real value they share is avarice. And let's be clear: we are not talking about the individuals who work for the company but the company itself. No matter how empathetic a person is, the corporation's financial interests constrain that empathy.

                      And to be clear I am tal

  • Innovate or Die (Score:2, Informative)

    by kwelch007 ( 197081 )

    This is not new. Not sure AI is really the right direction yet, but in tech, innovation has always been the key to long-term success. Whether that innovation is acquired or developed in-house, well, that's a different question.

  • by jenningsthecat ( 1525947 ) on Monday April 21, 2025 @12:23PM (#65320813)

    Isn't this going to make software even more impenetrable than it already is? Traditional software - even that with lots of tech debt and shitty or non-existent documentation - is amenable to analysis, troubleshooting, and being fixed. Its behaviour is largely deterministic, is it not?

    Is the same true of 'applications' which are mostly queries to an LLM? If I query an LLM today, and issue the same query to the same LLM in a week, will the answer be identical? If not, then it seems to me there's potentially a hell of a problem with "AI agents (are) evolving from mere assistants to becoming applications themselves".

    I have almost zero expertise here, so please tell me what I'm missing in this picture and if I'm right, wrong, or somewhere in between.

    • For LLM codegen, the problem really is the same: trying to figure out someone "else's" code.
      In general, I can say that LLM codegen'd code is usually far better documented, though. If you ask it to crank up the verbosity, it'll happily do so.
      They tend to produce pretty clean code, but definitely not without mistakes.
      In general, I haven't seen where the "prompt" used to create it is saved, because frankly... it's just not that simple.
      The "prompt", in terms of an LLM, is really its entire context window,
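      Conceptually, something like this (a rough sketch; the structure and names here are invented for illustration):

          # The model sees one big assembled string, not just your last
          # message. Every name below is illustrative.
          def build_context(system_msg, file_snippets, chat_history, user_msg):
              parts = [system_msg]
              parts += [f"// {path}\n{code}" for path, code in file_snippets]
              parts += chat_history
              parts.append(user_msg)
              return "\n\n".join(parts)  # this whole string is "the prompt"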
      • In general, I can say that LLM codegen'd code is usually far better documented, though. If you ask it to crank up the verbosity, it'll happily do so. They tend to produce pretty clean code, but definitely not without mistakes. In general, I haven't seen where the "prompt" used to create it is saved, because frankly... it's just not that simple.

        Thanks - I suspected that last part about the prompt being saved, and it's good to have confirmation. Regarding documentation, if you as the programmer fix a problem in the code that the LLM generates, is there a mechanism to track the change and any comments about it? And is the LLM in any way "aware" of the changes and comments?

        Do you think things like comments, versions and change tracking will be introduced to make these tools more robust and manageable? Or are we at the point where business cases and fashions change too fast to bother with niceties like maintenance and legacy?

        • Thanks - I suspected that last part about the prompt being saved, and it's good to have confirmation. Regarding documentation, if you as the programmer fix a problem in the code that the LLM generates, is there a mechanism to track the change and any comments about it? And is the LLM in any way "aware" of the changes and comments?

          Yes, though the way this works is a bit of a machine in itself.
          Good aiding systems will track LLM commits, and yours, in git and help you keep the LLM on the rails with regard to not overwriting fixes you've made.
          The way the whole process works is kind of a metamanagement of the context window during the assisted coding.

          The LLM ingests comments like any other code, and understands that they are comments, contextually.
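          The mechanics are roughly like this (purely illustrative; not any particular tool's actual commands or behavior):

              import subprocess

              def commit_all(author: str, message: str) -> None:
                  # Commit under a distinct author so the tool can later diff
                  # "what the LLM wrote" against "what the human fixed" and
                  # avoid letting the next generation pass overwrite fixes.
                  subprocess.run(["git", "add", "-A"], check=True)
                  subprocess.run(["git", "commit", "-m", message,
                                  f"--author={author}"], check=True)

              commit_all("llm-assistant <llm@example.invalid>",
                         "llm: generate parser skeleton")
              # ...human fixes happen here...
              commit_all("Jane Dev <jane@example.invalid>",
                         "fix: handle empty input")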

          Do you think things like comments, versions and change tracking will be introduced to make these tools more robust and manageable? Or are we at the point where business cases and fashions change too fast to bother with niceties like maintenance and legacy?

          It's not technically impossible to, say, track the changes to the context window at eac

    • by allo ( 1728082 )

      Get a local model (or an API provider that guarantees not to change the model), fix the parameters (including all random seeds), and you get deterministic behavior. The question is whether you're always sure that the first shot is the best, and whether the input you process is the same.
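      For instance, with a local model via Hugging Face transformers (a minimal sketch; "gpt2" is just a stand-in for whatever model you actually run):

          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          torch.manual_seed(0)  # fix the seeds you control
          tok = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          inputs = tok("def fizzbuzz(n):", return_tensors="pt")
          # do_sample=False means greedy decoding: the highest-probability
          # token is picked at each step, so the same model plus the same
          # input yields the same output on every run.
          out = model.generate(**inputs, do_sample=False, max_new_tokens=40)
          print(tok.decode(out[0], skip_special_tokens=True))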

      • Thanks for the info. For me, this also raises the question of what happens during a software update. Are bugs or incorrect answers more likely to occur when updating the LLM and/or its 'app' than when updating a compiled program of similar function and complexity?

        • Parent is correct that you typically operate an LLM in deterministic mode (temperature = 0) when doing coding assistance.
          This means the logits are no longer sampled from as a probability distribution; instead the single highest-probability token is selected at each step (effectively collapsing the distribution to a one-hot vector).
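          In code terms, that step looks roughly like this (a sketch of the idea, not any library's actual internals):

              import numpy as np

              def next_token(logits: np.ndarray, temperature: float = 0.0) -> int:
                  if temperature == 0.0:
                      # Deterministic: take the single highest-scoring token.
                      return int(np.argmax(logits))
                  # Otherwise sample from softmax(logits / temperature);
                  # subtracting the max is the usual numerical-stability trick.
                  probs = np.exp((logits - logits.max()) / temperature)
                  probs /= probs.sum()
                  return int(np.random.choice(len(logits), p=probs))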

          There are two parts to the rest of your question: the app, and the LLM.

          If the LLM changes, its output will change for any given input.
          However, the app can manage how your input becomes the LLM's input, and as such the output can
        • by allo ( 1728082 )

          There are two aspects. Wrong answers and different answers.

          When you change the environment, you may get different answers because, for example, you have another random seed (something generates an additional random number before your LLM runs) or maybe even different rounding errors. That does not mean the answers are wrong.

          And then there is the question about if the answers are right/good. And that mainly depends on the model. Even with all rounding errors you can force some things to behave similarly. If y

    • The code versus comment problem is not solved by AI tools. The architecture of LLMs is such that a comment is merely a piece of text generated in addition to the code, not actually describing the code, but simply being statistically likely to be jointly there. The code itself is not understood by the AI either, only generated to correlate with the context window containing your prompts.

      Note that human comments in a changing codebase are often unreliable and incomplete too. This leads to the usual problem

  • by snowshovelboy ( 242280 ) on Monday April 21, 2025 @12:43PM (#65320865)

    What are the companies, and what do they do? The article is behind a paywall. The summary is a bunch of AI buzzwords. No information here...

    • Take a look at the "Tranquility Reader" extension for Chrome / Firefox, it can display some of those paywalled articles, such as this one.
      • On Firefox without add-ons, the current method is to enter "Reader Mode", which is the first icon to the right of the URL. The traditional method (still valid, but ugly) is to use the good old menu bar (still available from the right click), then View/Page Style/No Style (which deactivates CSS).

    • What paywall? I read the entire article without issue. The problem must be on your end.

      That said, they don't specifically mention any company, only the general term of mid-sized SaaS companies.

  • High growth is always risky, because high-growth companies focus on short-term gain at the expense of long-term stability. These are precisely the kinds of companies that would be impacted by AI. On the other hand, companies that focus on their customers, establishing long-term relationships that are mutually beneficial, are less affected. Their customers are more likely to stick around.

    Pick your poison. You want a wild ride? Go for high growth. You want stability and pred

  • Ever been to a brand new, cool restaurant that just opened? It's a madhouse. The lines are long, the service is terrible, the food is often not so good either. There's just too much hype. Wait a couple of months until the hubbub dies down, then try that new restaurant. They will have had time to really train their employees, the food will be better, and the lines shorter.

    This AI boom is like that new restaurant. Everybody wants in, so people are making dumb business decisions, buying marketing hype for half