
Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible? (aboard.com)

Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled "The AI Disruption Has Arrived, and It Sure Is Fun," arguing that Anthropic's Claude Code "was always a helpful coding assistant, but in November it suddenly got much better, and ever since I've been knocking off side projects that had sat in folders for a decade or longer... [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month."

He elaborates on his point on the Aboard.com blog: I'm deeply convinced that it's possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....

I believe there are millions, maybe billions, of software products that don't exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can't find the budget. They make do with spreadsheets and to-do lists.

I don't expect to change any minds; that's not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it's more possible now than it was a few years ago.

From his guest essay: Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company's quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system... What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?

That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that's going to be fine. People don't judge A.I. code the same way they judge slop articles or glazed videos. They're not looking for the human connection of art. They're looking to achieve a goal. Code just has to work... In about six months you could do a lot of things that took me 20 years to learn. I'm writing all kinds of code I never could before — but you can, too. If we can't stop the freight train, we can at least hop on for a ride.

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it's fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

This discussion has been archived. No new comments can be posted.


  • Is it? (Score:5, Interesting)

    by phantomfive ( 622387 ) on Sunday February 22, 2026 @07:46AM (#66003802) Journal

    [W]hen the stars align and my prompts work out,

    That doesn't sound like a frequent occurrence.

    The metaphor "when the stars align" is usually used to indicate something is quite rare, in fact.

    • legal weight (Score:4, Interesting)

      by will4 ( 7250692 ) on Sunday February 22, 2026 @08:49AM (#66003862)

      Are the executives willing to have the AI write the text and numbers in their next government required financial filing?

      In the USA, executives are legally required to certify those numbers: https://en.wikipedia.org/wiki/... [wikipedia.org]

      "Title III consists of eight sections and mandates that senior executives take individual responsibility for the accuracy and completeness of corporate financial reports. It defines the interaction of external auditors and corporate audit committees, and specifies the responsibility of corporate officers for the accuracy and validity of corporate financial reports. It enumerates specific limits on the behaviors of corporate officers and describes specific forfeitures of benefits and civil penalties for non-compliance. For example, Section 302 requires that the company's "principal officers" (typically the chief executive officer and chief financial officer) certify and approve the integrity of their company financial reports quarterly.[10]"

      • Simply pointing out that the large tech companies heavily pushing AI for everything don't trust it themselves on critical government filings where there are legal consequences.

        Software is held to a lower safety standard in most areas, with exceptions such as medical devices, airplanes and transport vehicles, air traffic control, power plant systems, etc.

        The question that needs to be asked by the media, Wall Street analysts, and social organizations is "What are some things for which your company does not recommend its AI

    • by gweihir ( 88907 )

      Indeed. But this nicely shows how mentally incapable the AI fans really are.

      • by evanh ( 627108 )

        He even has this little gem: "I don't expect to change any minds; that's not how minds work."

        Funny, I was very much under the impression a solid argument does change minds. Of course, there's very little solidity to his arguments. He repeatedly says how poor his results are.

        • Oh, it does. Smart people will change their mind when presented with new facts. Dumb people tend to resist new facts that counter their opinion (assuming they even realise it counters).
        • by gweihir ( 88907 )

          Well, only about 20% of all people are open to rational argument. But that is a lot more than none. Sounds to me this asshole knows his arguments are bogus.

      • by gweihir ( 88907 )

        Indeed. But this nicely shows how mentally incapable the AI fans really are.

        The truth hurts, fuckups. And down-moderating the truth does not change it.

    • Did some scripts with chatgpt... It does help an amateur like me get things done fast. Things I normally would not even think about are now a half-hour job. Do I trust its output? No. I should at least scrutinize the code, dig into the libraries it uses, ... Would I use it in a professional environment? Hell no!
      • It sounds like you are using it as a kind of replacement for StackOverflow or Google. I use it the same way, and it does seem to work reasonably well at that.
      • Do I trust its output?

        What is this "trust" everybody keeps talking about?

        We don't "trust" human-generated code either; we have QA processes.

        • Who's "we"? Certainly not you, as you don't seem to understand how garbage code passing unit tests doesn't make it understandable or maintainable going forward.
          • by 0123456 ( 636235 )

            "It doesn't need to be understandable or maintainable because if you want to change the code you just change the prompt and have Claude recreate it from scratch." - AI fanboys, probably.

    • Also, "I knocked out some side projects"

      Side projects. This is a running theme with the AI stories. Like receiving a mere sip of kindness from a complete asshole, it feels like manna from heaven when AI actually delivers something that could be called value.

      And it does deliver value. It just doesn't do it at a confidence or quality rating that merits the projections right now.

      The biggest "you're getting ahead of yourself" moment is the leap to agentic. How about we make it work before we make it work unsupe

      • Right. Side projects "worth 100k over a weekend"... I'd like to see the math of how that value was calculated. Certainly it is not the money he has earned.

        AI is good at problems I can formulate in under a minute, and not more than one sentence long, like some shell script combining multiple apps. If we start getting into more complex stuff that I need time to think about how to formulate, then AI is not much help. So really it is extremely helpful at only a very trivial level of complexity, or something that we can

      • Indeed. AI's good at filling an empty page and getting you started - that's for sure. What it categorically cannot do is *engineering*. It can't figure out where you need an audit trail, where you need extra permissions checks, or where you need to think about observability. You can sure prompt to get them, but by the time you've done all that, you may as well have written the code. Most of all though, you need to know that you need those things - I'd say 99% of all people in the world don't know that - s

    • by AmiMoJo ( 196126 )

      My experience is that AI can handle simple stuff, if you carefully specify what you want. It can also help get you started along the right path, doing some initial research for you.

      To get further than that, it needs hand holding from someone who understands where it is screwing up. Someone who can debug its code, and notice when it is doing things poorly.

      For complex stuff it tends to just fail entirely.

      That seems to match the experience of companies that claim AI is changing how they write software. They ar

  • Bias (Score:5, Informative)

    by phantomfive ( 622387 ) on Sunday February 22, 2026 @07:50AM (#66003806) Journal
    The author is biased, since his company is selling AI produced code.

    In this case study [aboard.com], they claim to have built a dashboard for a client that is HIPAA compliant. I don't know how you would verify that the AI had produced HIPAA compliant code. In particular, how do they ensure that it won't give data to people who shouldn't have it? What kind of prompt do you write for that?
    • by evanh ( 627108 )

      Amusingly, his comments are somewhat schizophrenic if you read them. He's saying both how broken the results are and how great that is because he gets them done so quickly.

    • Your statement illustrates a misunderstanding of what HIPAA even requires.

      HIPAA is not a compliance program. It is a law and a set of regulations. There is no way to "certify" software as being "HIPAA compliant" because it is a meaningless term.

      To be "HIPAA compliant", the entire software + solution stack needs to comply with the regulations.

      In this case, he most likely made a dashboard that redacted PII from the eyes of consumers except on a need-to-know basis - because that is the heart of H

      • There is no need to inspect the code to illustrate this kind of "compliance", you look at the solution and what it provides.

        Yeah, that's moronic lol.

        If you leak data, that breaks the law. It doesn't matter if you looked at the solution and thought it was ok. Bugs matter.

  • No. The process this nullhead describes is nothing more than shoveling shit into a pit.
    • by gweihir ( 88907 )

      To be fair, shoveling shit into a pit often has some positive value, even if that is not very high. The crap this cretin is peddling has negative worth.

  • "It might fail a company's quality test," says the idiot. You don't turn a product loose w/o testing so it doesn't cause massive damage or completely shut down networks!!! "But it's immediate and cheap." "That doesn't mean that the software will be good." "I don't expect to change any minds." After all that he said, he's right!
  • "Will It Just Make Software Cheaper and More Accessible?"

    You know, there is freely available open source software?
    It may make programming cheaper (when it's unpaid in terms of time and concentration) and more accessible, but for software there is already free software for everything you need.

    • but for software there is already free software for everything you need

      His point is, software is now easier to produce. Free software included.

      No guarantee that it will be good - but there is no guarantee of that now either. How good any bit of it will be depends on the level of QA it goes through - just like it depends on that now.

      • No guarantee that it will be good - but there is no guarantee of that now either.

        His point seems to be that bad software is better than no software, an optimistic point.

        Unfortunately, it's not a true point (code that generates 'rm -rf' might be worse than no code, for example). It would be interesting to see a (relatively) unbiased analysis of when AI software is better than no software. Unfortunately, this guy is a salesperson, and is biased.

        • by jythie ( 914043 )
          Though even odder, he seems to be saying that bad custom software is better than working solutions using existing tools. Spreadsheets might not be as sexy as a custom app that you can show your friends, but they are a pretty reliable and simple way of doing things. It's like he wants the prestige of bespoke software to show off with, and doesn't care if it works or not.
          • Though even odder, he seems to be saying that bad custom software is better than working solutions using existing tools.

            Good point.

      • by allo ( 1728082 )

        That's a point, but the statement sounds like they would be filling a void. They just add tools to the currently available ones, which may help extend existing (free) software and even create new software. But that's not as revolutionary as it is phrased.

      • How good any bit of it will be depends on the level of QA it goes through - just like it depends on that now.

        The QA process currently assumes that at least some people actually know how the code works, and QA is already one of the biggest bottlenecks in the development process.

        For example, it's often very difficult to get your peers to do code reviews so you can commit your updates because they're busy and the work of doing code reviews sucks. The main reason they get done at all is quid pro quo: You have to eventually do code reviews for others or they'll stop reviewing your code.

        If all anyone is doing is reviewi

  • by greytree ( 7124971 ) on Sunday February 22, 2026 @07:57AM (#66003816)
    How many more of these Slashverts will we get here ?
  • What a wonderful world we have ahead. A world where "developers" and users embrace low-quality and buggy software. 'Sure, it's dogshit. But that's OK!'

    And software that wants to ship can ship? Regardless of whether we want it to or not? Sweet.

    We're living in the golden era. Panacea. Nirvana.

    I welcome our new AI overlords.

    • by gweihir ( 88907 )

      We're living in the golden era.

      More like in the deep delusion before it all comes crashing down ...

    • A world where "developers" and users embrace low quality and buggy software. 'Sure, it's dogshit. But that's OK!.'

      That happened decades ago [github.com]. "Bugs are not a big deal. Bugs are not a big deal! [Also, if you think bugs are a big deal, you're probably a Republican]"

      • I read the linked gist, and I call bullshit on the premise. Software bugs might be a lot of things, and reactions to them might follow office politics, but for people writing code we pretty much uniformly don't like them. And that's without regard to the left/right leanings of the coder. Take me for example. My personal politics are left leaning, and yet I've successfully written low-level OS code that absolutely, positively has to work. I take testing and bug hunting seriously, and do everything in my powe
        • Yeah, it takes a little while to understand his metaphor. The connection is this:

          He doesn't care about bugs, and doesn't like people who care about bugs.
          He also doesn't like Republicans.

          Thus they are the same in that way.
  • Share/stock-based companies exist to maximize profit by maximizing revenue and minimizing cost. There is no concession on price unless the market mistakenly starts a price war; a 'race to the bottom' is against their interests. The price will be whatever maximizes overall revenue, which usually means pricing to high-value businesses, not low-end small consumers (example: GPU and RAM pricing). The difference between existing for shareholder value and existing for service is that service is inclusive and ena
  • by gweihir ( 88907 )

    But it will also make it much less secure, much less reliable, barely maintainable and not only a general waste of time, but of negative worth to use.

    Cheap, accessible, good: choose any two. Or something like it. Using too few dimensions can make any crappy idea look good to the stupid.

  • by SlashDotCanSuckMy777 ( 6182618 ) on Sunday February 22, 2026 @09:14AM (#66003896)

    They already lose hundreds of millions of dollars. The masses won't pay even the $200 they charge now, let alone the larger amounts they would need to charge to become profitable. These AI companies are cooked.

  • The reality filter is getting applied fast.
  • We can skip the whole software enshittification process if software is shit from the beginning.

  • by CRC'99 ( 96526 )

    If this is what it is, I don't want it... Anywhere...

  • by Mirnotoriety ( 10462951 ) on Sunday February 22, 2026 @10:11AM (#66003964)
    ClippyAI: AI-generated software tends to look plausible while hiding serious problems: it often has more bugs and technical debt, with duplicated, overcomplex code that no one fully understands or owns. It frequently misses security best practices, so common issues like injection and XSS slip in, enlarging the attack surface faster than teams can review and patch.

    Because the model only sees your prompt, not your whole system, its code can clash with existing architecture, break integrations, and perform poorly at scale. Over time, teams risk becoming dependent on these tools, weakening core design and debugging skills while spending increasing effort just auditing, refactoring, and deleting the mess the AI produced in the first place.
  • "Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap"
    Today's AI is just making software worse
    I am hoping for future AI that can help experts make excellent, efficient, bug-free software
    What we have today is just more slop

  • If anything, AI will make pricing dynamically adjustable in real time based on individual customer profiles - meaning prices will be optimized between individuals to maximize profits.

  • by caseih ( 160668 ) on Sunday February 22, 2026 @12:51PM (#66004122)

    Decades ago, when magazines were gushing about the prospects of RAD, VB6 was released and saw rapid uptake by all sorts of amateurs and non-programmer types. Here on slashdot, and by many professional programmers, it was widely panned as enabling all sorts of low-quality garbage because it very much lowered the barrier to entry and could generate most of the boiler-plate code. This was definitely resented by many. But all sorts of useful one-off utilities (loads of shareware) were made with it by people who would not call themselves programmers. Drag-and-drop GUI form design and event-driven programming was a powerful concept that is now fairly mainstream post-VB6, although I think many VB6 users disappeared after MS abandoned the platform and its users when it came out with VB.net.

    Having used AI coding assistants (currently using four different models concurrently) for the last few months, I believe coding LLM agents are a modern incarnation of VB6. While many here on slashdot pooh-pooh them, in the last few months I've managed to finally do several projects that I've been wanting to tackle for years but just lacked the knowledge and time. I haven't let the bots do everything, but in guiding them carefully I've learned a lot and got things done. I've added features I need to existing open source software, written in languages I have no experience in and toolkits I've never used before. I've used them to convert entire projects from one language to another, or upgrade them to new language and toolkit versions. Recently I was able to bootstrap my learning of KiCad using these tools. LLM agents can create schematics and board layouts from scratch to get me started. More impressively, some of them can take an image of a component and a description of the physical layout (either my own description or from the data sheet) and create both a custom symbol and footprint for it, if it wasn't already in the KiCad library.

    In short, I've made more progress on my projects in the last three months than I did in the entire year before. Granted, what I'm doing is related to my profession, but I'm not paid to develop software, so I'm not a software (or electrical) engineer nor a professional programmer.

    Like VB6, we can ask, is this a good thing? Will it dilute the profession?

    • Yes, VB did actually make software cheaper, just not...cheap. A lot of software got written that wouldn't have been written before VB.

      The same is happening now, with AI. I don't think it's going to make writing significant software cheap, but it will certainly enable a lot of one-off code to be written by those spreadsheet-using departments now.

      And I too have gotten some projects done that wouldn't have happened without AI. Like updating the look and feel of my 1990's era website to Bootstrap.

  • by SoftwareArtist ( 1472499 ) on Sunday February 22, 2026 @01:36PM (#66004202)

    All forms of generative AI, not just code generators, are most useful when the answer doesn't need to be right. You get an answer quickly. Often it will be a useful answer but not always. If there's an objective standard for what's correct, it won't always match that standard. Sometimes that's ok. If you're generating pictures to use as clip art, or searching for information that you'll double check, or creating a game for fun, mistakes aren't too important. But you really don't want your bank running on software that works that way.

    I don't know if we'll ever overcome that limitation, at least not with the current approach to AI. Some other problems will be easier to fix. For example, current AI code generators work for small projects but fall apart when the code base gets too big. That's improving with time and will keep improving. It just takes bigger models with bigger context windows.

    But the lack of correctness guarantees may be inherent to the whole approach. Guaranteeing correctness requires a rigorous process where every step is provably correct. That's very different from how current models work.

  • That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly.

    is he saying the quiet part out loud?

  • I have a different take on the AI disruption. For unsupported home users of modern Linux, use of an "AI-adviser" is necessary. Both of my Ubuntu systems went down this week, and only the *.ai from a major company allowed me to make repairs. Complexity makes trouble-shooting modern Linux impossible ... in the AI-aided trouble-shooting I ran into so many Linux micro-"horrors" that only another computer could deal with them. Took about 30 hours over 4 days ... the *.ai never complained a
  • The simple truth is that I am less valuable than I used to be.

    Smartest thing I've read online this week. And that extends to the main point of great vs. good enough.

    Too many folks are living in denial while their value proposition bottoms out.

  • I'll be lashing old Bessie to the plough while I watch the ruins of their shiny palaces fall out of the sky.
  • This level of statistical ignorance must have correlates...

"Would I turn on the gas if my pal Mugsy were in there?" "You might, rabbit, you might!" -- Looney Tunes, Bugs and Thugs (1954, Friz Freleng)

Working...