Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible? (aboard.com) 88
Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled "The AI Disruption Has Arrived, and It Sure Is Fun," arguing that Anthropic's Claude Code "was always a helpful coding assistant, but in November it suddenly got much better, and ever since I've been knocking off side projects that had sat in folders for a decade or longer... [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the $200-a-month Claude subscription."
He elaborates on his point on the Aboard.com blog: I'm deeply convinced that it's possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....
I believe there are millions, maybe billions, of software products that don't exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can't find the budget. They make do with spreadsheets and to-do lists.
I don't expect to change any minds; that's not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it's more possible now than it was a few years ago.
From his guest essay: Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company's quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system... What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?
That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that's going to be fine. People don't judge A.I. code the same way they judge slop articles or glazed videos. They're not looking for the human connection of art. They're looking to achieve a goal. Code just has to work... In about six months you could do a lot of things that took me 20 years to learn. I'm writing all kinds of code I never could before — but you can, too. If we can't stop the freight train, we can at least hop on for a ride.
The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it's fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.
Is it? (Score:5, Interesting)
[W]hen the stars align and my prompts work out,
That doesn't sound like a frequent occurrence.
The metaphor "when the stars align" is usually used to indicate something is quite rare, in fact.
Re: (Score:3)
Re: Is it? (Score:3)
Re: (Score:1)
I do remember an "AI" program in the 80s which was going to make it easy for people to produce their own software because they just had to describe how the software would work and it would churn out the code for them.
It failed because few people are able to properly describe how software should work.
Re: (Score:2)
Re: (Score:2)
legal weight (Score:4, Interesting)
Are the executives willing to have the AI write the text and numbers in their next government required financial filing?
In the USA, executives are legally required to certify those numbers: https://en.wikipedia.org/wiki/... [wikipedia.org]
"Title III consists of eight sections and mandates that senior executives take individual responsibility for the accuracy and completeness of corporate financial reports. It defines the interaction of external auditors and corporate audit committees, and specifies the responsibility of corporate officers for the accuracy and validity of corporate financial reports. It enumerates specific limits on the behaviors of corporate officers and describes specific forfeitures of benefits and civil penalties for non-compliance. For example, Section 302 requires that the company's "principal officers" (typically the chief executive officer and chief financial officer) certify and approve the integrity of their company financial reports quarterly.[10]"
Re: legal weight (Score:2)
showstopper and dogfood (Score:2)
Simply pointing out that the large tech companies that are heavily pushing AI for everything don't trust it themselves enough for critical government filings where there are legal consequences.
Software is held to a different level of safety in most areas, except for medical devices, airplanes and transport vehicles, air traffic control, power plant systems, etc.
The question needed to be asked by the media, wall street analysts, and social organizations is "What are some things which your company does not recommend its AI
Re: (Score:1)
Indeed. But this nicely shows how mentally incapable the AI fans really are.
Re: (Score:2)
He even has this little gem: "I don't expect to change any minds; that's not how minds work."
Funny, I was very much under the impression that a solid argument does change minds. Of course, there's very little solidity to his arguments. He repeatedly says how poor his results are.
Re: Is it? (Score:2)
Re: (Score:2)
Well, only about 20% of all people are open to rational argument. But that is a lot more than none. Sounds to me this asshole knows his arguments are bogus.
Re: (Score:2)
Indeed. But this nicely shows how mentally incapable the AI fans really are.
The truth hurts, fuckups. And down-moderating the truth does not change it.
Re: Is it? (Score:2)
Re: (Score:2)
Re: (Score:1)
Do I trust its output?
What is this "trust" everybody keeps talking about?
We don't "trust" human-generated code either; we have QA processes.
Re: (Score:2)
Re: (Score:1)
"It doesn't need to be understandable or maintainable because if you want to change the code you just change the prompt and have Claude recreate it from scratch." - AI fanboys, probably.
Re: (Score:3)
Also, "I knocked out some side projects"
Side projects. This is a running theme with the AI stories. Like receiving a mere sip of kindness from a complete asshole, it feels like manna from heaven when AI actually delivers something that could be called value.
And it does deliver value. It just doesn't do it at a confidence or quality level that merits the projections right now.
The biggest "you're getting ahead of yourself" moment is the leap to agentic. How about we make it work before we make it work unsupervised.
Re: Is it? (Score:2)
Right. Side projects "worth $100k over a weekend"... I'd like to see the math of how that value was calculated. Certainly it is not the money he has earned.
AI is good at problems I can formulate in under a minute and in no more than one sentence, like some shell script combining multiple apps. If we start getting into more complex stuff that I need time to think about how to formulate, then AI is not much help. So really it is extremely helpful only at a very trivial level of complexity, or something that we can
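The "one sentence" glue tasks the parent describes can be made concrete with a small sketch. This is a hypothetical example, not code from the thread; the directory layout, the `.log` extension, and the report format are all invented for illustration:

```python
# A typical "describable in one sentence" glue task:
# "summarize the line counts of every .log file in a directory".
from pathlib import Path

def summarize_logs(directory: str) -> dict:
    """Return a mapping of log-file name -> number of lines."""
    counts = {}
    for path in sorted(Path(directory).glob("*.log")):
        with path.open() as f:
            counts[path.name] = sum(1 for _ in f)
    return counts

if __name__ == "__main__":
    for name, lines in summarize_logs(".").items():
        print(f"{name}: {lines} lines")
```

Tasks at this level of complexity are easy to specify, easy to verify by eye, and low-stakes when wrong, which is exactly the niche the parent is pointing at.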
Re: (Score:2)
Indeed. AI's good at filling an empty page and getting you started - that's for sure. What it categorically cannot do is *engineering*. It can't figure out where you need an audit trail, or where you need extra permissions checks, or where you need to think about observability. You can sure prompt to get them, but by the time you've done all that, you may as well have written the code. Most of all though, you need to know that you need those things - I'd say 99% of all people in the world don't know that - s
Re: (Score:3)
My experience is that AI can handle simple stuff, if you carefully specify what you want. It can also help get you started along the right path, doing some initial research for you.
To get further than that, it needs hand holding from someone who understands where it is screwing up. Someone who can debug its code, and notice when it is doing things poorly.
For complex stuff it tends to just fail entirely.
That seems to match the experience of companies that claim AI is changing how they write software. They ar
Bias (Score:5, Informative)
In this case study [aboard.com], they claim to have built a dashboard for a client that is HIPAA compliant. I don't know how you would verify that the AI had produced HIPAA compliant code. In particular, how do they ensure that it won't give data to people who shouldn't have it? What kind of prompt do you write for that?
Re:Bias (Score:5, Informative)
Re: (Score:2)
Re: (Score:1)
The good news is, you can now generate that 30k pages in half an hour by getting Claude to do it for you.
Re: (Score:2)
Be sure to have it include instructions for the AI that the auditor is using to give your company a pass.
Re: (Score:2)
HITRUST does not offer HIPAA certification. It offers a HITRUST certification, an audit to determine if you are in compliance with the law. Again, there's no such thing as HIPAA certification, any more than there is a certification that you aren't a murderer. HITRUST reviews your systems and processes and tells you if you are, at that moment in time, compliant with the requirements set out in HIPAA, nothing more. It has no legal weight other than maybe in a civil dispute for negligence.
Re: (Score:2)
Amusingly, his comments are somewhat schizophrenic if you read them. He's saying both that the results are broken and that this is great because he gets them done so quickly.
"HIPAA Compliant" means nothing (Score:2)
Your statement illustrates a misunderstanding of what HIPAA even requires.
HIPAA is not a compliance program. It is a law and a set of regulations. There is no such thing as a way to "certify" software as being "HIPAA compliant" because it is a meaningless term.
To be "HIPAA compliant", the entire software + solution stack needs to comply with the regulations.
In this case, he most likely made a dashboard that redacted PII from the eyes of consumers except on a need-to-know basis - because that is the heart of HIPAA
Re: (Score:2)
There is no need to inspect the code to illustrate this kind of "compliance", you look at the solution and what it provides.
Yeah, that's moronic lol.
If you leak data, that breaks the law. It doesn't matter if you looked at the solution and thought it was ok. Bugs matter.
Cheaper and More Accessible? (Score:2)
Re: (Score:2)
To be fair, shoveling shit into a pit often has some positive value, even if that is not very high. The crap this cretin is peddling has negative worth.
It might fail a company's quality test says the.. (Score:2)
Will It Just Make Software Cheaper and More Access (Score:2)
"Will It Just Make Software Cheaper and More Accessible?"
You know, there is freely available open source software?
It may make programming cheaper (when the time and concentration go unpaid) and more accessible, but there is already free software for everything you need.
Re: (Score:1)
but for software there is already free software for everything you need
His point is, software is now easier to produce. Free software included.
No guarantee that it will be good - but there is no guarantee of that now either. How good any bit of it will be depends on the level of QA it goes through - just like it depends on that now.
Re: (Score:2)
No guarantee that it will be good - but there is no guarantee of that now either.
His point seems to be that bad software is better than no software, an optimistic point.
Unfortunately, it's not a true point (generated code that runs 'rm -rf' might be worse than no code, for example). It would be interesting to see a (relatively) unbiased analysis of when AI software is better than no software. Unfortunately, this guy is a salesperson, and is biased.
Re: (Score:2)
Re: (Score:2)
Though even odder, he seems to be saying that bad custom software is better than working solutions using existing tools.
Good point.
Re: (Score:2)
That's a point, but the statement sounds like they would be filling a void. And they just add tools to the currently available tools, which may help to extend existing (free) software and even create new. But that's not as revolutionary as it is phrased.
Re: (Score:2)
How good any bit of it will be depends on the level of QA it goes through - just like it depends on that now.
The QA process currently assumes that at least some people actually know how the code works, and QA is already one of the biggest bottlenecks in the development process.
For example, it's often very difficult to get your peers to do code reviews so you can commit your updates because they're busy and the work of doing code reviews sucks. The main reason they get done at all is quid pro quo: You have to eventually do code reviews for others or they'll stop reviewing your code.
If all anyone is doing is reviewi
Man selling software overstates its capabilities 3 (Score:3)
Re: (Score:2)
Re: Man selling software overstates its capabiliti (Score:3, Insightful)
An infinite number as long as we keep discussing them and creating page views and ad impressions.
Re: (Score:2)
Buggy Low Quality Software Is Ok? Sounds Fabulous? (Score:2)
What a wonderful world we have ahead. A world where "developers" and users embrace low-quality and buggy software. 'Sure, it's dogshit. But that's OK!'
And software that wants to ship can ship? Regardless of whether we want it to or not? Sweet.
We're living in the golden era. Panacea. Nirvana.
I welcome our new AI overlords.
Re: (Score:2)
We're living in the golden era.
More like in the deep delusion before it all comes crashing down ...
Re: (Score:2)
A world where "developers" and users embrace low quality and buggy software. 'Sure, it's dogshit. But that's OK!.'
That happened decades ago [github.com]. "Bugs are not a big deal. Bugs are not a big deal! [Also, if you think bugs are a big deal, you're probably a Republican]"
Re: (Score:2)
Re: (Score:2)
He doesn't care about bugs, and doesn't like people who care about bugs.
He also doesn't like Republicans.
Thus they are the same in that way.
No. Capitalism = max revenue, and it has a profile (Score:2)
Yes (Score:2)
But it will also make it much less secure, much less reliable, barely maintainable and not only a general waste of time, but of negative worth to use.
Cheap, accessible, good: choose any two. Or something like it. Using too few dimensions can make any crappy idea look good to the stupid.
Re: (Score:1)
Apologies for what seems like a repost - I just didn't understand the UI here at Slashdot. Can't delete it.
It won't be $200 for long. (Score:3)
They already lose hundreds of millions of dollars. The masses won't pay even the $200 they charge now, let alone the larger amounts they would need to charge to become profitable. These AI companies are cooked.
Re: (Score:1)
Bingo. If Claude can really create $100k of code for $200, how long will the price stay $200?
Re: (Score:2)
That's a big "if". It seems it is creating only $200 products, too. Anything more and the product stops working entirely. And no one is going to want to debug the results.
"The masses won't pay even the $200 they charge" (Score:2)
Their current ARR growth disputes your statement. https://www.saastr.com/anthrop... [saastr.com]
Also, simple logic disputes your statement. $200 / month is total peanuts compared to a human.
They could charge $5000 / month or higher for Claude Code Max and businesses would still pay for it, that is how good it is.
Re: "The masses won't pay even the $200 they charg (Score:2)
Right. If it could replace a human, $50k/month would be a starting price.
Re: (Score:2)
Also, simple logic disputes your statement. $200 / month is total peanuts compared to a human.
They could charge $5000 / month or higher for Claude Code Max and businesses would still pay for it, that is how good it is.
And that's why no government should be reducing corporate tax rates and making income tax bear the burden.
Head down til its over. (Score:1)
This is a good thing (Score:2)
We can skip the whole software enshittification process if software is shit from the beginning.
Whut? (Score:2)
If this is what it is, I don't want it... Anywhere...
Minority Report /s (Score:3)
Because the model only sees your prompt, not your whole system, its code can clash with existing architecture, break integrations, and perform poorly at scale. Over time, teams risk becoming dependent on these tools, weakening core design and debugging skills while spending increasing effort just auditing, refactoring, and deleting the mess the AI produced in the first place.
Here's the problem (Score:2)
"Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap"
Today's AI is just making software worse
I am hoping for future AI that can help experts make excellent, efficient, bug-free software
What we have today is just more slop
Nothing will ever get cheaper (Score:2)
If anything, AI will make pricing dynamically adjustable in real time based on individual customer profiles - meaning prices will be optimized between individuals to maximize profits.
RAD and VisualBasic 6 (Score:4, Insightful)
Decades ago, when magazines were gushing about the prospects of RAD, VB6 was released and saw rapid uptake by all sorts of amateurs and non-programmer types. Here on Slashdot, and by many professional programmers, it was widely panned as enabling all sorts of low-quality garbage because it very much lowered the barrier to entry and could generate most of the boilerplate code. This was definitely resented by many. But all sorts of useful one-off utilities (loads of shareware) were written with it by people who would not call themselves programmers. Drag-and-drop GUI form design and event-driven programming was a powerful concept that is now fairly mainstream post-VB6, although I think many VB6 users disappeared after MS abandoned the platform and its users when it came out with VB.net.
Having used AI coding assistants (currently four different models concurrently) for the last few months, I believe coding LLM agents are a modern incarnation of VB6. While many here on Slashdot pooh-pooh them, in the last few months I've managed to finally do several projects that I'd been wanting to tackle for years but lacked the knowledge and time for. I haven't let the bots do everything, but by guiding them carefully I've learned a lot and got things done. I've added features I need to existing open source software, written in languages I have no experience in and toolkits I've never used before. I've used them to convert entire projects from one language to another, or upgrade them to new language and toolkit versions. Recently I was able to bootstrap my learning of KiCad using these tools. LLM agents can create schematics and board layouts from scratch to get me started. More impressively, some of them can take an image of a component and a description of its physical layout (either my own description or from the data sheet) and create both a custom symbol and a footprint for it, if it wasn't already in the KiCad library.
In short, I've made more progress on my projects in the last three months than in the entire year before. Granted, what I'm doing is related to my profession, but I'm not paid to develop software, so I'm not a software (or electrical) engineer nor a professional programmer.
Like VB6, we can ask, is this a good thing? Will it dilute the profession?
Re: (Score:2)
Yes, VB did actually make the price of software cheaper, just not...cheap. A lot of software got written that wouldn't have been written, before VB.
The same is happening now, with AI. I don't think it's going to make writing significant software cheap, but it will certainly enable a lot of one-off code to be written by those spreadsheet-using departments now.
And I too have gotten some projects done that wouldn't have happened without AI. Like updating the look and feel of my 1990's era website to Bootstrap.
AI works when it doesn't need to be right (Score:3)
All forms of generative AI, not just code generators, are most useful when the answer doesn't need to be right. You get an answer quickly. Often it will be a useful answer but not always. If there's an objective standard for what's correct, it won't always match that standard. Sometimes that's ok. If you're generating pictures to use as clip art, or searching for information that you'll double check, or creating a game for fun, mistakes aren't too important. But you really don't want your bank running on software that works that way.
I don't know if we'll ever overcome that limitation, at least not with the current approach to AI. Some other problems will be easier to fix. For example, current AI code generators work for small projects but fall apart when the code base gets too big. That's improving with time and will keep improving; it just takes bigger models with bigger context windows.
But the lack of correctness guarantees may be inherent to the whole approach. Guaranteeing correctness requires a rigorous process where every step is provably correct. That's very different from how current models work.
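The gap between "plausible output" and "correct output" can be sketched with a toy example. Nothing below is from the thread: `generated_sort` stands in for hypothetical model output, and the brute-force property check is the kind of external verification that the generation step itself does not supply.

```python
# Sketch: checking AI-generated code against a trusted oracle on random
# inputs. A passing check raises confidence; it is not a proof.
import random

def generated_sort(xs):
    # Stand-in for code that came back from a prompt.
    return sorted(xs)

def property_check(fn, trials=1000):
    """Compare fn against the built-in sort on random integer lists."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        if fn(list(xs)) != sorted(xs):
            return False  # found a counterexample
    return True
```

Random testing like this catches gross errors cheaply, but as the parent notes, it is still a long way from a process in which every step is provably correct.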
what the hell? (Score:2)
That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly.
is he saying the quiet part out loud?
pal in the business (Score:2, Flamebait)
Nice. (Score:2)
The simple truth is that I am less valuable than I used to be.
Smartest thing I've read online this week. And that extends to the main point of great vs. good enough.
Too many folks are living in denial while their value proposition bottoms out.
Judging by the quality of the software output... (Score:1)
Do the same guys who promote AI play Lotteries? (Score:1)