AI Programming

Will 'Vibe Coding' Transform Programming? (npr.org) 113

A 21-year-old's startup got a $500,000 investment from Y Combinator — after building their web site and prototype mostly with "vibe coding".

NPR explores vibe coding with Tom Blomfield, a Y Combinator group partner: "It really caught on, this idea that people are no longer checking line by line the code that AI is producing, but just kind of telling it what to do and accepting the responses in a very trusting way," Blomfield said. And so Blomfield, who knows how to code, also tried his hand at vibe coding — both to rejig his blog and to create from scratch a website called Recipe Ninja. It has a library of recipes, and cooks can talk to it, asking the AI-driven site to concoct new recipes for them. "It's probably like 30,000 lines of code. That would have taken me, I don't know, maybe a year to build," he said. "It wasn't overnight, but I probably spent 100 hours on that."

Blomfield said he expects AI coding to radically change the software industry. "Instead of having coding assistance, we're going to have actual AI coders and then an AI project manager, an AI designer and, over time, an AI manager of all of this. And we're going to have swarms of these things," he said. Where people fit into this, he said, "is the question we're all grappling with." In 2021, Blomfield said in a podcast that would-be start-up founders should, first and foremost, learn to code. Today, he's not sure he'd give that advice because he thinks coders and software engineers could eventually be out of a job. "Coders feel like they are tending, kind of, organic gardens by hand," he said. "But we are producing these superhuman agents that are going to be as good as the best coders in the world, like very, very soon."

The article includes an alternate opinion from Adam Resnick, a research manager at tech consultancy IDC. "The vast majority of developers are using AI tools in some way. And what we also see is that a reasonably high percentage of the code output from those tools needs further curation by people, by experienced people."

NPR ends their article by noting that this further curation is "a job that AI can't do, he said. At least not yet."


Comments Filter:
  • by ukoda ( 537183 ) on Sunday June 01, 2025 @07:45AM (#65420269) Homepage
    AI might replace existing programmers because it has been trained on what they have already done before, but what happens when something new comes along? A programmer faced with a new language, OS, API or whatever has to sit down and learn it from documents, not existing examples. Without programmers creating stuff with the new thing there is nothing for the AIs to be trained on.

    AI may be bringing changes, but its limitations will become pain points for some people in the future.
    • I thought rust was all we needed from now on.
    • by sjames ( 1099 ) on Sunday June 01, 2025 @03:08PM (#65420803) Homepage Journal

      Meanwhile, new analysis and techniques come along (often in areas related to security and resilience to hacks) from time to time that no AI is going to manage.

      Vibe coding is essentially cargo cult programming if you peek behind the curtain.

      AI isn't actually intelligent in the general sense and doesn't actually understand the problem. Vibe programming is when you request something from the LLM and it essentially "says to itself": when programmers are asked for something like that, they usually write something like this.

      At best, LLMs can be the new code monkey. They will not consider maintainability or expandability. They have no ability to anticipate that XYZ feature will probably be requested sooner or later, so the design needs to at least be able to accommodate that to avoid a complete re-write.

      Give it a few years and watch as some poor schlep has to try to do something with the steaming pile to get it to do XYZ without requiring a whole new system with data loaded from scratch. You'll have to pay those people handsomely because it will be nasty work nobody wants to do. The AI won't likely be able to help you. We know that when you feed the output of AI into the input, it tends to go crazy and start babbling about quantum fluctuations and giving people 6 fingers.

      • by ToasterMonkey ( 467067 ) on Sunday June 01, 2025 @04:03PM (#65420873) Homepage

        Vibe coding is essentially cargo cult programming if you peek behind the curtain.

        It is exactly that. You could call me an AI/LLM coding proponent I guess, I use it daily but that vibe coding shit is no different than the Ruby on Rails hype train for example. I doubt it will have much impact outside hipster webdev "I made a twitter clone" trash.

        • vibe coding shit is no different than the Ruby on Rails hype train for example

          A lot of successful tech companies have been built on RoR, and other popular frameworks have copied it.

          Nothing serious has been built with Vibe Coding. We'll see when that happens (100 hours and 30,000 lines of code for that website isn't promising, but we'll see).

          • > Nothing serious has been built with Vibe Coding.

            Just PoCs to get budget, then the whole shit show gets handed over to actual human developers who have to pick up the pieces. Everyone looks confused when, after several weeks, the project isn't finished, so they blame the developers, fire them and put the PoC into production.

            The thing is, if you can ask an AI for a software product, then by definition your idea isn't that unique or difficult, so the thing you've created has no value. That realisation is

            • The thing is, if you can ask an AI for a software product, then by definition your idea isn't that unique or difficult,

              I agree with your conclusion, but I don't know what definition you are using to be able to say "by definition."

              • Maybe the wrong term... but since an LLM (even if augmented with various other systems) as currently available can only "go as far" as the things it's been trained on - it can't extrapolate or otherwise go beyond what it's learned. It therefore can't make anything genuinely new (other than maybe smush 2 or more existing ideas together).

    • by ToasterMonkey ( 467067 ) on Sunday June 01, 2025 @03:35PM (#65420821) Homepage

      A programmer faced with a new language, OS, API or whatever has to sit down and learn it from documents, not existing examples. Without programmers creating stuff with the new thing there is nothing for the AIs to be trained on.

      An LLM is actually great at tearing through stuff like that, and translating existing patterns and idioms into new languages, new settings, etc. Waaaaay faster than you would, and they're a great learning aid.

      Creating new idioms, new design patterns, no, LLM probably won't do that, but if a new language, OS, or API was intended to be used in some novel way, there would be examples of it.

      I'm tempted to say new idioms don't come from a vacuum.. but they do, when a clever monkey invents one. At the same time, nobody else knows what it means without examples. An LLM would learn new design patterns of a new language the same way you would, from the same sources.

      A new undocumented API without any examples, yeah, an LLM isn't going to be much use; it's like using a camera in the dark. There's only so much it or you can do with little information.

      • I primarily use AI in the way you say it's good and it's... well, I'm very lazy so I keep using it, but when it comes to a new API, it's a bit haphazard at best. If the new API is a bit strange, I've had AI hallucinate a more idiomatic one in its place.

    • An outcome might be that no one will bother to make new programming languages, at least for human coders: if nobody looks at the code, assembly could be generated directly.
      • That's a positive spin on this for sure. Why would we need high-level abstraction if no one was looking at the code? Even then, you could probably have the AI take the assembly or machine code and spit out the equivalent C++ or Rust code, readable even.

        Me personally, I want to write code, but it would be nice to have AI that helped with optimizing the code I produce. I know why I need it, future cases, it has no idea, but it would be able to tell me if my use of some data type would better if u

    • AI learns just like a human, and will be able to support new things, just like humans, only faster... but in regard to new languages, maybe it is good if AI can't learn new languages, as there are already way too many which don't really add anything to the pool. Rust isn't anything special compared to what's already on the market, except some better memory handling, but a good developer doesn't really need that.....
    • Wait, new "stuff" will come with documentation??? That is actually informative??? I thought the way these days is to make a long winded and rambling Udemy course that only touches the most basic and common theoretical use cases.

      On a more serious note, LLMs will probably be able to take said material, either written or audiovisual, as training input.

      But I guess in the end it will be just machines talking to each other, so they will make up their own "stuff".

  • I read the article (Score:5, Interesting)

    by Big Hairy Gorilla ( 9839972 ) on Sunday June 01, 2025 @07:54AM (#65420283)
    What do you get when you take some naive kids pursuing the American dream, add Y Combinator Kool Aid, and an NPR reporter who needs to fill a page with hopeful words?

    30,000 lines of code ? For a recipe website? What effing language(s)? Bloated much? 500 lines of CRUD and 29,500 lines of garbage?
    • What do you get when you take some naive kids pursuing the American dream, add Y Combinator Kool Aid,

      $500K buys a lot of Kool-Aid.

      30,000 lines of code ? For a recipe website? What effing language(s)? Bloated much? 500 lines of CRUD and 29,500 lines of garbage?

      If you measure bloat in lines of code, yes. But if you measure bloat in the amount of time and effort needed to maintain and extend, maybe not. Not that AI has achieved that goal yet, but it wouldn't hurt for us all to start using the right metric, so we'll know a good thing when we see it.

      • If you're saying this is a self perpetuating cycle of give me money and don't audit the outputs, then yes, I agree.
        Also, please note as I pointed out below, that this website was pre-built in several hundred WordPress templates.
        So, this whole story and website is pure make work for the sake of ... you know.. "the economy".
      • by machineghost ( 622031 ) on Sunday June 01, 2025 @12:56PM (#65420661)

        A (VC-funded) start-up's MVP has one goal: to convince investors to fund it. Once you get funded ($500k in this case) you can hire a couple of real devs, who can write real (i.e. maintainable) code.

        But for the "get funded" part, it truly doesn't matter if the MVP is 100k lines of GOTO-riddled BASIC code ... all that matters is that it can convince investors.

        • Once you get funded, you deploy the code you have because you need someone to buy your company. There's nothing so permanent as a temporary solution.
    • Oh what is CRUD? Being capitalized I assume it is an acronym.

    • Such a low-effort piece. Barely passable for a general audience, absolutely terrible for anyone with a modicum of understanding. "Fast-evolving artificial intelligence chatbots and other new AI tool"? Give me a break. What did Chloe Samaha actually use for her "vibe coding"? Claude? VS with Copilot? Other? I want to know. I searched, but there is no answer, just that threadbare NPR piece.
      • Looks like a ...ummm... let's call them a "journalist"... needed to write a story ...
        Details aren't very important when your audience can't tell the difference.
        Which is why, after posting here, it caused a kerfuffle... You noticed, I noticed...

        Next Story at NPR: How AI found 30,000 bugs in a Jive Coded App.
  • Yes, but no.. (Score:5, Insightful)

    by luvirini ( 753157 ) on Sunday June 01, 2025 @08:00AM (#65420289)

    ...At least in the near future.

    AI will very likely/definitely replace many "coders". That is, people who got a mission of "Code a function that does X" and then would copy-paste code from online sources and slightly modify it.

    There are surprisingly high numbers of those people. They do not have to understand things, just implement a functionality that is clearly defined.

    Another group that is in danger now/soon are people who do simple reports and similar programming that is basically short programs that do a single functionality.

    We are still pretty far from replacing people who actually know what to do in more complex scenarios, though their work will most likely be assisted quite a lot by the ability to quickly get the types of things listed above done instead of typical handing off to others in the organization.

    In the longer run, no idea when and to what extent the systems will be able to tackle harder problems; that remains to be seen.

    • Re:Yes, but no.. (Score:5, Insightful)

      by rknop ( 240417 ) on Sunday June 01, 2025 @09:14AM (#65420353) Homepage

      It's not clear to me that the people making the hiring/firing decisions, and deciding how many programmers can be replaced by AI, know the difference between the copy-paste coders you're talking about and the people who are doing the harder things.

      And, given the way they think, and given the fact that all of us are subject to a whole host of cognitive biases, some places at least are likely to want to keep the cheap copy-paste types rather than the more expensive senior programmers.

      Short term, things will look good. Quarterly reports will be up. It will take longer for companies to realize that they've made a mistake and everything is going to shit, but because of the emphasis on quarterly returns, plus because all of these companies are caught up in the groupthink bandwagon of the AI evangelists, a lot of them as institutions may not be able to properly diagnose why things went to shit. (Even if individuals within the institutions do.)

      I'm in science (astronomy) myself, and the push here is not quite as overwhelming as it is in the private sector. Still, I've seen people who should know better say "an AI can just do that more efficiently".

      • So it's not really any different than companies that outsourced programming or IT because it was cheaper up until they realized it just resulted in a more expensive mess down the road?
    • by tlhIngan ( 30335 )

      I'm not sure even if it will cost less. I saw some vibe coding sessions and I'm sure it was only being done because the user had the top end account and likely the whole session (which ended unsuccessfully) probably cost several thousand dollars in compute time.

      Simple requests that everyone makes, sure, can probably be done for less money. But once you need something revised it will probably take many iterations and start racking up the bills.

      And before you know it, after a week, you've racked up tens of thous

      • Excellent point. While we are all typing into chatgpt and getting reams of text back in response, we forget the cost ... in energy... in subsidies... somebody has to pay for all the factored costs.

        We are so distant from the reality and trained on freemium that we forgot all this costs money, consumes energy and causes tons of pollution.

        Thank you for reminding us.
    • by CAIMLAS ( 41445 )

      In the long run, we'll lose out on more people being able to do the "hard" things. Sort of like when schools start hiring on non-excellence criteria, you end up with students who can't do the coursework and the field suffers as a result. That's what's happening here.

      In 5, 10 years when people are like "we fucked up, quick, hire good developers again" - or good voice actors, or good whatever - there won't be anyone in line to take those jobs. They'll have moved on - either finding different things to pay the

  • Yes. 100% (Score:5, Insightful)

    by SlashbotAgent ( 6477336 ) on Sunday June 01, 2025 @08:07AM (#65420297)

    It will definitely transform programming and not in a good way.

  • But like the article says, you still need humans to comprehend and maintain the code. AI can generate a lot of code in a short amount of time - it may not be the most efficient, or even viable, and that is where expert humans come in.

    • Maintain code?

      The LLM can just regenerate it from scratch almost for free.
      • So instead of maintaining, just roll the dice again and hope the AI is perfect next time?
        • by rknop ( 240417 )

          That reminds me of my O(N!) sort algorithm. (Really, it was a student of mine who proposed this, as a joke.)

          (1) Randomize the array
          (2) Is it sorted? If not, goto (1).
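          For the record, the two steps above really do sort, just not quickly; a minimal Python sketch (the function name and test array are illustrative):

```python
import random

def shuffle_sort(a):
    # Step (2): check whether any adjacent pair is out of order.
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        # Step (1): randomize the array and try again.
        random.shuffle(a)
    return a

print(shuffle_sort([3, 1, 2]))  # → [1, 2, 3], eventually
```

          Expected running time is factorial in the array length, so only run it on very short inputs.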

          • Re: (Score:2, Informative)

            by Anonymous Coward
            You and your student really invented Bogosort [wikipedia.org] back in the mid-eighties?

            BTW, you should check your homepage, it is not what you might think it is.
        • It doesn't have to be perfect, it only has to be as good as an average programmer.
          • An average programmer couldn't totally rewrite it without making errors, that's my point. Generally, humans will fix bugs as they are found, and that is what makes the application better. If the AI is going to totally rewrite it from scratch again, then you repeat all the testing time again, exactly as you would with an average human.
            • AI will be doing the tests too.

              I'm retiring, best of luck to you.
              • We will see how that works once there is enough time for the fixes over fixes to pile up. It will have to be very good indeed to navigate levels of conflicting fixes and not create any more problems of its own.
    • Wrong

      Developers solve problems. The code is an expression of the solution.

      LLMs regurgitate stuff. Sometimes correct, other times garbage. But that is what they produce.

      You're confusing code generation with problem solving. These are, very much, not the same thing.

      But, hey. That mid-level manager with a degree in performing arts can "code" now! Awesome!

  • by test321 ( 8891681 ) on Sunday June 01, 2025 @08:12AM (#65420301)

    It enables you to quickly come up with a working prototype for an app idea. If you get funding, you hire engineers to rewrite it the proper way.

    • For reasons I won't go into beyond saying it's an educational project for kids, I needed to write a (very simple) app for an iPhone 3G recently. Problem is: I've never written any kind of app at all. I have remnants of a single college-level C++ class from thirty years ago, that's it.

      I was lucky to have a friend 'in the business' - he pointed me in the right way and got me set up in Xcode (4.4, in case you're skeptical), but I was astonished how much AI (especially ChatGPT) was able to pull me through. Exp

    • by r0nc0 ( 566295 )
      This is exactly correct. And really what the goal seems to be is to remove as many impediments as possible between the idea and execution - who needs a pesky technologist group to implement your ideas that then turns on you because of some sense of shared morality? Such things have no place in a capitalist society so removing the technologists that are currently required to implement these ideas would be ideal.
  • to use a top down management style! What are you trying to do, create AI Dilbert?

  • It changes programming for sure. But I don't really get this story much.

    A 21-year-old's startup got a $500,000 investment from Y Combinator — after building their web site and prototype mostly with "vibe coding".

    Doesn't really mean much. Visual programming, low code & no code are not new and have all been funded. Getting funding on just an idea and a team (but no code) happens all the time. Y Combinator acceptance these days is all about being a rich kid with an Ivy League background, and having a som

  • Betteridge's Law (Score:4, Interesting)

    by allo ( 1728082 ) on Sunday June 01, 2025 @08:28AM (#65420319)

    No.

    But some people will use it to their benefit. Did you try GLM? It writes you a paint application in one prompt. Without programming skills you won't get it to become Photoshop, but if you wanted a free and ad-free paint app for your iPad (with some smaller features that you were missing from other paint apps), it's great. If you aren't a programmer you will easily get stuck. Still, many of the one-shot apps will help you. You want a tool that can batch-sharpen your images? No problem: even as a non-programmer, if you can install Python and prompt a model you have your solution. If you think you will get a programming job with that, you're misguided.

    AI raises the ceiling and the floor. Professionals were not on the floor, but the good ones can now reach a higher ceiling. Those who previously had to rely on others to help out with scripts to batch-process images can now do it themselves. As always, some of the solutions will be suboptimal and cumbersome, and professionals will shake their heads at how much less straightforward they are than they could be. But they work for the users, and that's what matters.

    If a user learns to achieve a goal with 100 clicks, it gives me a headache wondering how anyone can do that without going insane, but it's still a working solution they use every day, because it achieves their goal better than the existing tools and they are unable to program a better one. Now they will use less-than-optimal prompted programs instead: they still won't need 100 inefficient clicks, just one inefficient script that would still make me wince, but is faster than the strange ritual they used before.
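    The batch-sharpen case mentioned above is simple enough to sketch even without a real image library. A toy, stdlib-only illustration: the 3x3 kernel is the standard sharpen kernel, but the function name and the tiny grayscale grid are made up for the example (a prompted real-world script would more likely use Pillow):

```python
# Toy sharpen: apply a 3x3 sharpen kernel to a grayscale image
# represented as a list of rows of 0-255 ints. Illustrative only.
def sharpen(img):
    h, w = len(img), len(img[0])
    kernel = [[0, -1, 0],
              [-1, 5, -1],
              [0, -1, 0]]
    out = [row[:] for row in img]  # copy; border pixels left untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, acc))  # clamp to valid range
    return out

# A slightly bright dot on a grey background becomes more pronounced:
img = [[100] * 5 for _ in range(5)]
img[2][2] = 120
print(sharpen(img)[2][2])  # → 200
```

    Whether this is the cleanest way hardly matters to the user the parent describes: it replaces the 100-click ritual, and that is the bar it has to clear.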

  • by know-nothing cunt ( 6546228 ) on Sunday June 01, 2025 @08:41AM (#65420333)

    how awful the recipes concocted by the AI are. I doubt they're as good as the AI-generated code, as bad as that may be.

  • AI Coding (Score:5, Interesting)

    by MightyMartian ( 840721 ) on Sunday June 01, 2025 @08:54AM (#65420343) Journal

    My experience, mainly with generating SQL queries, is that AI inevitably gets it wrong multiple times, so what I have had to do is more of a kind of meta-programming; giving the model cues and corrections. I have created some pretty sophisticated SQL queries, but there's no way in hell I can just pop the first go-around into my code and have it run. Either it's outright faulty code that will fail, or it's just not producing the correct results.

    Now SQL is a fairly limited and ring-fenced language (excluding stored procedures of course). I've never tried it with a general-use language, but I imagine those problems will get more pronounced. That's not to say it might not be useful for translating natural language specs into code, but if my experience with SQL is any indicator, it's going to require a lot of massaging. There are probably still productivity boosts to be found here, which will likely have an effect on the number of programmers out there, but to me, it feels more like a layer of abstraction that will require a different kind of programming, rather than replace programming.

    As an example that isn't coding, I have been building models for what I expect is a government procurement next year. This involves taking previous Requests For Qualifications documents, updating them with current knowledge of government expectations, procurement rules, and so forth. Again, building these model RFQs is an iterative process, not simply one of "Take these RFQs from previous procurements, update them with this new information I've uploaded, and give me model RFQs based on these premises I will provide." My test run took about three or four hours of a kind of conversation, where I correct and shape, understanding the cues the LLM needs to produce the desired result, and the better I get at understanding not just the kind of information and cues the LLM requires, but the most effective means of "encoding" that information, the more efficient the LLM is at producing the desired results.

    That sure sounds like programming to me, albeit at a much higher level of abstraction. LLM, at least where it stands, is just another platform, a very powerful one, but as with all programming languages, the larger the command set and the more complex the lexical structures, the more room for bugs, and the more subtle some of those bugs can be.

    • Anecdotal, but one of our younger team members who's still at uni told us that the only lecture without rules against LLM use for programming exercises is the database one, because all the models fail to generate useful SQL anyway.

      That being said, I sometimes try it for TypeScript and C#, and don't usually get results I'd consider useful either.
      • I have definitely produced useful SQL code, and indeed some pretty darned complex queries for transformations and data hygiene, but as I said, it's not a process of "dump spec into LLM model, run SQL on RDBMS", but rather a kind of meta-programming conversation. I imagine specialized LLMs might do a bit better, but I'm generally pretty skeptical of the current generations of AI building sophisticated software. I suspect where LLM's might do well is with interop code, the kind of boiler plate code that takes

      • I have the opposite experience, but I've probably been building and running databases for longer than your coworker has been alive.
      • Strongly disagree on that. I’ve had great results from uploading my schema, uploading a query to optimize (multiple joins, multiple subqueries, etc., that kind of thing), describing the output set I want and letting it come up with a query.

        I would actually say I’ve had the _best_ luck with SQL.

    • by mjwx ( 966435 )

      My experience, mainly with generating SQL queries, is that AI inevitably gets it wrong multiple times, so what I have had to do is more of a kind of meta-programming; giving the model cues and corrections. I have created some pretty sophisticated SQL queries, but there's no way in hell I can just pop the first go-around into my code and have it run. Either it's outright faulty code that will fail, or it's just not producing the correct results.

      Now SQL is a fairly limited and ring-fenced language (excluding stored procedures of course). I've never tried it with a general-use language, but I imagine those problems will get more pronounced. That's not to say it might not be useful for translating natural language specs into code, but if my experience with SQL is any indicator, it's going to require a lot of massaging. There are probably still productivity boosts to be found here, which will likely have an effect on the number of programmers out there, but to me, it feels more like a layer of abstraction that will require a different kind of programming, rather than replace programming.

      As an example that isn't coding, I have been building models for what I expect is a government procurement next year. This involves taking previous Requests For Qualifications documents, updating them with current knowledge of government expectations, procurement rules, and so forth. Again, building these model RFQs is an iterative process, not simply one of "Take these RFQs from previous procurements, update them with this new information I've uploaded, and give me model RFQs based on these premises I will provide." My test run took about three or four hours of a kind of conversation, where I correct and shape, understanding the cues the LLM needs to produce the desired result, and the better I get at understanding not just the kind of information and cues the LLM requires, but the most effective means of "encoding" that information, the more efficient the LLM is at producing the desired results.

      That sure sounds like programming to me, albeit at a much higher level of abstraction. LLM, at least where it stands, is just another platform, a very powerful one, but as with all programming languages, the larger the command set and the more complex the lexical structures, the more room for bugs, and the more subtle some of those bugs can be.

      I do similar for PowerShell... but it can't detect when it's made an error, like using a reserved variable.

      I'd not run any AI-generated code I couldn't and hadn't fully comprehended myself... Although my job occasionally requires cleaning up after someone runs AI-generated code in production... Yanno, I really like having to restore a production SQL server at 15:45 on a Friday because some moron doesn't know what they're doing with code they didn't write.

  • by heson ( 915298 ) on Sunday June 01, 2025 @08:57AM (#65420345) Journal
    We have seen this before with Visual Basic and PHP. Novices quickly churning out vast amounts of almost-working code that cannot be repaired, only replaced. This is just the next generation of the same problem. If you like re-implementing someone else's buggy software in a proper way, you've got a bright future ahead.
  • by Tony Isaac ( 1301187 ) on Sunday June 01, 2025 @09:13AM (#65420351) Homepage

    At my current company, I whipped up a prototype SSO application in three days. Then I managed a team that turned it into production-ready code in just...6 months. And that was a very small, special-purpose application that just does the basics of authentication, such as login, reset password, change password, and so on.

    Yes, AI is great at whipping up prototypes. But if my own interaction with AI coding is any indication, AI struggles to harden code or make it robust. For that matter, half the time the code it spits out doesn't even compile without adjustments.

    There's no way this article isn't really a glorified infomercial.

    • I imagine coding for a single stage to orbit application must be very tricky. Why do people casually pepper their comments with their personal lingo incomprehensible to anyone outside their specialty?

      • SSO = Single Sign On. It even has a Wikipedia page.
        https://en.wikipedia.org/wiki/... [wikipedia.org]

        • https://en.m.wikipedia.org/wik... [wikipedia.org]

          I'm not a mind reader. Was I supposed to slog through all that?

          Good for you that you've mastered your TLAs for communicating with people in your circles.

          • I assumed that readers of slashdot were technical, it's normal for technical people to already know what SSO is. It's literally *everywhere* in the software world. You've no doubt used it yourself, any time you use one of those "log in with Google" prompts at random websites around the world, you're using SSO. But hey, I'm happy to help you in your education.

            • by AvitarX ( 172628 )

              Not just the software world.

              The business world in general.

              I mean I guess software in the sense that it's using software, but not software in the sense of making software.

  • by Tony Isaac ( 1301187 ) on Sunday June 01, 2025 @09:26AM (#65420361) Homepage

    So, I decided to try out this Recipe Ninja site that was so quickly drummed up with AI.

    First reaction, it looks nice enough.

    So, I clicked the microphone and said "fettuccini recipes." It correctly heard what I said and showed me a Search Results page with "Active Filters" listing "fettuccini recipes." In the middle of the page, it said "No recipes found matching your search criteria."

    About a minute later, a voice came from the page saying, "I found some recipes for you, would you like to see the first one?"
    I said, "Sure."

    Nothing came after that, either visually or in audio form.

    So in 2 minutes, I personally found some serious issues with the site. Is AI going to fix those issues? I doubt it.

    It's easy to create a prototype. Even AI can do that. Can it harden the site and fix its bugs? I doubt it.

  • The recipe for gnocchi puttanesca in a bag has an AI illustration of a meal cooked in a burlap drawstring bag. The bag in the actual Jamie Oliver recipe is a typical pouch made of foil.

  • Certainly a useful tool. In knowledgeable hands. Currently, you are still likely to need an experienced human to tweak and curate the result. Certainly to maintain it.

    The wild card is how good the LLMs are going to get. We just don't know. They may hit a wall and stay there. Or they may not.

  • by gweihir ( 88907 ) on Sunday June 01, 2025 @09:53AM (#65420391)

    It will produce unmaintainable, insecure and unreliable crap for a few years and then it will quietly die.

    • I'd be careful with this attitude. It smacks of Thomas Watson's "only five computers worldwide" prediction, and of the supposed staying power of silent movies, among other examples of trends that were dismissed as passing fads but actually became societal revolutions.
      • by gweihir ( 88907 )

        That is because you are not very smart, but full enough of yourself to feel entitled to give crappy, insightless advice like this.

    • It will produce unmaintainable, insecure and unreliable crap for a few years and then it will quietly die.

      I think it depends on how AI is used. I decided to learn Python a few years ago; my previous coding experience was in Fortran and some VBA. I get how to structure a problem and flowchart a solution. After a few online classes and community help, I was developing an app for my company. Nothing fancy, but it does the job. I then decided to try Claude to see what it could do to add some features. It easily generated code, but the key for me was to ask it to explain, line by line, what the code did. In so

      • by gweihir ( 88907 )

        Well, some people will do what you do: use the tool carefully and verify everything. That will work. But that is not really what people mean by "Vibe Coding". What they mean is that AI does all the heavy lifting. In your approach, you do all the mental heavy lifting and the AI does all the lookup stuff. Hence the AI helps you on time-consuming but easy stuff, and that is perfectly fine as you do the heavy mental work of verifying it got it right. Your experience with ChatGPT shows how needed that part you

        • Well, some people will do what you do: Use the tool carefully and verify everything. That will work. But that is not really what people mean by "Vibe Coding". What they mean is that AI does all the heavy lifting

          It's not just coding. I have a friend who uses it verbatim, so he claims, for market analysis and preparing marketing materials. Based on my experience, I would be surprised if it actually produced the quality he claims. When I said my experimenting with it produced outcomes that were only sort of correct (useful to identify areas to look at, but not ready to send out unreviewed, especially since its prose was stilted and full of superlatives), he said the problem was my prompts and that it can't be the AI because i

  • by bill_mcgonigle ( 4333 ) * on Sunday June 01, 2025 @10:04AM (#65420409) Homepage Journal

    At some point something bad will happen, the company will be sued, they will claim they didn't even write the code, and the AI vendor will say it's not their fault either.

    Or the bridge their AI designed.

    "Oopsie! "

    I wouldn't trust the Courts to get any of this right either.

    We'll maybe see contracts and insurance enforce human oversight but then people will cheat and settle for less than profits.

    As they say, profit has replaced survival as the Human evolutionary fitness function.

    • Or the bridge their AI designed.

      Those two things are very different. An architect ultimately has to sign off on the bridge, and that architect is going to want to understand the AI's output before they do that. If the output is incomprehensible, then only a literally insane architect would put their pen to the page and agree that they should be held liable for the viability of the design. This doesn't exist in software (I would argue that it should, so would lots of others around here, which we know because they have done) and I don't thi

  • It is really impossible for me to believe this concept will result in zero exploitable bugs. More likely it will infest every part of the process. What we need, though, is a nice cool name to go with 'vibe coding'?
  • There, fixed that for ya.

  • As part of the effort to improve the chain of development,

    Boeing is already replacing the outsourced engineers with "vibe coding,"

    They started to implement the next iteration of the 737 MAX with an improved MCAS system, completely rewritten with this new magic tool.

  • You can already just ask your favorite AI "get me a recipe for X" or "give me a recipe for these ingredients". Next, whatever that kid's idea was, again, a user could just go to one of the AIs and ask it directly. Don't need the kid, don't need the website.

    C'mon people.
  • Complex, novel software is complex, irreducibly complex. It can't be completely specified in a simple text prompt.
    The prompt "write a snake game in python" works because snake games exist along with their open source code.
    Managers want to reduce costs and dream of not needing to hire programmers. This will result in a tsunami of crappy code generated cheaply.
    The real promise of AI tools is as assistants that will help experts analyze and manage complexity.

  • by Morpeth ( 577066 ) on Sunday June 01, 2025 @12:23PM (#65420605)

    I looked briefly at the site; it's a pretty basic fullstack site with a CMS.

    I teach fullstack web development to high school students (Django framework in Python, MySQL backend), and have had some of my students build similar data-driven sites in a month or so after they've learned the core skills, and they're not working on it 40 hrs a week either, since they have 5+ other classes. Granted, I have some pretty motivated & sharp kids, but even an average programmer just out of college could build this thing in pretty short order, or should be able to.

    30,000 lines of code for THAT? And a year to build? I call bullshit. Or if that really is 30k lines of code, it's a REALLY verbose, inefficient 30k. What's he counting, a bunch of JSON files / dictionaries he's using for data? I'm kidding; I assume there's some kind of actual DB behind it, but I just don't see where you'd need anywhere near 30k.

    Clearly he's trying to sell something. Oh right, this guy who 'knows how to code' is part of the Y Combinator hype machine, yawn...

  • I mean, this site only got any attention at all SOLELY because it was "AI everything". Otherwise, who would have even really gone to the site in the first place given how many hundreds of other recipe sites are out there?

    And the thing about AI marketing is that, well, it's pretty damn obvious when you see it currently, for both the text and the images (never mind the copyright concerns), and you'd then need more money to actually get the site out there in advertising.

  • by Tschaine ( 10502969 ) on Sunday June 01, 2025 @02:33PM (#65420781)

    Every time I see something about a company that provides vibe-coding software, I check the "careers" section of their web site.

    And every single time, they have multiple job openings for software engineers.

    That tells you everything you need to know about how close they are to replacing software engineers.

  • by ahoffer0 ( 1372847 ) on Sunday June 01, 2025 @02:44PM (#65420787)

    It's great at writing bash scripts. Saves me a ton of time. I've tried to make it work for larger projects without success. It's okay with two or three or maybe half a dozen files. After that, it collapses on itself in a big mess.

  • I dropped some core production code from 1998 that I've used on a hundred websites into ChatGPT. It found a legit error. I will stay on my side of the screen, thank you very much.
  • Keep it up guys. Yesirree. Give us more of those sweet, sweet, vibe coded apps.

    Meanwhile we're in the back, coding our test automation scripts by hand. We're all just so darned retro!

    • Keep it up guys. Yesirree. Give us more of those sweet, sweet, vibe coded apps.

      Meanwhile we're in the back, coding our test automation scripts by hand. We're all just so darned retro!

      No mention of testing and validation in the original article. There's a mention of "curation." Not sure what that is. Maybe really lazy, cursory "testing." If AI can produce reasonable code that can pass traditional test and validation, then there's no reason not to use it. However, bypassing test and validation is a huge red flag. And using AI for the test and validation would also seem to be problematic, unless there is another subsequent test and validation for the AI-generated version.
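To make "traditional test and validation" concrete: one reasonable discipline is to treat AI output exactly like any other untrusted change and gate it behind hand-written tests that encode human intent. A minimal sketch, where `slugify` is a hypothetical stand-in for any AI-generated unit:

```python
import re

# Suppose this function came back from an AI assistant. It gets no
# special trust: it must pass the human-written checks below, the same
# as code from any junior contributor would.
def slugify(title: str) -> str:
    """Lowercase the title and collapse runs of non-alphanumerics into hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Hand-written validation: these assertions capture the intent
# independently of who (or what) wrote the implementation above.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --Messy   Input--  ") == "messy-input"
assert slugify("Already-Good") == "already-good"
```

Whether the gate is bare asserts, a pytest suite, or a CI pipeline doesn't much matter; the point above stands either way: the validation has to be independent of the generator, or it validates nothing.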

  • Just wondering when the AI coding hype train will collide with the need for secure, optimized code that is also maintainable.

    Research thus far is finding it full of common vulnerabilities. So... great... you've replaced a bunch of your programmers, and your code base is 90% AI-written these days... how is the quality of that? Or is cheap simply the quality we wanted, and the problems can be fixed in production?

  • I think that's the case; it's extremely useful. I've been having it go through and suggest improvements to my existing code. It will give out a number of suggestions, and if I like one I can just tell it to implement it. I may need to make a tweak or two, but it is definitely a work-saver.
