Open Source Programming

'Vibe Coding Kills Open Source' (arxiv.org) 106

Four economists across Central European University, Bielefeld University and the Kiel Institute have built a general equilibrium model of the open-source software ecosystem and concluded that vibe coding -- the increasingly common practice of letting AI agents select, assemble and modify packages on a developer's behalf -- erodes the very funding mechanism that keeps open-source projects alive.

The core problem is a decoupling of usage from engagement. Tailwind CSS's npm downloads have climbed steadily, but its creator says documentation traffic is down about 40% since early 2023 and revenue has dropped close to 80%. Stack Overflow activity fell roughly 25% within six months of ChatGPT's launch. Open-source maintainers monetize through documentation visits, bug reports, and community interaction. AI agents skip all of that.

The model finds that feedback loops once responsible for open source's explosive growth now run in reverse. Fewer maintainers can justify sharing code, variety shrinks, and average quality falls -- even as total usage rises. One proposed fix is a "Spotify for open source" model where AI platforms redistribute subscription revenue to maintainers based on package usage. Vibe-coded users need to contribute at least 84% of what direct users generate, or roughly 84% of all revenue must come from sources independent of how users access the software.
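As a rough illustration of the two mechanisms (all numbers below are made up, not figures from the paper), the pro-rata payout and the ~84% sustainability condition can be sketched in a few lines:

```python
def split_pool(pool: float, usage: dict[str, int]) -> dict[str, float]:
    """'Spotify for open source': distribute a subscription pool to
    packages in proportion to their usage."""
    total = sum(usage.values())
    return {pkg: pool * n / total for pkg, n in usage.items()}

def sustainable(per_vibe_user: float, per_direct_user: float,
                threshold: float = 0.84) -> bool:
    """The paper's condition: vibe-coded users must contribute at least
    ~84% of what direct users generate for the ecosystem to hold up."""
    return per_vibe_user >= threshold * per_direct_user

# Hypothetical usage counts and a $1000 monthly pool:
print(split_pool(1000.0, {"tailwind": 600, "leftpad": 300, "tinylib": 100}))
# {'tailwind': 600.0, 'leftpad': 300.0, 'tinylib': 100.0}
print(sustainable(4.5, 5.0), sustainable(4.0, 5.0))  # True False
```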
  • Not the Only Model (Score:5, Insightful)

    by Luthair ( 847766 ) on Tuesday February 03, 2026 @02:09PM (#65966868)
    I feel like the model described here is a minority of open source. I feel that paid support contracts, source-available licensing, and corporations contributing to major projects are much more the norm.
    • by Himmy32 ( 650060 )
      Depends on how it's measured, as that may be the norm for the more recognizable large projects, but small projects are much more numerous, and they often have a Ko-fi, OpenCollective, or other donation link, or ads on the docs.
      • This is fair but the pitch/summary still seems very misleading, which I think is the above person's real point.

        It's not the funding that keeps open source alive, it's the funding that keeps some smaller open source alive.

        The summary acts like this is some existential threat, it's not. I mean I'm typing this into Firefox, on GNOME, on Fedora, on Linux.

        Virtually no part of that stack is impacted by what they're talking about, but it's phrased like all of those things are about to implode. They're not.

        • It is a multi-year decline in the number of high school students, college students, and new college graduates focusing on AI instead of computer science, full-stack development, and what were the mainstream computer jobs before the current AI round.

          The largest loss is that there will not be anywhere near the number of blog posts, Stack Overflow questions, etc. for the more recent open source technologies, since somewhere from 5% to 50% (pulling numbers out of the air) of the rookie and mid-level experience questio

          • It's also possible that having 10+ years of easily found blog posts keeps the open source world from evolving a better stack, through having a readily accessible source of good enough fixes that function as a brake on innovation. It's difficult to tell a priori.
        • What else are you using other than Firefox, GNOME, and Linux? Note I didn't say "Fedora" because Fedora is a bundle of systems.

          To give you an idea of why this is, actually, an existential threat, here's the relevant xkcd: https://xkcd.com/2347/ [xkcd.com]

          I've been testing the latest Debian FWIW. Right now Firefox doesn't display video on most websites (YouTube excepting.) Why? Because it relies on the ffmpeg libraries, and they're not currently bundled in a form on Forky that Firefox can link to.

          You think GNOME, Firef

    • I suspect a large majority of the money spent towards open source is in the form of support contracts, yes, but large contracts paid by large companies to large projects. The problem is that a majority of the *projects* are small, often single person, and *those* do not have a good way of funding their work. There is no web of small companies paying small projects keeping the greater open source community healthy, and so smaller projects have to look to other ways to fund work.

    • by Mandrel ( 765308 ) on Tuesday February 03, 2026 @07:59PM (#65967648)

      Yes. Source-available commercial software can even be fully publicly forkable [devwheels.com].

      For most software and users the libre aspect of open source is more important than the gratis aspect.

    • I feel like your model of open source is not really open source. You're describing companies or individual programmers whose aim is commercial software, but who decided that they would flood the "market" with free samples to try and hook users into paying for upgrades later. It's the old shareware model in sheep's clothing.

      My model of open source is someone (or a group of people) writing code for themselves, and being generally ok with other people benefiting too. There's no reason to ask for money/donati

  • by Baron_Yam ( 643147 ) on Tuesday February 03, 2026 @02:19PM (#65966902)

    AI is dumb, it cannot innovate. Without humans creating new training data, it will fall behind. AI must find a way to avoid being fatal to the host it feeds on.

    • Unfortunately, without the old guard creating packages like Tailwind etc., the new training data may very well be vibe coded. It's not just a lack of funding that will hit OSS. It's them getting blatted by AI bug/vuln reports. If OSS is no longer profitable or fun, it's unlikely to survive. I do wonder what happens when the AIs ingest their own output. Does it degrade or stay stagnant?
      • Does it degrade or stay stagnant.

        It can do both, either, or neither.
        It's a complication that has to be accounted for, though.

        It can reduce perplexity, but that can very much be a not good thing.

        What I can say is that OSS survived just fine before bullshit like enshittified revenue models became popular, and it will survive the AIpocalypse.
        Will OSS change? Yes.

    • The reason people made open source projects and devoted time to maintaining them was purely because it was enjoyable to do so. In the age of AI, that is gone. I no longer want to release my work for the benefit of others, because now it is only feeding this infernal bullshit machine, which will steal my work and sell it as its own. And if that wasn't bad enough, it will send a deluge of slop bug reports and phishing attacks.
      AI needs to die. The bubble can't burst soon enough.

      • Re: (Score:1, Flamebait)

        In the age of AI, that is gone.

        No, it's not.

        I no longer want to release my work for the benefit of others, because now it is only feeding this infernal bullshit machine, which will steal my work and sell it as its own.

        lol. And that's how I know you were not involved in open source.

        Get the fuck out of here, poser.

      • by CAIMLAS ( 41445 )

        You could always try to do something meaningful and original with your work yourself.

      • Decompilers are not great to use; however, an AI which already doesn't understand context could probably do about as well if trained on decompiled code. It will figure out your binary and not need your source, possibly doing as well with that as it does with source code. Sure, it won't work as reliably, but good enough...

        There are many binaries to train them on; more than source...

    • by CAIMLAS ( 41445 ) on Tuesday February 03, 2026 @05:07PM (#65967292)

      Your post denies the status quo. That isn't how any of this works at all.

      It's a distillation process. AI absolutely can be (and is being) trained on AI-generated, AI-augmented, AI-processed, and AI-sintered data.

      I'd suggest you familiarize yourself with where things are today (as opposed to 6 months ago, or 2 years ago). If you haven't reevaluated the state of the art in the past 2 months with any depth, you're gravely behind.

      • Looking at AI through rose-colored glasses isn't helpful either. I find the experience and results mixed in terms of productivity, depending on what metrics are used and what problem is being solved.
        It may feel like you're busy when you're engaged in writing prompts and pasting results and getting updates back quickly.

        But measuring over the long term, it's been hard to see much improvement for us. We're not getting projects done sooner. Our team is not able to work on more projects at once. Bug

        • by CAIMLAS ( 41445 )

          I need to only look at my git commit and feature velocity over the past 12 months to know precisely how useful it's been. Proof is in the pudding. I'm not fooling myself by copypasta'ing things. (I very rarely copypasta things. I design things and my prompts take 20+ minutes to complete.)

          I'm shipping actual production product at an increasing rate, and the code and architectural quality is better than that of most of the people I've worked with in my career (because I'm the gatekeeper).

          AI isn't getting more expens

      • Says the mouth breather regurgitating what marketing told them to upon hearing rejection of AI. Of course such a person wouldn't be capable of understanding the GP's post.
        • by CAIMLAS ( 41445 )

          I'm basing this off my own adoption and experience. I'm generally an extremely cautious late-to-adopt person, which makes this all the more hilarious to me.

  • The Akira License (Score:5, Insightful)

    by Pseudonymous Powers ( 4097097 ) on Tuesday February 03, 2026 @02:21PM (#65966908)
    It does seem that individual programmers would be much less likely to make their contributions to projects of all sizes available as open-source now that it's likely that those contributions will just be digested into an indistinguishable pap that will then be puked up by LLMs owned by oligarchic corporations. Because where's the satisfaction in that? Feels pretty demotivating to me.
  • by 93 Escort Wagon ( 326346 ) on Tuesday February 03, 2026 @02:37PM (#65966940)

    Stack Overflow activity fell roughly 25% within six months of ChatGPT's launch.

    This has nothing to do with "vibe coding".

    I'm also unclear on what "documentation traffic" and "bug reports" have to do with a project earning money. Is this about seeing advertising? Because I'm not going to contribute to a project if that requires me to look at advertising.

    • by Himmy32 ( 650060 )

      If the project is funded through donations and the donation link is never seen by human eyeballs, the donation link never gets clicked.

      The given example, Tailwind, is monetized by having the base framework be free, but having a collection of tools, including a Component Library, be licensed. So if an LLM is pumping out examples rather than the docs, the user isn't going to see the value-add subscription.

      If a project has advertising and you see it, then you are already contributing. But if the LLM has to be "

      • I disagree entirely that "seeing" the value-add is the problem.

        The problem is that the value that is added is simply removed by the LLM.
        There's little reason to pay for the Component Library if an LLM can build them without doing so.

        What they're suffering is the equivalent of a Ford production line paradigm shift in the number of "experts" who no longer need to pay them.
    • "documentation traffic" makes money for Tailwind, because their website has ads for their somewhat related paid products.
    • Stack Overflow activity fell roughly 25% within six months of ChatGPT's launch.

      This has nothing to do with "vibe coding".

      And it also has not much to do with open source, given that the code behind the Stack Overflow website is closed-source.

    • One reason for Stack Overflow's downfall was that ChatGPT was trained on Stack Overflow data. Given the repetitive nature of the questions on Stack Overflow, which likely generated the bulk of their traffic, the outcome was inevitable.
  • What? (Score:5, Insightful)

    by allo ( 1728082 ) on Tuesday February 03, 2026 @02:46PM (#65966956)

    "Open-source maintainers monetize through documentation visits"

    Open source is monetized through documentation visits? Not only do most open source programmers know how to use an adblocker, but I have also seen very few (thank god!) documentation pages with ads. What are these people talking about?

    • You go to the documentation site and somewhere is a "contribute" button that asks for a donation. Sometimes people realize that they appreciate the project and donate money. AI does no such thing.

  • That was that website where I couldn't answer questions in an area regarding which I am an expert, unless I had a certain amount of "reputation". Rest in peace.

    • by procrastinatos ( 1004262 ) on Tuesday February 03, 2026 @09:18PM (#65967724)

      Bullshit. Stack Overflow does not have, and never had, a minimum reputation requirement for answering questions. You can create an account and immediately answer a question with 1 reputation (the starting default).

      There is an exception for answering protected questions [stackoverflow.com] (of which there are an exceedingly small percentage). To answer a protected question you need 10 reputation, which corresponds to a single upvote.

      • Bullshit

        And this is exactly the sort of antagonistic and aggressive (and unnecessary) response that turned me off trying to contribute to Stack Overflow.

        • And this is exactly the sort of antagonistic and aggressive (and unnecessary) response that turned me off trying to contribute to Stack Overflow

          Bullshit, because writing "bullshit" on Stack Overflow would get flagged as "rude or abusive", and removed almost immediately.
          Besides the opening statement that you seem to take great offense with, I'm just presenting objective facts, and the facts happen to completely contradict the gaslighting presented in the post I was replying to.
          For the rest, I have no stake in the game.

          • I guess you didn't get as far as reading "the sort of..." rather than focusing on one single example. And SO is a hostile environment. That is precisely why they had to start policing the ruder and more abusive elements.
            But it is still full of passive-aggressive responses, senior users who use it to denigrate newbies, and unhelpful "already answered" replies that do nothing to make it feel welcoming or supportive.
            • I think you make a fair point. It was somewhat the victim of its own success, in the sense that

              1) its stated goal (providing a Q&A site with lasting value) conflicted with its perceived goal (providing help to anyone able to differentiate a mouse from a keyboard); and
              2) after a while, most of the questions had already been answered, and would routinely turn up in search results.

              Regular users, conditioned by the fact that, for a decade, useful Stack Overflow answers would pop up for pretty much any progr

      • Call it bullshit all you want. I'm sure I'm completely remembering it wrong when I recall writing out an answer and being told I couldn't post it because I didn't have sufficient reputation. And I'm sure I'm completely remembering it wrong that I decided to try posting in other forums there where I could collect some reputation. I'm sure I did that for just no reason and my recollection is totally wrong and you're a fucking genius.

  • by ukoda ( 537183 ) on Tuesday February 03, 2026 @03:08PM (#65967014) Homepage
    If everybody is vibe coding and nobody is writing open source code anymore where does the innovation come from?

    If you think about most successful open source projects, including Linux itself, they usually start small, with people seeing them as a potential future solution to a problem they are working on, or just something interesting to play with. They start slowly and grow as usability and interest increase. Eventually they become something truly useful and usage becomes widespread.

    AI and vibe coding break that process at the early stages because there are no longer humans looking at new things and taking a chance on something new and unfinished. Open source relies on people looking forward, but AI can only look backwards. A future driven by vibe coding looks like it will be free of innovation. Sounds boring to me.
    • by CAIMLAS ( 41445 ) on Tuesday February 03, 2026 @04:57PM (#65967274)

      To say that open source is being killed by vibe coding is just... crazy. It's simply wrong.

      There are numerous vibe coding apps now which were written entirely with vibe coding. An entirely new paradigm of development exists today which didn't exist even a year ago - claude code, opencode (omo/slim), codex, cursor, and on and on. Then you've got the agentic stacks, and everything else that's largely open and free. "Come use this vibe coded thing I built this weekend! It's live in production, you can see it working - and here's the git repo!"

      Coding skills are no longer a barrier to iteration and improvement. There are so many cool projects out there now being done by people who have an idea and see a business case and want to fill it.

      Aside from the projects, I've already taken 2 libraries myself and forked them to change (and improve) functionality for my specific use cases. I'm assuming they don't want my changes, but they can always pull them back if they want. They can see I've got a fork. My willingness to deal with "well they may not accept my changes" + slowing my own velocity is low. The repo is public, the commit comments are better than anything I've personally done in the past.

      "AI and vibe coding breaks that process at the early stages because there is no longer the humans looking at new things and taking a chance on something new and unfinished"

      Um... have you even tried vibe coding? You can one-shot a project in 20 minutes. I've done it numerous times - an old project I spent weeks writing specifications for, boom, done. I also now have a very useful data indexer which integrates smb shares with MacOS finder. Any sort of idea can now quickly come to fruition in a couple hours with a good set of prompts. Want to make an antiquated database format convertible to a newer platform, and reimplement the frontend? I once had to take a 15-year old physical SCO system running a proprietary database over to a virtual environment, 10 years back. It was a painful process, because SCO and failing hardware. But today? Once I got to that point I could've reimplemented it anew in a couple days, allowing those companies to expand the software capabilities they paid hundreds of thousands for at the time, to something which suited their current business needs (which were a paper and spreadsheet process).

      A mildly capable office tech could take an existing git repo of their project tree, add features, and maintain the product well enough using vibe coding, instead of letting it languish for a decade like it had to previously.

      You seem to be missing the fact that LLMs have vastly exceeded prior functionality. Today, the frontier models are easily 2x what they were in October. October was easily 2x what they were in May of last year. May of last year? 2x as capable as they were the year prior. We're approaching exponential improvements, and models have been solving previously-unsolved NP-hard problems: that's innovation.

      If you have an idea, it can be done with vibe coding today if you have the intelligence and creativity to do it. Simple as. If you don't, you can't - and won't.

      • "LLMs cannot solve NP-hard problems in polynomial time, but they can help generate heuristic solutions or approximations that provide "good enough" answers for practical applications. "

        So says my *.ai... I wonder what you had in mind. Is this a "back of the envelope" calculation for helium atomic energy levels?
      • by ukoda ( 537183 )
        You talk a lot about fast, quicker, easier. Sure, vibe coding is amazing at that. I see nowhere in your response real innovation. An LLM can only give answers based on what it was trained on, i.e. the past. It creates nothing new; instead it rapidly pulls together solutions from existing knowledge. That works while there are still humans innovating and creating new stuff for LLMs to learn from, but not if everything is from LLMs. You need AGI for innovation without humans. Basically I think you are confus
        • by Bumbul ( 7920730 )

          An LLM can only give answers based on what it was trained on, i.e. the past. It creates nothing new; instead it rapidly pulls together solutions from existing knowledge.

          AI has learned the language from those code examples and repositories. What it does with the language is often (not always, mind you) original.

          LLMs have learned English (and other languages) from vast amounts of written text. It is easy to use LLMs to create work that is original (say, prompt it to create a poem in Shakespearean style about AI utilizing tennis racket to paint a house - or whatever). Similarly with software development, AI has learned the syntax and coding styles for different programming la

      • by jsonn ( 792303 ) on Tuesday February 03, 2026 @06:54PM (#65967522)

        Um... have you even tried vibe coding? You can one-shot a project in 20 minutes. I've done it numerous times - an old project I spent weeks writing specifications for, boom, done.

        Have you even spent 5 seconds thinking about what you wrote here? If you need to spend weeks writing a specification and "vibe coding" it takes 20 minutes, it means you are either completely incompetent at writing specifications or your vibe-coded project simply doesn't do what you wrote in weeks. It's as simple as that.

        • by CAIMLAS ( 41445 )

          It wasn't a concerted "weeks" of effort, it was ideation over the period of weeks until I had a coherent and complete architecture. But thanks for your concern.

          You realize that architecting software correctly takes time, right? Often much more than weeks, for any significant effort. You can't just use react and node for everything.

      • >> If you have an idea, it can be done with vibe coding today if you have the intelligence and creativity to do it. Simple as. If you don't, you can't - and won't.

        I think there are a couple of definitions of 'vibe coding'. One is where a person who is not very familiar with writing software gets to have an app of some kind created by just going through an exploratory sequence of pretty vague prompts. It's very cool that people can do that now.

        Another definition is where an experienced programmer can get

        • by CAIMLAS ( 41445 )

          Yep. I've got an "ideas" folder in Apple Notes I've been curating for years - brief ideas I've had while falling asleep and need to jot down, ideas that came to me on the bus or the toilet, something I wish existed, etc.

          I've been slowly working through that folder and implementing them. It's been amazing.

      • Coding skills are no longer a barrier to iteration and improvement. There are so many cool projects out there now being done by people who have an idea and see a business case and want to fill it.

        This landed for me because I’m that person, just without the “coder” label.

        I’m a sysadmin. If you widen the error bars enough to include shell scripting, sure, I “code,” but I’ve never had the patience or focus to be a real developer. Historically, that meant a lot of ideas stayed in the “would be nice” bucket unless they were directly tied to work and justified the learning curve.

        Then I decided I wanted a tiny app to rearrange my desktop icons into a ci

    • by alispguru ( 72689 ) <bob.bane@me.BALDWINcom minus author> on Tuesday February 03, 2026 @05:24PM (#65967322) Journal

      I don't know how the next Python is going to get any traction, if table stakes for adoption is "language is understood by LLMs".

      The current generation of coders won't use it if their LLM of choice doesn't understand it.

      LLMs won't understand it if there's no training data, which comes from users.

      I've always told people that coding would be automated last, if ever.
      Apparently I was wrong, and will have to settle for being part of the last generation of coders that can actually read and understand code without LLM support.

      • Context windows are so big now you can give the AI the specification for your new language or framework right in the prompt.
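        The mechanics of that idea are simple enough to sketch. A minimal illustration (the model call itself is omitted, and `build_prompt` plus the toy grammar are made up for this example, not any real API):

```python
def build_prompt(spec: str, task: str) -> str:
    """Embed a full language spec in the prompt so the model doesn't need
    the language to have appeared in its training data."""
    return (
        "You are writing code in a language you have not seen before.\n"
        "Here is its complete specification:\n\n"
        f"{spec}\n\n"
        f"Task: {task}\n"
        "Use only constructs defined in the specification above."
    )

# Toy grammar standing in for a real language spec:
toy_spec = "Program ::= Stmt* ; Stmt ::= 'print' STRING ';'"
prompt = build_prompt(toy_spec, "print the string hello")
print(prompt.splitlines()[0])
# You are writing code in a language you have not seen before.
```

        Whether the model then follows the spec faithfully is a separate question; this only shows that the spec fits in the request.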
      • I don't know how the next Python is going to get any traction, if table stakes for adoption is "language is understood by LLMs".

        That’s a real constraint, but it’s not a new one, and it’s not uniquely LLM-shaped. Every language that ever got traction had to clear table stakes that weren’t technical purity: documentation quality, tutorials, books, community examples, tooling, package ecosystem, and the ability for a new user to get from zero to “it runs” without burning a week. LLMs just become another on-ramp, not the whole highway. The paper actually argues that vibe coding isn't a "lagging indic

        • The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.

          The ability to read and write code without support was in decline long before LLMs - FizzBuzz

          • The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.

            The ability to read and write code without support was in decline long before LLMs - FizzBuzz as a low bar dates from Spolsky in 2005.

            If we're hoping for the right cultural and educational choices to save us ... we're screwed.

            I don’t disagree with your pessimism regarding the low bar, but I think you are missing a key distinction between developers and coders. When I was hired as a sysadmin by Raytheon three decades ago, the interview wasn't a test of whether I could follow a manual; my maths-heavy CS degree already checked the regurgitation box. The interview was about the size of Window's symbol table, the differences between Windows and Unix thread management, and a healthy dose of formal logic —the "minimum numb

    • The innovation comes from the guy/girl telling the vibe coding platform what to do.

      That is a no-brainer, is it not?

      The Agents produce code you would otherwise write by hand.

      What is the farking difference whether the next for loop is spit out by an AI, I use Eclipse auto-complete, or I write it in vi by hand?

      None, nothing, nada. It is the exact same sequence of characters.

      • by ukoda ( 537183 )
        We are arguing across different points. Of course those who are vibe coding can be innovative; my argument is not about them but about the LLM they use. The counter argument is that an LLM is just a tool, and a tool is not expected to be innovative. The problem is where this tool breaks the cycle of innovation that open source supports. The open source community evolves and occasionally innovates, building upon itself. LLMs digest that to improve themselves.

        But how does vibe coding contribute back to improving
        • The user of an LLM / coding Agent has to share his work.
          Just like he did (or did not?) when he was not using an LLM / Coding Agent.

          P.S. you do not use LLMs for coding. You use a coding agent. Very big difference.

  • "One proposed fix is a "Spotify for open source" model where AI platforms redistribute subscription revenue to maintainers based on package usage."

    Proposed fix for what? And what incentive would motivate "AI platform" developers to involve themselves with "subscription revenue to maintainers"?

    Open source, in particular GPL, is communistic, do we assume AI is as well? Because it sure doesn't seem like it. And why does open source funding need to be fixed? RMS created the GPL to compel others to give sour

    • by allo ( 1728082 )

      I wonder if they know how "fairly" Spotify distributes the money.

      I'd rather say everyone who can afford it should have a day per year where they consider which open source projects they valued most in the past year, and then distribute the amount of money they can afford between those projects. That also allows you to reward projects that treat you nicely, and to show the projects that treated you badly that you remember.
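      A sketch of that yearly ritual, with hypothetical projects and purely subjective weights standing in for "what I valued most":

```python
def yearly_donations(budget: float, weights: dict[str, float]) -> dict[str, float]:
    """Split a yearly donation budget across projects in proportion to
    how much you personally valued each one."""
    total = sum(weights.values())
    return {proj: round(budget * w / total, 2) for proj, w in weights.items()}

# Hypothetical personal weights for the year:
print(yearly_donations(120.0, {"curl": 3, "sqlite": 2, "some-tiny-cli": 1}))
# {'curl': 60.0, 'sqlite': 40.0, 'some-tiny-cli': 20.0}
```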

  • Change the Paradigm (Score:5, Interesting)

    by TwistedGreen ( 80055 ) on Tuesday February 03, 2026 @03:26PM (#65967054)

    It's more accurate to say that the entire concept of libraries and frameworks is now obsolete. Why bother building and maintaining libraries of tested code when you can just generate it from scratch every time? The only reason we used libraries was to keep things maintainable and reusable. If you can just get an LLM to generate bespoke code on demand, and have it do exactly what you want and nothing else, then every piece of software can be a snowflake... unique and fragile, but infinitely replaceable. It's a paradigm shift, that's for sure.

    • by ceoyoyo ( 59147 )

      If it comes true. I doubt it will. The reason we maintain libraries is because complexity grows nonlinearly with the size of a system. Keeping the systems small with well-defined interfaces manages that problem.

      AI isn't magic. A computer might have greater capacity for tracking down complex behaviour than a human does but it isn't infinite. And the current systems, just like humans, do a lot better when they have good libraries to stick together than they do if you ask them for a big bare metal monolith.

    • by ukoda ( 537183 )
      Security and bugs come to mind. A new library will have bugs and security holes, but if it becomes popular and is well curated then it becomes increasingly stable and secure. If an LLM generates a new equivalent to a library fresh every time it is run, then the bug and security risks are fresh every time. I guess we will see how that evolves over time, but right now it is not hard to find news reports of deployed vibe code not really being up to scratch.
    • If you can just get an LLM to generate bespoke code on demand, and have it do exactly what you want and nothing else, then every piece of software can be a snowflake... unique and fragile, but infinitely replaceable.

      I do not think you fully understand why things like libraries exist. Each byte stored takes "disk" space. You have thousands of programs at your beck and call. If every single program recreated the portions of ntdll.dll that it needed, the amount of space required to store it all would balloon rapidly. (sorry for the Microsoft example, but it will be more universally understood)

      In other words, one of the reasons there is shared code is because we live in a real world and need to store things physically.

      But,
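      The storage argument above can be made concrete with a rough back-of-the-envelope sketch (the library size and program count are assumptions, not measurements):

```python
# Hypothetical numbers: why one shared library beats per-program copies.
lib_size_mb = 2.0    # assumed on-disk size of a common runtime library
num_programs = 5000  # assumed number of programs that link against it

duplicated_mb = lib_size_mb * num_programs  # every program bundles its own copy
shared_mb = lib_size_mb                     # one copy on disk, mapped by all

print(f"duplicated: {duplicated_mb} MB, shared: {shared_mb} MB")
```

      With these assumed numbers, statically duplicating the library costs 10,000 MB versus 2 MB for a single shared copy, and the same reasoning applies to LLM-regenerated bespoke code.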

  • I am sure that the many purveyors of AI have violated the license terms of the source code they have scraped.
  • 1) We have needed a more sustainable financing model for FOSS that matters for quite a while now.

    2) "Vibe coding" will not matter in the long run. Its results are just too abysmally bad. People still need a few years to understand this, as those of low or no skill are always late in understanding the blatantly obvious. (Dunning & Kruger have an explanation for that ...)

    • It does not matter whether the shiny web site, which works perfectly fine but is an abomination of spaghetti HTML/CSS/JS, was coded by an incompetent web developer,
      or was generated by an AI agent based on an LLM fed with abomination code from incompetent web developers.
      The result is the same.

      The first case requires you to communicate with a human, and you never know how bad their code is.
      The second case requires you to "communicate" with an AI agent, and you do not care how bad its code is.

      Way number two is 10 to 100 t

  • I don't know of a single company sharing proprietary code. So where will vibe be if there is no code to scan?

    On another note. A while back people were worried that open source licensed code would make it in commercial licensed code and would create some legal issues. Isn't this also an issue with vibe generated code?

    To me it seems that it's more of an issue that vibe coding creates a problem for proprietary code: GPL code fed into vibe coding makes the vibe-generated code GPL. Share and share alike.

  • I love pulling a repo and having Claude explain it and figure out how to run it.
    It used to take forever to get an unfamiliar repo to work because Linux.
    This is actually one of the best uses of LLMs, getting someone else's software to work on my machine.
    The fear of getting knee deep in the weeds because I don't understand the repo completely, is now gone.

  • AI coding (Score:4, Insightful)

    by gabrieltss ( 64078 ) on Tuesday February 03, 2026 @03:37PM (#65967096)
    Given that the AI is just using code scraped from public sources, including public GitHub, GitLab, etc. repositories, how are any copyright licenses being handled, I wonder? So say you ask your favorite AI agent to generate code for you: do you know where the code -actually- came from? Was it from a GPL source? If it was, and you put it into a proprietary commercial product, you and your company could be violating one or more licenses.

    I think at the rate this is happening, no source should be closed, or proprietary. All "closed source" companies should be REQUIRED to open up ALL their source code so AI can "index it" and anyone can use it. Else all AI companies should not be allowed to "index" any source code that is not in the public domain or under a license that is very lax.
    • by ukoda ( 537183 )
      All valid points, but remember that people are expected to obey laws, such as copyright; that does not apply to well funded corporations. They are special and can do whatever they like; the law does not apply to them. You asked how "copyright licenses are being handled"? They don't 'handle' them, they simply ignore them. This is what has made LLMs so powerful: if you have money you can train them on anything, including copyrighted works, and then simply claim that LLMs are too important to fail so they can not
    • That's not how the licenses are written. The licenses state if you change something you must give back; an AI learning from open-sourced code is no different from a human doing the same, other than at a faster rate. If you GPLed your code, you can't expect anything back from anyone who uses AI to write code that is not a modification of your code. MIT licenses say do what you want with this code, so the rule applies even more to them.

      • by Sigma 7 ( 266129 )

        The licenses state if you change something you must give back,

        No, GPL states that if you distribute anything, you must also provide the source code. Presence or absence of change is irrelevant.

        See section 4, "Conveying Verbatim Copies", section 5, "Conveying Modified Source Versions", and section 6, "conveying non-source forms".

        If a person doesn't accept the terms of the GPL, they may instead treat the software as if the authors said "All Rights Reserved", which means no distribution.

        AI learning from Opens

    • by Bumbul ( 7920730 )

      Given that the AI is just using code scraped from public sources, including public GitHub, GitLab, etc. repositories, how are any copyright licenses being handled, I wonder?

      Stop right there, this basic premise is false. AI has learned the language from those code examples and repositories. What it does with the language is often (not always, mind you) original.

      LLMs have learned English (and other languages') syntax from examples, and it is easy to use them to create work that is original (say, prompt one to create a poem in Shakespearean style about an AI utilizing a tennis racket to paint a house, or whatever). Similarly, AI has learned the syntax and coding styles for different PROGR

  • Complete lack of context is an LLM's critical fault, and it is not a design flaw. An entire software lifecycle can be assumed to sit safely behind vague parameters and no sense of best practice. So nested for loops and functions that return int with int arguments, for example, are fine because a newer language likes it. Technically it is a fork, a subversive fork sent to destroy the original host, like the monsters in "The Thing" movies.
    • I fork projects to keep my own copy because I forget where I find things. If you don't want your project forked, don't put it somewhere where it can get forked.

  • I'm using AI currently to make Descent 1 and 2 as VR applications, I used AI to repair a broken package and make it usable again. I'm using AI to fix a lot of things that are broken or that need updating in the open source ecosystem (especially with the Wayland transition which is requiring a lot of re-engineering of outdated packages). It's getting good enough it can now generate Meson build files from old autogen configurations. I see AI as the greatest multiplier for open source software. The license is
  • by machineghost ( 622031 ) on Tuesday February 03, 2026 @09:40PM (#65967756)

    This is a textbook example of taking anomalies out of context and drawing some giant (false) conclusion.

    First, Stack Overflow is unique: you can't compare it to any other site or project. Their decline started long before AI (with their decision to encourage chasing newcomers off the site), and AI just hastened it.

    Second, Tailwind had a uniquely bad product/business model. Other OSS projects are doing fine because they have products (e.g. support contracts) people actually want. In contrast, better and free (open source) Tailwind UI libraries exist; the only reason people used Tailwind's (worse) library was that they found it on the docs page.

    Tailwind's case has NOTHING to do with any other project ... unless they too are financially dependent on a terrible product that people only barely want, and will only buy if they see it mentioned on a docs page.

  • As someone who has been a software developer for 30+ years, and who now uses Claude to do 90% of their work, I disagree that it is killing open source. Not once in 30 years have I had the time or the energy to make an open source project. Since using Claude I have made three, all GPL2 licensed. They are small, not world changing, but it has empowered me to give back somehow. Open source projects used to be passion projects, until people went to monetize them.

    Stack Overflow? Let's be honest, 75% of what was there was crap

  • I have been cutting bloat like Tailwind by using LLMs. After all, Tailwind is a tool which makes life easier for me as a programmer at the cost of efficiency. I basically stopped using web toolkits now. I just tell the LLM to build from scratch what I need most of the time.

    I think the best one was when I wrote a parametric CAD program recently (not a web project this time; C#), and I needed a specialized PDF export engine because the existing ones cost money or sucked. So, I told the LLM what I w
  • by rocket rancher ( 447670 ) <themovingfinger@gmail.com> on Wednesday February 04, 2026 @03:34AM (#65968168)

    This paper reads like any one of dozens of papers I had to digest for game theory classes back in college. Granted, that was thirty years ago and “optimization theory” has replaced game theory in the course catalogs, but the bones are the same: Nash is still hiding under the floorboards, tapping out equilibria with a broom handle. What the authors are really doing here is describing a potential tragedy of the commons, and dressing it in modern clothing. In their setup, open source is the shared pasture: maintainers are the shepherds doing the unglamorous work of reseeding and mending fence lines, and users are the cows. Vibe coding adds a new kind of cow, one that grazes constantly and at scale while leaving fewer of the footprints that normally pay the shepherds back: attention, bug reports with reproduction steps, patches, docs corrections, donations, consulting leads, the whole informal economy that kept a lot of projects alive. If that return channel dries up, the equilibrium shifts: fewer shepherds bother staying out in the rain, the pasture degrades, and everyone ends up worse off even though the short-term output looks amazing.

    Nothing about that is conceptually novel. What’s novel is the pressure profile. I watched Red Hat go from an interesting way to monetize Linux in 1994 to a $34B IBM acquisition a quarter century later, which tells you there’s real money in selling stability, support, and risk management around a free codebase. But this paper is pointing at a different failure mode: not “open source can’t be monetized,” but “open source can be consumed so efficiently that the incentives to maintain it get vacuumed away.” The paper’s real kicker is what they call the software-begets-software effect. We’ve all seen this: a healthy ecosystem of libraries makes building the next tool trivial. That’s a virtuous cycle that helped FOSS explode. But the authors’ math shows this loop has a reverse gear. If vibe coding starves maintainers of the attention currency they need to keep the lights on, the ecosystem doesn't just stagnate—it contracts. Entry falls, variety shrinks, and the cost of building new software starts to climb because the foundation is rotting. We’re essentially using AI to strip-mine the very topsoil we need for the next harvest.

    The models in the paper may be a bit too tuned to represent all of FOSS, sure. But where they’re right, they’re right in the way you can't really argue against. If vibe coding siphons off funding, leaving some critical cluster of FOSS coders unwatered long enough, FOSS could be on a fast track to that tragedy of the commons.

  • This hits the whole software industry alike. The important part to mention is: it hits the whole software industry. The free software movement is not affected; those who follow the free software mindset just work on what they need, as before. It is also not about the sharing of code that one created; it is just the business model of the software industry that is hit.

    If you realize that, for the traditional software industry, it is a little programming, then huge marketing, and after that endless cashing in, that model wil

  • AI companies are discouraging people from contributing to open source projects because their work is scraped and reused for someone else's business model. IMHO, the social implications are far more erosive than any legal or financial reasons.
  • I think the problems Tailwind is facing are a perfect confluence of factors. CSS in general has gotten a lot better over the last several years; you need way less detailed knowledge to implement a visual design than you used to. The CSS spec is also very well documented, and there's basically infinite CSS out there for models to train on.

    Tailwind's value proposition is that they make it easier to implement a consistent-looking visual style without writing a bunch of CSS; in particular, they handle the trick

  • I had a fairly negative knee-jerk reaction to all the “AI scraping FOSS for training data is bad” comments surfacing in this thread. I realize that was because my FOSS instincts are still basically Stallman-era: public code is the point, reuse is the point, and the GPL exists to make sharing legally certain. If you put code out there under an open license, people reading it, learning from it, and building on it is not a bug. It’s the whole design. That should (and maybe legally does) include u
