Programming AI Open Source

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz)

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This gives the community access to the full implementation of the chat-based coding assistant, including how "agent mode" is implemented, what contextual data is sent to large language models (LLMs), and how the system prompts are designed. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, such as the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourced contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move offers something rare these days: transparency," writes Slashdot reader BrianFagioli. "Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non-negotiable."

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it plans to eventually merge inline code completions into the same open-source package too, which would make Copilot Chat the new hub for both chat and suggestions.
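Since only the client side is open, "replacing the backend" would mean standing up your own model endpoint. As a rough illustration only, here is a minimal sketch of a local stub that speaks the common OpenAI-style chat-completions shape; the endpoint path, the payload fields, and the idea that an open client could be pointed at such a server are assumptions for illustration, not the extension's actual provider API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class ChatStub(BaseHTTPRequestHandler):
    """Hypothetical local backend: answers OpenAI-style chat-completions
    requests by echoing the last user message (a real server would call
    a self-hosted model here instead)."""

    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Pull the most recent user turn out of the messages array.
        last_user = next(
            (m["content"] for m in reversed(payload.get("messages", []))
             if m.get("role") == "user"),
            "",
        )
        body = json.dumps({
            "object": "chat.completion",
            "model": payload.get("model", "stub"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant",
                            "content": f"echo: {last_user}"},
                "finish_reason": "stop",
            }],
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


# Start the stub on an ephemeral port and send it one request.
server = ThreadingHTTPServer(("127.0.0.1", 0), ChatStub)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

req = urllib.request.Request(
    f"http://127.0.0.1:{port}/v1/chat/completions",
    data=json.dumps({"model": "local",
                     "messages": [{"role": "user", "content": "hi"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
server.shutdown()
print(reply["choices"][0]["message"]["content"])  # → echo: hi
```

Whether the open-sourced extension actually accepts an arbitrary endpoint like this depends on its provider interface, which is exactly the sort of thing you can now read in the repository.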


Comments Filter:
  • "If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move offers something rare these days: transparency," writes Slashdot reader BrianFagioli.

    It's no surprise a /. reader would say this; the question is whether it's ignorance or duplicity that motivates it.

    There is no transparency here, open source or otherwise. It's the LLM that's the black box people don't trust, not the agent that maps the LLM onto the task.

    • by Anonymous Coward

      Well now you know what is sent to the model. And if that's not enough, you can replace the model with one you control.

    • by gweihir ( 88907 )

      It's no surprise a /. reader would say this; the question is whether it's ignorance or duplicity that motivates it.

      Why can't it be both?

      Obviously, this is not "transparency" in any meaningful sense. But the LLM cheerleaders are not very smart, so it probably will convince some of them.

  • It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot.

    No, but unless the API is super opaque, it should make it simpler to implement your own back end, should you be so inclined.

  • Transparency? (Score:3, Insightful)

    by Cley Faye ( 1123605 ) on Saturday July 05, 2025 @06:06PM (#65499992) Homepage

    I wasn't really worried about how my IDE would be able to read, edit, and write files, nor how it could highlight some differences, or how it would grab something I typed and send it to a backend.
    I'm worried about that backend, which receives everything needed to supposedly make decisions about the code, being fully closed, operated by an unreliable third party, with said third party's promise to play fair as the only safety net.

    More open source is great, but calling this a move to improve transparency and trust in AI "agents" or whatever is a joke. "You can audit everything up to the part you're suspicious about," eh?

    • by gweihir ( 88907 )

      Well, in a commercial setting, several concerns apply. First, it may be illegal for you to let the code you work on leave your enterprise. This can be soft (breach of contract), hard (criminal act), or in between. Second, that backend may give your code to others, which can cause a lot of issues: if you did this voluntarily, trade-secret protection may be gone; if there were vulnerabilities in there, attackers may get access and then craft exploits. I am sure other problems exist as well.

      • It's slightly less of a concern if you are working on open-source. At least, having it publicly visible is not problematic.

        Massive legal issues remain. Is your new code GPL, MPL, BSD, or WTFPL? Same for all the code you are stealing from the training data. Unless the LLMs can learn to separate all that, I predict a ton of lawsuits. It won't be as straightforward as the lawsuits from writers, musicians, and graphic artists, though.

  • "openness is now considered superior from a security perspective."

    Nice they finally got that; it's been like that for about 30 years now :)
