Microsoft Warns Its Windows AI Feature Brings Data Theft and Malware Risks, and 'Occasionally May Hallucinate' (itsfoss.com) 65
"Copilot Actions on Windows 11" is currently available in Insider builds (version 26220.7262) as part of Copilot Labs, according to a recent report, "and is off by default, requiring admin access to set it up."
But maybe it's off for a good reason...besides the fact that it can access any apps installed on your system: In a support document, Microsoft admits that features like Copilot Actions introduce " novel security risks ." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions. The result? " Unintended actions like data exfiltration or malware installation ."
Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system. Microsoft's own warning hits hard: "We recommend that you only enable this feature if you understand the security implications." When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. ["This feature is still being tested and may impact the performance or security of your device."]
Even with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders... Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity.
But you are still giving an AI system that can "hallucinate and produce unexpected outputs" (Microsoft's words, not mine) full access to your personal files.
To address this, Ars Technica notes, Microsoft added this helpful warning to its support document this week. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs."
But Microsoft didn't describe "what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined..."
But maybe it's off for a good reason...besides the fact that it can access any apps installed on your system: In a support document, Microsoft admits that features like Copilot Actions introduce " novel security risks ." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions. The result? " Unintended actions like data exfiltration or malware installation ."
Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system. Microsoft's own warning hits hard: "We recommend that you only enable this feature if you understand the security implications." When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. ["This feature is still being tested and may impact the performance or security of your device."]
Even with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders... Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity.
But you are still giving an AI system that can "hallucinate and produce unexpected outputs" (Microsoft's words, not mine) full access to your personal files.
To address this, Ars Technica notes, Microsoft added this helpful warning to its support document this week. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs."
But Microsoft didn't describe "what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined..."
urg (Score:5, Insightful)
Tech companies: Security is such a huge priority that we'll load our software with power and memory wasting countermeasures that annoy the hell out of you. You may hate being that using two-factor authentication requires you to grab your phone for a text message before you log into anything, but it's all in the name of security! You should learn with it, it's all for the best!
Also tech companies: It's so important to lard our work with generative AI features that a little security compromise is fine!
Re:urg (Score:5, Interesting)
because.. (Score:5, Informative)
AI is not magic (Score:2, Informative)
AI is the broadest category, not the finest. AI means the chess computer, it means ELIZA, it means the next word prediction on your phone, it means generative AI, it means expert systems, it means Markov chains ... you're thinking about AGI.
Re: AI is not magic (Score:2)
Sounds like an escape clause. (Score:5, Insightful)
"Using Windows AI may cause data theft and malware risks, so don't come to us when it happens, you were warned (not that you had the choice to disable the AI...)"
Re: (Score:2)
it's definitely legalese.
Re:Sounds like an escape clause. (Score:5, Insightful)
Not any different from using Windows without AI.
Re: (Score:3)
Au contraire! For attackers, this is an exciting new feature that will offer endless new functionality and may finally prevent users from sabotaging their efforts by actually having a clue and being careful.
Re:Sounds like an escape clause. (Score:4, Informative)
(not that you had the choice to disable the AI...)
Meanwhile the first paragraph of the summary above includes
"and is off by default, requiring admin access to set it up."
Along with several warnings from Microsoft to only activate this if you really want to take the risk.
If I wanted to use this "feature" I'd want a standalone machine with no actual real data, but I don't so it's moot anyway.
Re:Sounds like an escape clause. (Score:5, Insightful)
Re: (Score:2)
If they do that they'll have to take responsibility for the consequences when shit happens, and it's not as though they can weasel out with "we didn't know" - this warning makes it clear that they are aware of what can go wrong.
Re: (Score:2)
If they do that they'll have to take responsibility for the consequences when shit happens...
Yup, just like most companies today face meaningful punishments when they have security bre-- BWAHAHAHAHAHA. Sorry, I couldn't continue that with a straight face.
Re: (Score:1)
"Using Windows may cause data theft and malware risks, so don't come to us when it happens, you were warned"
FTFY
Obvious answer (Score:5, Informative)
Obviously, uninstall Windows. Because one can't uninstall AI crap-ware MS Recall and MS Co-pilot.
Re:Obvious answer (Score:5, Informative)
An advice that is almost 30 years late, but welcome.
Re: (Score:2)
Obviously, uninstall Windows. Because one can't uninstall AI crap-ware MS Recall and MS Co-pilot.
I nuked my nice fast Windows 10 laptop that was no longer eligible for W11 update. Now it works flawlessly and fast. My new W11 laptop is fast, but W11 is buggy. This operating system feels and acts like early beta.
Maybe Microsoft could think about getting W11 to function first instead of providing users with roulette wheel malware.
Re: (Score:2)
The days of when a programmer could instantly get respect by saying "I work for Microsoft" are long gone. That's not to say everyone there is dumb, obviously not, but there's a lot of chaff around the wheat if you know what I mean. And structurally the company has not been set up for delivering quality products since they got rid of their testers 25 years ago. The feeling was, well it's all distributed by the internet so we can just patch it if the original engineer doesn't catch all of his own bugs. On top of the fact that AI-generated code has been pushed hard in recent years, the trend is not going in the direction you want.
At a really, really fundamental level the comp/promotion system at Microsoft is broken for quality software. You get promoted for "impact." Fixing bugs is not considered impact.
What is interesting to me is that my people in emergency comms are going to fail at some point. And they wouldn't fail if they were using either Linux or MacOS.
But there is another comm mode in use that is made only for Windows. I'm going to suggest that it be abandoned. Another Cassandra moment for me - the agencys heads will asplode when I do that.
Interesting times (Score:5, Interesting)
And dangerous for dumb people. Remember that "malware installation" usually means lateral movement and then compromise of the whole organization these days, because AD security sucks and then it is often misconfigured on top of that.
I would not trust this on a hardened Linux with network access. Windows? Do you want to get hacked?
Also note that they only put that in there because the lawyers told them they had to. This means this technology represents a fundamental and systematic novel risk they do NOT have under control. The usual limitations of warranty are not enough. Providing or using this feature will likely fall under gross negligence. Microsoft can get out of the resulting liability by explicitly warning its users that this is not a feature you can use securely and that result quality is very much not ensured. Or in other words, this is a very dangerous toy, not a professional product.
That they feel they need to add a warning with this uncommon level of clarity is very telling. I am sure all the MS fans and all the nil wits will still ignore it. So let me add this, because it will be relevant: We told you so.
Re: (Score:2)
Also note that they only put that in there because the lawyers told them they had to. This means this technology represents a fundamental and systematic novel risk they do NOT have under control.
Exactly. This means that an average user taking reasonable precautions would be impacted. No dumb users falling for exploits are necessary to exploit this. What a mess.
In the age of widespread AI with superuser permissions we need to create a secure and authenticated prompt. There now must be a difference between what used actually typed in as a query/prompt and text that AI may have across that contains query/prompt. This means security redesign.
Re: (Score:2)
Exactly. This means that an average user taking reasonable precautions would be impacted. No dumb users falling for exploits are necessary to exploit this. What a mess.
Indeed. This is about average users behaving in reasonable ways not being able to be reliably secure anymore.
Re: Interesting times (Score:3)
I'd be less concerned about the malware on windows knowing the typical home user. It's just yet another method, joining all the infected game hacks (or non-functional ones that are just malware claiming to be Roblox or Minecraft mods) , infected "useful" plug in with the same story, MS office documents that still represent about a third of all pfishing, or Microsoft letting .zip files run as executables if you just renamed the exe BINARY. The malware is something people already do, so it is not surprising.
Re: (Score:2)
The home-user does not really matter in this. It is corporate users that provide basically all MS profits.
It's far worse than M$ admit (Score:5, Insightful)
The LLM can and will make a mess all on its own. There is no need for external malice to get screwed by the LLM.
Re: (Score:3)
Well, MS made sure to live up to everybody's expectations by not only making this a security mess, but also a reliability mess! So much quality. So much winning. So much improvement.
Lawyers must be rubbing their hands ... (Score:2)
in anticipation of fees to be earned from class action lawsuits. Win or lose these parasites will come out on top.
Re: (Score:3)
With these disclaimers? Doubtful.
Maybe they trained (Score:1)
Maybe they trained Copilot on Microsoft Bob data. If you enter your password wrong three times it offers to change it for you so you can get in.
Feature Brings Data Theft and Malware Risks, and (Score:5, Insightful)
As a long time Linux user I can't tell if they talking about AI or just regular Windows.
Re: (Score:3)
Re:Feature Brings Data Theft and Malware Risks, an (Score:4, Informative)
Linux is customizable and has a huge variety of distributions with different focus. Of course it gets support for different AI systems (and already has), but you choose what you want and what you don't. The comparison "Linux will follow" rarely makes sense, because the difference is not how it is built right now or will be built together, but the philosophy of how users are allowed to customize it.
Re: (Score:2)
You misunderstand: I was being a smug neckbeard about Linux.
With that aside: yeah well maybe GNOME. And systemd! Bring on the hate whoop! whoop!
But seriously though the system is in my control. I have a GPU and pytorch installed. I can install an LLM coding bot if I want (I don't). I can use any of the pure local ones, etc etc.
Re: (Score:2)
Gnome? What's that? I do remember throwing off systemd because it gave me problems in the first hour of having it on a system. If it looks like crap and smells like crap ...
So far I have noticed zero disadvantages of my approach.
Re: (Score:2)
With that aside: yeah well maybe GNOME.
I don't think you are using "maybe" correctly here. Maybe is used to indicate there is some level of doubt.
So what else is new...lol (Score:2)
Isn't this an internal problem? (Score:5, Insightful)
Why is this even being discussed in public? If a pc is running windows, that means all the hardware and software now belongs to microsoft, along with any data typed in, results of wifi scans, audio and video from any peripherals, everything from attached storage. It all belongs to microsoft and will all be stored, used and sold by microsoft as it sees fit.
AI stealing data is just an inefficiency as it is data duplication in the silos.
Re: (Score:1)
Not in Europe. They could do it, but then their executives should probably never visit the EU again and they should close all dependencies here.
LLMs don't hallucinate (Score:5, Funny)
Nope, LLMs don't hallucinate. Their algorithmic output is deterministic and just gives the output based on the input, the training data, and various settings and configuration values.
So (repeating myself): AI don't hallucinate from time to time. Every answer they ever give is equally made up. What people call hallucinations are merely cases where the made up answers are ostensibly wrong.
Any AI may apologise when it's pointed out that their answer was incorrect, even if in fact, it happened to be correct.
Re: (Score:1)
Your brain is also deterministic, if you aren't religious. Hallucinations just describe consistent plausible but wrong results. A single strange idea crossing your mind is quickly dismissed just like a wrong word in an LLM output. But sometimes your brain or the LLM may produce a chain of wrong but consistent results, which are harder to detect as wrong. They hallucinate.
Re: (Score:2)
Agree about the meaning of "hallucinate" in this context, but...
You can't be sure your brain is deterministic. It may well have features that operate at the quantum level, with the implied genuine uncertainty. Transistors are normally scaled to avoid that problem. This isn't exactly "free will" in any normal sense, but it *is* non-deterministic behavior, at least as far as we can tell. (Yeah, superdeterminism is a valid interpretation of quantum theory, and so is the multi-world interpretation and a few
Re: (Score:2)
Just as a reminder, LLMs get randomized to better resemble a person. People are not deterministic in any way (unless you are a quasi-religious physicalist fuckup that mistakes religion for Science), and hence LLMs are made to not be either.
Re: (Score:2)
Re: (Score:2)
Yes.
Business Fuel. (Score:2)
Microsoft Warns Its Windows AI Feature Brings Data Theft and Malware Risks, and 'Occasionally May Hallucinate'.
With friends like that behind the corporate firewall, where’s Pablo Escobar the HR Director when you need him.
Seriously. A sneeze-activated cocaine dispenser on the CEOs desk sounds better for business than that shit. And ironically is what currently works to keep their stock price higher than giraffe pussy.
Imagine if you were hiring, (Score:5, Funny)
and on some candidate resume, it says that in their professional capacity they have/will commit some data theft, inject malicious code into the software they are operating, and occasionally hallucinate on the job.
You hiring this person?
Re: (Score:2)
Are they cheap and do not talk back? Dream employee!
Re: (Score:2)
Funny, but ironically, if humans were brutally honest on their resumes, they'd had to admit exactly that.
Sounds like a replay of the furor over VBA (Score:3)
When micorsuck introduced VBA, touting it as a panacea, Dvorak said it would lead to increasingly difficult to avoid exploits.
Here we go again.
Microsuck does not care how about shitty their OS security is because everyone is stuck with it.
Re: (Score:2)
Well, there is a "late stage monopoly effect". It is when the product gets so bad that you cannot base your business on it anymore. I guess MS is close to that point now.
Re: (Score:2)
After reading the post about them finally admitting w11's core functions are hosed, i certainly agree.
Re: (Score:2)
Then add Azure getting hacked and being massively insecure, several times now. Add that many people are looking into leaving o365 because MS blocked a user for political reasons.
MS is done for. They just will take quite a while dying. But there is no realistic chance they can turn things around anymore.
...asked Microsoft to provide these details... (Score:2)
What is Microsoft even doing? (Score:2)
Microsoft ended support for Windows 10, and Windows 11 is riddled with half-assed AI features that even Microsoft is warning about. What are they even doing at this point?
Microsoft created a monster (Score:2)
Cut the head off the snake (Score:3)
Sure. (Score:2)
And yet, they're going to FORCE you into it... (Score:2)
I re-state a question I have asked before:
Just how much more abuse will their customers take? When will users of Microsoft (or Apple) operating systems finally say ENOUGH? How much is too much? How far is too far? When will people finally say "all we wanted was a sold reliable OPERATING SYSTEM!" and turn their backs on these corporate dictators?
YOU own a computer and you want an OS for it... but THEY say when you MUST update, what features you MUST accept, where your files will be stored, who can see those
In layman's terms (Score:2)
"We know it sucks, we know it's bad for you, we know you don't want it. We're doing it anyway."
What happened to the comments..gone !! (Score:2)