Employees Are the New Hackers: 1Password Warns AI Use Is Breaking Corporate Security (nerds.xyz)

Slashdot reader BrianFagioli writes: Password manager 1Password's 2025 Annual Report: The Access-Trust Gap exposes how everyday employees are becoming accidental hackers in the AI era. The company's data shows that 73% of workers are encouraged to use AI tools, yet more than a third admit they do not always follow corporate policies. Many employees are feeding sensitive information into large language models or using unapproved AI apps to get work done, creating what 1Password calls "Shadow AI." At the same time, traditional defenses like single sign-on (SSO) and mobile device management (MDM) are failing to keep pace, leaving gaps in visibility and control.

The report warns that corporate security is being undermined from within. More than half of employees have installed software without IT approval, two-thirds still use weak passwords, and 38% have accessed accounts at previous employers. Despite rising enthusiasm for passkeys and passwordless authentication, 1Password says most organizations still depend on outdated systems that were never built for cloud-native, AI-driven work. The result is a growing "Access-Trust Gap" that could allow AI chaos and employee shortcuts to dismantle enterprise security from the inside.


Comments:
  • by oldgraybeard ( 2939809 ) on Saturday November 01, 2025 @05:00PM (#65766550)
    Well Duh!
    • Re: (Score:2, Funny)

      by FudRucker ( 866063 )
      Good, I hope it destroys the kleptocratic capitalism that infected the USA (and the rest of the world), turning it into a neo-fascist wage-slave system (slavery 2.0), so the world can move to something that values humanity over profits
    • by ffkom ( 3519199 )
      Given how many vibe-coded "MCP" services are being deployed these days without any thoughts on security being spent, it won't even take any employee interaction to turn "AI Use" into the next big security disaster.
    • by thegarbz ( 1787294 ) on Sunday November 02, 2025 @05:56AM (#65767218)

      Well Duh!

      Not at all. There are AI tools that obey corporate security requirements. For example, if I make a CoPilot agent to search all the files in a document library, it will refuse to open those classified as Secret, as well as those left unclassified. That's why so many companies push CoPilot as the main AI tool while blocking ChatGPT, even though underneath they both run virtually the same models.

      The downside is CoPilot is fucking rubbish. Can't comment on ChatGPT but I suspect it's no better.
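      The label-based gating described above can be sketched as a simple guard. This is an illustrative sketch only: the label names, file names, and the in-memory lookup are hypothetical stand-ins for whatever the document platform actually reports.

```python
# Illustrative sketch of label-gated file access for an AI agent.
# Label names and the in-memory "metadata store" are hypothetical;
# a real deployment reads labels from the document management platform.
ALLOWED_LABELS = {"Public", "Internal"}

FILE_LABELS = {
    "roadmap.docx": "Secret",
    "handbook.pdf": "Internal",
    "notes.txt": None,  # unclassified
}

def agent_can_open(filename):
    """Refuse files labelled Secret and files with no label at all."""
    label = FILE_LABELS.get(filename)
    return label in ALLOWED_LABELS

print([f for f in FILE_LABELS if agent_can_open(f)])  # ['handbook.pdf']
```

      The key design choice is that an *absent* label is treated the same as a forbidden one, so unclassified files fail closed rather than open.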

      • Like everything cloud! "obey corporate security requirements" until they don't! Or more likely, just pretended to!
        • You pretend like the cloud treats you (the person who uses it for free) the same as a mega corp who pays millions of dollars and signs a contract in the presence of lawyers and procurement teams. Don't do that. You and the corporate world are not the same.

  • Fix the policy (Score:4, Insightful)

    by registrations_suck ( 1075251 ) on Saturday November 01, 2025 @05:23PM (#65766586)

    If your stupid policies didn't interfere with me doing my work, I wouldn't have to break them.

    1). Windows sucks. You give me a windows box. So, I use my own non-windows machine to do my work.

    2). You gave me a laptop with a small screen. I can't deal with a small screen. So I use my own laptop with a big screen.

    3). You don't approve the software I want to use. So I buy my own software and run it on my own machine.

    4). I don't want to be spied upon. If you insist on monitoring the machine you issue me, I just won't use it.

    See how that works?

    When your policies piss me off, I will ignore them whenever I'm able to do so. If that creates security problems for you, I don't much care.

    Yes, that makes me a bad employee. Tough shit.

    • Fix your attitude. (Score:5, Informative)

      by Anonymous Coward on Saturday November 01, 2025 @05:49PM (#65766612)

      TL;DR - 6) You fail to explain your job requirements while demonstrating great immaturity. And you're supposed to be a trusted employee?

      Self-replicating, auto-installing malware isn't something the Vulcan-speaking IT dork pulled out of his AD&D book. And your sneaker-netting company-sensitive information around won't be viewed kindly when you are the source of the corporate data leak. The company that used to have a competitive advantage before you leaked it won't appreciate your attitude either.

      See how that doesn't work when you're the one who broke every IT policy resulting in security compliance invalidation for a key customer, all because you decided to use whatever hardware and software you wanted, in whatever manner you want?

      Oh, you're more than capable of maintaining security? OK. Let's pretend your attitude about IT compliance extends to the new idiot Marketing Director, then. User arrogance is infectious, especially among the incapable. And that doesn't make you merely a bad employee. It makes you a fucking liability. You sum up every reason InfoSec policy needs to be zero-tolerance, enforced electronically and audited regularly.

      • I didn't claim to be capable of managing security, and I admitted to being a bad employee. What more do you want?

      • by test321 ( 8891681 ) on Saturday November 01, 2025 @08:49PM (#65766826)

        The thing is that this attitude is commonplace, so if you care about security then you have to factor in that it happens. And the harder you hit people on the head for violating the rules, the harder they will try to evade them. If you want compliance, you have to make sensible rules. Also, most points cited above had nothing to do with security.

        1) IT could give you better laptops, or larger screens. But that would require a different contract and cost money. Note that employees are willing to put up their own money to get it. They love the job; they're even willing to pay for the privilege of working here. IT castrates them, so they bypass the rules.

        2) IT could hand out linux laptops. With appropriate choices, that wouldn't make the company less secure. They could choose a supported RHEL, SUSE, or Ubuntu. That would be at least as safe as Microsoft, most likely SAFER than Microsoft. The particular distro maybe wouldn't be one's favourite, but linux fans would be willing to accept that much more readily than Microsoft. The reason it doesn't happen isn't security. It's that it would give them work, as they're more used to managing a single contract and clicking some options in a Microsoft administration tool. Also, there could be a competence problem. Some small IT teams might have little interest in, or no clue at all about, FOSS.

        3) The other reason for Windows is the ability to spy upon people; or at least, employees have this perception, and this drives them elsewhere. Spying upon people isn't about security. Things like "recall" aren't about security. MS Teams telling my boss how many messages I answer and how fast I answer them isn't about security.

        4) A number of FOSS programs are just fine for the job. Using e.g. Inkscape or LibreOffice doesn't compromise your company's security. Or if your company was previously using AutoCAD and has now subscribed to SolidWorks, many people will still want to use the older software. What IT is doing in trying to save pennies is driving people to install cracked software on private laptops.

        All these security problems are the fault of dysfunctional IT policies from incompetent IT departments that succumb to the ease of using Microsoft. Provide linux laptops! Run a poll to see what software people want! Promote FOSS; provide laptops with a bunch of it by default! That would give IT some WORK, and that's why they don't like it.

        • Yes, this was exactly what I was trying to point out with my candor. Thanks for getting it.

          Your point about sensible rules is particularly salient. Here is a real example, concerning the workplace of someone I know very well.

          It runs very realistic phishing expeditions. My associate, a hard-working and diligent employee, failed a third time and they docked her bonus.

          Her boss's advice, and I shit you not, was to avoid this problem by not responding to email at all. That's the official advice!!

          So why even bother having email?

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            Yes, this was exactly what I was trying to point out with my candor. Thanks for getting it.

            Your point about sensible rules is particularly salient. Here is a real example, concerning the workplace of someone I know very well.

            It runs very realistic phishing expeditions. My associate, a hard working and diligent employee, failed a third time and they docked her bonus.

            Her boss's advice, and I shit you not, was to avoid this problem by not responding to email at all. That's the official advice!!

            So why even bother having email?

            It's the dumbest fucking thing imaginable, especially when "the studies" show that phishing expeditions are not actually effective against real attacks. So at the end of the day, these very realistic games are just a mechanism for docking bonuses.

            She does not respond to emails anymore, and has had no problems since.

            Employees like that are the reason entire companies become forced (often by mandate/court order) to start phishing their own employees, who won't GET it until they're standing behind all their other co-workers in the unemployment line. You know another way to get a well-earned bonus? Pay attention and do your job. ALL of it. Phishing isn't hard to identify. I'm betting the pass/fail rate wasn't anywhere near enough to justify quitting the job because of "unf

            • She showed me the phishing emails in question. They were legitimately "tough". If I weren't super awesome (yeah, I said it!), I might have been tricked too. Her particular position was such that they were legitimately tougher than they would have been for a typical employee, too.

              But she solved the problem. Now she just doesn't respond to emails. Everything is a phone call. And her dysfunctional organization is fine with that, as that is apparently what most employees are doing - send email, but never respond to it.

    • A small screen can be an ADA issue, and they may be forced to get you a bigger one if you push it.

      • Yeah, I considered it. I definitely would "qualify".

        And I would have put out the effort if it wouldn't have just gotten me a different windows machine.

    • by vbdasc ( 146051 )

      Well, TBH, if I were your manager, I'd prefer to hire someone perhaps less capable but more disciplined than to keep you on the roster. And IMHO, most managers would think exactly the same.

    • When your policies piss me off, I will ignore them whenever I'm able to do so. If that creates security problems for you, I don't much care.

      Yes you're a keyboard warrior. On the flip side most people will care when they have a mortgage and mouths to feed instead of just running down the clock on a terminal illness.

      If you are a corporate security problem, that is easily fixed by the corporation in a way that doesn't involve changing any of its policies.

    • by ddtmm ( 549094 )
      You're fired.
  • Wrong title. (Score:5, Insightful)

    by Gravis Zero ( 934156 ) on Saturday November 01, 2025 @05:28PM (#65766592)

    A more honest title would be:

    Security Experts Find that Corporate AI Use Puts Corporations At Risk

    Humans are going to be lazy; this is what humans do. If you don't want them to feed sensitive information into AIs, then...
    A) Stop making it possible.
    B) Stop telling them that AI makes things easier.

    Touting AI as something that makes work easier while saying it's a "no-no" to put certain information into it is just begging them to violate that rule. Managers/executives who are encouraging the use of AI have nobody to blame but themselves.

    • by Anonymous Coward

      C) Provide an in house AI

      • I might be willing to use it, assuming I trust the corporation I work for more than Google / OpenAI / Anthropic (I don't). Oh, and also if it was at least as good as those 3.
        • by Anonymous Coward

          Why wouldn't you trust your corporation more than Google with code you write for your corporation?

      • Providing shitty Copilot only makes people look elsewhere. Once you've promoted AI, you've opened Pandora's box. People want the one they like most, and they will use it whether it's the one provided by the company or not. The only way is to subscribe to at minimum the top 3 options after running an internal poll.

    • It's called a "win-win" scenario: either you make your work go faster, or they can fire you. Either way they win.
    • by gweihir ( 88907 )

      Indeed. Which is why some companies and organizations have a complete prohibition on using LLMs and that gets monitored and enforced.

  • InfoSec 101. (Score:5, Insightful)

    by geekmux ( 1040042 ) on Saturday November 01, 2025 @05:30PM (#65766594)

    More than half of employees have installed software without IT approval, two-thirds still use weak passwords, and 38% have accessed accounts at previous employers.

    Long ago we understood that no employee should have the local rights (no local admin, restricted software policy, etc.) to install software, precisely to prevent that problem from sidestepping IT approval. This includes IT staff logged in to desktop accounts for day-to-day work. And who can forget self-executing, auto-installing malware. It ain't just the click junkie behind the keyboard we're worried about anymore. 'Nuff said.

    A couple of decades' worth of Top 20 Worst Passwords lists never changing should have driven home the problem of weak users and shit passwords, resulting in countless hacks. We solved that problem long ago with domain-level security policy mandating complex passwords for all accounts, with additional complexity required for all administrative accounts.

    If accessing accounts at previous employers is in fact prohibited by standing policy, then 38% of employees should have been flagged in an employee internet anomaly report and already been put on a PIP.

    This isn't rocket science anymore. The fuck are we even debating here in the year 2025.

    • Common sense died years ago. You are going to have a hard time accepting what passes for common sense today, but I'll try. Any MBA can tell you you're wasting your time on security and passwordy stuff ... moving fast and breaking things is the way to go, grandpa; just ignore the risks and focus on the rewards.

      There. Now you know how people, managers, employees think today. No need to thank me. Nobody else has. They tend to become irate when I tell them about the risks. True story. Offtopic, but I'm not popu
    • While I don't disagree with you, there is always a way around things, and employees will find them. For example, an employer will not pay for a useful tool developers like to use because it saves time (AI or non-AI), because some bean counter said you can use a chain of free open source software to accomplish the same thing (with 3x the number of steps/clicks, but hey, it's free, right?) or some corporate subscription which yields substandard results. This leads to employees finding security holes and never disclosing them.
      • So, if an employer values compliance over productivity, they should make it a criterion in their performance reviews, and the employee who produces only a third of the output using company-approved tools should get a much bigger bonus than the employee who produced 3x as much by getting creative and paying for their own tools. As long as employers value productivity over policy compliance, good (as per performance-eval criteria) employees will always find ways around the policies.

        The way it is supposed to work

    • by tlhIngan ( 30335 )

      Except you can install programs without admin rights - Chrome, for example, was one of the best-known cases. It took a while for Chrome to get a version that could be installed system-wide (with admin rights). And Microsoft does it as well - Visual Studio Code can be installed with both user and system rights.

      And nevermind portable apps - programs that run from USB sticks or such which require no installation at all - run the EXE and go.

        Except you can install programs without admin rights - Chrome, for example, was one of the best-known cases. It took a while for Chrome to get a version that could be installed system-wide (with admin rights). And Microsoft does it as well - Visual Studio Code can be installed with both user and system rights.

        And nevermind portable apps - programs that run from USB sticks or such which require no installation at all - run the EXE and go.

        No system gets USB boot rights. No standard user gets USB media rights. A limited few get read-only. Power users authorized for more are scrutinized to the point of being part-time InfoSec staff, and audited as such.

        As far as unauthorized executables go, that's down to software whitelisting at the anti-malware level and ruthless sandboxing of all software downloads. Sandbox release requires IT review and approval.

        The more "well-known" examples will pop up on the InfoSec radar like a prairie dog on cocaine

    • by tlhIngan ( 30335 )

      Password complexity rules don't mean a single thing - because changing passwords means people come up with a system. I can give you a system with capitals, lowercase, numbers and symbols: the month name (January, February, March, etc.), the symbol above the month number (like exclamation mark for January, underscore for November, and plus for December), and the 2- or 4-digit year.

      Heck, I even knew of people who simply cycled their passwords over a week to get around the "can't use the last 5 passwords" restriction.

        Password complexity rules don't mean a single thing - because changing passwords means people come up with a system. I can give you a system with capitals, lowercase, numbers and symbols: the month name (January, February, March, etc.), the symbol above the month number (like exclamation mark for January, underscore for November, and plus for December), and the 2- or 4-digit year.

        Heck, I even knew of people who simply cycled the passwords over a week to get around the "can't use the last 5 passwords" restriction. They'd add a 1, then the next day change it to 2, and by the end of the week change their password back to the original.

        It's basically a cat and mouse game, and policies that may try to enhance security can fall flat and instead decrease security because people find a template for making passwords.

        I actually never said anything about changing passwords. I've managed that mandated policy for years and years as well, and seen similar results. I could probably still guess the passwords certain executives are using today (incorporating the year, of course) because of their known password "systems". You're right. It does create insecurity via security.

        I prefer mandating complexity and suggesting longer, more easily remembered passphrases now, but without enforcing a hard password-change policy. Almost
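        The template problem described above can be sketched in a few lines. The symbol mapping below follows the shifted-digit keys on a US keyboard and is illustrative, not the exact scheme the poster used; the point is that a template satisfying every complexity rule still produces a tiny search space.

```python
# Sketch of the "month name + symbol + year" password template.
# Symbol mapping is a hypothetical shifted-digit scheme (US keyboard).
SHIFT_SYMBOLS = ")!@#$%^&*("  # symbols above digits 0-9
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def template_passwords(year):
    """Enumerate every password the template can produce for one year."""
    for i, month in enumerate(MONTHS, start=1):
        symbol = SHIFT_SYMBOLS[i % 10]  # symbol "above" the month number
        for y in (str(year), str(year)[-2:]):  # 4-digit or 2-digit year
            yield f"{month}{symbol}{y}"

candidates = list(template_passwords(2025))
# Every candidate has upper case, lower case, a symbol, and digits,
# so each one passes typical complexity rules...
print(len(candidates))  # 24 guesses cover the entire year
```

        An attacker who learns (or guesses) the template needs two dozen tries per account per year, regardless of how strict the complexity policy looks on paper.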

  • Do you know how many interstitials I have to deal with when trying to log into a corp-approved app? Five, Bob. Five interstitials. And this is for every app, every day. And sometimes I have to type a password. Or generate a code. Do you know how impressed my manager will be when I tell him my work isn't done because of all the time I spent dealing with CorpSec's hurdles? That's right, Bob, not at all. So do you think I give a damn, Bob, about unapproved AI tools? No, I do not.

    • If you are typing a password in 2025, something is terribly wrong or outdated in your company's security. If passwords must be used, they should be in some type of password manager. For corporate apps, SSO should be the default. Add that to your reauthentication fatigue and it may appear the CISO should be fired from a cannon.
      • Should be.
        Could be.
        Would be.

        And yet, is not.

        The password hell at my company is unbelievable.

        And so yes, I will ignore and bypass policies every chance I get. I'm the least of their problems.

      • YOU are terribly wrong if you think passwords are outdated security. It's always been the combo of something you know and something you have.

  • Won't using 1Password - as opposed to a local credential store - also risk compromising the enterprise?

    My impression of 1Password is that it is a centralised store of encrypted passwords - isn't that a hacker magnet? Hackers could obtain the encrypted store and attempt to decrypt it at leisure. Or hack 1Password's communication interfaces and endpoints.

    I'd be much more comfortable if all 1Password did was enable the syncing of credential stores directly between devices, never keeping a copy. That way I

  • I had data leak once (Score:5, Interesting)

    by gweihir ( 88907 ) on Saturday November 01, 2025 @10:48PM (#65766942)

    I tested whether ChatGPT could solve a specific question for an exam a few days later. It could not. A few days after that, it suddenly could, and that was down to the students' accounts. For the exam, I was prepared for something like that.

    But this was a key experience: Never give anything confidential or proprietary to an LLM under any circumstances. It may well show up in answers to other people a few days later.

  • So in a nutshell, just like cloud should be local, AI should be as well. And for the same reasons.

  • by unami ( 1042872 ) on Sunday November 02, 2025 @08:00AM (#65767310)
    IT requires it; I grudgingly accept it. But having to reach for a second, battery-powered device, usually every time it's urgent, and then dealing with microfuck authenticator that sometimes sends the notification and sometimes not, is frustrating and also produces quite some cost in lost labor every day across hundreds of employees. Nobody adds that up and compares it to the potential cost of a big cybersecurity emergency every few years. But I bet security wouldn't always come out as the cheaper option.
  • More than half of employees have installed software without IT approval, two-thirds still use weak passwords, and 38% have accessed accounts at previous employers

    What is a "weak" password? They probably mean passwords lacking high complexity - upper and lower case characters, symbols, numbers, and the like. Even NIST now recognizes that such complexity rules do not make passwords stronger. The only thing that makes a password stronger is its length.

    But in reality, passwords themselves are the weak link. The minute a human has to remember some secret, your security is compromised. Humans are the weak link.
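    The length-beats-complexity point can be made concrete with a back-of-the-envelope entropy calculation. This sketch assumes characters are chosen uniformly at random, which is the best case; human-chosen passwords are weaker still.

```python
import math

def entropy_bits(length, charset_size):
    """Best-case entropy of a random password: length * log2(charset)."""
    return length * math.log2(charset_size)

# An 8-character password drawn from all 94 printable ASCII characters...
complex_short = entropy_bits(8, 94)
# ...versus a 20-character passphrase of lowercase letters and spaces.
long_simple = entropy_bits(20, 27)

print(round(complex_short, 1))  # 52.4 bits
print(round(long_simple, 1))    # 95.1 bits
```

    The longer passphrase wins by over 40 bits despite using a far smaller character set, which is the core of the NIST position: length dominates, complexity rules mostly just make passwords harder to remember.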

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...