
Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."


  • by rsilvergun ( 571051 ) on Wednesday February 25, 2026 @06:31PM (#66010406)
    Then you are training your replacement. Now maybe that'll work out and maybe it won't but that's what they're trying to do.
    • I completely agree.

      A long time ago I mentioned that the Microsoft Recall feature that takes screenshots of everything you do on your computer is just training data for their Copilot AI.

      Now with organizations and executives forcing AI adoption and mandating it as part of performance reviews, they generally want everybody to train their AI replacements so they can start the next wave of layoffs and the next economic crisis, bigger than anything we've seen in the past.

      • That's the number you get if you include everyone looking for work, everyone who gave up looking for work, and finally everyone who is technically employed but cannot afford to feed and clothe themselves.

        I suspect what's going to happen is we're going to hit 25 to 30% real unemployment in the next several years with something like 50 to 70% functional unemployment.

        When that happens, we either become a brutal dictatorship like China, or we start World War 3. I think we are probably go
      • "Train Your Own Replacement" https://slashdot.org/story/04/... [slashdot.org] "Yahoo reports on how some employers are asking the workers they're laying off to train their foreign replacements - having them dig their own unemployment graves. 'Almost one in five information technology workers has lost a job or knows someone who lost a job after training a foreign worker, according to a new survey by the Washington Alliance of Technology Workers.' It looks like a real dilemma where if you refuse to hire your replacement, yo

        • by ls671 ( 1122017 )

          It looks like a real dilemma where if you refuse to hire your replacement, you are fired without severance and are ineligible for unemployment benefits,

          Just train them with false information.

    • by Brain-Fu ( 1274756 ) on Wednesday February 25, 2026 @07:03PM (#66010468) Homepage Journal

      More realistically, they believe that using AI means "getting more work done faster." They take that as gospel truth with no qualifiers.

      So, if you aren't using AI then clearly you are wasting company time and money, and hence shouldn't be promoted and maybe should be "transitioned out."

      But they are making the obvious mistake of turning a metric into a goal. Employees will game the system. People will "engage with AI" to hit their numbers without using it in a useful way that saves time, especially if they are working on projects which, due to the specifics of the project, AI can't help with.

      So, all this will really do is eliminate the honest and talented employees in favor of ones who can't succeed without AI (due to lack of talent and knowledge), and/or are willing to use it deceptively to advance their position.

      Are those the kind of people you want working for you? For big corporations, yes, since those are the kind of people who are most similar to corporate leadership in terms of talent and ethics.

      • by dgatwood ( 11270 ) on Wednesday February 25, 2026 @07:33PM (#66010518) Homepage Journal

        So, all this will really do is eliminate the honest and talented employees in favor of ones who can't succeed without AI (due to lack of talent and knowledge), and/or are willing to use it deceptively to advance their position.

        You forgot people who, rather than use AI, find ways to jam it into other people's workflows, so that those other teams get bogged down in slop, while the teams doing it get credit for expanding the use of AI. :-)

          • by dgatwood ( 11270 )

            You scare me

            I'm not even saying that they are intentionally trying to destroy the other teams' productivity. They're asked to increase the use of AI, and they do so by finding ways to put AI in front of more people. The fact that their team looks better and other teams look worse may just be a happy accident.

            But the point I was trying to make is that the system of rewards is set up to reward them for putting AI in things — usually without regard to whether it is actually beneficial or harmful — so they pu

      • by troff ( 529250 )

        > They take that as gospel truth with no qualifiers.

        What still stuns me is: surely there has to be SOME beancounter somewhere ACTUALLY COUNTING THE BEANS...

        • by sjames ( 1099 )

          Or knows that when you're getting nearly free beans in cargo container sized loads, sooner or later a bill will arrive and it'll be eye watering.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          I believe it's the same short-term mentality that's ruining everything else. CEO can show a graph of how fast AI adoption is rising in the company at a shareholder meeting or investor presentation and say that this means their company is forward-thinking and embraces cutting-edge technology and more corporate speak, so you can totally invest more in us. Also, they, the CEO, have personally driven that AI adoption and maybe, dear shareholders, that deserves a little bonus, or an increase to an already obscen

        • As if the productivity of a programmer is so easily measured.

          Even if you just compare two programmers across a month of working on a project, and one of them is shipping significantly more features than the other: if the other is writing more maintainable code, and in the end can keep up his pace for longer while having to fix fewer and shallower bugs, who's then the more productive?

          And how productive is the senior developer who's doing more code reviews and is also a bit of an architect on the project?

          Progra

      • by scumdamn ( 82357 ) on Wednesday February 25, 2026 @09:18PM (#66010654)
        I use it as an advanced IntelliSense, but I ignore most of what it suggests because it doesn't follow our coding style or is wrong. I'm still showing as compliant in the metrics.
      • The reason for the disparity in belief here is compound:

        - It's legitimately a skill you need to learn, a mix of technical writing and indeed programming. It's not just writing some quick text and boom magic you have an app. Don't believe the marketing bullshit.
        - Some large corps give their devs unlimited token use. Personal users have in 1 month what a FANG dev can use in 1 day. Personal users can't use it enough to learn it.
        - It's been getting better _rapidly_ and if you tried it 6 months ago, it's worth t

    • I don't actually see this happening. Yes, they do want to get more done with fewer people, but I don't think they're outright trying to replace their people.

      • If they are not outright trying to replace their staff with AI, it is only because they believe AI isn't ready yet.

        Nothing would please them more than being able to fire their entire staff and just command a virtual AI assistant to run their entire business for them, while they keep all that salary-money for themselves instead. This is 100% their goal.

        Any issues about poverty or not having anyone who can buy their products or what-not are political matters that will be solved in political forums, so they a

        • Luckily, those of us who actually use AI know that it will not be "ready" to replace humans for a long, long time. Yes, I do believe it will boost productivity. But AI does stupid stuff, *all* the time.

          • by dgatwood ( 11270 )

            Luckily, those of us who actually use AI, know that it will not be "ready" to replace humans for a long, long time. Yes, I do believe it will boost productivity. But AI does stupid stuff, *all* the time.

            That's okay. Just train another AI to look at the output of the first AI and rate it. Then, if the rating is too low, or if it breaks tests, roll it back. This is a necessary first step towards the AI learning to delete the tests before submitting like human coders do.

          • Luckily, those of us who actually use AI, know that it will not be "ready" to replace humans for a long, long time. Yes, I do believe it will boost productivity. But AI does stupid stuff, *all* the time.

            IMHO the issue here is similar to eating junk food. It's really difficult to justify paying a lot more for meals, or spending time cooking if fast food operations are optimised for selling £5 meals.
            The buyer will just not have the attention span for our explanation of how bad fast food is, and then all profitable operations need to become fast food.

            • I'd counter that it's more like eating dirt. Junk food is still made of...food. Many kinds of dirt do have some nutritional value, but it's not food.

              With AI-slop code, a small amount can actually work, sometimes. But if you get anywhere beyond proof-of-concept, AI falls flat without an expert human behind it, correcting all the many mistakes, large and small, that AI makes. I mean, GitHub Copilot sometimes forgets which file it's trying to edit, creating a brand new one rather than editing the active docume

              • by bazorg ( 911295 )

                The buyer will just not have attention span for your explanation of how bad eating dirt is and by the time they figure it out, you're out of a job and they are still tempted to double down on AI so that they don't lose face :(

                • Sure, there will be companies like that. They are generally owned by Private Equity. But for every company that thinks users will be happy to eat dirt, there are new companies starting up that know people want real food. These new companies will be happy to scoop up smart people who know that AI generates mostly slop.

    • by allo ( 1728082 )

      Using != Training. These are two completely different stages. And using AI also doesn't provide useful training data. The explanation from the AI is interesting (and obviously doesn't need further training); the question "Explain x" itself is boring.

  • So glad!!!! (Score:5, Insightful)

    by dskoll ( 99328 ) on Wednesday February 25, 2026 @06:45PM (#66010426) Homepage
    for (int i = 0; i <= 2000; i++) { printf("I am SO GLAD I am retired!\n"); }
  • Micro-management (Score:5, Interesting)

    by unixisc ( 2429386 ) on Wednesday February 25, 2026 @06:50PM (#66010434)

    I thought that micro-managing employees is a bad idea. While I can understand departments or teams having common workflow disciplines, forcing individual employees to use particular tools (which is what AI really is, or at least should be) is nothing short of micro-managing them. At which point, their managers are not doing their own work, which presumably does exist, aside from micro-managing the employees reporting to them.

    • by sjames ( 1099 )

      If they have time to micro-manage, they have time to do something more productive like scrub the toilets.

    • I thought that micro-managing employees is a bad idea.

      A lot of incompetent programmers got hired, the kind who not only can't produce elegant code, but also can't recognize it.

      The result was managers started micromanaging them.

      At these companies, the managers won, the rest of us suffer. At these companies, productivity is down (even if "agile velocity" is up), but they'd rather have control than productivity.

    • I get the distinct impression there are semi-secret lines of communication between CEOs - as in BBSes or Microsoft Teams groups or something - where they all encourage each other to do these ludicrous things and shore up support for things, from forced AI to RTO, that are objectively stupid.

      I think that's what you're seeing here. Not a single person in the real world can possibly have missed the sheer revulsion average people have over LLMs being shoved in their faces all the time, and how their companies get

  • Imbeciles (Score:5, Insightful)

    by ewhac ( 5844 ) on Wednesday February 25, 2026 @07:01PM (#66010462) Homepage Journal

    The argument proffered by management appears to boil down to nothing more than, "Well, everyone else is jumping off the Empire State Building, so what's your problem?"

    Also: These lemmings are in for a FAFO-fueled rude awakening when they discover all the slop they've checked in and shipped/deployed, being machine-generated, is uncopyrightable. "Um, actually... It's just like using a C compiler, transforming the programmer's intent to runnable code, so..." *SMACK!* Wrong. Compilers are deterministic. You can draw a straight line between the source code (and therefore the programmer's creative choices and intent) and the resulting binary, and given the same input, a compiler will generate the same output every time (indeed, if you do get different output, it's a bug). LLMs are anything but: they'll give you different answers depending on what you may or may not have asked before, the phase of the moon, and which vendor paid to have the LLM preferentially yield responses using their commercial framework.
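    To see the determinism point with something anyone can run (using CPython's bytecode compiler as a small-scale stand-in for a C compiler):

```python
# Sketch: compiling identical source twice yields byte-identical code objects.
import hashlib

src = "def f(x):\n    return x * 2\n"
h1 = hashlib.sha256(compile(src, "<mod>", "exec").co_code).hexdigest()
h2 = hashlib.sha256(compile(src, "<mod>", "exec").co_code).hexdigest()
print(h1 == h2)  # True: same input, same output, every time
```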

    In short, this is a bone-headed move, and when it comes time for the managers' performance reviews, I'd give a negative score to anyone imposing mandatory LLM use.

    • It's just like using a C compiler, transforming the programmer's intent to runnable code, so..." *SMACK!* Wrong. Compilers are deterministic

      Well, since you mentioned C, UB would like a word ;-)

    • by Junta ( 36770 )

      Even before the codegen craze, business folks had largely already decided the code doesn't matter; it's control of the data that really matters.

      Have the client's data stored and controlled by you. It doesn't matter if they manage to faithfully clone your software: you have their data, and exporting the data is a big PITA, so that clone is useless since they can't get it.

      Also to the extent that the codegen output isn't copyrightable, the human material mixed in is, and it is all a jumbled mix and no one can

  • by Frobnicator ( 565869 ) on Wednesday February 25, 2026 @07:14PM (#66010488) Journal

    Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

    Good old "how many lines of code" as a proxy for value.

    If that's all they've got to measure success, they're going to have absolutely terrible teams, and quickly, terrible code bases.

    Negative lines of code are often the best changes.

    They get what they measure: bloated code, nonsense code, blocks of code that do processing but add no value, simply there to inflate the lines of code that came from the AI prompts.

    Lines of code are how I get a raise and bonus? Let's aim for 10,000 LOC for even the most trivial changes. With some creative prompts we can code up a new minivan before lunchtime.

    • by Wolfling1 ( 1808594 ) on Wednesday February 25, 2026 @07:28PM (#66010516) Journal
      This. +1

      So, we're back to measuring code monkey performance by lines of code. Didn't that go out in the 90s?

      Hey, ChatGPT, give me 100,000 lines of code to add two numbers together.
      • "Hey, ChatGPT, give me 100,000 lines of code to add two numbers together." Well, seems like someone made their bonus this year!
        • Only 100,000? Those are rookie numbers. You should see what Mr Copypasta and Mr NoFunctions wrote!
        • "Hey, ChatGPT, give me 100,000 lines of code to add two numbers together." Well, seems like someone made their bonus this year!

          Apparently this is an 18.4-quintillion-line case function for two 64-bit integers. Just sign over 51% of the stock and I can start the AI on it now.
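          In the spirit of the joke, inflating LOC on an adder is trivially scriptable (a hypothetical Python sketch, obviously not anyone's actual workflow):

```python
# Hypothetical LOC-gaming sketch: emit one special-case branch per value
# instead of just writing "a + b".
def bloated_adder_source(n):
    lines = ["def add(a, b):"]
    for k in range(n):
        lines.append(f"    if a == {k}:")
        lines.append(f"        return {k} + b")
    lines.append("    return a + b  # fallback for everything else")
    return "\n".join(lines)

src = bloated_adder_source(1000)
print(len(src.splitlines()))  # 2002 lines of code to add two numbers

ns = {}
exec(src, ns)
print(ns["add"](2, 3))  # 5, eventually
```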

        • by vbdasc ( 146051 )

          If it was just 100,000, then modern software wouldn't be that damn slow.

      • My former narcissist cofounder used LOC written as a metric for my participation, and only mine, nobody else's among the devs. He just hated me because I questioned his understanding of running a business, even though I spent 4 years building the tech the company was formed around and also helped land its biggest paying client.

        • Hope you made it out ahead.
          In my 30 years of experience in the small business world, people like that often eat people like you (and me) for lunch.
          In my experience, there are certain CEOs that see "board approval required to change employment status" employees as enemies and do whatever fucked up thing they can think of to get rid of them, particularly if they're noisy.
        • Sadly, that's what you get from people who don't understand. They can't understand the code, so they have no chance of seeing its quality, and fall back to the only metric remaining to them: quantity. If only they spent their time on what they're supposed to be good at.
      • Obligatory: https://www.folklore.org/Negat... [folklore.org]
        "-2000 Lines Of Code
        Author: Andy Hertzfeld
        Date: February 1982
        Characters: Bill Atkinson
        Topics: Software Design, Management, Lisa
        Summary: It's hard to measure progress by lines of code ..."

      • Didn't that go out in the 90s?

        Yes, it did; however, poor education and arrogance have brought it back again. We are forever stuck on a treadmill of learning the same things over and over.

    • "it can track how many lines of code an engineer wrote with AI assistance"

      In my experience the IDE utility (Google's Antigravity, Windsurf, etc.) has a dashboard that keeps track of how much code it "writes" compared to how much you did. This includes however much code you accepted in order to fix a bug, refactor a module, or document whatever you asked for. It could also be whatever experimental code you caused the AI to write as a quick toss-off, which you can now easily do. So it can mount up pretty qui

      • by AuMatar ( 183847 ) on Wednesday February 25, 2026 @10:24PM (#66010744)

        No, those who aren't willing are actually following the science. Every measurement so far, every actual study, has shown AI code generation is 20% or more slower for senior engineers. Even Scale AI, a company founded and run by Meta's AI chief, shows the same in their data (https://scale.com/leaderboard/rli). Possibly it will someday get there, but it sure as hell isn't ready yet.

        • >> every actual study has shown AI code generation is 20% or more slower for senior engineers

          Here's a Stanford study that doesn't:
          https://medium.com/@manusf08/d... [medium.com]

          "the study found that AI provides an average productivity gain of about 15–20% across all industries and sectors"

          For myself it is far more. At least 4x, especially with the most recent LLM models.

          • by AuMatar ( 183847 )

            A study you can't even directly link to? Yeah, I call bullshit.

            And my personal experience is it's at least a 50% slow down. I have yet to ever have it do anything not completely trivial that wasn't badly insecure and broken, and when using them to provide static analysis the false positive rate is around 90%.

  • I'm gonna need a bigger garage!

  • I cannot believe that the highly paid employees of for profit companies are being forced to use AI to improve their own products so they can compete better in the market. What a bunch of fucking savages
    • by Junta ( 36770 ) on Wednesday February 25, 2026 @08:20PM (#66010582)

      The issue is that they are meddling with how the job is done, rather than providing access to the tools.

      Usually you don't track and penalize people for failing to use a tool that is available, you make it available and evaluate performance. If not using the tool has a bad effect, then it will show in performance.

      Tracking how much material you can get an LLM to emit is crazy, since you can just prompt it to generate thousands of lines of irrelevant code that is never even used. It offers a cheap way to game performance metrics with zero relevance to the work at hand. Ask it to generate a big website about poodles: your metrics go up, but there's no business value in a site about poodles. The metrics on the dashboard at my work can't tell where the generated code really went; they just know it was generated and "accepted", but that has nothing to do with even committing the code, and even if it did, it might not track the remote the commit is pushed to.

      It was bad enough when people gamed lines of code, and the industry broadly recognizes that as a stupid metric now. LLM codegen stats are even worse and more trivial to game.
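      A toy illustration of why "generated and accepted" says nothing about shipped value (hypothetical numbers):

```python
# Hypothetical dashboard sketch: every accepted line counts toward the
# metric, whether or not it is ever committed anywhere.
accepted_events = [
    {"lines": 1200, "committed": False},  # the poodle website, deleted same day
    {"lines": 40,   "committed": True},   # an actual bug fix
]
dashboard_total = sum(e["lines"] for e in accepted_events)
shipped_total = sum(e["lines"] for e in accepted_events if e["committed"])
print(dashboard_total, shipped_total)  # 1240 40
```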

  • by Randseed ( 132501 ) on Wednesday February 25, 2026 @08:01PM (#66010558)
    Oh fuck all of us.
  • You're going to make people use it whether it's any good or not.

  • by 93 Escort Wagon ( 326346 ) on Wednesday February 25, 2026 @08:17PM (#66010572)

    At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."

    "... of course, the plan is that the AI users won't survive long term either."

  • They should just measure global performance metrics and if that clearly stratifies based on AI use they'll have some picture. But a major problem I foresee is regardless of what you think of AI, not all coders operate the same way. Some may benefit from extensive usage, others will be hobbled, even on the same tasks. There is unlikely to be a single way to use it that simply translates to more/less is better.
  • by Tony Isaac ( 1301187 ) on Wednesday February 25, 2026 @08:23PM (#66010584) Homepage

    These tech execs have tried ChatGPT or seen Google AI search summaries, and are enthralled. They think, "Is this what my competitors are using?" And they are bound and determined not to let the competition get ahead of them.

    These free AI tools have always been a loss leader. It seems to be working: it's turning companies into big-time purchasers of the paid versions.

  • 1. Force employees to use AI
    2. Brag to press how AI is making co. lean & profitable
    3. Investors fall for it, thinking co. is cutting edge
    4. PROFIT!

  • Hi. I actually work for a high-tech company, full of extremely competent engineers.

    Yes, we've all been mandated to use Cursor, and pay no mind to token usage. And we've all been more productive actually. We now spend more of our time thinking out loud about what we want, and creating plans (always plan first!), then allowing the AI to craft solutions bit by bit. And yes, we guide the AIs, and we force them to create unit tests for every single line of code, and my goodness if they don't do a darn good job a

    • by Anonymous Coward

      How'd you get the experience needed to figure out how to plan that out and know how to keep it on track?

      • been on the same project 30+ years, shipped a couple dozen releases, earned my way, as did the others on my team. we don't hire mediocre people

    • by WaffleMonster ( 969671 ) on Wednesday February 25, 2026 @10:19PM (#66010740)

      Hi. I actually work for a high-tech company, full of extremely competent engineers.

      Yea sure you do buddy.

      This from the same joker who previously said he wants a "total ban" on AI and provided a hyperlink to a list of "scientific experts" who made it their life's work to understand AI... e.g. Steve Bannon of Epstein files fame, Glenn Beck and Susan Rice.

      Now you are here selling it.

      Yes, we've all been mandated to use Cursor, and pay no mind to token usage. And we've all been more productive actually. We now spend more of our time thinking out loud about what we want, and creating plans (always plan first!), then allowing the AI to craft solutions bit by bit. And yes, we guide the AIs, and we force them to create unit tests for every single line of code,

      So you guide AIs AND you force them? How does one "force" an AI to do anything? What does this even mean?

      Yes, I've heard the story where they just print out "the test passed" in order to get a pat on the head, but we are actually skilled engineers, not dumb-dumbs, and we know to watch out for that sort of thing, and correct it with a rule so it never happens again. (First rule: don't say "you're absolutely right!")

      So, I feel like there's a lot of bashing going on here and not a lot of Reasonable Thinking about actual usefulness. The thing is actually incredibly useful and surprisingly competent, in the right hands. In the hands of someone who knows how to write good code, they can shepherd this "fresh out of college intern", and get them to write reasonably good code, and in fact end up shepherding maybe 5 interns at once.

      So, I feel like there's a lot of empty rhetoric and no objective information content in your message. You say things that are inherently nonsense e.g. "create unit tests for every single line of code". The notion you can prompt an AI that does not output "reasonably good code" to output "reasonably good code" is absurd on its face. I guess if you force it to write good code then it had better do it.

      • You’re attacking my credentials and character instead of addressing the engineering substance. Calling someone a “joker” isn’t an argument; it's ad hominem. You silly goose! ;-) Now, you seem smart, so I have to assume you already know what ad hominem is, yet you chose to do it anyway, revealing your character. Or?

        I’m not going to disclose my employer to satisfy an anonymous commenter, but my claims don’t depend on who I am. They depend on whether the workflow works. Th

    • Hi. I actually work for a high-tech company, full of extremely competent engineers.

      See, I don't think you're credible.

      I don't think I've met anyone anywhere who thinks their co-workers are all overflowing with competence. I've worked in big tech companies, academia, government and tiny startups. About the only place one ever feels that is a startup of about 5 people.

      Big tech companies have the usual mix of brilliant, skillful, competent, OK, warm bodies, people for whom you wonder how they get dressed in the

      • You’re reading far more into a throwaway line than was intended.

        “Full of extremely competent engineers” was not a claim that every single human in the building is a flawless genius. It was shorthand for: the people I work with are generally skilled professionals operating at a high technical bar.

        Of course large organizations contain a distribution. Every sufficiently large system does. That’s not controversial — it’s statistics.

        But it does not follow that because variatio

    • I'm not a programmer and I haven't found Copilot to be helpful for anything except trivial cleanup of Excel sheets, but last week I was talking with my brother who works for one of the big software companies and I was surprised that he told a story almost exactly the same as yours (minus the extremely competent coworkers).

      He also observed that junior programmers were being entirely cut out, and that at this time no one had an answer for what to do about that. When I asked about his own job in a year or t

      • That concern about junior engineers is real in some places, but it isn’t universal. At my company, junior engineers are not being cut out. They’re being actively invested in — explicitly trained to use AI as part of their onboarding and long-term development. The expectation isn’t “let the model think for you.” It’s “learn to specify, critique, verify, and iterate faster.”

        This turns AI from a replacement into a multiplier. A junior who previously needed

  • If they try that with me, I'll just thank them for promoting me to overpaid consultant to clean up the AI mess.

  • by butt0nm4n ( 1736412 ) on Wednesday February 25, 2026 @09:09PM (#66010646)

    Make your employees dependent on AI. Make your company dependent on AI.

    As a coder, I find Copilot very useful: it gives me contextual advice, on point, where I need it. It's far better than Google.

    It helps me to be lazy and dependent. I don't need to learn an API anymore. I don't need to learn the philosophy of a new library. That skill of navigating complexity, I don't need to develop that anymore.

    Everything I enjoy now that makes my coding life easier was built on that human skill of navigating complexity, and what AI is telling me is the product of that effort. LLMs will always be derivative; that's how they work. As soon as I encounter something novel, undocumented, especially in the ball-ache of integration, AI is not going to be able to help me. The more I use it, the worse I'll get at those tasks that the AI can't help me with. And if we get to the point where we are not producing anything "new", LLMs are stuck where they are.

    Every prompt I send to the AI shares my code and my intentions with a third party. They tell me "your secrets are safe with me" in the contract, but we know that's bollocks. Every prompt refines the LLM, so every prompt is in the LLM and shared with everyone else. Oh no, Big Tech says, "no, we wouldn't do that". Sorry, I don't believe that: you trained your product on other people's content, that's your business model, and you'd be nuts to change it.

    How many times do we have to get bit by Big Tech before we tell them to do one? Grow your own LLMs in-house. Rob them for a change.

  • by Etylowy ( 1283284 ) on Wednesday February 25, 2026 @10:07PM (#66010730)
    We've recently learned our employer is going to use tokens consumed as a metric when deciding bonuses, and now a third of the devs are running cron jobs burning through them on random nonsense.
  • by alhaz ( 11039 ) on Wednesday February 25, 2026 @10:39PM (#66010758) Homepage

    My employer's client has recently been quite firm on the point that we should be "leveraging AI", and more specifically that we should be using "copilot in github".

    We're several years into automating regression testing of a large complex web application with UFT aka Unified Functional Testing. This application was built using the oldschool mainframe business model where the customer has to become captive by way of you owning all of their data.

    The functional code we author is VB-ish and doesn't do shit outside of UFT; frankly, UFT is a kit that lets you build a test harness, so without the functional code it is just a box of Spirograph toys with no plan.

    When you are inside the UFT system and click 'save' after making a one-line change, 8 or 10 encrypted binary files are updated in the filesystem.

    "copilot in github", whatever that is, can't read our code.

    Oh yeah - I'm a geezer, and every time I have thought "the git command line is confusing, this must be easier in the GUI", the GUI was not easier. I use Git Bash exclusively, and since I know how to use "git stash" I am the git expert on the team. I'm embarrassed for all of us, honestly.

    • Oh yeah - I'm a geezer, and every time I have thought "the git command line is confusing, this must be easier in the GUI", the GUI was not easier. I use Git Bash exclusively, and since I know how to use "git stash" I am the git expert on the team. I'm embarrassed for all of us, honestly.

      Git is the poster child for leaky abstractions, but it's what we have and for what it's worth I agree. I am also kind of fascinated by the sheer level of effort put in by some people, who are solid programmers, to avoid learni

  • by jvkjvk ( 102057 ) on Wednesday February 25, 2026 @11:05PM (#66010774)

    So, I set my code-generation preference for the AI to "very verbose" by prompting it that way.

    That way I get more money.

  • Don't comply with the requirement to use AI and you'll find yourself out of a job. Comply with the requirement to use AI and you'll find yourself out of a job.
    • by Coius ( 743781 )

      Not if you poison the well. That way when they fire you, they will regret the chaotic evilness you trained it to recreate.

  • Forcing employees to use a tool means you didn't give them a tool they want to use, which means you didn't give them a good tool. If AI were worth using, employees would flock to it.
