Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com)
Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.
Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."
If your boss is forcing you to use AI (Score:5, Insightful)
Microsoft Recall training data for Copilot. (Score:2)
I completely agree.
A long time ago I mentioned that the Microsoft Recall feature, which takes screenshots of everything you do on your computer, is just training data for their Copilot AI.
Now, with organizations and executives forcing AI adoption and mandating it as part of performance reviews, they essentially want everybody to train their AI replacements so they can start the next wave of layoffs and the next economic crisis, bigger than anything we've seen in the past.
We are already at 25% functional unemployment (Score:3)
I suspect what's going to happen is we're going to hit 25 to 30% real unemployment in the next several years, with something like 50 to 70% functional unemployment.
When that happens, we either become a brutal dictatorship like China, or we start World War 3. I think we are probably go
Made similar points to James P. Hogan on nuclear (Score:4, Insightful)
He was a favorite sci-fi author of mine who wrote an essay, "Know Nukes," promoting nuclear energy, and was otherwise dismissive of renewables (granted, this was decades ago, so he might have had a different opinion if he were still alive). What I sent him in 2004 on that:
===== to James P. Hogan
I don't want to alienate you, so I'm certainly willing to agree to disagree (having read "Know Nukes" etc.); still, because I know you are an open minded guy, here is why I disagree on the nuclear issue (in this social system).
At the start, I'd say I am a bit of an environmentalist (with an M.A. [consolation prize] in Ecology/Evolution), although I'd certainly entertain some seemingly off-the-wall notions like disposing of nuclear waste by spreading it throughout environmentally sensitive areas -- given it would keep the tourists out, and the animals, plants, and other creatures there might do OK overall anyway. That approach seems to be working for areas around nuclear facilities, which, because of the lack of hunting and habitat destruction, are generally doing quite well biologically -- since for most animal species habitat destruction is a far worse problem than an increased risk of cancer etc. I knew someone who studied turtles around a nuclear facility with some contamination over a decade ago, and he thought they were doing well IIRC. Still, for people, cancer risk might be evaluated quite differently (although I have read some rumblings now that elevated death rates around Chernobyl might have more to do with stress than radiation); but clearly there is an extent to which more radiation is good for you, as the body needs a certain level of challenge for optimal health. And certainly there are many other cancer and health risks we gloss over in the USA (like the obesity epidemic or car crashes), so the radiation risk needs to be compared to those.
If we were living in Chironia or Kronia [sci-fi places Hogan wrote stories about], with personal responsibility and organizational transparency, then I think nuclear power would probably be an OK thing, and I would not be too concerned about it (i.e. it would be a risk, but a well-managed one, like flying in an airplane). A decision on how to generate power for various situations might still be subject to various tradeoffs (nuclear material handling risks vs. renewable risks such as people falling off roofs, etc.), considering in totality how the rest of the system was set up. I would undoubtedly in that situation have a lot of faith in the people doing that work. I would expect them to be very proud of their safety record.
But, the issue is we are not yet there as a society. Today's nuclear industry has a very specific track record of lies, deceit, safety violations, murder of whistle blowers, government subsidies (direct and through indemnification/insurance), corruption, close links to secretive organizations, and insufficient attention to security against attacks. So, I think any suggestion that could entail expanding the nuclear industry as-it-actually-is is very problematical, because of these social problems.
Note, I am not saying the nuclear industry could not hypothetically be made better technically (which I think is implicit in your arguments) especially if it became more automated and used vastly better designs. My first big science fair project decades ago was (intended as) a robotic radioactive material transporter. I've also hung around Red Whittaker's robot lab which made robots that went into Three Mile Island, and helped with one tiny mockup of one (Workhorse, which became Rosie) which helped get the contract as the TMI staff pushed the mockup around the scale model they have of TMI to see everything it could reach.
Compare the use of robotics with using human "jumpers": a family friend who was a plumber was a jumper, working only a short time to fix a leaking pipe spewing radioactive water, and he died of cancer (no proof the two are connected; but he is the only jumper I ever knew, and he made a big deal of being sent home in a paper suit). I also knew a fellow
Re: (Score:2)
Pretty sure you have replied to a troll bot (someone has made a bot trained on RSilvergun's posts and has let it loose on here for some reason; somebody is obviously not a fan of his left-wing politics!). But re: nuclear -- I am opposed to it in my country, the UK, for one reason: the UK Govt, and our society in general it seems, has lost the ability to successfully manage large-scale complicated projects. And with AI coming into the picture, how do we know parts of a new plan haven't been hallucinated into
AI shaped by similar socio-economics as nuclear (Score:2)
Thanks for the informative reply. And likewise to bring this more on-topic, if we are seeing all these cost-cutting risk-taking behaviors with companies creating and managing nuclear reactors (privatizing gains while socializing costs and risks), what does that suggest about the likely outcome of job-replacing AI systems created and managed by the same economic and cultural forces?
What does an AI "meltdown" even look like? Perhaps this (of many possibilities)?
"An AI Takeover Scenario" https://www.youtube.co [youtube.com]
Dupe; saw same Slashdot story in 2004... (Score:3)
"Train Your Own Replacement" https://slashdot.org/story/04/... [slashdot.org] "Yahoo reports on how some employers are asking the workers they're laying off to train their foreign replacements - having them dig their own unemployment graves. 'Almost one in five information technology workers has lost a job or knows someone who lost a job after training a foreign worker, according to a new survey by the Washington Alliance of Technology Workers.' It looks like a real dilemma where if you refuse to hire your replacement, you are fired without severance and are ineligible for unemployment benefits
Re: (Score:2)
It looks like a real dilemma where if you refuse to hire your replacement, you are fired without severance and are ineligible for unemployment benefits,
Just train them with false information.
Re: (Score:2)
Re:If your boss is forcing you to use AI (Score:5, Insightful)
More realistically, they believe that using AI means "getting more work done faster." They take that as gospel truth with no qualifiers.
So, if you aren't using AI then clearly you are wasting company time and money, and hence shouldn't be promoted and maybe should be "transitioned out."
But they are making the obvious mistake of turning a metric into a goal. Employees will game the system. People will "engage with AI" to hit their numbers without using it in a useful way that saves time, especially if they are working on projects which, due to the specifics of the project, AI can't help with.
So, all this will really do is eliminate the honest and talented employees in favor of ones who can't succeed without AI (due to lack of talent and knowledge), and/or are willing to use it deceptively to advance their position.
Are those the kind of people you want working for you? For big corporations, yes, since those are the kind of people who are most similar to corporate leadership in terms of talent and ethics.
Re:If your boss is forcing you to use AI (Score:5, Insightful)
So, all this will really do is eliminate the honest and talented employees in favor of ones who can't succeed without AI (due to lack of talent and knowledge), and/or are willing to use it deceptively to advance their position.
You forgot people who, rather than use AI, find ways to jam it into other people's workflows, so that those other teams get bogged down in slop, while the teams doing it get credit for expanding the use of AI. :-)
Re: If your boss is forcing you to use AI (Score:2)
Re: (Score:2)
You scare me
I'm not even saying that they are intentionally trying to destroy the other teams' productivity. They're asked to increase the use of AI, and they do so by finding ways to put AI in front of more people. The fact that their team looks better and other teams look worse may just be a happy accident.
But the point I was trying to make is that the system of rewards is set up to reward them for putting AI in things — usually without regard to whether it is actually beneficial or harmful — so they put
Re: (Score:2)
> They take that as gospel truth with no qualifiers.
What still stuns me is: surely there has to be SOME beancounter somewhere ACTUALLY COUNTING THE BEANS...
Re: (Score:3)
Or one who knows that when you're getting nearly free beans in cargo-container-sized loads, sooner or later a bill will arrive, and it'll be eye-watering.
Re: (Score:2, Insightful)
I believe it's the same short-term mentality that's ruining everything else. The CEO can show a graph of how fast AI adoption is rising in the company at a shareholder meeting or investor presentation, and say that this means their company is forward-thinking and embraces cutting-edge technology, and more corporate speak, so you can totally invest more in us. Also, they, the CEO, have personally driven that AI adoption, and maybe, dear shareholders, that deserves a little bonus, or an increase to an already obscene
Re: (Score:2)
As if the productivity of a programmer is so easily measured.
Even if you just compare 2 programmers across a month of working on a project, and one of the two is delivering significantly more features than the other: if the other is writing more maintainable code, and in the end can keep up his pace for longer while having to fix fewer and shallower bugs, who's then the most productive?
And how productive is the senior developer that's doing more codereviews and is also a bit of an architect on the project?
Progra
Re: If your boss is forcing you to use AI (Score:4, Insightful)
Software engineering as a career is changed (Score:1)
The reason for the disparity in belief here is compound:
- It's legitimately a skill you need to learn, a mix of technical writing and, indeed, programming. It's not just writing some quick text and boom, magic, you have an app. Don't believe the marketing bullshit.
- Some large corps give their devs unlimited token use. Personal users get in 1 month what a FANG dev can use in 1 day. Personal users can't use it enough to learn it.
- It's been getting better _rapidly_ and if you tried it 6 months ago, it's worth t
Re: (Score:2)
I don't actually see this happening. Yes, they do want to get more done with fewer people, but I don't think they're outright trying to replace their people.
Re: (Score:2)
If they are not outright trying to replace their staff with AI, it is only because they believe AI isn't ready yet.
Nothing would please them more than being able to fire their entire staff and just command a virtual AI assistant to run their entire business for them, while they keep all that salary-money for themselves instead. This is 100% their goal.
Any issues about poverty or not having anyone who can buy their products or what-not are political matters that will be solved in political forums, so they a
Re: (Score:3)
Luckily, those of us who actually use AI, know that it will not be "ready" to replace humans for a long, long time. Yes, I do believe it will boost productivity. But AI does stupid stuff, *all* the time.
Re: (Score:2)
Luckily, those of us who actually use AI, know that it will not be "ready" to replace humans for a long, long time. Yes, I do believe it will boost productivity. But AI does stupid stuff, *all* the time.
That's okay. Just train another AI to look at the output of the first AI and rate it. Then, if the rating is too low, or if it breaks tests, roll it back. This is a necessary first step towards the AI learning to delete the tests before submitting like human coders do.
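The rate-and-rollback loop described above can be sketched in a few lines. This is a toy sketch only: `generate`, `rate`, and `run_tests` are invented placeholders standing in for a code model, a judge model, and a test runner, not calls to any real API.

```python
# Hypothetical sketch of the "second AI rates the first" gate described
# above. generate(), rate(), and run_tests() are invented placeholders.
def gated_codegen(prompt, generate, rate, run_tests,
                  threshold=0.7, max_attempts=3):
    best = None
    for _ in range(max_attempts):
        candidate = generate(prompt)
        if not run_tests(candidate):
            continue  # "roll it back": discard anything that breaks tests
        score = rate(candidate)  # the second model scores the output
        if score >= threshold:
            return candidate
        if best is None or score > best[0]:
            best = (score, candidate)
    # Fall back to the best test-passing attempt, or None if all failed
    return best[1] if best else None
```

Of course, a gate like this is only as good as its rating model and its tests, which is exactly the failure mode the joke is about.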
Re: If your boss is forcing you to use AI (Score:2)
Luckily, those of us who actually use AI, know that it will not be "ready" to replace humans for a long, long time. Yes, I do believe it will boost productivity. But AI does stupid stuff, *all* the time.
IMHO the issue here is similar to eating junk food. It's really difficult to justify paying a lot more for meals, or spending time cooking if fast food operations are optimised for selling £5 meals.
The buyer will just not have the attention span for our explanation of how bad fast food is, and then all profitable operations need to become fast food.
Re: (Score:2)
I'd counter that it's more like eating dirt. Junk food is still made of...food. Many kinds of dirt do have some nutritional value, but it's not food.
With AI-slop code, a small amount can actually work, sometimes. But if you get anywhere beyond proof-of-concept, AI falls flat without an expert human behind it, correcting all the many mistakes, large and small, that AI makes. I mean, GitHub Copilot sometimes forgets which file it's trying to edit, creating a brand new one rather than editing the active document
Re: (Score:2)
The buyer will just not have the attention span for your explanation of how bad eating dirt is, and by the time they figure it out, you're out of a job and they are still tempted to double down on AI so that they don't lose face :(
Re: (Score:2)
Sure, there will be companies like that. They are generally owned by Private Equity. But for every company that thinks users will be happy to eat dirt, there are new companies starting up that know people want real food. These new companies will be happy to scoop up smart people who know that AI generates mostly slop.
Re: (Score:2)
Using != Training. These are two completely different stages. And using AI also doesn't provide useful training data. The explanation from the AI is interesting (and obviously doesn't need further training); the question "Explain x" itself is boring.
So glad!!!! (Score:5, Insightful)
Re: (Score:2)
Did you mean to print that 2000 times, or 2001?
Re: (Score:2)
I imagine 2001 just so you know they mean it.
Re:So glad!!!! (Score:4, Funny)
2001... a retirement odyssey...
Re: (Score:2)
+1
Re: (Score:2, Insightful)
This message brought to you by the last generation with the option to retire.
Re: So glad!!!! (Score:2)
Well, I got mine so fuck you, I'm still retired and voting for the AI bros party and you owe me.
Re: (Score:3)
I'm within about 5 years of being able to take what I would consider to be a fairly comfortable early retirement. Which means I've got a few options.
1. Manage to stay where I am for the next 5 years.
2. If #1 fails, take a couple of years off and then see if the job market has changed in favour of older developers who don't rely on AI (because the vibe-coded house of cards fell in a screaming heap... but civilization still kinda survived).
3. Die a bit earlier than intended.
Re: So glad!!!! (Score:2)
I am probably going to need to go with #3
Re: (Score:2)
for (int i = 0; i <= 2000; i++) { printf("I am SO GLAD I am retired!\n"); }
ChatGPT, turn this into an app and tell me how great my app is.
Micro-management (Score:5, Interesting)
I thought that micro-managing employees was a bad idea. While I can understand departments or teams having common workflow disciplines, forcing individual employees to use particular tools (which is what AI really is, or at least should be) is nothing short of micro-managing them. At which point their managers are not doing their own work (which presumably does exist), aside from micro-managing the employees reporting to them.
Re: (Score:2)
If they have time to micro-manage, they have time to do something more productive like scrub the toilets.
Re: (Score:2)
I thought that micro-managing employees is a bad idea.
A lot of incompetent programmers got hired, the kind who not only can't produce elegant code, but also can't recognize it.
The result was managers started micromanaging them.
At these companies, the managers won, the rest of us suffer. At these companies, productivity is down (even if "agile velocity" is up), but they'd rather have control than productivity.
Re: (Score:2)
I get the distinct impression there are semi-secret lines of communications between CEOs - as in BBSes or Microsoft Teams groups or something - where they all encourage each other to do these ludicrous things and shore up support for things, from forced AI to RTO, that are objectively stupid.
I think that's what you're seeing here. Not a single person in the real world can possibly have missed the sheer revulsion average people have over LLMs being shoved in their faces all the time, and how their companies get
Imbeciles (Score:5, Insightful)
The argument proffered by management appears to boil down to nothing more than, "Well, everyone else is jumping off the Empire State Building, so what's your problem?"
Also: these lemmings are in for a FAFO-fueled rude awakening when they discover that all the slop they've checked in and shipped/deployed, being machine-generated, is uncopyrightable. "Um, actually... it's just like using a C compiler, transforming the programmer's intent to runnable code, so..." *SMACK!* Wrong. Compilers are deterministic. You can draw a straight line between the source code (and therefore the programmer's creative choices and intent) and the resulting binary, and given the same input a compiler will generate the same output every time (indeed, if you do get different output, it's a bug). LLMs are anything but -- they'll give you different answers depending on what you may or may not have asked before, the phase of the moon, and which vendor paid to have the LLM preferentially yield responses using their commercial framework.
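The determinism contrast above can be illustrated with a toy sketch. `compile_source` and `llm_complete` here are invented stand-ins, not a real compiler or model: the first is a pure function of its input, the second depends on sampler state.

```python
import hashlib
import random

def compile_source(src: str) -> str:
    # Stand-in for a compiler: a pure function of its input, so the
    # same source always maps to the same "binary".
    return hashlib.sha256(src.encode()).hexdigest()

def llm_complete(prompt: str, rng: random.Random) -> str:
    # Stand-in for LLM sampling: the output depends on sampler state,
    # not just the prompt.
    return f"{prompt} -> int x = {rng.randint(0, 9)};"

src = "int add(int a, int b) { return a + b; }"
assert compile_source(src) == compile_source(src)  # deterministic

rng = random.Random()  # unseeded: different state on each run
outputs = {llm_complete("write add", rng) for _ in range(50)}
# Repeated sampling of the same prompt yields multiple distinct
# outputs; pinning the seed would merely make the run repeatable.
```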
In short, this is a bone-headed move, and when it came time for the managers' performance review, I'd give a negative score to anyone imposing mandatory LLM use.
Re: (Score:2)
Well, since you mentioned C, UB would like a word ;-)
Re: (Score:2)
I hope you're joking; because UB isn't an unavoidable fact of life in C.
Re: (Score:1)
Re: (Score:3)
Even before the codegen craze, business folks had largely already decided the code doesn't matter, it's control of the data that really matters.
Have the client's data stored and controlled by you. It doesn't matter if they manage to faithfully clone your software; you have their data, and export of the data is a big PITA, so that clone is useless since they can't get the data.
Also, to the extent that the codegen output isn't copyrightable, the human material mixed in is, and it is all a jumbled mix and no one can
LOC written as a performance metric? (Score:5, Insightful)
Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.
Good old "how many lines of code" as a proxy for value.
If that's all they've got to measure success, they're going to be absolutely terrible teams, and quickly terrible code bases.
Negative lines of code are often the best changes.
They get what they measure: bloated code, nonsense code, blocks of code that do processing but add no value, there simply to inflate the lines of code that came from the AI prompts.
Lines of code are how I get a raise and bonus? Let's aim for 10,000 LOC for even the most trivial changes. With some creative prompts we can code up a new minivan before lunchtime.
Re:LOC written as a performance metric? (Score:5, Insightful)
So, we're back to measuring code monkey performance by lines of code. Didn't that go out in the 90s?
Hey, ChatGPT, give me 100,000 lines of code to add two numbers together.
Re: LOC written as a performance metric? (Score:3)
Re: LOC written as a performance metric? (Score:2)
Re: (Score:2)
"Hey, ChatGPT, give me 100,000 lines of code to add two numbers together." Well, seems like someone made their bonus this year!
Apparently this is an 18.4-quintillion-line case function for two 64-bit integers. Just sign over 51% of the stock and I can start the AI on it now.
Re: (Score:2)
If it was just 100,000, then modern software wouldn't be that damn slow.
Re: (Score:2)
My former narcissist cofounder used LOC written as a metric for my participation, and only mine, nobody else's among the devs in the company. He just hated me because I questioned his understanding of running a business, even though I spent 4 years building the tech the company was formed around and also helped land its biggest paying client.
Re: (Score:2)
In my 30 years of experience in the small business world, people like that often eat people like you (and me) for lunch.
In my experience, there are certain CEOs that see "board approval required to change employment status" employees as enemies and do whatever fucked up thing they can think of to get rid of them, particularly if they're noisy.
Re: LOC written as a performance metric? (Score:2)
LOC questioned in the 1980s at Apple (Score:3)
Obligatory: https://www.folklore.org/Negat... [folklore.org]

"-2000 Lines Of Code
Author: Andy Hertzfeld
Date: February 1982
Characters: Bill Atkinson
Topics: Software Design, Management, Lisa
Summary: It's hard to measure progress by lines of code"
Re: (Score:2)
Didn't that go out in the 90s?
Yes, it did; however, poor education and arrogance have brought it back again. We are forever stuck on a treadmill of learning the same things over and over.
Re: (Score:1)
"it can track how many lines of code an engineer wrote with AI assistance"
In my experience the IDE utility (Google's Antigravity, Windsurf, etc.) has a dashboard that keeps track of how much code it "writes" compared to how much you did. This includes however much code you accepted in order to fix a bug, refactor a module, or document whatever you asked for. It could also be whatever experimental code you had the AI write as a quick toss-off, which you can now easily do. So it can mount up pretty quickly
Re:LOC written as a performance metric? (Score:4, Informative)
No, those who aren't willing are actually following the science. Every measurement so far, every actual study, has shown AI code generation is 20% or more slower for senior engineers. Even Scale AI, a company founded and run by Meta's AI chief, shows the same in their data (https://scale.com/leaderboard/rli). Possibly it will someday get there, but it sure as hell isn't ready yet.
Re: (Score:2)
Except it really isn't. The stuff from 2 years ago is miles ahead of the stuff from 5 years ago. The stuff from 1 year ago is yards ahead of the stuff from 2 years ago. The stuff from 6 months ago is nearly indistinguishable from the stuff from a year ago. Diminishing returns are hitting hard and rapidly. Many experts think LLMs are closing in on their technical limitations, regardless of how much extra compute they're given. It will need entirely new techniques to actually become much more than it is now. Which may h
Re: (Score:2)
>> every actual study has shown AI code generation is 20% or more slower for senior engineers
Here's a Stanford study that doesn't;
https://medium.com/@manusf08/d... [medium.com]
"the study found that AI provides an average productivity gain of about 15–20% across all industries and sectors"
For myself it is far more. At least 4x, especially with the most recent LLM models.
Re: (Score:2)
A study you can't even directly link to? Yeah, I call bullshit.
And my personal experience is it's at least a 50% slow down. I have yet to ever have it do anything not completely trivial that wasn't badly insecure and broken, and when using them to provide static analysis the false positive rate is around 90%.
Re: (Score:2)
The link to the study is right there in the report but hey, don't use AI. I sure don't care.
300 people is a startup? (Score:2)
I'm gonna need a bigger garage!
Eat your own dogfood? (Score:1)
Re:Eat your own dogfood? (Score:4, Interesting)
The issue is that they are meddling with how the job is done, rather than providing access to the tools.
Usually you don't track and penalize people for failing to use a tool that is available, you make it available and evaluate performance. If not using the tool has a bad effect, then it will show in performance.
Tracking how much material you can get an LLM to emit is crazy, since you can just prompt it to generate thousands of lines of irrelevant code that is never even used. It offers a cheap way to game performance metrics with zero relevance to the work at hand. Ask it to generate a big website about poodles: your metrics go up, but there's no business value in a site about poodles. The dashboard metrics at my work can't tell where the generated code really went; they just know it was generated and "accepted", but that has nothing to do with even committing the code, and even if it did, it might not track the remote the commit is pushed to.
It is bad enough with how people gamed lines of code and the industry broadly recognizes that as a stupid metric now. LLM codegen stats are even worse and more trivial to game.
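The gameability point above can be made concrete with a toy sketch. `AdoptionDashboard` is a made-up stand-in for the kind of "accepted AI lines" counter described, not any real product's API:

```python
# Made-up stand-in for an "accepted AI lines" dashboard like the one
# described above; counting acceptance says nothing about commits.
class AdoptionDashboard:
    def __init__(self):
        self.accepted_lines = 0

    def accept(self, generated_code: str):
        # Counts every accepted line, wherever it ends up (or doesn't)
        self.accepted_lines += generated_code.count("\n") + 1

dash = AdoptionDashboard()
# Prompt the model for a big, irrelevant "website about poodles"...
poodle_site = "\n".join(f"<p>poodle fact {i}</p>" for i in range(5000))
dash.accept(poodle_site)   # the metric rises by 5000 lines
committed_lines = 0        # ...while nothing ever reaches the repo
```

The dashboard dutifully reports 5000 accepted lines, zero of which carry any business value or ever get committed.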
WTF? (Score:3)
Well when you dump a trillion dollars on something (Score:2)
You're going to make people use it whether it's any good or not.
Re: (Score:2)
The part Anagnost didn't say... out loud (Score:3)
"... of course, the plan is that the AI users won't survive long term either."
Population variance (Score:1)
FOMO (Score:3)
These tech execs have tried ChatGPT or seen Google AI search summaries, and are enthralled. They think, "Is this what my competitors are using?" And they are bound and determined not to let the competition get ahead of them.
These free AI tools have always been a loss leader. It seems to be working: it's turning companies into big-time purchasers of the paid versions.
Don't need the "???" (Score:2)
1. Force employees to use AI
2. Brag to press how AI is making co. lean & profitable
3. Investors fall for it, thinking co. is cutting edge
4. PROFIT!
well, there IS another side... (Score:2)
Hi. I actually work for a high-tech company, full of extremely competent engineers.
Yes, we've all been mandated to use Cursor, and to pay no mind to token usage. And we've all actually been more productive. We now spend more of our time thinking out loud about what we want and creating plans (always plan first!), then allowing the AI to craft solutions bit by bit. And yes, we guide the AIs, and we force them to create unit tests for every single line of code, and my goodness if they don't do a darn good job a
Re: (Score:1)
How'd you get the experience needed to figure out how to plan that out and know how to keep it on track?
Re: (Score:2)
Been on the same project 30+ years, shipped a couple dozen releases, earned my way, as did the others on my team. We don't hire mediocre people.
Re:well, there IS another side... (Score:4, Interesting)
Hi. I actually work for a high-tech company, full of extremely competent engineers.
Yea sure you do buddy.
This from the same joker who previously said he wants a "total ban" on AI and provided a hyperlink to a list of "scientific experts" who made it their life's work to understand AI... e.g. Steve Bannon of Epstein files fame, Glenn Beck, and Susan Rice.
Now you are here selling it.
Yes, we've all been mandated to use Cursor, and pay no mind to token usage. And we've all been more productive actually. We now spend more of our time thinking out loud about what we want, and creating plans (always plan first!), then allowing the AI to craft solutions bit by bit. And yes, we guide the AIs, and we force them to create unit tests for every single line of code,
So you guide AIs AND you force them? How does one "force" an AI to do anything? What does this even mean?
Yes, I've heard the story where they just print out "the test passed" in order to get a pat on the head, but we are actually skilled engineers, not dumb-dumbs, and we know to watch out for that sort of thing and correct it with a rule so it never happens again. (First rule: don't say "you're absolutely right!")
So, I feel like there's a lot of bashing going on here and not a lot of Reasonable Thinking about actual usefulness. The thing is actually incredibly useful and surprisingly competent, in the right hands. In the hands of someone who knows how to write good code, they can shepherd this "fresh out of college intern", and get them to write reasonably good code, and in fact end up shepherding maybe 5 interns at once.
So, I feel like there's a lot of empty rhetoric and no objective information content in your message. You say things that are inherently nonsense e.g. "create unit tests for every single line of code". The notion you can prompt an AI that does not output "reasonably good code" to output "reasonably good code" is absurd on its face. I guess if you force it to write good code then it had better do it.
Re: (Score:2)
You’re attacking my credentials and character instead of addressing the engineering substance. Calling someone a “joker” isn’t an argument; it's ad hominem. You silly goose! ;-) Now, you seem smart, so I have to assume you already know what ad hominem is, yet you chose to do it anyway. What does that reveal about your character?
I’m not going to disclose my employer to satisfy an anonymous commenter, but my claims don’t depend on who I am. They depend on whether the workflow works. Th
Re: (Score:2)
Hi. I actually work for a high-tech company, full of extremely competent engineers.
See, I don't think you're credible.
I don't think I've met anyone anywhere who thinks their co-workers are all overflowing with competence. I've worked in big tech companies, academia, government, and tiny startups. About the only place one ever feels that is a startup of about 5 people.
Big tech companies have the usual mix of brilliant, skillful, competent, OK, warm bodies, people for whom you wonder how they get dressed in the
Re: (Score:2)
You’re reading far more into a throwaway line than was intended.
“Full of extremely competent engineers” was not a claim that every single human in the building is a flawless genius. It was shorthand for: the people I work with are generally skilled professionals operating at a high technical bar.
Of course large organizations contain a distribution. Every sufficiently large system does. That’s not controversial — it’s statistics.
But it does not follow that because variation exists, the group as a whole can't be broadly competent.
Re: well, there IS another side... (Score:1)
I'm not a programmer and I haven't found Copilot to be helpful for anything except trivial cleanup of Excel sheets, but last week I was talking with my brother, who works for one of the big software companies, and I was surprised that he told a story almost exactly the same as yours (minus the extremely competent coworkers).
He also observed that junior programmers were being entirely cut out, and that at this time no one had an answer for what to do about that. When I asked about his own job in a year or two, he didn't have an answer for that either.
Re: (Score:2)
That concern about junior engineers is real in some places, but it isn’t universal. At my company, junior engineers are not being cut out. They’re being actively invested in — explicitly trained to use AI as part of their onboarding and long-term development. The expectation isn’t “let the model think for you.” It’s “learn to specify, critique, verify, and iterate faster.”
This turns AI from a replacement into a multiplier. A junior who previously needed constant hand-holding can ramp up faster, as long as they verify everything the model produces.
Dollar sign dollar sign dollar sign (Score:1)
If they try that with me, I'll just thank them for promoting me to overpaid consultant who cleans up the AI mess.
How to shoot yourself in the foot (Score:3)
Make your employees dependent on AI. Make your company dependent on AI.
As a coder, I find Copilot very useful: it gives me contextual advice right where I need it, far better than Google.
It helps me to be lazy and dependent. I don't need to learn an API anymore. I don't need to learn the philosophy of a new library. That skill of navigating complexity, I don't need to develop that anymore.
Everything I enjoy now that makes my coding life easier was built on that human skill of navigating complexity, and what AI is telling me is the product of that effort. LLMs will always be derivative; that's how they work. As soon as I encounter something novel or undocumented, especially in the ball ache of integration, AI is not going to be able to help me. The more I use it, the worse I'll get at the tasks the AI can't help me with. And if we get to the point where we are not producing anything "new", LLMs are stuck where they are.
Every prompt I send the AI ships my code and my intentions to a third party. They tell me "your secrets are safe with me" in the contract, but we know that's bollocks. Every prompt refines the LLM, so every prompt ends up in the LLM, shared with everyone else. Oh no, Big Tech says, "we wouldn't do that." Sorry, I don't believe it: you trained your product on other people's content, that's your business model, and you'd be nuts to change it.
How many times do we have to get bitten by Big Tech before we tell them to do one? Grow your own LLMs in house. Rob them for a change.
we're fighting back (Score:4, Funny)
AI doesn't even know what my job is. (Score:3)
My employer's client has recently been quite firm on the point that we should be "leveraging AI", and more specifically that we should be using "copilot in github".
We're several years into automating regression testing of a large, complex web application with UFT, aka Unified Functional Testing. This application was built using the old-school mainframe business model where the customer is held captive by way of you owning all of their data.
The functional code we author is VB-ish and doesn't do shit outside of UFT. Frankly, UFT is a kit for building a test harness, so without the functional code it is just a box of Spirograph toys with no plan.
When you are inside the UFT system and click 'save' after making a one-line change, 8 or 10 encrypted binary files are updated in the filesystem.
"copilot in github", whatever that is, can't read our code.
Oh yeah - I'm a geezer, and every time I have thought "the git command line is confusing, this must be easier in the GUI", the GUI was not easier. I use Git Bash exclusively, and since I know how to use "git stash" I am the git expert on the team. I'm embarrassed for all of us, honestly.
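For anyone on the team who hasn't learned it yet, here is a minimal sketch of the `git stash` workflow mentioned above, run in a throwaway repository (all filenames and messages are illustrative):

```shell
set -e
tmp=$(mktemp -d)            # scratch repo so nothing real is touched
cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name Dev

echo "stable" > app.vbs
git add app.vbs
git commit -qm "baseline"

echo "half-finished change" >> app.vbs   # work in progress

# Shelve the uncommitted change with a label so it's findable later.
git stash push -m "wip: parking this before switching tasks"
git diff --quiet && echo "working tree is clean again"

# Bring the change back exactly as it was.
git stash pop
grep -q "half-finished" app.vbs && echo "wip restored"
```

`git stash push -m` beats a bare `git stash` once you have more than one shelved change, because `git stash list` then shows meaningful labels instead of autogenerated commit subjects.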
Re: (Score:2)
Oh yeah - I'm a geezer, and every time I have thought "the git command line is confusing, this must be easier in the GUI", the GUI was not easier. I use Git Bash exclusively, and since I know how to use "git stash" I am the git expert on the team. I'm embarrassed for all of us, honestly.
Git is the poster child for leaky abstractions, but it's what we have, and for what it's worth I agree. I am also kind of fascinated by the sheer level of effort some people, who are solid programmers, put in to avoid learning it.
Wow (Score:3)
So, I set my code-generation preference for the AI to "very verbose" by prompting it that way.
That way I get more money.
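The joke lands because line-count metrics really are this easy to game. A purely illustrative sketch: the same function written two ways, where the verbose version inflates an "AI-assisted lines of code" counter roughly fivefold while behaving identically:

```python
def total_concise(xs):
    """Sum of the positive values in xs, in one line."""
    return sum(x for x in xs if x > 0)

def total_verbose(xs):
    """Sum of the positive values in xs, padded for the dashboard."""
    # Accumulate the running total in an explicitly named variable.
    running_total = 0
    # Iterate over every element of the input sequence.
    for current_value in xs:
        # Only positive values contribute to the total.
        if current_value > 0:
            running_total = running_total + current_value
    # Return the final accumulated total.
    return running_total

data = [3, -1, 4, -1, 5]
assert total_concise(data) == total_verbose(data) == 12
```

Any metric that counts lines rather than outcomes rewards the second version, which is exactly the incentive the parent comment is mocking.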
Catch-22 (Score:2)
Re: (Score:2)
Not if you poison the well. That way, when they fire you, they will regret the chaotic evil you trained it to recreate.
If you can't make a tool they want to use... (Score:2)