
AI Use Damages Professional Reputation, Study Suggests (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. "Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.
The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups. "Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."
Well... (Score:5, Interesting)
And there is the problem. (Score:5, Insightful)
Who says that you are not doing the work? In my experience the AI tools simply can't do all the work for you. Not even close. They just save you time here and there by having quick answers to questions, quick examples, or the ability to generate boilerplate for you.
Though my experience has been limited to code-generation. I can't speak for other uses. But the bigger the code-generation task, the more the AI gets wrong. You have to review everything it writes as if it was written by an entry-level developer, and for a lot of work it's just easier to do it yourself like you always have.
So, these judgments are unfounded. Of course, there may ALSO be people who try to use AI to overcome their own lack of competence, but that will show glaringly in the lack of quality of their work.
I will also add that this door swings both ways. Refuse to utilize new technology, and judgments of being a Luddite, or of being an old dog that can't learn new tricks, will start coming your way. And those judgments can cost you job, promotion, and raise opportunities.
Re:And there is the problem. (Score:5, Informative)
I can confirm that the AI limitations you cite apply to uses outside of code generation. For example, I've used it a number of times to help me generate slide decks for a talk I was giving. While it was helpful, such as by coming up with useful outlines, I still had to do significant editing and error correction.
Re: (Score:2)
I see a difference between writing a document yourself (and understanding what you wrote) and merely using an illustration image in that document.
If I use AI to generate an illustration image, and I specify that it's AI generated but the text is purely my creation, then I don't see any issue.
Re: (Score:1, Troll)
this is about right. Elon attempting to hand huge roles in the government over to AI shows what he's really worth.
Re: (Score:1)
who's the troll who works here, labelling this shit as a troll comment, that's what i want to know
Re: (Score:1, Troll)
"Who says that you are not doing the work?"
The entire premise is you're not doing the work.
"In my experience the AI tools simply can't do all the work for you."
Unless you're doing ALL the work, you're not doing the work. This is not the claim you need it to be.
"They just save you time here and there by having quick answers to questions"
Answers of questionable value, and you're not using AI for "doing work" then.
"But the bigger the code-generation task, the more the AI gets wrong."
Which would be why your professional reputation suffers when you use it.
Re: (Score:1)
>Unless you're doing ALL the work, you're not doing the work. This is not the claim you need it to be.
Ok shit for brains. Hope you flipped every electron yourself when making this post. Or you didn't do it yourself.
This shit take is so incredibly stupid I'm actually in awe. By this take anything humans made using tools "wasn't made by humans" since the human didn't do every single little thing.
>Which would be why your professional reputation suffers when you use it.
Ahh I see the stupidity goes even de
Re: (Score:2)
AI is like a power tool. I'm sure that people who did stuff by hand scoffed at someone whipping out a Dremel for fine work.
However, it is easy to be a hack with power tools. Someone can do some hacky stuff on their 1911 and call themselves a gunsmith. It isn't the Dremel tool that is at fault here.
AI is very useful. However, it is how you use it that matters. You can be a hack with it, or you can use it to produce high-quality deliverables.
For example, when it comes to coding, one can tell a chatbot to creat
Re: (Score:2)
If you're not doing the work, you're proving you can be automated out of your job...
And if you’re confident in today’s toddler-grade AI to be a valid replacement for a grown-ass experienced adult worker, you’re proving how quickly “management” can be replaced with nothing but scripts.
When you really think about it, those doing the actual physical work, are likely the LAST to be replaced. If your “work” is nothing but making business decisions, a fucking algorithm could probably replace you. Yeah. I’m talking about you, Overpriced Executive
Re: (Score:3)
When you really think about it, those doing the actual physical work, are likely the LAST to be replaced.
We've already got robots and machines for that, especially if it's repetitive work.
Re: (Score:1)
"And if you’re confident in today’s toddler-grade AI to be a valid replacement for a grown-ass experienced adult worker..."
He didn't say that though.
"...you’re proving how quickly “management” can be replaced with nothing but scripts."
That's been happening forever. And management isn't being replaced; there needs to be someone to take the money.
"When you really think about it, those doing the actual physical work, are likely the LAST to be replaced."
No, what will happen is t
Re: Well... (Score:1)
Apparently this doesn't apply to slashdot moderators....
Re: (Score:2)
It's the same old farts saying the same old shit as before.
"The typewriter is killing penmanship, letters and words now have no soul and feeling to them."
"Word processors are making people dumber, at least with a typewriter you had to think about what you wanted to say because you can't just delete whole sentences or paragraphs."
"Digital artwork is killing REAL art like painting since you can just undo brush strokes."
"Digital photography is soulless and takes away from the REAL skill of setting every lit
Even worse (Score:1)
Calling it "AI" when it isn't is even more damaging.
Re: (Score:3, Insightful)
In my experience, it's just a few slashdot posters who think this is an issue. Pointing out that AI isn't actually "intelligent" (artificially or otherwise) is like pointing out that characters in a movie aren't real people. While both are true, that truth doesn't stop us from being entertained by movie characters, or from being assisted by AI. There is no *useful* difference between "true" artificial intelligence and what we have today.
Re: (Score:2, Insightful)
"There is no *useful* difference between "true" artificial intelligence, and what we have today."
Huh? You might want to clarify that statement! The sci-fi view of true AI is nothing like what we have.
Re:Even worse (Score:5, Insightful)
Which sci-fi AI are you referring to?
Star Trek's "computer" is able to answer all kinds of random questions. ChatGPT can also already answer most of the types of questions asked of the "computer" on the show. Is there some specific feature of sci-fi AI that you feel is missing today, other than qualitative measurements like accuracy and breadth of topics?
The OP wasn't referencing sci-fi, but "true" AI, which would presumably be some kind of actual intelligence, but artificial. What we have instead is a fancy pattern-matching algorithm that can answer many kinds of questions in English, and do all kinds of interesting stuff. My point is that it doesn't matter that it's a fancy pattern-matching algorithm: the things it can do are not meaningfully different from what we would expect "true" AI to do. The fact that the underlying technology or methodology isn't actually "intelligent" isn't materially significant, because it simulates intelligence so well.
Re:Even worse (Score:4, Informative)
To understand the comment, you have to understand the context. The similarity between today's AI, and movies, is that both are illusions. Movies provide the illusion of human activity and conversation and experiences. AI provides an illusion of intelligence. Both are useful for their respective purposes.
Re: Even worse (Score:2)
"There is no *useful* difference between "true" artificial intelligence, and what we have today"
Wat
What we have now is not intelligent; AGI would actually be intelligent. What we have now cannot be trusted to give us useful answers. AGI cannot be trusted not to eliminate us for not being useful. They are wholly different things.
Re: Even worse (Score:5, Insightful)
Would you say that humans can be trusted to give useful answers? Consider the group of humans now running our government, for example. I'd say that the inability to be trusted, is common to both AI and to human intelligence.
Agreed, what we have now is not intelligent, it simply gives the illusion of being intelligent. But the illusion is so good that it does the same kinds of useful things that a "real" intelligence would be able to do.
Re: (Score:2)
Wow, I didn't think it was possible, but you've found a way to put even less thought into your nonsense ramblings.
The kinds of mistakes humans make and the kinds of "mistakes" LLMs make are nothing alike. To equate the two is as absurd as one of your posts.
Re: (Score:2)
Hmmm maybe I'm an AI, if my posts are so absurd!
It's true, human mistakes are different in some respects. But both humans and AI "hallucinate" (i.e., make up answers), and both humans and AI cannot be *trusted* to give useful answers. Both *usually* give useful answers, but can they be trusted? No.
I never said that AI and humans are exactly alike, but there are certainly some resemblances. One thing that's different, is that when you bring up a valid point contradicting what AI says, it doesn't start attack
Honestly so far I find them kind of useless (Score:3)
That works because the math is well understood and literally hundreds if not thousands of years old in most cases.
The problem I had with using AI when I gave it a whirl is that it just picks whatever the most popular answer is from pretty much all time.
The problem with that is, if you're, say, trying to get it to tell you how to write a chunk of code with a specific application, and that application has declined in popularity (as programming languages tend to do), then it's going to give you an answer that is way, way out of date, because the most popular answer is going to be for older technology from when the programming language was at its peak. And that can sometimes go back a decade or more.
I suspect if I was writing code in more cutting-edge stuff maybe it would work better, I don't know. The last couple of projects I did involved pretty run-of-the-mill programming languages.
Re: (Score:2)
Any maths solution is exactly the same as what you discovered. The LLM just gives you common answers, like a search engine would.
It all comes down to what it has scraped, legally or not.
Re:Honestly so far I find them kind of useless (Score:5, Interesting)
> Honestly so far I find them kind of useless
Like most things, it takes a while to get good at using AI-based development tools. Garbage in, garbage out applies here as well. You have to provide good context and frame your prompt in a way that another human would understand.
Many times I already have an idea of what I want: which libraries I want to use and the very rough architecture of the solution. I provide this as input and ask the AI to help me refine it into a final, comprehensive implementation plan. I then review the phased implementation plan and ask the AI to write the software for each phase. I check the output of each phase before moving to the next.
Good software engineering skills are still required. AI will make errors and get off track. Your job is to know what it is doing and put it back on track. I believe a good engineer can 5x their output with the aid of AI development tools. But, at the very beginning, it may be less than 1x. You do not have the option to quit in the learning phase. Or you will eventually be replaced by a 5x engineer.
I'm asking pretty basic stuff here (Score:2)
I do agree that eventually yeah the AI is going to replace the majority of programmers and white collar workers in general. And I don't think it's going to take a long time. But I do think we're about 3 to 5 years away from the worst of it.
I have no idea what's going to happen then because we built our economy around hig
Re: (Score:3)
Not even asking for production code, just asking for hello-world style code for specific scenarios, so I can see how certain things work and go off and implement them myself as needed.
How is this better than what you'd turn up with a quick search? Odds are good that's what you'll get from the AI anyway, just mangled in unpredictable ways.
I do agree that eventually yeah the AI is going to replace the majority of programmers and white collar workers in general.
Not with anything even remotely resembling the tech we have today.
And I don't think it's going to take a long time. But I do think we're about 3 to 5 years away from the worst of it.
"Just 10 years away" was what people used to say. The old joke was "Just 10 years away since 1960!" In early 2023, that number turned into "6 months" then "this time next year". While I'm not surprised to see it continue to increase to "3 to 5 years", I'm a little surprised that anyone
Re: (Score:2)
I think AI are somewhat useful. They help with quite basic tasks. I actually don't think they are helpful at cutting edge.
In programming, I often equate them to the level of a smart freshman with Stack Overflow. If the problem is simple enough, or the answer is out there already, it's going to give you the right answer. And in programming, most of the code we write is easy. There are harder bits, but a lot of it is easy.
So you can essentially use it as an always-here basic assistant. So all the trivial utility function
Wishful thinking (Score:4, Informative)
I work at a small 10-person company. Here, regardless of how good you are, if you do not use AI, your output will be far, far less than that of an engineer who does. All checked-in code goes through layers of review process and QA. We recently went through the process of hiring another software engineer, and any candidate who did not have significant experience developing with the aid of AI tools was eliminated early.
Any software engineering team that is not using AI will not be competitive. Yes, you still have to know your stuff. But days of manually typing in every single line of code are numbered. Anybody who thinks otherwise and refuses to level up on AI based development tools will eventually lose their job.
You can downvote me all you want. Just know that you are sticking your head in the sand.
Re: (Score:3, Interesting)
As long as you also understand copyright is nulled. You are trading your source code, and your contracts, for others to freely develop from. Public LLMs are kind of an open shared repository in that respect.
Re: Wishful thinking (Score:2)
Zero of what you said makes any sense. There isn't even a kernel of truth in there.
Nothing changes wrt copyright, you or the company you work for retain all the same rights with or without AI tools. Open source AI toolchains ... make absolutely no difference.
Where did you get any of that information?
Re: (Score:3)
You haven't been paying attention. [reuters.com]
The courts have decided on more than one occasion, and the copyright office agrees, that copyright requires human authorship.
We're probably still a long way from the issue being 'settled', but at the moment it looks like AI generated nonsense isn't afforded copyright protection.
Re: (Score:2)
I interviewed a candidate last week. The interviewer before me, Coworker A, was running late and for some reason spent almost ten minutes of my slot regaling the candidate with how A used an LLM to refactor and comment Python, complete with sound effects ("I told it that it gave me this code, and that I wanted more comments, and 'boop' comments popped out.") My coworkers think highly of A, but that damaged his reputation with me for wasting two others' time with a largely off-topic brag.
Two days later, I saw
Re: (Score:2)
I don't know if this happened, but it has the stink of having happened so assuming it did...there are some interesting insights.
First, your Coworker A wanted to see comments added to source code. He wanted aids to make the code more understandable, yet rather than making the effort to understand the code himself (or asking the author) and writing comments based on that understanding, generative AI with no understanding of the code was good enough. This is the mentality that leads people to think AI can actually do programming.
Se
Re: (Score:2)
Apologies for the lack of proofreading; so tired of autocorrect changing every word.
Re: (Score:2)
Your company has the disease
That is a very good way to describe it.
Re: (Score:2)
Your company will fail, and deservedly so.
"Any software engineering team that is not using AI will not be competitive."
Depending on the definition of "competitive".
"But days of manually typing in every single line of code are numbered. Anybody who thinks otherwise and refuses to level up on AI based development tools will eventually lose their job."
The intention is for every engineer and programmer to "lose their job".
"You can downvote me all you want. Just know that you are sticking your head in the sand."
Re: (Score:2)
Anyone who uses LLM/AI to write their code is taking a huge gamble with their company. The first AI copyright case to be decided was decided against the AI company. If this pattern follows (and this is the gamble) in the court system, anyone who has used AI code is potentially opening themselves up to copyright infringement lawsuits. All it takes is one disgruntled employee (or one employee offered a substantial reward) to snitch on the company to end the company. If the tide in court turns pro-AI, which wo
Re: (Score:2)
I see these sorts of comments repeatedly, yet they are so far from my experience it's like they are from a different world. The code generated by AIs is so wrong for me, it's not worth my time to review it. Why review something you are 9 times out of 10 going to throw away? Since I've never personally seen AI work well, all I have is youtube videos of people using it successfully. Mostly, these people are developing web apps. Sometimes they are developing in Python. Neither are languages I use a lot
What sort of AI use? (Score:3)
(1) I've used AI as a reference, for example which sorting algorithm is best for this type of data. Then when I get an answer I go read the wiki for that algorithm. It sort of replaced looking up things in an old textbook.
(2) I've used AI to generate boilerplate UI code; it doesn't seem terribly different from GUI design tool generated code. For example, asking for the code that implements a small collection of buttons in some particular UI API.
(3) I ask the AI to generate code implementing the core functionality of the app we are developing. I then spend an enormous amount of time reviewing the code, adding defensive code, create unit tests with edge cases,
(4) I ask the AI to generate code implementing the core functionality of the app we are developing. It's awesome, I complete scrum tasks so quickly I am the star member of the team in my boss's eyes. Sure, the next scrum will include a ton of bugs related to this code, and overall these eat up any time saved during implementation. However, senior management doesn't see the descriptions of the scrum tasks. They just see a summary of tasks completed per scrum by each team. They think we are rock stars and are impressed by our high percentage of AI-generated code. We are so much more efficient than the other teams in their eyes.
(3) gets laid off end of year for being in the bottom 20% of the performance metrics.
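To make use (1) concrete: suppose the AI suggests insertion sort for small or nearly-sorted data (a hypothetical answer; the post above doesn't name one). Part of "then I go read the wiki" is verifying the suggestion actually works before trusting it. A minimal sketch:

```python
def insertion_sort(items):
    """Classic insertion sort: simple, and efficient for small or nearly-sorted lists."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right until key's slot is found.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The same habit applies to uses (3) and (4): the AI's output is a starting point to be checked, not an answer to be trusted.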
The stigma is really just fear (Score:1)
There's a lot of John Henrys out there who are afraid that machines are going to take their jerbs. And yeah, they probably will.
AI use is perceived as being complicit in a scheme that's ultimately going to undermine a lot of people's employability, so it's a bit like the digital equivalent of going out for a cold beer with a scab and telling him "you're just misunderstood!"
The thing is, what's the alternative? Artificially holding back progress like the Amish, so we can maintain a day's full of busywork f
Re: (Score:2)
"AI use is perceived in being complicit in a scheme that's ultimately going to undermine a lot of people's employability..."
No, AI is not intelligent, it is not, and cannot be, "complicit" in anything. Casting it as intelligent and capable of devious human behavior IS part of the conspiracy, though, and that conspiracy is real.
"The thing is, what's the alternative?"
LOL wut? Build fires to scare off the animals at night like we do now? You realize we have what we have WITHOUT AI, right? What's the ALTERN
Re: (Score:2)
No, AI is not intelligent, it is not, and cannot be, "complicit" in anything.
I was specifically referring to people who are using AI tools to help (or in some cases, perform most of the work involved) with their job tasks. The implication is just as the FP in this discussion had already said: If you're proving that AI can do your job, you're proving that you could be replaced by AI. When others see that happening, they'll probably wonder why you're willingly embracing a technology that's likely to put you out of a job.
Your problem is that you've accepted the very premise that AI is progress.
In their early history, automobiles were difficult to start, r
Re: (Score:2)
Your problem is that you've accepted the very premise that AI is progress. The reason people oppose it is because it is not.
Also very well said.
Article does not match my experience (Score:3)
I haven't encountered *any* reluctance to adopt AI tools, other than financial or security concerns.
Sure, if you just take what AI spits out and don't edit it, you're going to get crap. But those who use it as they would any other source of information, incorporating it into their own work, don't meet with any kind of stigma.
Re: (Score:1)
Re: (Score:2)
If you think that an LLM can check your work, you are not competent.
Re: (Score:1)
Re: (Score:2)
Learn how to read. I said that LLMs are not capable of checking your work and that anyone who thought otherwise wasn't competent.
Do you have an LLM check your posts? That would explain a lot...
Effect of corporate culture (Score:2)
Some companies preach AI usage, many don't, and maybe a few preach against it. I'm guessing that corporate attitudes and the resulting culture have a huge influence on how work use of AI is viewed by colleagues and managers. For example, I assume that AI usage at Nvidia is not only not viewed negatively, but that refusing to use AI is instead viewed negatively. Perhaps this perspective is also common at hyperscalers.
Observational consideration... (Score:1)
AI users are the slave class raising their hands. Look upon them appropriately.
Relax... (Score:2)
Re: (Score:2)
The very first stable diffusion release was advertised with an astronaut on a horse on mars, an image that never existed before that time (and has been remade a lot since then, mostly with AI). The strength of AI is to create things that never were done before, as it learns concepts instead of images (texts, music) and can seamlessly combine concepts to create new ones.
Re: (Score:2)
The strength of AI is to create things that never were done before
You are deeply confused.
These are statistical models. They only create things that were done before.
Re: (Score:2)
Please either read some technical paper about the net of your choice, or just try one to create something new. You will be surprised.
Re: (Score:2)
You're highlighting your own ignorance here.
Re: (Score:2)
Thanks for the clear sign that discussing with you is pointless. I guess I should have noticed earlier.
I call BS (Score:2)
Doctors and other 'professionals' have been using Google for ages to find out what they should do; AI is just doing it way better.
Re: (Score:2)
I think that's right; a doctor's job is to apply a wealth of experience to determine the most probable explanation. Doctors do much more than that, but AI is especially well suited to assisting in medicine, and computerized assistants have been successful there for a long, long time.
Doctors don't need to know that their assistant uses AI techniques, though, and they won't care. Nor will their colleagues. Doctors are trained to apply consensus, AI works by computing consensus. There's no black mark.
Re: (Score:2)
For emphasis, AI is a very broad term that applies to a whole lot more than just LLMs.
AI works by computing consensus
Most AI is statistical, but very little of it operates on things like facts or concepts.
Re: (Score:2)
Bullshit.
Indeed (Score:2)
The difference is, I think, between people that use AI with restraint, only as a minor tool, and those that fawn over it and are under the delusion that LLMs are actually intelligent, for example. For the latter, I propose the term "AIND-user" (Artificially Intelligent Naturally Dumb user).
Re: (Score:2)
So the more inquisitive use LLMs to drill down deep into the details. Usually that leads to more questions, and usually, I find, the task balloons in size and complexity... but the LLMs help to pin down all the tiny details and split hairs on ambiguous terms.
Lazy extroverts
Re: (Score:2)
In my experience, not a lot of people can do analysis and use logic.
There are actually numbers from sociology on this: About 10-15% of all people are "independent thinkers", which essentially means they do fact-checking by themselves. And about 20% (including the independent thinkers) can be convinced by rational argument, which essentially means they can fact check when being prompted to do it. No idea whether there is any connection to introverts or not.
I disagree that LLMs become an "intelligence multiplier". LLMs can only find very shallow things with any reasonable deg
Re: (Score:2)
I don't think the idea of intelligence multiplier means the AI generates insights. However as others have said above, they remain useful. I'm certainly getting a lot of value out of GPT, to do joe jobs like reformatting unstructured text into sql code, for instance. I also have found it very good to drill down into fine details on things like networking protocols, allowing me to understand and do things today that I couldn't do a year ago.
Re: (Score:2)
Interesting notes on independent thinkers. Certainly close to my experience.
Thank you. I first stumbled on this when talking with a friend about academic teaching. We had independently found that we always had about 15% of students that "get it" and will fact-check things.
I don't think the idea of intelligence multiplier means the AI generates insights. However as others have said above, they remain useful. I'm certainly getting a lot of value out of GPT, to do joe jobs like reformatting unstructured text into sql code, for instance. I also have found it very good to drill down into fine details on things like networking protocols, allowing me to understand and do things today that I couldn't do a year ago.
Well, in that sense an encyclopedia is an "intelligence multiplier". But when stated that way, it sounds much less spectacular or revolutionary...
LLMs have some uses. "Better search" is certainly an established one, and that was clear from the first few months of availability on. For data-transformation, I am not
Re: (Score:2)
Probably true. I would agree. Anyone with a lot of curiosity can leverage an encyclopedia or wikipedia or AI as we are currently discussing.
I think the key difference is the speed of electronics. I hear the echoes of Marshall McLuhan frequently. This IS the global village. He pointed out that the speed of electronics shrinks both time and space. The Pope is el
Re: (Score:2)
The speed is certainly a factor and comes with severe drawbacks. Thinking needs time. People that can do it have trouble finding that time. And people are getting more and more overloaded because pushing "news" has become so easy and cheap, and they are unable to filter by importance. Historically, only important stuff got reported. Not anymore. And a lot of lies are part of the newsfeed as well, because the cost of getting caught in a public lie has gone down dramatically.
Re: (Score:2)
My take is that we really aren't evolutionarily adapted to electronics. It discombobulates us. Like you're saying, we don't have time to mentally process things before the next big story comes in. Also, cheap and easy.. yes, exactly, a conjecture of mine is that "free" or freemium is one of the huge problems with the internet as a whole. Anything free attracts parasites... there are no exceptions. What you're saying... it's so cheap to generate garbage that
Re: (Score:2)
Indeed. We are living in strange times: information, and information on how to use it, is readily available, but most people cannot make use of it. What we observe here are really fundamental mental limits that apply to most people.
Re: (Score:2)
the LLMs help to pin down all the tiny details and split hairs on ambiguous terms.
Far more likely is that they're producing nonsense and you just haven't noticed. It is absolutely astonishing how much these things will get wrong, and how defensive they can be about the obviously incorrect output. I recently demonstrated this to a colleague of mine by asking about a trivial mathematical fact. It was something like "is ab/c the same as (a/c)b?" The response was hilarious. It replied that the two were not equivalent, but that ab/c was equivalent to b(a/c). It even gave a nonsense justif
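For what it's worth, the two expressions in that question really are identical; a quick check with exact rational arithmetic (my own illustration, not from the post above):

```python
from fractions import Fraction

# ab/c and (a/c)b are algebraically the same thing; use exact
# rational arithmetic so floating-point rounding can't muddy it.
for a, b, c in [(7, 3, 2), (5, 11, 4), (-9, 2, 3)]:
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    assert a * b / c == (a / c) * b
print("equivalent")
```

An LLM that denies this while "justifying" an equally reordered form is exactly the kind of confident nonsense being described.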
Re: (Score:2)
My experience is that if you give a lot of context with your question, you can get precise
It's not a secret...... (Score:2)
It's a skill issue (Score:2)
Since most people paint pictures poorly, using a paint brush damages your professional reputation.
AI coding requires good coders (Score:1)