
AI Use Damages Professional Reputation, Study Suggests (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. "Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.
"Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from Al on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving Al help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."


Comments Filter:
  • Well... (Score:5, Interesting)

    by Moryath ( 553296 ) on Friday May 09, 2025 @11:31PM (#65365749)
    If you're not doing the work, you're proving you can be automated out of your job...
    • by Brain-Fu ( 1274756 ) on Friday May 09, 2025 @11:49PM (#65365783) Homepage Journal

      Who says that you are not doing the work? In my experience the AI tools simply can't do all the work for you. Not even close. They just save you time here and there by having quick answers to questions, quick examples, or the ability to generate boilerplate for you.

      Though my experience has been limited to code-generation. I can't speak for other uses. But the bigger the code-generation task, the more the AI gets wrong. You have to review everything it writes as if it was written by an entry-level developer, and for a lot of work it's just easier to do it yourself like you always have.

      So, these judgments are unfounded. Of course, there may ALSO be people who try to use AI to overcome their own lack of competence, but that will show glaringly in the lack-of-quality of their work.

      I will also add that this door swings both ways. Refuse to utilize new technology and judgments start coming your way too: that you're a Luddite, or an old dog that can't learn new tricks. And those judgments can cost you job, promotion, and raise opportunities.

      • by Tony Isaac ( 1301187 ) on Saturday May 10, 2025 @12:05AM (#65365819) Homepage

        I can confirm that the AI limitations you cite apply to uses outside of code generation. For example, I've used it a number of times to help me generate slide decks for talks I was giving. While it was helpful, such as by coming up with useful outlines, I still had to do significant editing and error correction.

      • by Z00L00K ( 682162 )

        I see a difference between writing a document yourself (and understanding what you wrote) and merely using an AI-generated illustration in that document.

        If I use AI to generate an illustration and disclose that it's AI generated, but the text is purely my creation, then I don't see any issue.

      • This is about right. Elon attempting to hand huge roles in the government over to AI shows what he's really worth.

      • Re: (Score:1, Troll)

        by dfghjk ( 711126 )

        "Who says that you are not doing the work?"
        The entire premise is you're not doing the work.

        "In my experience the AI tools simply can't do all the work for you."
        Unless you're doing ALL the work, you're not doing the work. This is not the claim you need it to be.

        "They just save you time here and there by having quick answers to questions"
        Answers of questionable value, and you're not using AI for "doing work" then.

        "But the bigger the code-generation task, the more the AI gets wrong."
        Which would be why your professional reputation suffers when you use it.

        • by Anonymous Coward

          >Unless you're doing ALL the work, you're not doing the work. This is not the claim you need it to be.

          Ok shit for brains. Hope you flipped every electron yourself when making this post. Or you didn't do it yourself.

          This shit take is so incredibly stupid I'm actually in awe. By this take anything humans made using tools "wasn't made by humans" since the human didn't do every single little thing.

          >Which would be why your professional reputation suffers when you use it.

          Ahh I see the stupidity goes even de

      • AI is like a power tool. I'm sure that people who did stuff by hand scoffed at someone whipping out a Dremel for fine work.

        However, it is easy to be a hack with power tools. Someone can do some hacky stuff on their 1911 and call themselves a gunsmith. It isn't the Dremel tool that is at fault here.

        AI is very useful. However, it is how you use it that matters. You can be a hack with it, or you can use it to produce high quality deliverables.

        For example, when it comes to coding, one can tell a chatbot to creat

    • If you're not doing the work, you're proving you can be automated out of your job...

      And if you’re confident in today’s toddler-grade AI to be a valid replacement for a grown-ass experienced adult worker, you’re proving how quickly “management” can be replaced with nothing but scripts.

      When you really think about it, those doing the actual physical work, are likely the LAST to be replaced. If your “work” is nothing but making business decisions, a fucking algorithm could probably replace you. Yeah. I’m talking about you, Overpriced Executive

      • by Samare ( 2779329 )

        When you really think about it, those doing the actual physical work, are likely the LAST to be replaced.

        We've already got robots and machines for that, especially if it's repetitive work.

      • by dfghjk ( 711126 )

        "And if you’re confident in today’s toddler-grade AI to be a valid replacement for a grown-ass experienced adult worker..."

        He didn't say that though.

        "...you’re proving how quickly “management” can be replaced with nothing but scripts."

        That's been happening forever. And management isn't being replaced; there needs to be someone to take the money.

        "When you really think about it, those doing the actual physical work, are likely the LAST to be replaced."

        No, what will happen is t

    • Apparently this doesn't apply to slashdot moderators....

    • It's the same old farts saying the same old shit as before.

      "The typewriter is killing penmanship, letters and words now have no soul and feeling to them."

      "Word processors are making people dumber, at least with a typewriter you had to think about what you wanted to say because you can't just delete whole sentences or paragraphs."

      "Digital artwork is is killing REAL art like painting since you can just undo brush strokes."

      "Digital photography is soulless and takes away from the REAL skill of setting every lit

  • Calling it "AI" when it isn't is even more damaging.

    • Re: (Score:3, Insightful)

      by Tony Isaac ( 1301187 )

      In my experience, it's just a few slashdot posters who think this is an issue. Pointing out that AI isn't actually "intelligent" (artificially or otherwise) is like pointing out that characters in a movie aren't real people. While both are true, that truth doesn't stop us from being entertained by movie characters, or from being assisted by AI. There is no *useful* difference between "true" artificial intelligence and what we have today.

      • Re: (Score:2, Insightful)

        by evanh ( 627108 )

        "There is no *useful* difference between "true" artificial intelligence, and what we have today."

        Huh? You might want to clarify that statement!!!! The sci-fi view of true AI is nothing like what we have.

        • Re:Even worse (Score:5, Insightful)

          by Tony Isaac ( 1301187 ) on Saturday May 10, 2025 @08:43AM (#65366289) Homepage

          Which sci-fi AI are you referring to?

          Star Trek's "computer" is able to answer all kinds of random questions. ChatGPT can also already answer most of the types of questions asked of the "computer" on the show. Is there some specific feature of sci-fi AI that you feel is missing today, other than qualitative measurements like accuracy and breadth of topics?

          The OP wasn't referencing sci-fi, but "true" AI, which would presumably be some kind of actual intelligence, but artificial. What we have instead is a fancy pattern-matching algorithm that can answer many kinds of questions in English, and do all kinds of interesting stuff. My point is that it doesn't matter that it's a fancy pattern-matching algorithm; the things it can do are not meaningfully different from what we would expect "true" AI to do. The fact that the underlying technology or methodology isn't actually "intelligent" isn't materially significant, because it simulates intelligence so well.

      • "There is no *useful* difference between "true" artificial intelligence, and what we have today"

        Wat

        What we have now is not intelligent; AGI would actually be intelligent. What we have now cannot be trusted to give us useful answers. AGI cannot be trusted not to eliminate us for not being useful. They are wholly different things.

        • Re: Even worse (Score:5, Insightful)

          by Tony Isaac ( 1301187 ) on Saturday May 10, 2025 @08:49AM (#65366301) Homepage

          Would you say that humans can be trusted to give useful answers? Consider the group of humans now running our government, for example. I'd say that the inability to be trusted is common to both AI and to human intelligence.

          Agreed, what we have now is not intelligent, it simply gives the illusion of being intelligent. But the illusion is so good that it does the same kinds of useful things that a "real" intelligence would be able to do.

          • by narcc ( 412956 )

            Wow, I didn't think it was possible, but you've found a way to put even less thought into your nonsense ramblings.

            The kinds of mistakes humans make and the kinds of "mistakes" LLMs make are nothing alike. To equate the two is as absurd as one of your posts.

            • Hmmm maybe I'm an AI, if my posts are so absurd!

              It's true, human mistakes are different in some respects. But both humans and AI "hallucinate" (i.e., make up answers), and both humans and AI cannot be *trusted* to give useful answers. Both *usually* give useful answers, but can they be trusted? No.

              I never said that AI and humans are exactly alike, but there are certainly some resemblances. One thing that's different, is that when you bring up a valid point contradicting what AI says, it doesn't start attack

  • by rsilvergun ( 571051 ) on Friday May 09, 2025 @11:57PM (#65365795)
    To be fair, I don't do the kind of mathematical programming that I think an LLM works well for. There's an incredibly detailed Ars Technica article about somebody using AI effectively for programming tasks, but from what I can tell he was mostly using it to write common, albeit complex, mathematical equations in code form so he didn't have to bother doing it himself.

    That works because the math is well understood and literally hundreds if not thousands of years old in most cases.

    The problem I had with using AI, when I gave it a whirl, is that it just picks whatever the most popular answer is, from pretty much all time.

    The problem with that is, if you're trying, say, to get it to tell you how to write a chunk of code for a specific application, and that application has declined in popularity as programming languages tend to do, then it's going to give you an answer that is way, way out of date, because the most popular answer is going to be for older technology from when the programming language was at its peak. And that can sometimes go back a decade or more.

    I suspect if I was writing code in more cutting-edge stuff, maybe it would work better, I don't know; the last couple of projects I did involved pretty run-of-the-mill programming languages.
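
    A made-up but typical illustration of that failure mode, in Python: pandas removed DataFrame.append in 2.0, but the older idiom still dominates years of archived answers.

    import pandas as pd

    df = pd.DataFrame({"x": [1, 2]})
    row = pd.DataFrame({"x": [3]})

    # What a model leaning on older, more "popular" answers tends to suggest:
    # df = df.append(row)  # deprecated in pandas 1.4, removed in 2.0

    # Current API:
    df = pd.concat([df, row], ignore_index=True)
    print(df)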
    • by evanh ( 627108 )

      Any maths solution is exactly the same as what you discovered. The LLM just gives you common answers, like a search engine would.

      It all comes down to what it has scraped, legally or not.

    • by linuxguy ( 98493 ) on Saturday May 10, 2025 @12:32AM (#65365855) Homepage

      > Honestly so far I find them kind of useless

      Like most things, it takes a while to get good at using AI based development tools. Garbage in garbage out applies here as well. You have to provide good context and frame your prompt in a way that another human would understand.

      Many times I already have an idea of what I want. What libraries I want to use and the very rough architecture of the solution. I provide this as input and ask the AI to help me refine it and create a final and a comprehensive implementation plan. I then review the phased implementation plan and ask AI to write the software for each phase. I check the output of each phase before moving to the next.

      Good software engineering skills are still required. AI will make errors and get off track. Your job is to know what it is doing and put it back on track. I believe a good engineer can 5x their output with the aid of AI development tools. But, at the very beginning, it may be less than 1x. You do not have the option to quit in the learning phase. Or you will eventually be replaced by a 5x engineer.
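
      A minimal sketch of that loop, assuming the openai Python client; the model name, prompts, and project details are placeholders, not a definitive setup:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      context = (
          "Goal: a CLI tool that finds duplicate image files.\n"
          "Libraries: Pillow, imagehash. Rough architecture: scan -> hash -> group -> report."
      )

      # Step 1: ask for a phased implementation plan and review it by hand.
      plan = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any capable model
          messages=[{"role": "user",
                     "content": context + "\nDraft a phased implementation plan."}],
      ).choices[0].message.content
      print(plan)  # human review happens here, before any code is requested

      Each later phase gets the reviewed plan plus an "implement phase N" prompt, with the output checked before moving on.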

      • I'm not even asking for production code, just hello-world-style code for specific scenarios, so I can see how certain things work and go off and implement them myself as needed.

        I do agree that eventually yeah the AI is going to replace the majority of programmers and white collar workers in general. And I don't think it's going to take a long time. But I do think we're about 3 to 5 years away from the worst of it.

        I have no idea what's going to happen then because we built our economy around hig
        • by narcc ( 412956 )

          I'm not even asking for production code, just hello-world-style code for specific scenarios, so I can see how certain things work and go off and implement them myself as needed.

          How is this better than what you'd turn up with a quick search? Odds are good that's what you'll get from the AI anyway, just mangled in unpredictable ways.

          I do agree that eventually yeah the AI is going to replace the majority of programmers and white collar workers in general.

          Not with anything even remotely resembling the tech we have today.

          And I don't think it's going to take a long time. But I do think we're about 3 to 5 years away from the worst of it.

          "Just 10 years away" was what people used to say. The old joke was "Just 10 years away since 1960!" In early 2023, that number turned into "6 months" then "this time next year". While I'm not surprised to see it continue to increase to "3 to 5 years", I'm a little surprised that anyone

    • by godrik ( 1287354 )

      I think AIs are somewhat useful. They help with quite basic tasks. I actually don't think they are helpful at the cutting edge.

      In programming, I often equate them to the level of a smart freshman with Stack Overflow. If the problem is simple enough or the solution is out there already, it's going to give you the right answer. And in programming, most of the code we write is easy. There are harder bits, but a lot of it is easy.

      So you can essentially use it as an always-here basic assistant. So all the trivial utility function
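
      The kind of trivial, always-needed utility meant here might look like this (an illustrative example, not from the parent post):

      def chunks(seq, n):
          """Yield successive n-sized chunks from seq."""
          for i in range(0, len(seq), n):
              yield seq[i:i + n]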

  • Wishful thinking (Score:4, Informative)

    by linuxguy ( 98493 ) on Friday May 09, 2025 @11:59PM (#65365805) Homepage

    I work at a small 10-person company. Here, regardless of how good you are, if you do not use AI, your output will be far, far less than that of an engineer who does. All checked-in code goes through layers of review process and QA. We recently went through the process of hiring another software engineer, and any candidate who did not have significant experience developing with the aid of AI tools was eliminated early.

    Any software engineering team that is not using AI will not be competitive. Yes, you still have to know your stuff. But days of manually typing in every single line of code are numbered. Anybody who thinks otherwise and refuses to level up on AI based development tools will eventually lose their job.

    You can downvote me all you want. Just know that you are sticking your head in the sand.

    • Re: (Score:3, Interesting)

      by evanh ( 627108 )

      As long as you also understand that copyright is nullified. You are trading your source code, and your contracts, for others to freely develop from. Public LLMs are kind of an open shared repository in that respect.

      • Zero of what you said makes any sense. There isn't even a kernel of truth in there.

        Nothing changes wrt copyright, you or the company you work for retain all the same rights with or without AI tools. Open source AI toolchains ... make absolutely no difference.

        Where did you get any of that information?

        • by narcc ( 412956 )

          You haven't been paying attention. [reuters.com]

          The courts have decided on more than one occasion, and the copyright office agrees, that copyright requires human authorship.

          We're probably still a long way from the issue being 'settled', but at the moment it looks like AI generated nonsense isn't afforded copyright protection.

    • by dfghjk ( 711126 )

      Your company will fail, and deservedly so.

      "Any software engineering team that is not using AI will not be competitive."

      Depending on the definition of "competitive".

      "But days of manually typing in every single line of code are numbered. Anybody who thinks otherwise and refuses to level up on AI based development tools will eventually lose their job."

      The intention is for every engineer and programmer to "lose their job".

      "You can downvote me all you want. Just know that you are sticking your head in the sand."

    • Anyone who uses LLM/AI to write their code is taking a huge gamble with their company. The first AI copyright case to be decided was decided against the AI company. If this pattern follows (and this is the gamble) in the court system, anyone who has used AI code is potentially opening themselves up to copyright infringement lawsuits. All it takes is one disgruntled employee (or one employee offered a substantial reward) to snitch on the company to end the company. If the tide in court turns pro-AI, which wo

    • by ras ( 84108 )

      I see these sorts of comments repeatedly, yet they are so far from my experience it's like they are from a different world. The code generated by AIs is so wrong for me, it's not worth my time to review it. Why review something you are 9 times out of 10 going to throw away? Since I've never personally seen AI work well, all I have is youtube videos of people using it successfully. Mostly, those people are developing web apps. Sometimes they are developing in Python. Neither are languages I use a lot

  • by drnb ( 2434720 ) on Saturday May 10, 2025 @12:05AM (#65365815)
    What sort of AI use? For example:
    (1) I've used AI as a reference, for example which sorting algorithm is best for this type of data. Then when I get an answer I go read the wiki for that algorithm. It sort of replaced looking up things in an old textbook.
    (2) I've used AI to generate boilerplate UI code; it doesn't seem terribly different from GUI design tool generated code. For example, asking for code that implements a small collection of buttons in some particular UI API (see the sketch below).
    (3) I ask the AI to generate code implementing the core functionality of the app we are developing. I then spend an enormous amount of time reviewing the code, adding defensive code, create unit tests with edge cases, ... I'm not sure AI saved any time.
    (4) I ask the AI to generate code implementing the core functionality of the app we are developing. It's awesome, I complete scrum tasks so quickly that I am the star member of the team in my boss's eyes. Sure, the next sprint will include a ton of bugs related to this code, and overall these eat up any time saved during implementation. However, senior management doesn't see the descriptions of the scrum tasks. They just see a summary of tasks completed per sprint by each team. They think we are rock stars and are impressed by our high percentage of AI generated code. We are so much more efficient than the other teams in their eyes.

    (3) gets laid off end of year for being in the bottom 20% of the performance metrics.
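
    For (2), a minimal sketch of the kind of button boilerplate meant, using tkinter as the "particular UI API" (the labels and handler are invented for illustration):

    import tkinter as tk

    def on_click(name):
        print(f"{name} clicked")

    root = tk.Tk()
    root.title("Boilerplate buttons")
    # A small collection of buttons, the generated-boilerplate shape described in (2).
    for name in ("Open", "Save", "Close"):
        tk.Button(root, text=name,
                  command=lambda n=name: on_click(n)).pack(side=tk.LEFT, padx=4)
    root.mainloop()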
  • There's a lot of John Henrys out there who are afraid that machines are going to take their jerbs. And yeah, they probably will.

    AI use is perceived as being complicit in a scheme that's ultimately going to undermine a lot of people's employability, so it's a bit like the digital equivalent of going out for a cold beer with a scab and telling him "you're just misunderstood!"

    The thing is, what's the alternative? Artificially holding back progress like the Amish, so we can maintain a day's full of busywork f

    • by dfghjk ( 711126 )

      "AI use is perceived in being complicit in a scheme that's ultimately going to undermine a lot of people's employability..."
      No, AI is not intelligent, it is not, and cannot be, "complicit" in anything. Casting it as intelligent and capable of devious human behavior IS part of the conspiracy, though, and that conspiracy is real.

      "The thing is, what's the alternative?"
      LOL wut? Build fires to scare off the animals at night like we do now? You realize we have what we have WITHOUT AI, right? What's the ALTERN

      • No, AI is not intelligent, it is not, and cannot be, "complicit" in anything.

        I was specifically referring to people who are using AI tools to help (or in some cases, perform most of the work involved) with their job tasks. The implication is just as the FP in this discussion had already said: If you're proving that AI can do your job, you're proving that you could be replaced by AI. When others see that happening, they'll probably wonder why you're willingly embracing a technology that's likely to put you out of a job.

        Your problem is that you've accepted the very premise that AI is progress.

        In their early history, automobiles were difficult to start, r

      • by narcc ( 412956 )

        Your problem is that you've accepted the very premise that AI is progress. The reason people oppose it is because it is not.

        Also very well said.

  • by Tony Isaac ( 1301187 ) on Saturday May 10, 2025 @12:11AM (#65365835) Homepage

    I haven't encountered *any* reluctance to adopt AI tools, other than financial or security concerns.

    Sure, if you just take what AI spits out and don't edit it, you're going to get crap. But those who use it as they would any other source of information, incorporating it into their own work, don't meet with any kind of stigma.

    • yep - just the opposite in tech. If you aren't using AI for ideation and to check your work, you are not competent.
      • by narcc ( 412956 )

        If you think that an LLM can check your work, you are not competent.

        • So you don't double-check your work:
          • For grammar and typos?
          • For ideation? It is rare that, in a list of 5-8 things, you won't miss something entirely.
          • For accuracy against established facts and literature?
          • For code review?
          • For a productivity boost? (Google says 25% of their code base has been refactored in the last year by AI.)
          • For commenting on code you don't follow or understand?
          • For researching alternative ideas?
          • by narcc ( 412956 )

            Learn how to read. I said that LLMs are not capable of checking your work and that anyone who thought otherwise wasn't competent.

            Do you have an LLM check your posts? That would explain a lot...

  • Some companies preach AI usage, many don't, and maybe a few preach against it. I'm guessing that corporate attitudes and the resulting culture have a huge influence on how work use of AI is viewed by colleagues and managers. For example, I assume that AI usage at Nvidia is not only not viewed negatively, but that refusing to use AI is instead viewed negatively. Perhaps this perspective is also common at hyperscalers.

  • AI users are the slave class raising their hands. Look upon them appropriately.

  • I openly use AI. Of course I do my due diligence and check everything out. If people think less of me because I do that, they probably have issues. Not only with me.
  • Doctors and other 'professionals' have been using Google for ages to find out what they should do; AI is just doing it way better.

    • by dfghjk ( 711126 )

      I think that's right; a doctor's job is to apply a wealth of experience to determine the most probable explanation. Doctors do much more than that, but AI is especially well suited to assisting in medicine, and computerized assistants have been successful there for a long, long time.

      Doctors don't need to know that their assistant uses AI techniques, though, and they won't care. Nor will their colleagues. Doctors are trained to apply consensus, AI works by computing consensus. There's no black mark.

      • by narcc ( 412956 )

        For emphasis, AI is a very broad term that applies to a whole lot more than just LLMs.

        AI works by computing consensus

        Most AI is statistical, but very little of it operates on things like facts or concepts.

  • The difference, I think, is between people who use AI with restraint, only as a minor tool, and those who fawn over it and are under the delusion that LLMs are actually intelligent, for example. For the latter, I propose the term "AIND-user" (Artificially Intelligent, Naturally Dumb user).

    • Here's a guess, an oversimplification, but perhaps with some grain of truth. Some of us are more inquisitive and curious. Others are lazy and just want stuff. I'm going to say that's a general difference between introverts and extroverts.

      So the more inquisitive use LLMs to drill down deep into the details. Usually that leads to more questions, and usually, I find, the task balloons in size and complexity... but the LLMs help to pin down all the tiny details and split hairs on ambiguous terms.

      Lazy extroverts
      • by gweihir ( 88907 )

        In my experience, not a lot of people can do analysis and use logic.

        There are actually numbers from sociology on this: About 10-15% of all people are "independent thinkers", which essentially means they do fact-checking by themselves. And about 20% (including the independent thinkers) can be convinced by rational argument, which essentially means they can fact check when being prompted to do it. No idea whether there is any connection to introverts or not.

        I disagree that LLMs become an "intelligence multiplier". LLMs can only find very shallow things with any reasonable deg

        • Interesting notes on independent thinkers. Certainly close to my experience.

          I don't think the idea of intelligence multiplier means the AI generates insights. However as others have said above, they remain useful. I'm certainly getting a lot of value out of GPT, to do joe jobs like reformatting unstructured text into sql code, for instance. I also have found it very good to drill down into fine details on things like networking protocols, allowing me to understand and do things today that I couldn't do a year ago.
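
          A hand-rolled version of that reformatting joe job, for concreteness (the table and column names are invented; real code should use parameterized queries rather than string interpolation):

          # Unstructured input of the kind handed to the LLM:
          note = "Alice, 34, Berlin; Bob, 28, Lisbon"

          rows = [tuple(field.strip() for field in person.split(","))
                  for person in note.split(";")]
          for name, age, city in rows:
              print(f"INSERT INTO people (name, age, city) VALUES ('{name}', {age}, '{city}');")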
          • by gweihir ( 88907 )

            Interesting notes on independent thinkers. Certainly close to my experience.

            Thank you. I first stumbled on this when talking with a friend about academic teaching. We had independently found that we always had about 15% of students that "get it" and will fact-check things.

            I don't think the idea of intelligence multiplier means the AI generates insights. However as others have said above, they remain useful. I'm certainly getting a lot of value out of GPT, to do joe jobs like reformatting unstructured text into sql code, for instance. I also have found it very good to drill down into fine details on things like networking protocols, allowing me to understand and do things today that I couldn't do a year ago.

            Well, in that sense an encyclopedia is an "intelligence multiplier". But when stated that way, it sounds much less spectacular or revolutionary...

            LLMs have some uses. "Better search" is certainly an established one, and that was clear from the first few months of availability. For data-transformation, I am not

            • >Well, in that sense an encyclopedia is an "intelligence multiplier". But when stated that way, it sounds much less spectacular or revolutionary...

              Probably true. I would agree. Anyone with a lot of curiosity can leverage an encyclopedia or wikipedia or AI as we are currently discussing.

              I think the key difference is the speed of electronics. I hear the echoes of Marshall McLuhan frequently. This IS the global village. He pointed out that the speed of electronics shrinks both time and space. The Pope is el
              • by gweihir ( 88907 )

                That speed is certainly a factor and comes with severe drawbacks. Thinking needs time. People that can do it have trouble finding that time. And people start to get overloaded more and more, because pushing "news" has become so easy and cheap and they are unable to filter by importance. Historically, only important stuff got reported. Not anymore. And a lot of lies are part of the newsfeed as well, because the cost of getting caught in a public lie has gone down dramatically as well.

                • ...you get it ...
                  My take is that we really aren't evolutionarily adapted to electronics. It discombobulates us. Like you're saying, we don't have time to mentally process things before the next big story comes in. Also, cheap and easy.. yes, exactly, a conjecture of mine is that "free" or freemium is one of the huge problems with the internet as a whole. Anything free attracts parasites... there are no exceptions. What you're saying... it's so cheap to generate garbage that .. everyone does it now, almost
                  • by gweihir ( 88907 )

                    Indeed. We are living in strange times: information, and information on how to use it, is readily available, but most people cannot make use of it. What we observe here are really fundamental mental limits that apply to most people.

      • by narcc ( 412956 )

        the LLMs help to pin down all the tiny details and split hairs on ambiguous terms.

        Far more likely is that they're producing nonsense and you just haven't noticed. It is absolutely astonishing how much these things will get wrong, and how defensive they can be about the obviously incorrect output. I recently demonstrated this to a colleague of mine by asking about a trivial mathematical fact. It was something like "is ab/c the same as (a/c)b?" The response was hilarious. It replied that the two were not equivalent, but that ab/c was equivalent to b(a/c). It even gave a nonsense justif
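
        For the record, the three forms are identical whenever c is nonzero; a quick exact-arithmetic check in Python:

        from fractions import Fraction

        a, b, c = Fraction(3), Fraction(5), Fraction(7)
        assert a * b / c == (a / c) * b == b * (a / c)  # all three forms agree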

        • ...let's not come to blows over this...
          My experience is that if you give a lot of context with your question, you can get precise ... useful.... answers. Like an answer from a person, though, you always have to double check with other sources or common sense. There is no doubt I've sometimes gotten complete contradictory nonsense. But by using a combination of searches, and AI prompts, sometimes using other LLMs, to corroborate the outputs, I'm not going to say it is accurate, but after I vet the information
  • It's not "secretly" damaging your reputation. If you produce obvious AI generated content for anything other than useless corporate BS (i.e., DO fill out your self eval with AI, do NOT submit engineering proposals made with AI) you are going to be called out by me and plenty of others.
  • Since most people paint pictures poorly, using a paint brush damages your professional reputation.

  • My company is actively looking for coders who know how to design and direct AI to write quality code faster. AI can't do those things for you, and a project manager can't yet do everything with AI themselves. If you don't have those traits, you'd better learn them.
