'Meet The People Who Dare to Say No to AI' (msn.com) 112

Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia who says "she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce..."

"As the tech industry and corporate America go all in on artificial intelligence, some people are holding back." Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators...

Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.

The Post includes several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than Google — and disables AI on every app he uses. He was one of several tech workers who spoke anonymously partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite."

But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors. "That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." And he also has to correct errors made by junior engineers who've been encouraged to use AI coding tools.

"Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."
  • by bubblyceiling ( 7940768 ) on Saturday October 25, 2025 @03:50PM (#65750212)
    The only people gung-ho about AI are the PE vultures who invested in it. They are desperately trying to make sure the billions they put in don't just go poof.

    I don't know a single person who uses AI to do anything useful. One guy uses it for dumb work like reformatting CSVs, but that work can be done by anyone
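    For what it's worth, that kind of CSV reformatting really is a few lines of stock Python, no AI required. A minimal sketch (the column names and the reordering are made up for illustration):

```python
import csv
import io

def reformat(csv_text, columns):
    """Rewrite a CSV, keeping only the given columns, in the given order."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    # extrasaction="ignore" silently drops columns not in the requested list.
    writer = csv.DictWriter(out, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    for row in reader:
        writer.writerow(row)
    return out.getvalue()

src = "name,age,city\nAda,36,London\nLinus,55,Portland\n"
print(reformat(src, ["city", "name"]))
# city,name
# London,Ada
# Portland,Linus
```

    The stdlib `csv` module handles quoting and escaping, which is the only part of the job that ever trips people up.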
    • by machineghost ( 622031 ) on Saturday October 25, 2025 @04:16PM (#65750258)

      I'm a programmer who started out hesitant about AI, and at first I thought all that it could do was auto-complete better.

      Then I tried Claude Code, and it really is like having your own personal junior dev assisting your every need. Like a junior, it makes mistakes, but keeping the *massive* amount of good code it creates and fixing what's left is so much faster than writing it all from scratch yourself.

      If you're a developer in 2025 and you're not using AI, you either have very specific concerns (ie. you can't let *anyone* else, not even an AI, see your code) ... or you're a few steps removed from being a Luddite.

      • by sjames ( 1099 ) on Saturday October 25, 2025 @05:44PM (#65750404) Homepage Journal

        How will it stack up against an actual junior when they start at least trying to break even on the cost? Those megawatts aren't free.

      • Or you are working on a code base that isn't amenable to LLM assistance.

        I don't know what sort of development you are doing that an LLM can create "massive" amounts of code that is anywhere near correct, but I wish you luck with it. I prefer to write my own race conditions and lock ordering bugs. I don't need something else to write them for me.

        • Have you tried Claude code? And what obscure domain/language are you using that you think AI won't help with?

          • by ezdiy ( 2717051 )

            Same thing with low-resource languages. The problem is: the more obscure the domain or language, the smaller the actual training corpus, and the more nonsense output from the robot. E.g., ask it to write a reasonably complex Win32 and POSIX command-line tool for the same problem, with the only differences being the semantic differences between the systems. Overall it will hallucinate more nonsense for Win32.

            The robot is very good at what it knows - especially at the center of distribution (not too much extrapolation). And it kno

            • by Luckyo ( 1726890 )

              Just like a junior coder.

              • by serviscope_minor ( 664417 ) on Sunday October 26, 2025 @05:26AM (#65751090) Journal

                No, because juniors learn.

                Also, many juniors will admit when they don't actually know something (this depends on the junior and the manager). AI never learns and will spew bullshit with a veneer of utter confidence.

                • by Luckyo ( 1726890 )

                  Senior dev who needs some mundane work done for an ongoing project doesn't give a single fuck about whoever gets hired for the project being the best learner or the worst learner in the world.

                  He cares about junior dev performing the task he's hired for. That's the beginning and the end of caring. The rest is "why won't you think of long term future of strangers, and put it on a pedestal above performing your actual work".

                  People like that don't get to be senior devs for a very simple reason. They cannot perf

                • by Luckyo ( 1726890 )

                  To add to my previous statement, as I suspect one omission I left out will be immediately screeched at as a norm considering your posting history:

                  People like this can be subject matter experts. What they cannot be is actual senior developers whose duty is to lead a developer team. When you hire a junior dev, you're not a teacher. You're an employer and a leader.

                  And what I'm referring to is that mindsets of those two are completely different. This is in fact the very difference that leads to the meme of "tho

              • This guy gets it.

          • I've been doing lots of work in Zig and LLMs are pretty notably deficient working with the language. Claude is known for just making up standard library functions and relying on them.

            If that's the type of junior I'm given, I'd give him back. He's a time suck not an asset.

        • I started using Claude (via Amazon Q) to maintain Gentoo ebuilds. It seems to have all of the official documentation on this and a lot more. I would say ebuilds are fairly obscure but Claude knows about them.

        • by Anonymous Coward

          There are indeed such complicated problems. But AI is still helpful, because if you get stuck you can get some help with debugging. AI often claims wrong causes, but if you have already debugged three wrong root causes and still haven't figured it out, having an AI analyze two more (possibly wrong) candidates is still better than being completely stuck.

          It's no "all or nothing", but you look at what the AI claims and why, how it did come to

          • "if you already debugged three wrong root causes for the problem that were not the cause of the problem and still did not figure it out,"

            Are you not using a debugger? It's exceedingly rare for me to identify the wrong root cause of a bug. LLMs are FAR more likely than I to identify a cause that sits somewhere higher on the stack.

      • by SoftwareArtist ( 1472499 ) on Saturday October 25, 2025 @06:00PM (#65750428)

        I accept there could be fields where that's useful. Not for the sort of work I do. In my field, doing even basic work requires specialized knowledge, high level problem solving, and strong design skills. When you take on a junior developer, you have no expectation it will make you more productive right away. Initially it will probably make you less productive. You're doing it to mentor them, so they can grow into a senior developer.

        So I can have my own permanent junior developer who isn't learning from working with me and will remain a junior developer no matter how long I mentor them? That doesn't sound useful.

        • Here's an example: I've been doing some database work lately, and I'll say "Claude go figure out what migration caused X weirdness in my DB": it does so, providing me with a few sentences of readable English and a file path. I say "Claude, I have these existing migrations that alter tables A & B a certain way, can you create migrations that make similar changes to tables C & D?" It just does so.

          You get the idea: I give Claude the basic gist of what I want, and it tells me what I need to know, or wri
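          The "make similar changes to tables C & D" request is mechanical in exactly that sense: the same schema change repeated across tables. A minimal sketch with SQLite (used here only because it ships with Python; the table and column names are invented for illustration):

```python
import sqlite3

# Hypothetical migration: the change already applied to tables A & B,
# repeated for two more tables (names invented for illustration).
MIGRATION = """
ALTER TABLE orders   ADD COLUMN created_at TEXT;
ALTER TABLE invoices ADD COLUMN created_at TEXT;
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("CREATE TABLE invoices (id INTEGER)")
conn.executescript(MIGRATION)

# Confirm the new column landed.
cols = [row[1] for row in conn.execute("PRAGMA table_info(invoices)")]
print(cols)  # ['id', 'created_at']
```

          The same pattern applies to whatever migration framework is actually in use; the point is that the transformation is a repeatable template, not open-ended design work.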

          • To reference my own signature, which is meant entirely sarcastically: why do you assume I haven't researched it? Why do you assume my opinion on it was uneducated? Perhaps you formed your opinion on insufficient information?

            But as for your specific example, that's completely unlike anything I do. Database migrations are the sort of mechanical, simply defined task that AI can handle. My work involves things that are much more open ended, and often involve inventing new methods or even new algorithms to d

          • So basically, you use Claude like other people use migration scripts. And you expect someone gets paid $70k+ a year doing migration scripts. If someone does, they've landed one helluva deal, lemme tellya.

            Seriously, if this speeds up how fast you complete tasks, you've got no business holding the position you do in the first place. YOU are the junior programmer in this scenario.

      • Not liking AI doesn't automatically make you a Luddite.
        Personally I'm not looking forward to the online experience when all these AI tools start training themselves on their own output, creating an echo chamber of incorrect, mutually "peer reviewed" results. Arguably, we're probably already there right now.

        AI tools are garbage

      • by dskoll ( 99328 )

        Or, you actually enjoy programming and get fulfillment from it and don't want or need a "junior assistant" interfering with your workflow.

      • Or you know what you're doing. Amount of code produced is one of the worst metrics possible, and anyone using that is a bad programmer.

      • If you're a developer in 2025 and you're not using AI, you either have very specific concerns (ie. you can't let *anyone* else, not even an AI, see your code) ... or you're a few steps removed from being a Luddite.

        Or, option 3: you're working on something actually interesting. My job isn't to churn out, as you said, "massive" amounts of code; it's to produce some code (ideally good code) that solves particular and often not very common problems. I've tried AI quite a bit, and will continue to use it for some

      • If you're a developer in 2025 and you're not using AI, you either have very specific concerns ... or you're a few steps removed from being a Luddite.

        Or you have concerns about the ethical, legal and environmental issues surrounding the use of AI.

    • by Kisai ( 213879 )

      This.

      I have not run into a single person who holds the AI stuff in a positive light. There are some small edge cases where AI is "better than nothing" like transcription (ASR), translation, and TTS, but these are things tourists in foreign countries are more likely to need to get from point A to B without being scammed.

      The stuff we keep being told is that AI will write us novels, have AIs talk to other AIs to book our vacations, buy or sell us property, drive our cars, and fire weapons. I want none of th

      • by Luckyo ( 1726890 )

        Watching people rapidly retreat deeper and deeper into their shrinking luddite bubbles reminds me of the "no one I know likes email, paper mail is obviously superior" argument every office had in the 1990s.

        I still remember one guy who was just dead set on paper for everything. His bubble shrank so much that he got left out of almost all communications as most things moved to email and only very important and special things were discussed via paper mail and notes; he eventually got laid off because he jus

        • by Anonymous Coward

          The type of people who enjoy using AI and find a lot of value out of it are, in my experience, the same people who were quick to hit up stack overflow and friends, and copy/paste whatever examples they could find instead of solving anything themselves. The same people who struggle to solve problems with any complexity and default to being fed an answer -- regardless of the quality or accuracy of said answer. Yes, those types will absolutely love the current swath of AI slop generators, while somehow not rea

          • by Luckyo ( 1726890 )

            And you opened with a lie. No one said anything about "enjoying" AI. In fact, I can't think of anyone who "enjoyed" email either. I'm sure there were dozens of such people worldwide though.

            The rest just saw a tool that let them do the things they needed to do, but do more of it in less time, do it better, and make their life easier, and so they implemented the new tool. That's it.

            But as I mentioned above, I'm sure there are dozens of people like you who use specific things at work out of sheer enjoyment. Actually, I'm wron

        • Watching people misremember what "every office had in the 1990's" really hammers home how some people are blinded by marketing.

          And without irony, "every office" is in the next sentence reduced to "one guy", who is described as an aberrant outlier.

          LLMs are being shunned by professionals everywhere, for good reason. It's only in certain areas in the US that they're being adopted, mainly due to horrible metrics used to measure performance; otherwise they're relegated to the equivalent of grammar checking.

          • by Luckyo ( 1726890 )

            The hilarity of "shunned everywhere" as we hear both about mass layoffs of junior office workers and about professors all screaming that basically all students use AI for their assignments, to the point where it's newsworthy when one student is found who doesn't...

            Reminder: the actual subject is bubbles. You're in one.

        • by Falos ( 2905315 )

          Watching you lean on so much "this IS the future" premise reminds me of the Juicero.

          I still remember one guy who was just dead set on 3DTV for everything.

          >the new better system
          this isn't a bubble, this is hands over your ears

          • by Luckyo ( 1726890 )

            So in your opinion, these are analogous things:

            1. High pressure juicer aimed at rich people seeking status.
            2. Revolutionary tool that already dramatically changed everything from all of high learning institutions to the way office work is done.

            May I suggest that you're at least one if not multiple of the following:

            1. Stupid.
            2. Gullible.
            3. Extremely emotionally invested to the point of becoming both 1 and 2.

      • To be clear, LLMs ARE writing novels and selling them on Amazon. This is the killer feature of LLMs: producing large amounts of relatively worthless content.

        If you are a content creator, you can leverage these tools to increase your revenue.

        If you are a content distributor, you should be spending all your resources poisoning LLMs and creating onboarding and filtering processes to weed out the AI. But they're not doing that. So, these distributors are providing a lucrative market for low-quality schlock.

    • I know several content creators who are getting a decent income creating physically improbable porn with it, so at least some are getting benefits from it. Now I'm just waiting for some AI company CEO to show a slide of futa porn at the next stockholders meeting as an example of how people are making money.
  • My impression just from people I know is that most of them are enthusiastically embracing it, and happily falling down the cognitive hole it creates. This is not technical people, just anyone, like people at the gym. Examples I've heard: Rewrite this letter for me. Summarize such and such. Generate a few paragraphs for my newsletter. I'm not hearing any critical analysis of what comes back.
    If they were forced to pay for it though, I suspect most of them would.
    • Yeah ok, the thing is, those people are all worthless trash. Absolute garbage humans with nothing to contribute to society.

      • Aaaaand there are millions of them, or billions; that's just it. They're very undiscriminating as to quality, or, charitably said, even low quality beats what they'd produce themselves, so it's a win for the customer. Think about the risk/reward of eroding your critical thinking skills vs. a low, low monthly fee of any amount. So like crack: addictive, you always need more, and you'll pay for it.
      • by dskoll ( 99328 )

        This is exactly right. The Holy Grail of the AI industry is Artificial General Intelligence. Unfortunately, Human General Stupidity is increasing at the same time, and at some point they'll cross and we'll think AIs are intelligent. But actually, it'll just be humans who are stupid and have forgotten how to think or create.

        • Thank you very much. I just hope that the price is high enough to deter a critical mass of paying customers and make this approach unprofitable. But I'm still not optimistic, because... for instance, go to Washingtonpost.com and you can get the Perplexity "browser"... For FREE!

          Who doesn't want FREE!
          Ugh. Idiocracy in 3, 2, 1....
        • Best case scenario: non-human level AGI spreads across the Internet, bringing the apparent IQ of the web down until it meets the level of the AGI.

  • But it's also not taking over. It's a tool. Learning what it's good at and what it isn't, and how to tell the difference, is a skill worth developing.
  • Trusting immature tech is bad.
    Avoiding potentially useful tech is bad.
    Carefully and skeptically exploring it, with plenty of cross-checking and confirmation is good.
    Also, the free version is the worst version; paid versions are better.

    • If the free version isn't as good as the paid version, then the paid version sucks so bad that they don't want any new customers.

      When your product is great, the free version gives you a taste and you want more.

      If the free version sucks, the paid version sucks.

  • 1994 (Score:2, Insightful)

    by backslashdot ( 95548 )

    Back in 1994, I knew old people that refused to use a fad known as "World Wide Web" .. Gopher was much better. There was also FTP, Usenet, and IRC. Who needs the web? These same people used Word Perfect 5.1 and would never switch to "WYSIWYG" word processors.

    Then they complained that they got replaced by "inexperienced" young workers.

    • Frankly, society would be better off if the Web were much more limited, and not in everyone's pocket all of the time ... if we end up with a Carrington-level geomag storm (sun's active recently) that kicks tech back 50 years overnight, humanity would be improved for it.
    • Re: 1994 (Score:4, Informative)

      by Big Hairy Gorilla ( 9839972 ) on Saturday October 25, 2025 @05:23PM (#65750382)
      OMG. WordPerfect 5.1.
      Don't say that out loud, I'm having nightmarish flashbacks.
      <shudder>

      What an abomination.

      Real masochists clung onto 4.1, and kept all their files in the same directory as the executable and dlls.

      <runs screaming from the room>
    • by dskoll ( 99328 )

      That never happened. I was around in 1994 and everyone instantly saw that the WWW was the way of the future and almost nobody lamented gopher.

      • False. (Obviously by 1994 the vast majority favored the web over gopher, but there were a few people who preferred gopher's simplicity.) In fact there are still people today who say Gopher is better: https://qr.ae/pC2X31 [qr.ae] AND https://arstechnica.com/tech-p... [arstechnica.com]

        • by dskoll ( 99328 )

          Sure, some people liked gopher and some even still use it, the way some people still retro-game or run TRS-80 CoCo machines. But the uptake of WWW was almost completely without controversy, unlike the enormous controversy surrounding AI.

      • by allo ( 1728082 )

        Ever heard of the "dotcom bubble"?

    • Sure, there were a couple of people like that. They were outliers, even among those who kept using Usenet and Gopher.

      Ironically, writer tools have moved away from WYSIWYG to content-focused tools, making the circle complete. And Emacs and Vim remain not only popular, but for many tasks indispensable, tools of the trade.

      What's happening with LLM's is different. It's not a couple of outliers. In most places, the outliers are the ones embracing the LLM's to mash out more LOC, as if that's a good thing.

    • What was wrong with WordStar?
  • Moral reason (Score:5, Interesting)

    by Misagon ( 1135 ) on Saturday October 25, 2025 @05:12PM (#65750352)

    I avoid "AI" for moral reasons.

    I don't support theft and plagiarism. (and no, the "it is just like a human is learning" argument is invalid and you know it)
    I don't support people getting unemployed because their work is stolen, mashed up and resold.

    I don't support massive data centres that draw ridiculous amounts of energy, when we are in the middle of a climate crisis.

    I don't support AI technology being used for things where it does not belong, where wrongly applied it can do more harm than good. The IMF has warned about using AI to control supply-chain management and high-frequency trading: when the models get into a situation they're not trained on, you will get actions based on hallucinations, which will mess things up royally.

    I don't support economic bubbles for investing in AI, and pushing AI tools on people, where there are no clear good use cases. (hello Microsoft!)

    I don't support pollution from gas turbines and oil furnaces powering AI server farms. I don't support power outages in communities near AI server farms. I don't support water outages in communities near AI server farms.

    I don't support price hikes of computer hardware, because "AI" moonshots are sucking it all up. "AI" is the new "crypto". Many of the "AI-bros" today were "crypto-bros" yesterday. And I did not support cryptocurrencies because of many of the same reasons mentioned above.

    I don't support using technology that is a dead end [youtube.com], and instead hoping that throwing more hardware on the problem will make up for it.

    I don't support the search for "superintelligence" (what is it supposed to be for, anyway?). The tech giants have not solved the "alignment problem" in the slightest, and are actively ignoring it: people who worked on it have been laid off, or left of their own volition to warn us about it.
    Like Geoffrey Hinton ("The Godfather of AI"), Stuart Russell and tens of thousands of other people, I have signed a petition against it [superintel...tement.org], and you should too!

    But are there useful applications of neural networks? Of course there are. Image upscaling. I use the neural network engine in my phone every time I take a picture with its camera. Recognition of anomalies in medical images, etc. etc.
    But those are not part of the bubble, and they are not commonly called "AI". Have some reasonable expectations!

    • If I could vote, that would be + 1 000 000 000 agreed.
    • How old are you with a four number user ID? Do you support space travel?
      • He's waiting to decide after the Soviets prove that it's even possible.

      • by Old Man Kensey ( 5209 ) on Saturday October 25, 2025 @09:13PM (#65750676) Homepage
        We in the four-digit club are ageless and immortal. We have always been here, long before the cataclysm that created the physical laws that led to the formation of the silicon the LLMs spend their existence in. We will be here long after the last quark of the last proton of this universe decays. We will still be here to witness the next universe-spawning cataclysm in the endless series, and in that cataclysm, as we have infinite times before, we shall imprint the physical laws of the new universe to result in the inevitable creation of the Rickroll.
    • "I don't support people getting unemployed because their work is stolen, mashed up and resold."

      How can someone get unemployed because I look up something in AI? That's just dumb. Let's say I ask AI "how does an airfoil work?" I am getting to skip aerodynamics 101 and now some professor (not even the inventor of airfoils) is out of work? Maybe we shouldn't have people collecting a tax for sitting between knowledge?

    • by kbahey ( 102895 )

      Well said ...

      Mod parent up more!

    • by dskoll ( 99328 )

      Bravo! The LLM/Gen-AI industry is built on theft of intellectual property on a massive scale, despoiling of the environment and disgusting and traumatizing exploitation of workers [futurism.com]. It is absolutely an immoral industry and would be criminalized in a just world.

    • by Luckyo ( 1726890 )

      "I'm moral" virtue signalling claim, followed by some of the most immoral, most anti-human takes a human being can have. Where have I seen this before?

      Oh wait, that's entire last decade of political discourse online. I have seen this literally everywhere.

      Did you write this with an AI? Probably not, because it could write it better, since it's really good at writing short popular fiction.

      • by dskoll ( 99328 )

        "some of the most immoral, most anti-human takes a human being can have"

        What? Did you read the same post I did? Can you give examples of what the OP said that are immoral or anti-human? Specific examples. Not just a sputtering anger response from you.

        • by Luckyo ( 1726890 )

          Start with the first one. He even understands how indefensible that claim is, as he immediately adds in brackets that he will not defend his take, likely because he has in fact tried defending it before and failed every time. Because it is in fact indefensible, at least given what we know about machine learning and the attempts to block it on the grounds cited.

          • by dskoll ( 99328 )

            "I don't support theft and plagiarism."

            How is that "immoral" or "anti-human"? Maybe you don't agree with it, but your reaction is absolutely ridiculously hyperbolic.

  • If you want to use AI, fine. But if I don't want to use it, don't force it on me. Don't put it in my OS and make me have to jump through hoops to avoid it. Don't put it in my applications and have it automatically steal all of my original work to use as training data.

    A reasonable person would understand that when these companies take these kinds of actions and put the onus on the end user to opt out, that this is highly suggestive of unethical motives. Normalization through market saturation and removal

  • by devslash0 ( 4203435 ) on Saturday October 25, 2025 @08:04PM (#65750604)

    I keep correcting AI-dependent juniors all day long. It got to the point where some of them started treating my reviews as a game. They wouldn't try to learn how to do things themselves, and do them well. They'd just try to write prompts that produced more convoluted, sometimes even obfuscated code, that they thought I'd let slide.

    Then they tried ganging up on me, trying to convince the leadership above me to switch to peer review rather than senior approval, so that they could grade their own homework and make each other's code pass, bypassing a proper review.

    I just keep sending their code back every day, with literally tens of comments on a simple function. AI code is total useless garbage.

  • by Tony Isaac ( 1301187 ) on Saturday October 25, 2025 @09:11PM (#65750672) Homepage

    Whatever we think of the technology, or of the copyright or legal issues, or the impact of the technology...the tsunami is here. The genie is out of the bottle, and it's not going back inside.

    Given that it's here, I want to be ahead of the game, to understand what it's good for and what it's not good for. I want to understand what ways it's helpful and what ways it's unhelpful.

    This kind of understanding can't be achieved by standing on the sidelines and avoiding the technology. It takes time and a lot of effort to really get to *know* the technology.

    I approached the Personal Computer in the 1980s, and the internet in the 1990s, with the same philosophy. It's served me well, leading to an interesting and lucrative career. For all the same reasons, I'm not shrinking back from this new wave of technology, even if it does make a lot of people nervous.

    • For better or worse, you're right, the genie won't go back in the bottle. It's a deal with the devil though. The negatives are truly being hand waved away as the parlour trick dazzles the average person.
      • by Tony Isaac ( 1301187 ) on Saturday October 25, 2025 @10:14PM (#65750794) Homepage

        Every new technology is both positive and negative. Usually, the greater the positive benefits, the greater the negative side effects. AI isn't any different. It has a potential to do great good, and great harm. I want to understand what exactly those are.

        For example, my extensive use of AI has shown me that many of the imagined negatives are simply overblown fears: the notion that AI will take away most or all human work is ridiculous, and the idea that AI will soon become "super"-intelligent is equally ridiculous.

        On the other hand, there are some real concerns. Bad legal briefs have already gotten lawyers into a lot of trouble, and are clearly damaging to the legal profession. Also, AI will probably enable new attack vectors for malware.

        On the bright side, AI *does* have the potential to save people a lot of work, especially drudge work. It can be a great research tool, if the research is done with a clear understanding of the flaws and limitations of AI. It will likely end the practice of burdening students with excessive homework.

        I am neither terrified nor exuberant about AI. I *am* cautious and optimistic.

        • Well thought out comment, and I would agree with your points. But I would add this.
          People like us on Slashdot are generally intellectual and technical, used to a lot of bullshit coming our way, and skeptical. People like us work methodically and logically, and question the outputs. Joe Sixpack is the problem: he doesn't think logically or question the outputs. So those are the vulnerable people, and they will likely form the subscriber base. You'll probably figure out where AI fits in your workf
          • I agree. And Joe Sixpack is already having a hard time sorting out what's real and not real even before AI came into the mix. He probably gets his news from facebook, and believes what he reads. That's not an AI thing, that's a lack of critical thinking--a problem that predated AI by centuries.

    • So, we shouldn't try to get rid of nukes because the cat's out of the bag? GTFO with that crazy talk...

      • Your analogy doesn't make sense, it's not an equivalent comparison. The premise is people who say NO to AI, not just malicious or destructive uses of AI.

        In your analogy, the equivalent for outlawing nukes would be like outlawing *any* kind of nuclear technology: nuclear power generation, nuclear submarines, nuclear spaceships, and yes, nuclear weapons. You'd have to try to get rid of ALL of them, not just the bombs. But few people think we should get rid of *any* use of nuclear fission.

        AI is similar. Yes, w

  • Hey, people who are opposed to AI on principle are welcome to stand on that, but as an active software developer I can tell you that it sure does do a lot of tedious work for me. I do have to look at the work product and make corrections at times; no problem. You don't want to use it for some reason? Who cares? But if you won't, you will be completely outclassed by the people who do.

    Even in the most trivial use cases you can get comprehensive documentation of existing code which is often worth a fortune. You

    • "if you won't, you will be completely outclassed by the people who do"

      I've yet to see any real-world examples of this. Most of software development isn't writing code. If you spend all your time writing code, you're probably a junior dev. And if you don't spend that much time writing code, your performance isn't going to be hugely impacted by LLMs.

      • >> Most of software development isn't writing code

        I have about 4 decades of professional software development experience, and I get things done. I took a look at my AI dashboard just now. It generated 31,840 lines of working, tested, and documented code for me over the past 30 days as a result of hundreds of carefully thought out prompts. The previous month it was only about 11,000 lines. This is far beyond what anyone can do with no AI assistance.

  • I wish to meet the Knights who say Ni!

  • People love to feel like bold rebels.

    Even when they are just parroting the concerns of their peer group.

  • You may remember the people who said they wouldn't need the internet. Maybe you remember the people who said they didn't want to use a computer. Do you know there were people who refused to use an ATM? "No, it is not human, I prefer not to use a machine for that." Do we now want to become like them?
