'Meet The People Who Dare to Say No to AI' (msn.com)
Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia who says "she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce..."
"As the tech industry and corporate America go all in on artificial intelligence, some people are holding back." Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators...
Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.
The Post includes several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than Google — and disables AI on every app he uses. He was one of several tech workers who spoke anonymously partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite."
But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors. "That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." And he also has to correct errors made by junior engineers who've been encouraged to use AI coding tools.
"Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."
Re:Oh my (Score:5, Insightful)
She never said that and you're manufacturing fake news.
Re:Oh my (Score:5, Funny)
He must be AI ... ;)
Re: (Score:2)
In that generation, that is the current default. She doesn't have to say it. She just has to not deny it outright.
You have to get your information somewhere after all. You're not a world unto yourself.
Now to be fair, it's also possible that she gets her views from Alex Jones. After all, she is a massive outlier from her peer group. But it's just as unlikely as her not getting her views from mainstream youth-oriented curated social media networks.
AI orthodoxy (Score:3)
Considering that many CEOs and corporate leaders lean on the term "AI" as a foundational pillar in their financial report calls with Wall Street, staying quiet about AI concerns is a job-preserving move for lower-level employees.
The question which needs to be discussed by leaders and WEF attendees is "What will we use as a bandwagon after the AI hype is done?"
Re: (Score:2)
I'm disappointed that you only referenced WEF and not the rest of the cabal. C'mon, the pass with Alex Jones name drop was perfect. Do better.
Re: (Score:2)
When the management chain in a large company has an AI silver bullet mantra, how do the other employees react?
Re: (Score:2)
Silver bullets are just as good at point blank shots in the back of the head from a Nagant M1895 as other bullets.
So I imagine reaction is still going to be "fall into the mass grave".
Re: (Score:2)
The article makes it sound as if 16 year old cunts were the philosophical leaders of civilization, while their cellphone is glued to their left hand.
I set them right.
PE Vultures are at it again (Score:4)
I don't know a single person who uses AI to do anything useful. One guy uses it for dumb work like reformatting CSVs, but that work can be done by anyone
Re:PE Vultures are at it again (Score:5, Interesting)
I'm a programmer who started out hesitant about AI, and at first I thought all that it could do was auto-complete better.
Then I tried Claude Code, and it really is like having your own personal junior dev assisting your every need. Like a junior, it makes mistakes, but using the *massive* amount of good code that it creates, and fixing what's left, is so much faster than writing it all from scratch yourself.
If you're a developer in 2025 and you're not using AI, you either have very specific concerns (ie. you can't let *anyone* else, not even an AI, see your code) ... or you're a few steps removed from being a Luddite.
Re:PE Vultures are at it again (Score:5, Informative)
How will it stack up against an actual junior when they start at least trying to break even on the cost? Those megawatts aren't free.
Re: (Score:2)
And of course the massive cost of training the AI that they'll be wanting to recoup.
Re: (Score:2)
Or you are working on a code base that isn't amenable to LLM assistance.
I don't know what sort of development you are doing that an LLM can create "massive" amounts of code that is anywhere near correct, but I wish you luck with it. I prefer to write my own race conditions and lock ordering bugs. I don't need something else to write them for me.
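For anyone who hasn't been bitten by one: a lock-ordering bug happens when two threads acquire the same pair of locks in opposite orders, so each can grab its first lock and then wait forever on the other's. A minimal Python sketch of the *correct* discipline (thread and lock names are made up), with the broken variant described in the comments:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Both threads take lock_a before lock_b. If one thread instead took
    # lock_b first, each could acquire its first lock and then block
    # forever waiting for the other's -- the classic lock-ordering deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The conventional fix is exactly this: pick one global acquisition order and never deviate from it.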
Re: (Score:2)
Have you tried Claude code? And what obscure domain/language are you using that you think AI won't help with?
Re: (Score:3)
Same thing with low-resource languages. The problem is: the more obscure the domain/language, the less actual training corpus, and the more nonsense output from the robot. E.g. ask it to write a reasonably complex win32 and posix command-line tool for the same problem, with the only differences being the semantic differences between the systems. Overall it will hallucinate more nonsense for win32.
The robot is very good at what it knows - especially at the center of distribution (not too much extrapolation). And it kno
Re: (Score:2)
Just like a junior coder.
Re:PE Vultures are at it again (Score:5, Insightful)
No, because juniors learn.
Also, many juniors will admit when they don't actually know something (this depends on the junior and the manager). AI never learns and will spew bullshit with a veneer of utter confidence.
Re: (Score:2)
Senior dev who needs some mundane work done for an ongoing project doesn't give a single fuck about whoever gets hired for the project being the best learner or the worst learner in the world.
He cares about junior dev performing the task he's hired for. That's the beginning and the end of caring. The rest is "why won't you think of long term future of strangers, and put it on a pedestal above performing your actual work".
People like that don't get to be senior devs for a very simple reason. They cannot perf
Re: (Score:2)
To add to my previous statement, as I suspect one omission I left out will be immediately screeched at as a norm considering your posting history:
People like this can be subject matter experts. What they cannot be is actual senior developers whose duty is to lead a developer team. When you hire a junior dev, you're not a teacher. You're an employer and a leader.
And what I'm referring to is that mindsets of those two are completely different. This is in fact the very difference that leads to the meme of "tho
Re: (Score:2)
This guy gets it.
Re: PE Vultures are at it again (Score:2)
I've been doing lots of work in Zig and LLMs are pretty notably deficient working with the language. Claude is known for just making up standard library functions and relying on them.
If that's the type of junior I'm given, I'd give him back. He's a time suck not an asset.
Re: PE Vultures are at it again (Score:2)
I started using Claude (via Amazon Q) to maintain Gentoo ebuilds. It seems to have all of the official documentation on this and a lot more. I would say ebuilds are fairly obscure but Claude knows about them.
Re: (Score:1)
There are indeed such complicated problems. But AI is still helpful, because if you get stuck you can get some help with debugging. I can tell you that AI often claims wrong causes, but if you already debugged three wrong root causes for the problem that were not the cause of the problem and still did not figure it out, having an AI analyze two more (possibly wrong) root causes is still better than being completely stuck.
It's no "all or nothing", but you look at what the AI claims and why, how it did come to
Re: PE Vultures are at it again (Score:2)
"if you already debugged three wrong root causes for the problem that were not the cause of the problem and still did not figure it out,"
Are you not using a debugger? It's exceedingly rare for me to identify the wrong root cause of a bug. LLMs are FAR more likely than I to identify a cause that sits somewhere higher on the stack.
Re:PE Vultures are at it again (Score:4, Insightful)
I accept there could be fields where that's useful. Not for the sort of work I do. In my field, doing even basic work requires specialized knowledge, high level problem solving, and strong design skills. When you take on a junior developer, you have no expectation it will make you more productive right away. Initially it will probably make you less productive. You're doing it to mentor them, so they can grow into a senior developer.
So I can have my own permanent junior developer who isn't learning from working with me and will remain a junior developer no matter how long I mentor them? That doesn't sound useful.
Re: (Score:2)
Here's an example: I've been doing some database work lately, and I'll say "Claude, go figure out what migration caused X weirdness in my DB": it does so, providing me with a few sentences of readable English and a file path. I say "Claude, I have these existing migrations that alter tables A & B a certain way, can you create migrations that make similar changes to tables C & D?" It just does so.
You get the idea: I give Claude the basic gist of what I want, and it tells me what I need to know, or wri
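To make the point concrete, the kind of migration being delegated here is mechanical, pattern-following work, something like this toy sqlite sketch (the table and column names are invented stand-ins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-ins for tables C & D; the "migration" applies the same
# mechanical change (adding an audit column) to each of them.
for table in ("table_c", "table_d"):
    cur.execute(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY)")
    cur.execute(f"ALTER TABLE {table} ADD COLUMN updated_at TEXT")

# Confirm the new column landed on each table.
cols = [row[1] for row in cur.execute("PRAGMA table_info(table_d)")]
```

Given one worked example (the migrations for tables A & B), generating the parallel version for other tables is exactly the sort of repetition an LLM handles well.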
Re: (Score:3)
To reference my own signature, which is meant entirely sarcastically: why do you assume I haven't researched it? Why do you assume my opinion on it was uneducated? Perhaps you formed your opinion on insufficient information?
But as for your specific example, that's completely unlike anything I do. Database migrations are the sort of mechanical, simply defined task that AI can handle. My work involves things that are much more open ended, and often involve inventing new methods or even new algorithms to d
Re: (Score:3)
So basically, you use Claude like other people use migration scripts. And you expect someone to get paid $70k+ a year doing migration scripts. If someone does, they've landed one helluva deal, lemme tellya.
Seriously, if this speeds up how fast you complete tasks, you've got no business holding the position you do in the first place. YOU are the junior programmer in this scenario.
Re: PE Vultures are at it again (Score:1)
Not liking AI doesn't automatically make you a Luddite.
Personally I'm not looking forward to the online experience when all these AI tools start training themselves based on their own output creating an echo chamber of incorrect peer reviewed results. Arguably, we're probably already there right now.
AI tools are garbage.
Re: (Score:2)
Or, you actually enjoy programming and get fulfillment from it and don't want or need a "junior assistant" interfering with your workflow.
Re: (Score:2)
Or you know what you're doing. Amount of code produced is one of the worst metrics possible, and anyone using that is a bad programmer.
Re: (Score:3)
If you're a developer in 2025 and you're not using AI, you either have very specific concerns (ie. you can't let *anyone* else, not even an AI, see your code) ... or you're a few steps removed from being a Luddite.
Or, option 3: you're working on something actually interesting. My job isn't to churn out, as you said, "massive" amounts of code; it's to produce some code (ideally good code) that solves particular and often not very common problems. I've tried AI quite a bit, and will continue to use it for some
Re: (Score:3)
If you're a developer in 2025 and you're not using AI, you either have very specific concerns ... or you're a few steps removed from being a Luddite.
Or you have concerns about the ethical, legal and environmental issues surrounding the use of AI.
Re: (Score:2)
This.
I have not run into a single person who holds the AI stuff in a positive light. There are some small edge cases where AI is "better than nothing" like transcription (ASR), translation, and TTS, but these are things tourists in foreign countries are more likely to need to get from point A to B without being scammed.
The stuff we keep being told is that AI will write us novels, have AIs talk to other AIs to book our vacations, buy or sell us property, drive our cars, and fire weapons. I want none of th
Re: (Score:2)
Watching people rapidly retreat deeper and deeper into their shrinking luddite bubbles reminds me of the "no one I know likes email, paper mail is obviously superior" argument every office had in the 1990s.
I still remember one guy who was just dead set on paper for everything. As his bubble shrunk so much that he got left out of almost all communications as most things moved to email and only very important and special things were discussed via paper mail and notes, he eventually got laid off because he jus
Re: (Score:1)
The type of people who enjoy using AI and find a lot of value out of it are, in my experience, the same people who were quick to hit up stack overflow and friends, and copy/paste whatever examples they could find instead of solving anything themselves. The same people who struggle to solve problems with any complexity and default to being fed an answer -- regardless of the quality or accuracy of said answer. Yes, those types will absolutely love the current swath of AI slop generators, while somehow not rea
Re: (Score:2)
And you opened with a lie. No one said anything about "enjoying" AI. In fact, I can't think of anyone who "enjoyed" email either. I'm sure there were dozens of such people worldwide though.
The rest just saw a tool to do thing they needed to do, but do more of it in less time, do it better, and make their life easier, and so they implemented the new tool. That's it.
But as I mentioned above, I'm sure there are dozens of people like you who use specific things at work out of sheer enjoyment. Actually, I'm wron
Re: (Score:2)
Watching people misremember what "every office had in the 1990's" really hammers home how some people are blinded by marketing.
And without irony, "every office" is in the next sentence reduced to "one guy", who is described as an aberrant outlier.
LLM's are being shunned by professionals everywhere, for good reason. It's only in certain areas in the US that they're being adopted, mainly due to horrible metrics used to measure performance; otherwise they're relegated to the equivalent of grammar checking.
Re: (Score:2)
The hilarity of "shunned everywhere" as we hear both about mass layoffs of junior office workers, the fact that professors are all screaming that basically all students use AI for the assignments to the point where it's newsworthy when one student is found that doesn't...
Reminder: the actual subject is bubbles. You're in one.
Re: (Score:2)
Watching you lean on so much "this IS the future" premise reminds me of the Juicero.
I still remember one guy who was just dead set on 3DTV for everything.
>the new better system
this isn't a bubble, this is hands over your ears
Re: (Score:2)
So in your opinion, these are analogous things:
1. High pressure juicer aimed at rich people seeking status.
2. Revolutionary tool that already dramatically changed everything from all of high learning institutions to the way office work is done.
May I suggest that you're at least one if not multiple of the following:
1. Stupid.
2. Gullible.
3. Extremely emotionally invested to the point of becoming both 1 and 2.
Re: PE Vultures are at it again (Score:2)
To be clear, LLMs ARE writing novels and selling them on Amazon. This is the killer feature of LLMs: producing large amounts of relatively worthless content.
If you are a content creator, you can leverage these tools to increase your revenue.
If you are a content distributor, you should be spending all your resources poisoning LLMs and creating onboarding and filtering processes to weed out the AI. But they're not doing that. So, these distributors are providing a lucrative market for low-quality schlock.
Re: PE Vultures are at it again (Score:2)
<shuffles towards door>
outliers? (Score:2)
If they were forced to pay for it though, I suspect most of them would.
Re: outliers? (Score:2)
Yeah ok, the thing is, those people are all worthless trash. Absolute garbage humans with nothing to contribute to society.
Re: (Score:2)
This is exactly right. The Holy Grail of the AI industry is Artificial General Intelligence. Unfortunately, Human General Stupidity is increasing at the same time, and at some point they'll cross and we'll think AIs are intelligent. But actually, it'll just be humans who are stupid and have forgotten how to think or create.
Re: outliers? (Score:2)
Who doesn't want FREE!
Ugh. Idiocracy in 3, 2, 1....
Re: outliers? (Score:2)
Best case scenario: non-human level AGI spreads across the Internet, bringing the apparent IQ of the web down until it meets the level of the AGI.
Re: (Score:2)
Fuck AI. Dare me.
OK... I dare you to fuck an AI. I double dog dare you.
Re: (Score:2)
Grandpa, this is 2018 shit, before current-gen LLMs.
Get with the times.
https://www.bbc.com/worklife/a... [bbc.com]
AI isn't going away (Score:2)
Both extremes are bad (Score:2)
Trusting immature tech is bad.
Avoiding potentially useful tech is bad.
Carefully and skeptically exploring it, with plenty of cross-checking and confirmation is good.
Also, the free version is the worst version. Paid versions are better.
Re: Both extremes are bad (Score:2)
If the free version isn't as good as the paid version, then the paid version sucks so bad that they don't want any new customers.
When your product is great, the free version gives you a taste and you want more.
If the free version sucks, the paid version sucks.
1994 (Score:2, Insightful)
Back in 1994, I knew old people who refused to use a fad known as the "World Wide Web." Gopher was much better. There was also FTP, Usenet, and IRC. Who needs the web? These same people used WordPerfect 5.1 and would never switch to "WYSIWYG" word processors.
Then they complained that they got replaced by "inexperienced" young workers.
Re: 1994 (Score:4, Informative)
Don't say that out loud, I'm having nightmarish flashbacks.
<shudder>
What an abomination.
Real masochists clung onto 4.1, and kept all their files in the same directory as the executable and dlls.
<runs screaming from the room>
Re: (Score:2)
That never happened. I was around in 1994 and everyone instantly saw that the WWW was the way of the future and almost nobody lamented gopher.
Re: (Score:2)
False (obviously by 1994 the vast majority favored the web over gopher .. but there were a few people who preferred gopher's simplicity). In fact there are people today who still say Gopher is better: https://qr.ae/pC2X31 [qr.ae] AND https://arstechnica.com/tech-p... [arstechnica.com]
Re: (Score:2)
Sure, some people liked gopher and some even still use it, the way some people still retro-game or run TRS-80 CoCo machines. But the uptake of WWW was almost completely without controversy, unlike the enormous controversy surrounding AI.
Re: (Score:2)
Ever heard of the "dotcom bubble"?
Re: (Score:2)
Sure, there were a couple of people like that. They were outliers, even among those who kept using Usenet and Gopher.
Ironically, writer tools have moved away from WYSIWYG to content-focused tools, making the circle complete. And Emacs and Vim remain not only popular but, for many tasks, indispensable tools of the trade.
What's happening with LLM's is different. It's not a couple of outliers. In most places, the outliers are the ones embracing the LLM's to mash out more LOC, as if that's a good thing.
Moral reason (Score:5, Interesting)
I avoid "AI" for moral reasons.
I don't support theft and plagiarism. (and no, the "it is just like a human is learning" argument is invalid and you know it)
I don't support people getting unemployed because their work is stolen, mashed up and resold.
I don't support massive data centres that draw ridiculous amounts of energy, when we are in the middle of a climate crisis.
I don't support AI technology being used for things where it does not belong: where wrongly applied it can do more harm than good. The IMF has warned about using AI to control supply chain management and high-frequency trading -- where when they get in a situation that they're not trained on, you will get actions based on hallucinations, which will mess things up royally.
I don't support economic bubbles for investing in AI, and pushing AI tools on people, where there are no clear good use cases. (hello Microsoft!)
I don't support pollution from gas turbines and oil furnaces powering AI server farms. I don't support power outages in communities near AI server farms. I don't support water outages in communities near AI server farms.
I don't support price hikes of computer hardware, because "AI" moonshots are sucking it all up. "AI" is the new "crypto". Many of the "AI-bros" today were "crypto-bros" yesterday. And I did not support cryptocurrencies because of many of the same reasons mentioned above.
I don't support using technology that is a dead end [youtube.com], and instead hoping that throwing more hardware at the problem will make up for it.
I don't support the search for "superintelligence" (what is it supposed to be for, anyway?). The tech giants have not solved the "alignment problem" in the slightest, and are actively ignoring it: people who worked on it have been laid off, or left of their own volition to warn us about it.
Like Geoffrey Hinton ("The Godfather of AI"), Stuart Russell and tens of thousands of other people, I have signed a petition against it [superintel...tement.org], and you should too!
But are there useful applications of neural networks? Of course there are. Image upscaling. I use the neural network engine in my phone every time I take a picture with its camera. Recognition of anomalies in medical images, etc. etc.
But those are not part of the bubble, and they are not commonly called "AI". Have some reasonable expectations!
Re: (Score:2)
He's waiting to decide after the Soviets prove that it's even possible.
Re: (Score:2)
"I don't support people getting unemployed because their work is stolen, mashed up and resold."
How can someone become unemployed because I look something up with AI? That's just dumb. Let's say I ask AI "how does an airfoil work?" I get to skip aerodynamics 101, and now some professor (not even the inventor of airfoils) is out of work? Maybe we shouldn't have people collecting a tax for sitting between us and knowledge?
Re: Moral reason (Score:2)
Your analogy is dumb. Information and data can't be copyrighted.
Re: (Score:2)
Well said ...
Mod parent up more!
Re: (Score:2)
Bravo! The LLM/Gen-AI industry is built on theft of intellectual property on a massive scale, despoiling of the environment and disgusting and traumatizing exploitation of workers [futurism.com]. It is absolutely an immoral industry and would be criminalized in a just world.
Re: (Score:1)
"I'm moral" virtue signalling claim, followed by some of the most immoral, most anti-human takes a human being can have. Where have I seen this before?
Oh wait, that's entire last decade of political discourse online. I have seen this literally everywhere.
Did you write this with an AI? Probably not, because it could write it better, since it's really good at writing short popular fiction.
Re: (Score:2)
"some of the most immoral, most anti-human takes a human being can have"
What? Did you read the same post I did? Can you give examples of what the OP said that are immoral or anti-human? Specific examples. Not just a sputtering anger response from you.
Re: (Score:2)
Start with the first one. He even understands how indefensible that claim is, as he immediately adds in brackets that he will not defend his take, likely because he has in fact tried defending it before and failed every time. Because it is in fact indefensible, at least from what we know about machine learning and attempts to block it on the grounds cited.
Re: (Score:2)
"I don't support theft and plagiarism."
How is that "immoral" or "anti-human"? Maybe you don't agree with it, but your reaction is absolutely ridiculously hyperbolic.
Give me the CHOICE (Score:2)
If you want to use AI, fine. But if I don't want to use it, don't force it on me. Don't put it in my OS and make me have to jump through hoops to avoid it. Don't put it in my applications and have it automatically steal all of my original work to use as training data.
A reasonable person would understand that when these companies take these kinds of actions and put the onus on the end user to opt out, that this is highly suggestive of unethical motives. Normalization through market saturation and removal
I'm in the same boat. (Score:4, Interesting)
I keep correcting AI-dependent juniors all day long. It got to the point where some of them started treating my reviews as a game. They wouldn't try to learn how to do things themselves, and do them well. They'd just try to write prompts that produced more convoluted, sometimes even obfuscated code, that they thought I'd let slide.
Then they tried ganging up on me and trying to convince the leadership above me to switch to peer review rather than senior approval, so that they could grade their own homework and make each other's code pass, bypassing a proper review.
I just keep sending their code back every day, with literally tens of comments on a simple function. AI code is total useless garbage.
Re: I'm in the same boat. (Score:2)
Jesus, those devs should be fired. They are not salvageable.
I consciously choose to use AI. Here's why. (Score:3)
Whatever we think of the technology, or of the copyright or legal issues, or the impact of the technology...the tsunami is here. The genie is out of the bottle, and it's not going back inside.
Given that it's here, I want to be ahead of the game, to understand what it's good for and what it's not good for. I want to understand what ways it's helpful and what ways it's unhelpful.
This kind of understanding can't be achieved by standing on the sidelines and avoiding the technology. It takes time and a lot of effort to really get to *know* the technology.
I approached the Personal Computer in the 1980s, and the internet in the 1990s, with the same philosophy. It's served me well, leading to an interesting and lucrative career. For all the same reasons, I'm not shrinking back from this new wave of technology, even if it does make a lot of people nervous.
Re: I consciously choose to use AI. Here's why. (Score:4, Interesting)
Every new technology is both positive and negative. Usually, the greater the positive benefits, the greater the negative side effects. AI isn't any different. It has a potential to do great good, and great harm. I want to understand what exactly those are.
For example, my extensive use of AI has shown me that many of the imagined negatives are simply overblown fears. For example, the notion that AI will take away most or all of human work, is ridiculous. Also, the idea that AI will soon become "super"-intelligent is equally ridiculous.
On the other hand, there are some real concerns. Bad legal briefs have already gotten lawyers into a lot of trouble, and are clearly damaging to the legal profession. Also, AI will probably enable new attack vectors for malware.
On the bright side, AI *does* have the potential to save people a lot of work, especially drudge work. It can be a great research tool, if the research is done with a clear understanding of the flaws and limitations of AI. It will likely end the practice of burdening students with excessive homework.
I am neither terrified nor exuberant about AI. I *am* cautious and optimistic.
Re: (Score:2)
People like us on Slashdot are generally intellectual and technical, used to a lot of bullshit coming our way, and skeptical. People like us work methodically and logically, and question the outputs. Joe Sixpack is the problem: he doesn't think logically or question the outputs. So those are the vulnerable people, and they will likely form the subscriber base. You'll probably figure out where AI fits in your workf
Re: (Score:2)
I agree. And Joe Sixpack is already having a hard time sorting out what's real and not real even before AI came into the mix. He probably gets his news from facebook, and believes what he reads. That's not an AI thing, that's a lack of critical thinking--a problem that predated AI by centuries.
Re: I consciously choose to use AI. Here's why. (Score:2)
So, we shouldn't try to get rid of nukes because the cat's out of the bag? GTFO with that crazy talk...
Re: (Score:2)
Your analogy doesn't make sense, it's not an equivalent comparison. The premise is people who say NO to AI, not just malicious or destructive uses of AI.
In your analogy, the equivalent for outlawing nukes would be like outlawing *any* kind of nuclear technology: nuclear power generation, nuclear submarines, nuclear spaceships, and yes, nuclear weapons. You'd have to try to get rid of ALL of them, not just the bombs. But few people think we should get rid of *any* use of nuclear fission.
AI is similar. Yes, w
I use it every day (Score:2)
Hey, people who are opposed to AI on principle are welcome to stand on that, but as an active software developer I can tell you that it sure does do a lot of tedious work for me. I do have to look at the work product and make corrections at times, no problem. You don't want to use it for some reason? Who cares? But if you won't, you will be completely outclassed by the people who do.
Even in the most trivial use cases you can get comprehensive documentation of existing code which is often worth a fortune. You
Re: I use it every day (Score:2)
"you won't you will be completely outclassed by the people who do"
I've yet to see any real-world examples of this. Most of software development isn't writing code. If you spend all your time writing code, you're probably a junior dev. And if you don't spend that much time writing code, your performance isn't going to be hugely impacted by LLMs.
Re: (Score:2)
>> Most of software development isn't writing code
I have about 4 decades of professional software development experience, and I get things done. I took a look at my AI dashboard just now. It generated 31,840 lines of working, tested, and documented code for me over the past 30 days as a result of hundreds of carefully thought out prompts. The previous month it was only about 11,000 lines. This is far beyond what anyone can do with no AI assistance.
I don't care about people who dare to say No to AI (Score:2)
I wish to meet the Knights who say Ni!
People love to feel like ... (Score:2)
People love to feel like bold rebels.
Even when they are just parroting the concerns of their peer group.
New tech is scary (Score:2)
You may remember the people who said they won't need the internet. Maybe you remember the people who said they don't want to use a computer. Do you know there were people who refused to use an ATM? "No, it is not human, I prefer not to use a machine for that." Do we now want to become like them?
Re: New tech is scary (Score:2)
There are tens of millions of people who refuse to use self checkout at grocery stores.
Re: (Score:2)
I'm an electrical engineer and don't use AI because the things I design and build have to actually work.