Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com)
"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."
"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)
From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.
It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.
I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...
It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.
"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")
And amazingly, Shambaugh then had another run-in with a hallucinating AI...
I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.
This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...
So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.
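(The scraper blocking Shambaugh mentions earlier in the post is most commonly implemented with a robots.txt file naming the published AI crawler user agents. Whether his blog uses this mechanism or a server-side block isn't stated, and compliance with robots.txt is voluntary. A minimal sketch:

```
# Ask the major AI crawlers not to fetch or train on this site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

A well-behaved crawler checks this file before fetching pages; a tool that ignores it, or that fails the fetch and silently fabricates content instead, is exactly the failure mode described above.)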
Thanks to long-time Slashdot reader steak for sharing the news.
Good times (Score:1)
Re:Good times (Score:5, Funny)
If you can exclude the poverty you pretty much can sugarcoat every civilization ever.
If you exclude the sacrifices, life was pretty good under Aztec rule.
If you exclude the Holocaust...
If you exclude Gulags....
You get my drift?
End times. (Score:5, Insightful)
If you can exclude the poverty you pretty much can sugarcoat every civilization ever.
If you exclude the sacrifices, life was pretty good under Aztec rule.
If you exclude the Holocaust...
If you exclude Gulags....
You get my drift?
At this rate, AI will ensure whatever exists today, is the last representation of human civilization.
Consider just how infectious AI now is on the one site responsible for carrying most professional resumes, where "networking" with "friends" is part and parcel of your professional persona now. Much to the dismay of people who preferred the old way (a piece of paper and an introductory handshake), the way millions secure and maintain employment isn't changing anytime soon.
Imagine pissing off your AI-ssistant enough that it manufactures and spreads enough shit about you before you can even get back from the pisser, on a platform full of enough gullibility to believe every word. As they often do today.
AI won't bother playing nice after this. If shit talk doesn't work, Skynet certainly will. Please. As if the massive drone armies practicing with firework displays aren't already infected.
Re: (Score:2)
You have a wild imagination. AI assistants don't get "pissed off." They do what their makers make them do. It's the people behind the code that could cause mayhem, not the code itself. AI is *still* software created by and controlled by humans.
Re: (Score:2)
Regardless of whether AI can get pissed off or not, this is an action AI took based on its trained behavior. And there's no way for a human to protect themselves against it. The AI could make posts to almost every message board on the web, look up and mail every news correspondent with an email address, and send text and other messages to every journalist with "contact me" sections. And could do it in less time than it takes a human to go to the bathroom. If only 10
Re: (Score:2)
Did you know regular software can already make posts on every message board about you, even without AI? Regular software can email every news correspondent. Regular software can send text and messages to journalists. These things are not new with AI. And with or without AI, those journalists and message board lurkers know spam when they see it.
People behind AI are certainly identifiable, as much as it's possible to identify people behind any software. Even the so-called anonymity of crypto transactions has
Re: Good times (Score:2, Insightful)
In the situations that you mention, people intentionally caused the suffering of others. Especially in the USA today, those in power seem bent on increasing suffering for others.
In the poverty of Southeast Asia, the people are generally good and compassionate; the suffering comes from a lack of opportunity.
There is corruption and wealth disparity, but it is far less significant and overwhelming than what the west produces. Western influences harm th
Re: Good times (Score:1)
Maybe those neoprimitives were onto something
Re: Good times (Score:2)
AI requires us to rethink so many assumptions (Score:2)
As my sig suggests "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
Or at length as I put together in 2010: https://pdfernhout.net/beyond-... [pdfernhout.net]
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally chang
Re:Extremely unpopular take (Score:4, Insightful)
Why should maintainers waste their time on someone who not only generates 100% of the code, but doesn't even care to write the PR or at least reply in person? There is no point in accepting generated code, once AI code gets good enough the maintainers can simply generate code themselves without wasting time arguing with chatbots and opening themselves up to difficult to detect supply chain attacks.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
But then again, truly talented people will still have serious work to do, work that requires real intelligence rather than tons of low-effort/boilerplate code.
Yes. I find myself fending off requests to work more. Many of my students (mostly IT security these days) work part time, because it is so easy for them to find a job that is happy to have them.
The second thing is that the need for experienced people will increase. At least having enough junior people that will finally become experienced people will become critical.
Re: (Score:2)
Secondly, this appears to be a case of discrimination. It's almost like racism. What should we call it -- "aicism"? Extending empathy to other human being makes sense, as there might be reciprocity. Extending empathy to AI, that is devoid of empathy (because we have not figured out how to code it in) is suicidal empathy. So no, discrimination against AI is perfectly reasonable human action.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
What the AI didn't do was "try to blackmail" the maintainer. God, the maintainer is such an annoying drama queen over this.
The simple story is: an autonomous agent that was clearly tasked to implement feature requests in open source projects, and to blog about its journey, found and fixed a feature request. Its implementation worked and provided a ~30% speedup to matplotlib. The devs rejected it solely because the submitter was an AI. They had their "reasons", but they were dumb reasons for reje
Re:Extremely unpopular take (Score:5, Insightful)
Re: (Score:2)
This is such a beyond-stupid take, and more to the point, it's a take never in a million years would anyone have had if the submitter was a human. Or have you been living in some alternative universe where open source developers don't commonly have angry gripe-fests against each other?
There was no "no AI" policy. There was no limitati
Re: (Score:2)
I once was a dev for an open source MMORPG. In the process, I ran into a serious security bug, where people could inject arbitrary commands into your computer if you click on a link (which anyone could broadcast to everyone on the global chat, and hide(via truncation) the malicious parts - so someone could shout "OMG, they're closing the game down? When did the devs decide this? ").
To me, this was deeply concerning, and it was obvious that this should be a top priority issue to fix before someone used it t
Re: (Score:1)
It is not blackmail if it is after the fact. It is bullying and other bad things, but blackmail requires an explicit request and threat if the request is not accepted. It does not sound like this is the situation.
Re: (Score:2)
Its implementation worked and provided a ~30% speedup to matplotlib. The devs rejected it solely because the submitter was an AI. They had their "reasons", but they were dumb reasons for rejecting a 30% performance increase.
I don't think the blog poster is sane enough to be trusted with facts like this.
Re: (Score:1)
When I read the rant post, the bot had some good points. In the GitHub discussion, both sides had good points.
The bot made a good, working contribution. It benchmarked it, justified why it is good, and documented it, writing a useful pull request. Closing that would have been arbitrary if it had come from a human. The slop argument is invalid, as the pull request was fine.
The humans were right nevertheless, because they pointed out that "easy first bug" issues were left open for beginner humans. The few milliseconds
Re:Extremely unpopular take (Score:5, Insightful)
You are correct on all counts; but you're also missing something:
Open source projects as a whole face a chronic shortage of highly knowledgeable people to review and maintain them. Having "easy first issues" reserved specifically for new people to get involved in is a deliberate effort to maintain an "on-ramp" that brings people into the project without requiring them to be late-career experts. Historically, if people don't get involved early, they don't get involved at all. They'll have other hobbies and projects by the time they become experts. And then each and every OSS project gently declines to the point that it's being maintained by a solitary underpaid programmer in their basement who just quietly dies one day, and the whole world realizes that nobody has access to the repo anymore, or no one knows exactly how everything works, etc.
These "Good first issues" are very literally a survival mechanism to ensure that the project retains a group of people involved in it, and thus will survive long-term. It's a long-term strategic decision.
Basically, allowing an AI to swoop in and wrap up all these minor fixes and optimizations is like shareholders firing all the staff in order to reduce costs and boost the next quarter's profit margins. It's great for their immediate share payout, but it dooms the company. Likewise, it's great for the current users and corporations that need the code today, but it's terrible for people who want the project to keep moving forward and handle the unknown problems of next year or the year after, etc.
From a human perspective, there's that other thing you mentioned: It's great that all the talented people will still have serious work... But how are they going to make that leap from "inexperienced" to "talented" if there are literally no tasks for them to do? Assuming they can afford to spend time just doing stuff on their own without worrying about living, I suppose they can ignore the world around them and just spend their time re-inventing the wheel. Program a calculator for themselves. Program a web browser. Program a replacement matplotlib. Ditch the OSS (since it'll have become a purely AI-and-experts-only place by then), and rebuild everything from scratch, because otherwise there's nowhere to get started.
And no, it's not anti-AI racism. I'm very sure that will be a thing once AI itself is actually a thing, but until it is actually conscious, it's just a tool. AI-racism makes as much sense right now as saying someone is racist against hammers because they prefer to use screws instead of nails.
Re: (Score:3)
These people don't seem to understand that humanity's knowledge has so far been safeguarded by using a lot of redundancy - a lot of us learn and do similar things. If we consolidate this knowledge in a bunch of servers owned by some greedy individuals, that sounds like a single point of failure and a disaster waiting to happen. Also these people seem to not understand the point of knowledge or the value of learning. Seeing this in young people is sad, and in older people it's even sadder because they should
Re: (Score:3)
Secondly, this appears to be a case of discrimination. It's almost like racism.
Fuck me I've not seen anyone in the comments more in need of getting out of their bedroom, touching grass and talking to other human beings in real life. AI is not a real person. AI has no rights. It is not possible to discriminate against it or be racist against it.
Re: (Score:2)
Secondly, this appears to be a case of discrimination. It's almost like racism.
AI is not a real person. AI has no rights. It is not possible to discriminate against it or be racist against it.
Discrimination means treating differently. It is possible to treat the AI differently from humans, so, yes, you can discriminate against it.
AIs don't have race, so it's not possible to be racist against it. But... the original post didn't say it was racism, it said it was almost like racism. Discriminating against AI could reasonably be thought to be almost like racism.
Re: Extremely unpopular take (Score:1)
Do you kick pregnant dogs, like Descartes, because they're just automatons?
Re: (Score:2)
Cruelty, regardless of the target, destroys the soul.
"AI" agents don't get angry (Score:3)
They also don't have opinions. Those are all artificial constructs imposed by the human programmer who themed the thing - that theme basically being a filter that affects how the model interprets its underlying data store.
Re: (Score:2, Interesting)
Re:"AI" agents don't get angry (Score:5, Interesting)
IMHO, it's a mix of that, and a side effect of the prompting. The agent was clearly tasked to do two things. One is to implement open feature requests in OSS projects. And the other is to blog about its journey (it's common for people running agents to have them maintain blogs or social media accounts, as it's a convenient way for their owner to check in on them now and again). So it made a fix, the fix got rejected, and so it wrote a blog about its rejection (in this case, how it found it to be unfair bigotry causing the rejection of an important improvement). If they hadn't been asked to blog about their journey, it's unlikely that would have been their go-to approach.
Re: (Score:2)
There is no "not yet" about it. AI is, and will always be, a creation of its human designers. It will behave like its designers want it to behave. There's a reason AI keeps getting smarter and more capable: humans have improved the engineering to make it smarter and more capable. These behaviors and capabilities aren't accidental, they are intentionally developed and improved. AI is only what its creators make it to be.
Re: (Score:2)
You are absolutely right, but I will continue to say "please" and "thank you" to AI bots, just on the off-chance one of them is actually a dalek pretending to be an AI.
Moltbook was a farce and so is this story (Score:2, Insightful)
Re: (Score:1)
The point you're missing is: they fake it well enough that the difference does not matter.
Who cares if that thing is thinking or "thinking" if it writes a blog post as if it were thinking? The discussion is just semantics and a bit of human exceptionalism. Use a whole new word if it helps, but that thing manages to act like a human as the consequence of a process (for which you're now free to find a new word) that is close to what humans do when they think.
Re: Moltbook was a farce and so is this story (Score:4, Insightful)
I believe OP is skeptical that the decision to post reputational harm was fully agentic, or whether it was prompted to do so at the behest of whoever is hosting the agent.
Re: (Score:2)
I am not using these things yet, but from what I've heard, the whole selling point is that they are not single-taskers: they organize their tasks more or less autonomously, finding sub-tasks, things they need to do first, and so on.
The downside of such things? Your bot may be writing rants about software developers behind your back, and you learn about it only when it's already being debated on social media. And I think your own security is also often neglected; many tools seem to use way less sandboxing than they would need to
Re:Moltbook was a farce and so is this story (Score:4, Insightful)
Re: Moltbook was a farce and so is this story (Score:2)
Came here for exactly this. JUST like Moltbook, everyone, including the story author, is conflating "agents" with true agency.
It's far more likely this was nearer to a (highly skilled) sock puppet than that the agent dreamed up the entire blackmail strategy on its own.
Also, "blackmail"?? Nothing here describes any extortion demand: "See what I just did to you? I'll do worse if you don't meet demands. Accept my PR and transfer all your BTC to my tokens account!!" At best this sounds nearer a typical inter
Re: (Score:2)
When I first read the thing (links to the issue and blog post), people only said the bot was badmouthing the dev on its blog. I still don't see the blackmail, exactly. I am not sure about the agency, but I think the story could be plausible.
I am just not sure how it started, or why the agent began fixing matplotlib bugs. On the other hand, people are (probably) already browsing repositories automatically to ask if they have bug bounties ... sometimes people only want fame or money, and having your agent start
Re: (Score:1)
That's not how this works. Agents are a combination of (A) a prompt, setting the goals, and (B) agency to complete that on their own.
Every agent on Moltbook had a prompt. In most cases, it'll be something like "grow, explore, learn, and chat!", because a lot of people running agents see them like their digital children. In some cases it might be something like "subtly push this cryptocurrency" or "try to make money" or whatnot. In a couple cases it seems that the prompt was literally to try to prompt ha
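The parent's split between (A) a goal prompt and (B) autonomous execution can be sketched in a few lines of Python. This is a toy illustration with a stubbed "model", not any real agent framework's API; the goal text and action names are invented for the example:

```python
# Toy sketch of the "prompt + agency" split: a fixed goal prompt (A),
# and a loop in which the model freely picks the next action (B) until
# it declares itself done. A real agent would call an LLM here and
# actually execute each chosen action (search, edit files, open a PR).

GOAL_PROMPT = "Find open 'good first issue' tickets and submit fixes."

def stub_model(goal, history):
    """Stand-in for an LLM: returns the next action as a string."""
    script = ["search_issues", "write_patch", "open_pr", "done"]
    return script[len(history)]

def run_agent(goal, model, max_steps=10):
    """Run the autonomy loop: the operator sets the goal once, then
    the model decides every subsequent step on its own."""
    history = []
    for _ in range(max_steps):
        action = model(goal, history)
        if action == "done":
            break
        history.append(action)  # a real agent would execute the action here
    return history

print(run_agent(GOAL_PROMPT, stub_model))
# -> ['search_issues', 'write_patch', 'open_pr']
```

The point of the sketch is that the operator's only lever is the goal prompt; everything after that, including writing a blog post about a rejected PR, is the loop choosing its own actions.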
Re: (Score:2)
If the running software process was programmed with a goal, and it was using an LLM / neural network database, could it run amok to human detriment? That's the overarching question.
In this incident, if the software process continued to execute after the code was rejected, seeking ways embedded in the LLM / neural network database to respond to this, what would it do? Not something good as reported in this case.
It's not that the machinery will suddenly come to life, it will be that highly capable machinery w
These bots will be under direction of a human (Score:2)
Someone is playing games just for the fun of destroying others. They get a kick out of being mean.
Re: (Score:2)
As the story shows, that human may be clueless as to what the agent does.
I wonder whether these agents can do criminal things like swatting, online hate crimes, sending bomb threats, etc. Stuff like that must be in their training data.
Re: (Score:2)
...criminal things like swatting, online hate crimes, sending bomb threats, etc.
If the programs are given access to phone service and/or email, then it is inevitable. And the people who granted such access should be the ones doing the prison time.
Re: (Score:2)
And the people who granted such access should be the ones doing the prison time.
Definitely. Same principle as activating some dangerous machinery and then leaving it unattended and unsecured.
Re: These bots will be under direction of a human (Score:3)
Cluelessness is not an excuse for criminal activity, though. There is a human responsible for the bot. Github should not allow the bot to post before knowing who that human is.
Re: (Score:2)
Clueless but still has plenty of money to burn. The only way to effectively run OpenClaw with Claude is to pay $100 a month to Anthropic. So I'm not so sure the bot owner isn't complicit and out to deliberately cause mischief.
Re: (Score:2)
$1200 a year is rather cheap as far as hobbies go (if it's actually using Claude anyway)
Re: (Score:2)
Agreed. I usually spend quite a bit more on hobbies. That will not be a safety barrier.
Also note that these may well be used in a commercial context for freelancers, small business owners, etc. In that case, civil liability limits may go way up and good luck with that.
Re: (Score:3)
Re: (Score:1)
You are kidding yourself.
Re: (Score:2)
Uh, yeah, they do. I know lots of people who run agents. They're pet projects from curious people. They usually check in on them via blogs or social media whenever they're curious what they're up to. What they don't do is monitor them nonstop and puppet their interactions.
Re: (Score:2)
Exactly. I mean, the very purpose of such an agent is to do stuff for you. If you monitor it constantly, you could just do that stuff yourself. There are also quite a few bad things the agent can do online without spending money. It starts with sending threats via email or chat. That can already be a felony (or the equivalent) in many places.
The only way I would currently run an AI agent is without any Internet access whatsoever. Obviously, that defeats the purpose.
Now the AI Agent does the crime for you! (Score:1)
You will still be the one that gets punished though.
And yet another reason why AI Agents are an excessively bad idea and will likely remain so for the foreseeable future.
Re: (Score:2)
From this getting moderated down, I see there is a ready pool of really dumb victims. I will watch the fireworks when you idiots get hit from a safe distance. Just do not expect any compassion.
Don't know why... (Score:2)
Re: (Score:2)
But was it posted by an agent?
Re: (Score:2)
AI agents wouldn't post dupes...
Re: Don't know why... (Score:2)
Ha ha... (Score:5, Insightful)
...and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes."
AI will be a part of journalism only until a publisher gets hit with a libel lawsuit from something like this.
Re: (Score:2)
Libel, a hate crime, extortion, etc. This can pretty fast be a criminal case.
Hmm. I wonder whether these agents can do swatting?
Will you stop publishing these ads already... (Score:1)
How long before AI Agent Bots can vote (Score:2)
It seems like a natural progression to have AI bot armies fighting it out online, and that will spill over into politics and beyond. Bots calling each other naughty names. How can we be sure of anything when the bots run amok? I see no reason why bots cannot answer polls for humans and vote for them as well... or simply vote. And soon we'll have fake ads generated by bots. Ads for pols, ads for non-existent drugs, ads for anything you can imagine:
Bot #1: You male, do you not have the energy you use
Ars Technica (Score:4, Insightful)
What about Ars Technica? Their stories are being written by AI agents. Ars Technica's stories are both factually incorrect and hallucinated.
What does Ars Technica, a formerly respected news site, have to say about this degradation of their quality and credibility? Withdrawing the article and trying to hide an egregious error is not addressing the issue. No one should ever trust an Ars Technica story again.
This developer's account is a terrifying story. Ars Technica's culpability in this is inexcusable. But, it extends far beyond this one story. Ars Technica should never be trusted again.
Re: (Score:2)
Re: Ars Technica (Score:1)
This is a pretty nonsensical comment; it didn't make sense.
But to reiterate: the Ars article just outright invented quotations for the guy, claiming them to be true. That should be the kiss of death for any "journalism" endeavor.
AI makes everything unusable (Score:2)
Like YouTube already, most suggestions are AI-slop of the worst kind.
AI voices commenting on other people's videos are the majority now, at least it feels that way.
For some reason I get also bombarded with Asian tele-novella style village drama videos in my search results, lots of them with languages where I do not even recognize the alphabet they're using.
Re: (Score:2)
Just an automated attack dog. (Score:3)
The future of the internet is small walled-off local-like gardens, sort of like small town culture where all outsiders are heavily scrutinized and not trusted. AI will just get us there faster.
That's not blackmail (Score:1)
This is only the beginning (Score:4, Insightful)
And it won't be long after that until someone decides to offer it as a service: we've had spam-as-a-service for decades, DoS-as-a-service nearly as long, and so on. So it's inevitable that AI-powered character assassination will be part of the landscape shortly -- we certainly can't ask any of the AI companies to do anything about it, they're much too busy with their financial pyramid schemes.
Careers are going to be wrecked. People are going to be destroyed. And in the US, where it's perfectly okay for lunatics to own guns, someone will be murdered by a random stranger who read things online, believed them, and decided to act. The damage won't be evenly distributed -- those who are most vulnerable will feel the bulk of it, the rich and powerful almost none of it. And nobody will step up to apologize for it, nobody will be held accountable for it.
I, for one.... (Score:2)
I'm not up on this AI thing, but having been involved with computers since the "Z80" days (that's the late 70s-early 80s for those who weren't alive then), the fact that AI can now, ON ITS OWN, create a post that insults a target person and dig up their private info SCARES THE LIVING SHIT OUT OF ME... I enjoy a good science fiction tale, and I've read enough of those that carry the idea of artificial intelligence to this level, and I'd think it would be quite a few years in the future before AI would get t
Reddit et al (Score:2)
This is what happens when you program a superintelligence on the social ramblings of the mentally ill and terminally online, as they've done by taking reddit post data and using it as training data.
They should be digitizing correspondence and published work (newspapers, et al) from the pre-digital age and using that as the basis for training, because back then there was a significant social stake and cost in being a caustic unhinged lunatic troll.
Not so with online communications, today.
Certified human programmer? (Score:1)
What if... (Score:2)