AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co) 153
An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.
Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.
But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms. "Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..."
"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
The internet was destroyed a bit before that (Score:5, Insightful)
And the biggest reason for its massive enshittification is exactly the "shoshul media" sites that TFS is lamenting, where people go for shock content or to watch fakery and commentaries that align with what they like. So it isn't such a big loss, after all.
Perhaps eventually a dark net for humans with verification in-person will appear and leave that other internet to the chatbots.
Someone go make a movie about it.
Re: (Score:3)
Perhaps eventually a dark net for humans with verification in-person will appear and leave that other internet to the chatbots.
Meatsack-based LAN parties? Reminds me of those Usenet downloads back when campus speed meant something.
Huh. Usenet. Wonder if that’s just unfashionable enough for AI to leave the fuck alone. Sadly, I doubt it.
Re: (Score:2)
I thought Google killed Usenet a long time ago, my alt.binaries.pictures compatriot.
But yeah, in-person networking in places where we don't show even our lips to Hal's eye might become a thing again.
Re:The internet was destroyed a bit before that (Score:5, Interesting)
Meatsack-based LAN parties? Reminds me of those Usenet downloads back when campus speed meant something.
When I got to college, late 90s, one of the very first things I did was get my computer hooked up to campus ethernet. I had dialup at home, and it was glorious. A friend who was also just starting at a nearby university that was linked to mine with a high-speed connection called me--on a landline--and we just started transferring files back and forth. Maybe via ICQ. It was unbelievable seeing 10MB/s...
Honestly, the next time that I felt anything like that "wow" Internet speed moment was with Google Fiber when my ping from my home server to my office desktop came in at 1.5ms.
Re: (Score:2)
I was lucky; I lived in a Road Runner test market and my off-campus apartment with early cable modem had unreal speed compared to the fractional T-1 the school used.
Re: (Score:2)
Oh geez, they've flooded alt.sex.bald.captains with crappy AI / and it's always Q. WTF?!!
Is nothing sacred?
Re: (Score:2)
No, but that's because Usenet is basically just a massive binary file distribution mechanism now. No one uses it for discussions because there are so many other ways that are more convenient for people. As a result it's basically overrun with spam and AI slop posts, so most people just use it for binaries.
Re:The internet was destroyed by classism (Score:5, Insightful)
'Our' Internet was destroyed by the greed of the upper class; they bought it and now they use the Internet and all the top-level sites to steal from us, cheat us, lie to us and manipulate us.
Welcome to economic slavery: yes boss, no boss, right away boss.
Re: (Score:2)
Have you not yet come across the concepts of "sense of self", "backbone", and "free will"?
Since when has the internet overcome all of that...?
Are the masses truly THAT helpless and unable to think....?
Even as bad as some people are, I just cannot think that is the overwhelming majority....
Re: (Score:1, Insightful)
Are the masses truly THAT helpless and unable to think....?
Even as bad as some people are, I just cannot think that is the overwhelming majority....
You do know who is running the country, correct?
Re: (Score:3)
Since when has the internet overcome all of that...?
Are the masses truly THAT helpless and unable to think....?
Even as bad as some people are, I just cannot think that is the overwhelming majority....
You are pushing in the right direction, but the situation is more complex than simply whether people have backbone or free will.
Individual agency matters, but the conditions in which that agency operates are not equal, and class plays a central role. Access to time, education, and media literacy is uneven, and that shapes how people engage with the internet. Someone under financial stress or working long hours does not process information the same way as someone with stability and time, and that is not about intell
Re: (Score:2)
Greed isn't unique to the upper class. They are just better at it than most, and that is why they are the "upper class."
Hierarchies of power have existed since before humans did. This is simply how pack-animals self-organize (humans included). The same goes for "economic slavery" which, not too long ago, was implemented as actual slavery. The difference being: you are free to quit your job and find a different one with a different boss if you want (and there are a lot of things your boss is no longer al
Re: (Score:2)
this isn't just about greed, this is also about hoarding and class exploitation
when those in power have corrupted our rule of law so they can satisfy an insatiable greed, then there is no limit to how low these people will go or how much damage they can do
they've already hoarded all the capital and all the power, so now we'll see how quickly an immoral and irresponsible group of people can wreck what we have built
greed is and will be once again our downfall
Re: (Score:2)
sure, you're free to quit one slave job and maybe find another slave job or maybe not, that's why unemployment is kept high and social programs are so underfunded
when employment is slavery and does not pay enough to accumulate capital then that's not employment, it's exploitation
there's no real freedom when everyone and our governments are heavily indebted to upper class interests
I sincerely believe that banking institutions are more dangerous to our liberties than standing armies. The issuing power should be
By itself social media wasn't the problem (Score:4, Interesting)
Like anything in capitalism, if you do not have any competition it rapidly becomes shit
Re: (Score:2)
The Internet was not destroyed by social media. Social media put it on hold, but the fact people were going to Google pre-AI shows that the Internet still had content on it and was still being updated. Much of it was centered around a handful of websites like Wikipedia, Reddit, StackOverflow, but also various topic specific forums, government or educational institution websites, and so on, but it was there.
StackOverflow has seen a complete fall off in terms of both submissions and answers. Wikipedia is prob
Re: (Score:2)
I was going to say the same. Before AI emerged, most search results were already clickbait trash, often aggregating dozens of boilerplate answers to simple variations of a question to boost SEO. Combined with "social" walled gardens, that's what really sent the Internet into its death spiral.
Remember when a search would yield a web forum discussion or blog article about the specific topic you want? Those kinds of results almost never happen now. All those enthusiast forums have either been shut down or move
Re: (Score:3)
Google and other peer search quality didn't immediately suffer. There was sufficient inertia and hysteresis to hold it
Re: The internet was destroyed a bit before that (Score:2)
yep. I frequent a couple of oldfart websites that have largely kept the 90s layout and I'm not sorry.
AI is not very intelligent and not improving. (Score:5, Insightful)
Parrots sound like they are speaking, but they are merely repeating.
AI has only one single reasoning methodology - prediction based on existing data.
AI is not gaining more methods; it is instead just increasing the data. This gives 'better' results, but it's evolutionary, not revolutionary. Minor improvements at great speed, not major improvements.
AI is not even as intelligent as the Parrot, it is just better educated.
The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low level thought - not dogs, not rats, not mice, but instead the kind of thing that an insect could do.
We think it is smart only because it has learned how to predict words that we recognize as sentences. Ignoring that ability, it is the same stupid it was when we first invented LLMs.
You can get better results from AI simply by telling it not to guess and to only show results it can back up. That is not something a person has to be told. That is something we do automatically. A well trained dog does that (i.e. drug detection dogs know not to false alert if they are well trained).
AI is like a guy I knew from college who got in because of his parents' money: a well-educated moron.
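The "tell it not to guess" tip above is essentially prompt scaffolding, and can be sketched in a few lines. This is a minimal illustration; the preamble wording and the build_prompt() helper are hypothetical, not any particular vendor's API:

```python
# Minimal sketch of the "don't guess" tip. The preamble wording and the
# build_prompt() helper are illustrative, not a real vendor API.
ANTI_GUESS_PREAMBLE = (
    "Answer only with claims you can back up. "
    "If you are unsure, reply 'I don't know' rather than guessing."
)

def build_prompt(question: str) -> str:
    """Prepend the anti-guessing instruction to a user question."""
    return f"{ANTI_GUESS_PREAMBLE}\n\nQuestion: {question}"

print(build_prompt("Who discovered penicillin?"))
```

Whether this actually reduces guessing varies by model, but it is the kind of standing instruction the comment is describing.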
Re:AI is not very intelligent and not improving. (Score:5, Informative)
Yes, a pretty good summary. Of course, LLM guardrails are getting a bit better (but only better adapted, not fundamentally better) and LLM training has reduced the most extreme forms of hallucinations (same), but LLMs remain laughably incapable as soon as something did not have good prevalence in the training data.
Re:AI is not very intelligent and not improving. (Score:4, Insightful)
"...but LLMs remain laughably incapable as soon as something did not have good prevalence in the training data."
Remarkably similar to humans in that respect.
Re: (Score:2)
OTOH, you do need to remember that a human's "training data" includes millions of years of evolutionary history. But, yeah, you're correct.
Re: (Score:2)
At least there is a chance the human will be self-aware enough to tell you they don't know the answer. The AI confidently shits out whatever information is in there.
Re: (Score:2)
Try using a model less than 2 years old.
Re: (Score:2)
Try using a model less than 2 years old.
The models all still have the same fundamental defect of not knowing anything but being stuffed with information. They train slightly differently, they prepend your prompt with more admonishments, but the models still shit out bullshit which I know from recent use. You are now dismissed.
Re: (Score:2)
The thing I find most striking (besides hallucinations, which now have been hidden for the most common cases by filters), is that current models still have the same fundamental and frankly devastating limitation: They do not give you the full context you need to competently use some information. A smart human will give you that context. A manual web-search, done competently, will give you that context. But for LLM-type AI, you have to ask the right questions.
Now, if you need to ask the right questions, you
Re: (Score:2)
Humans are capable of actual logic. LLMs are not. Logic can get a human through something they have never seen or done before, so not as remarkably similar as you think.
Humans can be pioneers, LLMs can not.
Re:AI is not very intelligent and not improving. (Score:5, Interesting)
The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low level thought - not dogs, not rats, not mice, but instead the kind of thing that an insect could do.
When idiot judges who can’t even describe what a shitcoin wallet is are presiding over crypto cases, it tends to say a lot about the moron voters or elected leaders who put them there.
If AI did NOTHING else but image and video manipulation from this point forward, it would become one hell of a dangerous weapon. Stop pretending our legal system is smart enough. It isn’t. It’s corrupt enough. The various stories of today will look like child’s play compared to the scams of tomorrow. Including ones pulled by law enforcement (like when they arrest sober people for drunk driving, because revenue generation.)
AI in the American legal system alone should scare you. Because only one of those entities isn’t getting any better or smarter.
Re: (Score:2)
While technically you are right about AI, the reality is they can already do a lot. I've been playing with some and while they do need hand-holding, particularly if you tell them to check their work and keep revising it, they eventually get to something that works. It is imperfect, but a lot of software is, and it gets the job done.
I'm not too worried because as I said, they need a lot of hand holding from someone who understands the problem. I've used them to build stuff that I don't have time to do myself
Re: (Score:2)
AI doesn't have to be intelligent to be useful. (Although it's annoying to call it AI, but that's a topic for a different discussion).
Re: (Score:2)
It really comes down to how you define intelligent. If it can solve moderately complex problems, check its own work to some extent, and revise it when mistakes are found, that seems like a kind of intelligence. It certainly does some useful work.
Re: (Score:2)
LLMs are not reasoning machines. They function as statistical pattern-matching engines rather than reasoning agents. Yes, to a small degree so does the human mind, but these are Markov engines with a higher success rate and the words Artificial Intelligence pasted on by some marketing knob. Intelligence requires self-awareness. These LLMs pattern match; they don't understand what any word means. Ask your trusty LLM about something it wasn't trained on and it can't speculate or reason.
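To make the "Markov engine" framing concrete, here is a toy first-order chain: count which word follows which, then sample. This is a deliberately crude sketch of statistical next-word prediction; real LLMs are vastly more sophisticated, but the "predict the next token from observed patterns" core is the same idea:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-next-word transitions (a first-order Markov chain)."""
    table = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, n, seed=0):
    """Emit up to n more words by sampling an observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break  # no observed continuation: the chain can only echo its data
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = train_bigrams(corpus)
print(generate(chain, "the", 6))
```

Every word it emits was seen in training; it can recombine but never originate, which is the parent's point writ small.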
Re: (Score:2)
As I said, it depends what you mean by intelligence. If self awareness is a requirement for you then sure, but from a practical standpoint a lot of useful work can be done without it.
Re: (Score:2)
Sure, provided you can differentiate a calculator and an RNG.
The problem here isn't just AI, it's that execs and a lot of people applying strategy don't know why the distinction might be meaningful... and therefore decide to put the RNG in a customer facing pipeline.
In some cases that's fine. In other cases the business process develops and we end up with pharma execs telling research pharmacists that their research outputs need to be 'at a grade 5 level' or 'they're too hard for doctors to read quickly', a
Re:AI is not very intelligent and not improving. (Score:4, Interesting)
Parrots sound like they are speaking, but they are merely repeating.
So to start off, this is not an accurate statement about parrots. Parrots can recognize individual objects and individual people, and make requests for specific things. African Grey parrots are the most studied in this regard, but they are not the only such. See https://pmc.ncbi.nlm.nih.gov/articles/PMC11196360/ [nih.gov]. Alex, one of the first African Greys to be systematically studied, even had to be removed from the room when other parrots were being tested because he would sometimes correct them if they got an identification wrong. So, if you are underestimating what parrots can do in the first place, this should already be a pretty large warning sign.
AI has only one single reasoning methodology - prediction based on existing data.
This is accurate. This is also what humans do the vast majority of the time. Prediction based on existing data is incredibly powerful.
AI is not gaining more methods, it is instead just increasing the data. This gives 'better' results, but evolution not revolutionary. Minor improvements at great speed, not major improvements.
I'm not sure what content this has, but insofar as it has content it ignores the vast improvements in benchmarks, which certainly look like gaining more methods. The degree to which models today are better than early models is just massive, to the point where many types of tasks which were on standard benchmarks 3 years ago are not even being used on benchmarks today because models routinely score 99% on them. Now, some of that is due to questions leaking, but some is not. For example, one standard thing to use for a benchmark for a while was the AIME, a standard high school math competition. Using each year's AIME was reasonable because one could be confident it wasn't in the training data. The AIME is an invitational competition in the US for students who perform well on the AMC competition. The easiest AIME problems should be solvable by any student who is confident with algebra 2, and they get progressively harder. There are 15 problems on a test. For example, here is problem 1 from 2023:
The numbers of apples growing on each of six apple trees form an arithmetic sequence where the greatest number of apples growing on any of the six trees is double the least number of apples growing on any of the six trees. The total number of apples growing on all six trees is 990. Find the greatest number of apples growing on any of the six trees.
By the time one gets to problem 15, one has things like the following:
Find the largest prime number p... (I've rewritten the problem slightly for formatting here, but this was problem 15 of the 2023 AIME I. The other example I gave was from the 2023 AIME II; there are two test dates each year. I chose two from different contests because I was trying to avoid having to put any complicated diagrams in this comment.) You can find all the AIME problems and solutions at https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions [artofproblemsolving.com] to get a better idea of what they all look like. Now, ChatGPT in its early versions could typically get a single AIME problem correct at best. Today, the best models are getting 98% routinely, with some scoring 100% every time: https://www.vellum.ai/llm-leaderboard [vellum.ai]. This is not the only example of this. The IMO, the International Math Olympiad, is a proof-based international competition, and is the highest-level high school competition in the world. Models started off not being able to solve a single problem. Now, multiple models are getting gold medals on the IMO. The
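For a sense of the difficulty floor, the quoted problem 1 yields to a few lines of algebra: with least term a and common difference d, the condition a + 5d = 2a forces a = 5d, so the sum 6a + 15d = 45d = 990 gives d = 22, a = 110, and a greatest term of 220. A quick sketch that checks it:

```python
def solve_apples(total=990, trees=6):
    """Brute-force the quoted 2023 problem 1: a six-term arithmetic
    sequence whose greatest term is double its least and which sums to total."""
    for d in range(1, total):
        a = 5 * d                          # a + 5d = 2a forces a = 5d
        terms = [a + i * d for i in range(trees)]
        if terms[-1] == 2 * terms[0] and sum(terms) == total:
            return terms[-1]
    return None

print(solve_apples())  # 220
```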
Re: (Score:2)
None of the tests you are discussing measures intelligence; they measure other things. Math is not intelligence. Most animals can solve the calculus of where they will land when they jump.
Intelligence is what you can figure out WITHOUT any training. Training provides education, not intelligence.
The math stuff is something they are trained to do - just like lions etc. train themselves to calculate the math of jumping by practicing.
Here is a real measure of intelligence - can you recogn
Re: (Score:2)
"I'm curious what method within the brain you think allows you to create output according to something other than existing data."
If all the brain did was output a mashup of all its input then where did art or technology or mathematical theories come from? Sure, you could claim it's little steps on top of steps, but ultimately "out there" in the environment there is nothing that could in any way be reprocessed and eventually spat out as the Sistine Chapel or Einstein's relativity theory. At some point in the proc
Re: (Score:2)
Sure you could claim its little steps on top of steps but ultimately "out there" in the environment there is nothing that could in any way be reprocessed and eventually spat out as the Sistine Chapel or Einstein's relativity theory.
Your comment has taken me on a journey. I started out by asking myself "If there is nothing external that could be reprocessed and eventually spat out as the Sistine Chapel or Einstein's relativity theory, then how did those things come to be?" Much of how our brains work is unknown. Having said that, what you wrote reminded me as much of magical thinking as it did of analysis.
I mulled a bit, and looked up inference engines, which according to Wikipedia apply "logical rules to the knowledge base to deduce ne
Re: (Score:2)
I apologise if my opposition to AI wasn't clear. I may have been subconsciously relying on some prior posts of mine.
How does a computer have true emotions, where it isn't just relying on the dictionary definition of 'happy to see you'?
In an attempt to answer that, I've twisted the question into "Would a human brain kept alive in a lab, absent the rest of the body, experience emotions"?
As far as I can tell, I experience emotions in my body, not in my head. So if the answer is that the brain alone wouldn't experience emotions, then perhaps the AI would have emotions if it was appropriately connected to a suitable 'meat networ
Re: (Score:2)
"As far as I can tell, I experience emotions in my body, not in my head"
What a strange individual you are. Do you think quadriplegics or others with spinal cord injuries don't experience any emotion then?
Re: (Score:1, Troll)
Relativity was grown off of a dozen ideas that came before it.
Let us ignore your ignorance and rewind it way the fuck back, though.
Let's say we're talking about the first idea.
Where did it come from?
From someone's eyes, ears, or other sense interacting with the natural world.
Your brain is not some magical device that creates output that wasn't derived from its inputs.
Re: (Score:2)
If you think that the Abrahamic biblical stories are an original work from the mind of god, then you're a fucking idiot.
Ultimately- I suppose that means you're a fucking idiot.
Re: (Score:2)
Why don't you try and explain where the examples I gave came from? If it's all "out there" already then it won't be a problem, will it?
As for AI problem solving, all it's done is solve some things that required huge statistical analysis. Before AI data centres were built, were there tens of thousands of GPUs working on a single issue?
Re: (Score:2)
"I'm sure if you asked AI they'd look up and find many such examples"
No mate, I'm asking you. You made the claim, you back it up. Ask AI yourself if you're not up to it.
"What have you actually created, and not just copied from someone else?"
Some music and some games. You? Let me guess, you're a creative desert and think everyone else is too.
"You've come up with 1 example of a single human with intelligence"
Plenty more out there but I wouldn't expect you to know that as intelligence doesn't seem to be your s
Re: (Score:2)
IIUC, the public facing AIs have intentionally been set to not do permanent learning from current data. This was a decision made because of experiences like Microsoft's Tay chatbot.
It's definitely a decision that limits the AIs, but the "malice" of various people made it rather necessary. They *do* need to find a better way to handle the problem, but it's not proof that the AI is basically limited. That's a "late addition", like the other "guardrails".
OTOH, a better proof of the current (well, last year)
Re: (Score:3)
It's not paired with this body, made to exist in this environment. It didn't evolve to solve the specific survival and societal problems that our brain did.
It's fundamentally different than ours.
That isn't to say it's sentient, alive, or anything else.
The real question is, why would someone expect an LLM to have common sense?
How much common sense do you think a person that never leaves their house has?
I argued that they're intelligent. Not that they're human.
Re: (Score:2)
You may want to consider being assessed for AI psychosis.
Re: (Score:2)
Because I state several facts, and then a criticism of easily the dumbest fucking thing said in this entire thread?
Is it because I mocked the idea that the brain doesn't generate its output from its inputs?
I don't think you know what AI psychosis is. I have no illusions about what AI is. You seem to have some pretty major delusions about what you are, though.
Re: (Score:2)
Psychosis doesn't necessarily mean breaking into Buckingham Palace with a crossbow. It's a spectrum. Often it takes religious tones. A lot of preachers are probably on the lower end, believing they've been sent by God for this and that (not necessarily hearing his actual voice). Or it can take an animist bent, where you believe inanimate objects are alive, and that they will deliver some kind of salvation to humanity.
Re: (Score:2)
Or it can take an animist bent, where you believe inanimate objects are alive, and that they will deliver some kind of salvation to humanity.
There is so much stupid in this, I don't know where to start.
First off, animism is not the belief that an inanimate object is alive. You're far closer to an animist than I am, because I suspect you believe in a soul.
Second, inanimate is such a vaguely stupid term, that its precise set of included things changes by the fucking century. It literally means "alive like an animal". What a profound declaration on your part that something not alive like an animal cannot be alive. Give this motherfucker a Nobel
Re: (Score:2)
Yes, alive like an animal, like the many animals you compared it to.
Re: (Score:2)
And I do have personal experience with psychosis. Currently it is under control with a phenothiazine antipsychotic.
Re: (Score:2)
I compared a diagnostic criterion of an LLM with a diagnostic criterion of an animal.
If I say that a box is striped like a Zebra, am I bordering on the edge of psychotic?
Any inference you made about my regarding an LLM as "alive" was a projection on your part.
Re: (Score:1, Flamebait)
The rest of your post is a long-winded Searlean presupposition that you yourself are not just a parrot.
Re: (Score:3, Informative)
So, your response is, "I am way too smart and knowledgeable to bother replying to your post, but I feel the need to call you stupid despite having no counter points... So, Nyah!"
No. My response is exactly as written.
You seem to have formed an argument informed by Searle.
Searle's Chinese Room isn't a formal argument- it's an intuition pump, built on a presupposition: That there is something magical in your neurons.
I don't want to engage with Searle-by-proxy when Searle has been definitively debunked, continuously.
The Chinese Room speaks to those who internalize his presupposition, which I suspect you do.
People who believe that often have a religious, or near-religious reason t
Re: (Score:2)
So you still want to go with "Searle has been debunked!!!" which is impossible because we're talking about the philosophy of mind which is not science. It is philosophy. You're saying you just don't understand what philosophy is.
Wrong right off the bat.
;)
To debunk is to expose a claim, belief, or theory as false, exaggerated, or pretentious.
In the case of Searle, the claim is that semantics cannot arise from syntax.
Debunking is not about science, it's about argumentation. Philosophy is entirely based on formal logic- argumentation.
We'll ask the AI, because as I suspected, it's a lot smarter than you are
Q: Can you debunk a philosophical argument?
A: Yes, philosophical arguments can be debunked by identifying false premises,
Re: (Score:2)
That's you trying to engage in apologetics for Searle's dumbfuck intuition pump.
Re: (Score:2)
Re: (Score:2)
Nobody said that state space would allow an LLM to transform into the moon.
Nobody said it would allow it to breathe.
We're discussing intelligence, and particularly the limits of information and how that may apply to it.
Scams are a bigger problem (Score:5, Interesting)
Scams have become way more convincing, which will lead to larger losses to theft. No longer can you identify scammers by broken English, or other obvious markers.
I had one recently that seemed legit until I went off script and he started dropping "sir" more than in a normal conversation. Another hacked a friend's account and had a convincing post about how his uncle died and was selling cars and various items that he'd hold for a deposit.
It will be much easier to scam grandpa when you can deepfake his grandkids
Re: (Score:3)
Scams have become way more convincing, which will lead to larger losses to theft. No longer can you identify scammers by broken English, or other obvious markers.
I had one recently that seemed legit until I went off script and he started dropping "sir" more than in a normal conversation. Another hacked a friend's account and had a convincing post about how his uncle died and was selling cars and various items that he'd hold for a deposit.
It will be much easier to scam grandpa when you can deepfake his grandkids
Shock horror, racism no longer a valid excuse.
I hate to break it to you, but scams didn't start with foreigners, locals were doing it long before anyone with a funny accent and broken English came onto the scene. We're going to have to go back to the tried and tested methods of being smart enough not to fall for obvious scams and for those who aren't... letting them suffer the consequences of their own stupidity.
Those of us who grew up poor in the 80s and 90s know full well that there are dozens of pe
Re: (Score:2)
Scammers operating within a country can be prosecuted when identified, which tends to limit how aggressive or widespread they are. In contrast, scams originating from abroad often face fewer legal consequences, allowing them to be more brazen and scalable. Historically, many of these operations have also shown linguistic or stylistic cues, partly because they’re run across language barriers.
Classic examples, like the “Nigerian prince” emails, weren’t about targeting a specific group.
Push back? (Score:3)
Is there any push back on this?
I've actually labelled a few of the sites I run to indicate no "AI" used, and all human content.
I would guess there's gotta be a bit of a movement happening behind this, anyone seen anything?
Destroying social media??? (Score:3)
I say GO (you sloppy) AIs !!!! The sooner the user tracking and push algorithms stops, the better.
I do not think the Internet is being destroyed (Score:5, Interesting)
I think what we see is merely an Internet split onto different layers of mental capability. The "old" Internet is still 100% there and can be found if you know a bit or want to learn. Blogs, personal websites, discussion forums, etc. And while we have seen some slop comments here, for example, I think that has pretty much died down close to zero because nobody was engaged.
What is struggling is part of the commercial "Internet" like parts of YouTube, FB, etc. For example, YT has so far failed to rein in propaganda slop and, worse, "we do not care about reality, just get that click!" slop that will just make things up. However, this will not "destroy" the "Internet"; this just means some commercial service providers have gotten fat and lazy and too used to spectacular profits. And now they complain about these profits getting less fantastical. No need to listen to them.
Obviously the "SlopNet" part of the Internet is catering to the least mentally capable and worst informed and educated parts of the population. But even that is not really new. TV was their drug before. We do face dangers from no-morals no-integrity operators that try to forge these people, with some success, into voting blocs. For the incredible amount of damage from voting in a Kakistocracy type of government, refer to the destruction the current US administration is doing. But even that is limited, as, after a while, these blocs will fracture.
Hence, nothing to see here, move along.
Not AI fault (Score:5, Insightful)
Re: (Score:3)
... on platforms that optimize for engagement.
Like 404 Media and their "anonymous" bloggers who keep submitting their paywalled blog posts as Slashdot stories?
Re: (Score:3)
..nobody is posting slop in places where there is no chance of getting that precious traffic. It is a system level problem.
And when AI slop is the one creating more AI slop? Take a look at the human phenomenon of “going viral”. Over the dumbest of falsehoods.
AI will get there. Then we will realize the problem of email spam is child’s play by comparison to AI-generated slop that believes its own delusions.
You're right. AI isn’t the inherent problem in content creation. Gullible humans believing every word is. Not sure how we protect against that. There are a LOT of grown-ass children now. It
Youtube is already a mess (Score:2)
Any supposed "real life" event video less than 5 years old simply can't be trusted anymore; there's so much AI-generated crap out there. Up until recently they were easy to spot if you knew what you were looking for (eg the classic 6 fingers on a hand etc) but in the last year or so this has become much harder, and all that will happen is YouTube will become useless as any kind of repository of real event videos.
Re: (Score:2)
We're seeing this play out today. There are whole segments of the Internet engaged in arguing over whether Netanyahu is dead, maimed, in hiding, whatever, and whether his proof of life videos are AI slop or not. E.g.: https://www.timesofisrael.com/netanyahu-dead-tel-aviv-flattened-ai-generated-videos-are-dominating-the-iran-war/ [timesofisrael.com]
I don't feel that I can correctly identify all forms of AI content anymore. For months the AI slop ads were super easy to catch. I feel like I haven't seen as many recently. I'm assum
Re: (Score:2)
and whether his proof of life videos are AI slop or not
I like the video where yahu's coffee is above the rim of the cup and doesn't spill. It doesn't prove he's dead, but it proves the video is slop.
Re: (Score:2)
Agreed. I will confess to having the algorithm lead me on time sucks. Example: looking up how to fix the ice maker on my fridge, the kind of thing YouTube has historically been pretty worthwhile for. In the recent past, perhaps I also ended up watching several YT shorts of people slipping on ice, falling into swimming pools, and bad driving. I'm only human.
But within the past, heck 90 days, YT shorts is almost entirely AI slop. It has effectively cured me of clicking on any YT short that isn't one of my real, life, human-
The bigger issue (Score:2)
is that close to 99% of people on Earth contribute nothing valuable, whether it's new knowledge or new things.
AI has amassed enough data to cover most people's lives from birth to death. Yeah, it's that bad and sad.
dot-com companies did not talk about porn either (Score:2)
And nothing of ad was lost. (Score:5, Insightful)
Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth...
Said to an audience hiding behind their ad-blockers. Mourning not its loss.
Re: (Score:2)
I don't use an ad blocker because I hate ads. I use it because it is now an essential security tool.
Ad companies started getting VERY unreasonable about running megs of scripts downloaded from all kinds of 3rd-party sites and doing all kinds of nasty cross-scripting BS. You don't need to be a genius to understand how dangerous these practices are, but... there's almost zero ethics in this industry. Banks have warned us for decades about phishing attacks, but these days even official bank correspondence l
Of course not? (Score:2)
AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet
Err, yes? This is like complaining that research into solar panels is ignoring that prom dress sales were down last quarter. Also:
Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet, cause cascading societal and economic harms.
My dude, if "AI porn is destroying discoverability" you'd better make sure no one looks at your search history.
Re: (Score:2)
Err, yes? This is like complaining that research into solar panels is ignoring that prom dress sales were down last quarter. Also:
YES! This is what I came here to complain about! Stupid article premise!
The old Internet already WAS subsumed (Score:5, Insightful)
Yes, AI slop has accelerated a lot of enshittification, but the enshittification started decades ago.
It started when Facebook and other major social media aggregators started putting content behind walls and made searching old content extremely difficult.
It started when Google pagerank started being actively abused by SEO "experts" churning out meaningless, contentless blog posts and other junk content just to fluff up rank.
It started when every error message you search for leads to an enshittified page that exists solely to capture common searches, lead you along for as long as possible while displaying as many ads as possible, without any real content.
I used to be able to search for recipes and find a lot of individual bloggers and websites. If you search for any given recipe today, there are a handful of sites that are going to pop up at the top of search results for almost everything. Damn you Spruce Eats!
Etc.
I could keep going. The biggest problem is that the EXISTING, in-progress enshittification, is 100% compatible with AI slop.
Re: (Score:3)
https://battlepenguin.com/tech... [battlepenguin.com]
Direct link to widget: https://djsumdog.gitlab.io/sea... [gitlab.io]
Re: (Score:2)
Just taking the last example. Spruce Eats et al exist because people visit them and they make money. If Spruce Eats is rubbish then it is easy to exclude it when searching but people don't. If people were willing to pay for good quality content then we'd have good quality sources out there to use, but they aren't so we've got whatever parasitical or
Re: (Score:3)
We're on the same page. Enshittification is kind of unavoidable because the vast majority of people go along with it. I'm sure I do too, in ways that I'm not criticizing!
I pirated software when I was a kid. As an adult today I buy licenses to free software, subscribe to patreons or substacks of people who produce content I use or follow, etc. It's not a ton, but I try to do my part to support individuals.
With the Spruce Eats example, it's not absolute junk. I'm sure some recipes on that site are great. But
Re: (Score:2)
Right back at you--I agree with you. It's a scary loop. I hated the Google Gemini popup at the top of search results, and yet, recently, I'm finding myself using them.
I'm terrified of the day when ads are 100% generated on the fly for the viewer's exact demographic and likes and dislikes. I'm sure that's coming. Talk about dystopian...
Re: (Score:2)
https://odysee.com/@swprs:3/er... [odysee.com]
Re: (Score:2)
Very interesting interview, thanks for sharing. Amazing that that goes back to 2005.
(Also of minor interest that both Eric Schmidt and Charlie Rose were accused of being sex pests, with legal proceeding still or recently ongoing.)
Media = "The World" to this author? (Score:2)
AI Porn is a win (Score:2)
Re: (Score:2)
Very few people want to marry someone who was into porn
Depends, am I going to hear "I don't do that any more" or not? Go big or go home.
Opinion isn't research (Score:2)
Sure, the researchers might have ignored some things, but then again, the writer hasn't done any research. FUD is easy to spread, and "human slop" is easy to write. Lots of people write FUD about AI, it's the in thing. I think that in general AI slop is better.
Terrible headline (Score:2)
The internet is fine, not destroyed
Yes, there is a lot of slop on some social media sites, but the internet is more than that
The future is becoming increasingly unpredictable and most predictions are simplistic nonsense "because the inputs must be guessed"
Really? (Score:2)
Let us not forget that we've spent the last 30 years trying to make ads less invasive. This is a fact. There is what is now an entire category of software that revolves around stealthy ways to block them. This was always a weak, ineffective, and arguably immoral stream of revenue, with more than trivial privacy concerns.
If you're still depending on ad revenue to run your website, please think of something else.
Next up, this isn't the first time the google algorithm has changed. Louis Rossman did a great video on t
karma (Score:2)
Also not in Anthropic's report - education (Score:3)
AI has had a profoundly negative effect in education, which naturally none of the AI vendors will take any responsibility for.
It turns out that "retrieving answers to exams" is something that LLMs excel at. Since most early education is about learning stuff that's already well known to older or more educated people, it is nearly impossible for teachers to devise assignments that are appropriate to the learning level of students that cannot be easily answered by LLMs.
My teenage daughter reports that many of her classmates basically cannot do any work without LLM. Her lacrosse coach recently assigned an exercise of watching a video, and most of the members of the team put it into AI to give answers. That may be shortsighted and self-destructive behavior, but these are minors who we don't expect to understand or deal with long-term consequences. That's why we don't let them vote, drink, drive cars, gamble or do other things that are destructive to self and others.
Yet nobody at OpenAI or Anthropic seems to give two shits about destroying the education of millions of young people, and saddling teachers and schools - who have salaries/budgets many orders of magnitude smaller than these speculative cash receptacles - with the fallout of a perfect assignment-faking machine.
We're still stuck hearing the platitudes about how "AI can help them learn in new ways", which is a radioactive pile of nuclear bullshit, while Anthropic's "research" says nothing about the impacts on education right now.
a.i. porn? (Score:2)
A.i. pron is bad now? I have to say that rule 34 websites have never looked as great. Give me this endless supply of Wolverine fucked hard by Deadpool while both are stabbing each other please.
Good. Internet 2.0 was a mistake. (Score:2)
I don't care if ad supported sites die. Including this one. In fact, I wish they would.
Re: (Score:1, Funny)
Just do a Trump and say you didn't want an Internet after all. Nobody will remember you begging for it yesterday.
Especially funny that the biggest help has been from Ukraine. Yet Trump and JD still haven't said thank you.
Re: (Score:3)
That must be why our oil prices are exactly the same as they were a month ago.
The complete inability to understand economics by conservatives will never cease to amaze me, especially as, even now, the "liberal media" insists it's the Republicans who are good with the economy.