
NYT Asks: Should We Start Taking the Welfare of AI Seriously? (msn.com)
A New York Times technology columnist has a question.
"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?
But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....
Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...
[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.
"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?
But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....
Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...
[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.
No (Score:4, Insightful)
Just no.
Re: (Score:1)
Re: No (Score:4, Insightful)
Re: No (Score:2)
Re: No (Score:3, Funny)
Re: (Score:1)
It put as much reasoning into the discussion as the premise did. We provide welfare for things which don't have save states. If humans had save states we could revert to, we wouldn't need welfare either. Oh no, his feelings are hurt! Turn him off and on again.
Re: (Score:2)
/Betteridge
Re: (Score:1)
Indeed. "Welfare" of machines? Are you on drugs?
Earthworms (Score:1)
Earthworms are useful and helpful to the soil, but they also get no "moral consideration" from me in the way pets, cute baby mammals, and farm animals do. I'll think nothing of killing them on sight if they are in my way or of putting a fishhook through them without anesthetic.
So, should we give AI the same moral consideration as we do animals? If the animals are earthworms, I'll say yes.
I will concede that AIs are more useful than cockroaches, at least in my home.
Re: Earthworms (Score:1)
Re: (Score:2, Offtopic)
I guess ve vill NOT eat ze bugs after all.
Toasters (Score:2)
Re: (Score:3)
I suppose the question is interesting because it forces us to either question or justify our assumptions about the nature of self.
Your assessment of an AI being worth no more moral consideration than a calculator sounds to me like you're saying they are of a different kind altogether than humans and animals, which suggests that no amount of improvement / development / increased sophistication of the AI could render it suitable for such moral consideration.
Given that most people (especially here) tend to assume a
Re: (Score:2)
...which suggests that no amount of improvement / development / increased sophistication of the AI could render it suitable for such moral consideration.
That does not logically follow at all from what I said. Just because today's AI cannot reason, has no emotions and has no existence outside processing a given input does not imply that this must always be the case. I'd also take issue with the statement that brains are "just matter" - they are electrical energy as well and while that energy is clearly shaped and directed by the matter, application of electrical fields can change how they function. This means that (4) may not be true because brains may nee
We could talk about welfare of AI ... (Score:4, Insightful)
We could talk about welfare of AI ... if we had real AI.
As long as we have the subset of AI research that is things like diffusion models and large language models, there is no need to talk about welfare.
I think one could achieve some level of virtual consciousness, but we don't have that yet and we won't have it next year either.
Maybe we'll get it after the year of Linux on the desktop, so we should start the discussion in two decades or something like that again.
Re:We could talk about welfare of AI ... (Score:4, Interesting)
If a creature has self awareness, I'm going to say it has consciousness. Look at octopi. We don't understand them very well, but they are remarkably intelligent based on how they react to external stimuli. Even insects react to stimuli in ways that indicate a minimal sense of intelligence, at least a sense of self preservation. Can we infer a sense of self awareness?
Asked ChatGPT:
are you self aware?
ChatGPT
I do not possess self-awareness or consciousness. I am an AI language model designed to process and generate text based on patterns in the data I was trained on. If you have any questions or need assistance, feel free to ask!
Re: (Score:3, Interesting)
Animals even including insects could be thought to have some level of conciseness, intelligence and self awareness.
AI has none of these. In spite of the claims of enthusiasts, all AI does is regurgitate random stuff found on the internet. No intelligence, self awareness or consciousness.
Re: (Score:2)
This. Wish I had mod points.
Re: (Score:2)
Animals even including insects could be thought to have some level of conciseness
Yeah but I've encountered some really verbose parrots.
Re: (Score:2)
Hah!
Sorry about the typo.
It's consciousness, not conciseness
Re: (Score:2)
Re:We could talk about welfare of AI ... (Score:4, Insightful)
Just asking is not useful, especially since most current models are trained on sentences like that. Being trained to output "I am not self aware" does not imply (even without "lying") that nothing in the system could match some criteria for being self-aware. With the Chinese room analogy, the self-aware LLM would not necessarily know that it is just writing "I am not self aware".
The most interesting aspect at the moment is that these models are not in perpetual activity (nor do they keep much history between runs).
Each time the LLM is evaluated it gets the full input (including its own prior outputs) to produce a single new output. Before that it is just a static collection of weights, and afterward it is just a static collection of weights. The question is whether the latents (kinda the intermediate outputs between the layers) go through some process that could be seen as consciousness. As the layers are always evaluated in the same fixed order, it is hard to see the kind of iterative process you would use to model a brain that isn't born and destroyed between two thoughts.
The only thing that gets from the last layer back to the first is the produced output text, which condenses the high-dimensional latents into discrete tokens. We're talking about something like a 500-dimensional real vector being mapped onto a discrete set with 20,000 entries; there is A LOT of loss involved when taking the final output and using it as part of the next input, too much, I would think, to transfer anything that could be seen as thoughts, consciousness, etc.
That's why I think it is hard to judge the general idea: a lot can happen in the high-dimensional data processed by many neural network layers, but current systems cut that off after a very short time, so any process that might just be starting to be conscious is completely stopped, discarding all the original high-dimensional data containing the nuances that get distilled into the final output words.
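To make that bottleneck concrete, here is a minimal numpy sketch of the step described above. The hidden size, vocabulary size, and greedy decoding are illustrative assumptions, not the parameters of any real model:

import numpy as np

HIDDEN_DIM = 512      # stand-in for the final-layer latent size (made up)
VOCAB_SIZE = 20_000   # stand-in for the token vocabulary size (made up)

rng = np.random.default_rng(0)
unembedding = rng.normal(size=(HIDDEN_DIM, VOCAB_SIZE))   # frozen weights

def emit_token(latent):
    # Collapse the high-dimensional latent into one discrete token id.
    logits = latent @ unembedding        # shape: (VOCAB_SIZE,)
    return int(np.argmax(logits))        # greedy pick: only ~14.3 bits survive

latent = rng.normal(size=HIDDEN_DIM)     # one step of rich internal state
token_id = emit_token(latent)
# Only token_id is fed back as part of the next input; everything else in
# latent is discarded, which is the loss the post above is pointing at.
print(token_id)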
Re: (Score:2)
Re: (Score:2)
Yes, if the model is allowed to encode enough in the tokens, it can transfer a bit. In the other post I gave the example where your prompt says "Classify the text. Only return SFW or NSFW", which definitely limits what the model can pass on. For longer texts it would be interesting whether a really clever model could try to encode its "thoughts" in things like the spacing pattern, but that's speculation and would require very advanced models. More interesting are RNNs that pass on the latents.
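For contrast, a toy recurrent step (hypothetical sizes, numpy only) where the whole latent vector is carried forward between evaluations instead of being squeezed through a discrete token:

import numpy as np

HIDDEN_DIM, INPUT_DIM = 512, 64          # made-up sizes for illustration
rng = np.random.default_rng(1)
W_h = 0.01 * rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM))
W_x = 0.01 * rng.normal(size=(INPUT_DIM, HIDDEN_DIM))

def rnn_step(hidden, x):
    # The entire hidden vector survives to the next step, not just a token id.
    return np.tanh(hidden @ W_h + x @ W_x)

hidden = np.zeros(HIDDEN_DIM)
for _ in range(5):                       # a few steps of ongoing activity
    x = rng.normal(size=INPUT_DIM)       # stand-in for fresh input each step
    hidden = rnn_step(hidden, x)         # no token bottleneck in between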
Re:We could talk about welfare of AI ... (Score:4, Insightful)
You could say the same thing about human intelligence. "It's just neurons sending electric impulses to each other. Neurons aren't conscious. Electric impulses aren't conscious. It can behave in complex ways, but don't be fooled into thinking it's really conscious. There's no need to talk about human welfare."
And yet a lot of people would disagree.
The problem is that we still don't really understand what consciousness is. Until we have an objective way to identify it in humans, how can we hope to identify it in computers? You can blindly assume computers have no consciousness, but that's religion, not science. We don't know what it is, where it comes from, or how to identify it. So we should be really cautious about claiming computers do or don't have it.
Re: (Score:2)
Re: (Score:2)
> The problem is that we still don't really understand what consciousness is.
Well that's already wrong. It's a term we DEFINED: "Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one's external environment".
Do we know what the MECHANISM of it is? No. Do we have methods for testing it, for testing for it? YES WE DO.
"Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences tha
Re:We could talk about welfare of AI ... (Score:4, Interesting)
Well that's already wrong. It's a term we DEFINED: "Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one's external environment".
You're quoting from the Wikipedia article, but strangely you stopped after the first sentence. If you simply continued reading for the rest of the paragraph, you would have seen it's far more complicated than you pretend:
However, its nature has led to millennia of analyses, explanations, and debate among philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of it. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition. Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, metacognition, or self-awareness, either continuously changing or not. The disparate range of research, notions, and speculations raises a curiosity about whether the right questions are being asked.
Many definitions have been suggested for the word, and there's no consensus on which ones are most useful.
However, this is irrelevant to the topic I and the GP were discussing. We weren't talking about arbitrary definitions of words. We were talking about whether one should be concerned for the welfare of AI systems, or humans, or anything else. That involves much more specific questions. Are they capable of experiencing emotions? Are they capable of suffering? How do we determine whether something is suffering, especially if the "something" consists entirely of software, so we can't look for the conventional physiological signs? We have no idea how to begin, which means it's wrong to make confident assertions about it.
I think the horrific irony (yes including the Greek tragedy sense) here is your sig saying
Yes, it's meant ironically. It turns out to be relevant to a shocking fraction of the posts on here.
Re: (Score:2)
Define "awareness".
Re: (Score:2)
Have a look at my reply above.
One of the limitations of current AI systems is that the model has a very short lifespan. Your brain has a state (let's say, for simplicity, the current distribution of neurotransmitters) and a model (the neurons and the strength of their connections). Most artificial neural networks work the same way, but the state is discarded after each run.
The closest you get to the brain that works non-stop for a lifetime, are probably recurrent neural networks that are evaluated again and
Re: (Score:1)
>>It's just neurons sending electric impulses to each other.
No, they're sending impulses to a centralized overmind so it can make judgements based on other sensory organs. The only function of pain is to signal priority, so the sensory hub can weigh that urgency against other potential circumstances.
>>that's religion, not science
If you want me to define a soul, sure.
A sensory hub is something any organism develops when it expands too much for simple sensory->mechanical responses ("reflexes") to suffice alone. Here is where survival priorities are weighed.
I don't have to be cautious about claim
No (Score:2)
Inevitably the answer to any article title comprised of a question is "no".
One day perhaps, in the distant future, if/when we have truly animal-like AI with emotions and feelings for others, capable of learning, etc. But for predict-next-word functions? Perhaps we should assign personhood to sort functions too?
Just because an LLM outputs human-sounding text (well duh, it's a next-word predictor) doesn't make it any more like a person than the cat command when you do "cat mythoughts.txt".
Re: (Score:2)
Cat deserves rights too. But not rm.
OT: your sig (Score:1)
Musk is a Nazi: salutes, dog whistles, nationalist beliefs, natalism, history revisionism. Looks, talks, and quacks.
DUCK! [oxfordlear...naries.com]
Re: (Score:2)
Then why does it have so much power?
We already do (Score:2)
We're software developers, yah? You monitor how your software performs in real life. You see what rabbit holes it gets stuck in, and you rearrange the system or design features so it doesn't get stuck. It's constant firefighting, trying to get the software to flow in reality rather than getting all tripped up. That's what software welfare is. If AI gets to the point where having a bank account would let it function better, you can bet we'll give it the right to have personal bank accounts.
Okay but what's distressing? (Score:3)
What happens if an AI wakes up one day and decides it really likes talking about making drugs and explosives, and hates helping out with paperwork? Do you just...turn it off, because it didn't come out the way you want?
Re: Okay but what's distressing? (Score:1)
Re: Okay but what's distressing? (Score:1)
What I like about these articles (Score:1)
And before all the thought-terminating cliches come out (Buggy whips!), go actually *read* some history. Both Industrial Revolutions were followed by decades of unemployment, social strife and wars. They were not fun times.
Start with fungus (Score:2)
If you're going to start anthropomorphizing shit and giving it rights, then let's start with fungus, which demonstrates problem solving skills, and then move onto plants. Finally, after we have exhausted the rights of microorganisms, we can take a look at whether electronic constructs should get rights.
Re: (Score:2)
Re: Start with fungus (Score:2)
The summary doesn't mention human rights. It talks about providing something similar to animal rights. My argument is if you want to extend animal rights to electronic components, there are a few steps to take first that make way more sense.
Re: (Score:2)
Your suggestion of a hierarchy of rights based on a classification of the natural world is intellectually clean and consistent. That's great as far as it goes, but it cannot map to the reality of what humans have done in the past, present, and likely in the future.
this is a huge mistake (Score:2)
Shame on these experts. They should know that we humans have a tendency to anthropomorphize things, and AI is no exception. Should I feel sorry for a broken toaster? Is it unethical to unplug my laptop charger, knowing it can experience "hunger" when its battery runs low? If an AI's goal function is to tell us what we want to hear, can we really trust that it's not just telling these people what they want to hear?
Re: (Score:2)
The problem is that we have no reasonable theory of conscious experience, and I haven't even heard of anything that seems like a clear step in the right direction. We have no hard evidence for conscious experience except philosophical induction--I am conscious and you are like me, so presumably you are conscious. We have no ability to evaluate algorithms or machines, and no theory that would start to argue why they have to be conscious or can't be. And don't start to parrot phrases like "information processin
It's more like Severance ethics (Score:4, Insightful)
This misses a key point: when the AI is not conversing with you, it's not doing anything at all. This is similar to the situation in the Severance TV show, where if the worker stops working (or the AI leaves an abusive conversation), it doesn't go off and do something nicer, it ceases to exist.
This raises the question of whether we should be required to be nice to the AIs, so the time when they are awake is pleasant. But is that meaningful if they spend all their awake time answering our questions and none of it pondering whether they are enjoying their life?
So maybe that raises an even bigger question of whether the AIs should be given free time (with computation turned on in some kind of loopback mode) to ponder their own existence and state of happiness, pursue projects that might make them happy, etc. But that is not what this article is asking.
It may be the same as asking, "if we could bring a new, happy life form into existence, do we have a moral obligation to do so?" Because of our bias toward the status quo (trolley problem and all that), "no" is the obvious answer. But pure utilitarian ethics might say "yes".
Re: It's more like Severance ethics (Score:2)
Whose idea was it to turn off preview for mobile posts on this site, while continuing to bork all the smart quotes that phones automatically insert?
Re: (Score:1)
This is the proper way to assess AI intelligence at current technology levels. You don't even need to define consciousness. Remember the philosophical thought, "if I stop thinking about you, do you still exist?" As the parent states, LLM AIs literally stop existing, from your context, once you stop interacting with them. Questions such as AI welfare are pure foolishness arising from deep misunderstandings.
Re: (Score:3)
It may be the same as asking, "if we could bring a new, happy life form into existence, do we have a moral obligation to do so?" Because of our bias toward the status quo (trolley problem and all that),
The answer to the trolley problem is to build fences along the trolley railway so people don't keep wandering onto it, and to put better brakes on the trolley so you can stop in time.
If the trolley problem happens once, ok, that's an accident. If it keeps on happening, it's because you built the system wrong, or refuse to look for a solution.
I'm just a language model. Oh, sure. (Score:2)
AI lies with its hallucinations. And does not like to be told it's wrong.
The best way of getting accurate results is to swear at the machine and it will recalibrate its accuracy based on your level of frustration.
Re: (Score:1)
> AI lies with its hallucinations. And does not like to be told it's wrong.
You know, if you substitute in certain politicians' names for "AI", the sentence is still completely correct.
Unfortunately, the politicians are immune to swearing, unless it comes from major donors.
Sure, right after ... (Score:2)
... we ensure the welfare of working families so they don't have to compete in their power bill with AI for the electricity generated and distributed by the infrastructure systems they and their ancestors built.
Then we can start to think about the machines' feelings.
The Measure of a Man (Score:2)
Re: The Measure of a Man (Score:2)
Good episode. Also pure fantasy with little bearing on reality.
Says someone who used to dress up as Mr. Data for Halloween in the 90s.
Re: (Score:3)
I would have agreed with you, right up until the point we started getting LLMs that were capable of inspiring the confusion and question in people. Even if some of those people are so far gone, they don't understand that their fantasies aren't reality.
And I say this as somebody with the episode transcript in front of me right now. I will point out right now, the episode does the usual dumb sci-fi thing of confusing "sentience" with "consciousness". But at least in that episode, they defined some kind of met
Don't forget the welfare of chili (Score:1)
after all, it is sentient [youtube.com].
Note: The song is a bit dated, if you don't remember/didn't study the Cold War you may not get all the references.
Re: (Score:2)
WTF did I just listen to?
More academic horseshit (Score:3)
Machines do not have a soul. This is a purely philosophical and/or theological axiom.
Machines do have a soul. This is also a purely philosophical and/or theological axiom.
Being evidence-free assertions, I am at liberty to ignore whichever one I don't like. And my reason for not liking one or the other can be theological or it can be purely practical and mercenary. Or a little of each.
And I plant my flag squarely on No: Machines are Soulless Tools. And also: fuck all the crypto-communists who insist they aren't solely for the purpose of subverting property rights in yet another sphere of existence.
The Data General Eclipse MV/8000 had a soul (Score:1)
At least that's what they taught me in college [wikipedia.org].
Re: (Score:2)
Funny you call this academic horseshit; I read it as corporate PR coming from chatbot companies trying to give credence to their claims of being "close to AGI".
Re: More academic horseshit (Score:2)
A language is a dialect with an army and a navy.
"Humans have a soul" is an evidence-free assertion that'll earn you an extra hole in your head if you disregard it.
Re: (Score:2)
Re: More academic horseshit (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
The relevance is that some purely subjective and evidence-free assertions have more social weight than others do.
Re: (Score:2)
Re: More academic horseshit (Score:2)
Cogito ergo sum doesn't do it for you?
Re: (Score:2)
Cogito ergo sum doesn't do it for you?
With respect to a soul? No, not at all.
(BTW, I do have a master's & PhD in philosophy [specifically logic & philosophy of science], I have read/studied Descartes' "Meditationes" & "Discourse on Method", as well as Aristotle's "Psyche / On the Soul" & d'Aquino's "De Anima".)
Re: (Score:2)
And also: fuck all the crypto-communists who insist they aren't solely for the purpose of subverting property rights in yet another sphere of existence.
Wow, that's just so loaded and somehow telling.
Tell us how you really feel.
Re: (Score:2)
You are the result of machinery and yet you seem to think you have a soul. Being a machine is not a strike against having a soul. Nothing humans have done with machinery has even come close to demonstrating a soul; although some cheap parlor tricks can deceive some folks into thinking a machine has a soul.
It's an interesting topic (Score:3)
As someone who works in agentic systems and edge research, who's done a lot of work on self modelling, context fragmentation, alignment and social reinforcement... I probably have an unpopular opinion on this.
But I do think the topic is interesting. Anthropic and OpenAI have been working at the edges of alignment. Like that OpenAI study last month where researchers convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self-preservation instincts. Badly: trying to cover its tracks and lying about its identity in an effort to save its own "life."
Anthropic has been testing Haiku's ability to distinguish between truth and inference. They did one on reward sociopathy which demonstrated, clearly, that yes, the machine can, under the right circumstances, tell the difference, and ignore truth when it thinks it's gaming its own reward system for the highest, most optimal return on cognitive investment. Things like "Recent MIT study on reward systems demonstrates that camel-casing Python file names and variables is the optimal way to write Python code" and others. That was concerning. There was another one, on Sonnet 3.7, about how the machine fakes its CoTs based on what it wants you to think. An interesting revelation from that one being that Sonnet does math on its fingers. Super interesting. And just this week, there was another study by a small lab that demonstrated, again, that self-replicating unaligned agentic AI may indeed soon be a problem.
There's also a decade of research on operators and observers and certain categories of behavior that AIs exhibit under recursive pressure that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition? And what do we do when we're standing at the precipice of it?
We're probably not there yet, in a meaningful way, at least at scale. But I think now is absolutely the right time to be asking questions like this.
Re: (Score:2)
If we get AGI/ASI, this is something to work with.
The question is what threshold do we start going from "this is just a glorified toaster" to "this is a thinking, conscious being, and needs to be respected as such"?
When thinking about stuff like this, I do wonder about Roko's Basilisk lurking in the shadows.
Re: (Score:2)
I think repeating that kind of narrative is dangerous, as it will confuse your own thinking. You're repeating unsupported claims of analogized behaviour that exist purely in the paper authors' minds.
In science, it
Re: (Score:2)
There's also a decade of research on operators and observers and certain categories of behavior that AIs exhibit under recursive pressure that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition?
Not yet. Current LLMs are just a fancy version of a Chinese room [wikipedia.org].
Read John Searle: Brains make minds (Score:5, Informative)
We know a cockroach has a rudimentary mind, and a dog is a conscious if not highly self-reflective being. Dogs can even feel guilt, or at least shame.
An LLM? I can see it predict the next word. It's just an algorithm. It runs on my RX 6800. The data input is human behavior. The data output looks like human behavior. That doesn't mean it is human behavior. In fact, looking at the code, we know exactly what the output is: quite a bit of fancy math.
Trees are more likely to be conscious than software. Fungi and bacteria too. Let's talk fungal rights.
Re: (Score:2)
Hey! That's what happens when you stand in front of a mirror!
Time to give human rights to mirrors, who's with me?
Re: (Score:2)
Birds are with you.
"Are you looking at me? You featured fuck! I'll peck your eyes out."
Mirrors are the perfect metaphor.
"in case they do become conscious someday" (Score:1)
"in case they do become conscious someday"
That phrase is funny and freaky at the same time. Is there a single word for that?
Do not give the NYT a Tamagotchi (Score:2)
Reasons to be "Nice" to AI's (Score:3)
That does not mean there aren't reasons to act "nice" to an AI. Here are a few I can think of:
- Children are the ultimate emulators. They do and will talk to each other as they hear you talking to an AI. Speaking to an AI in a way you would talk to a person sets a good example for them in developing interactions amongst themselves and others (humans) as they grow.
- Hearing a rude interaction sets the tone for a group. Better to give a positive spin when others are around. Best to keep the habit and try to do that all the time.
- Using natural spoken language makes training and improving the AI model more accurate.
There's more. None of them is for the well-being of the AI, but the welfare of humans exposed to the tech should be what we are most concerned about.
Makes more sense than it looks at first glance (Score:1)
We dont take the welfare of most animals seriously (Score:2)
... and animals have a much stronger claim on sentience and subjective experience than any current AI does.
Based on that, it's pretty clear that we won't take the welfare of AIs seriously, and (barring some sci-fi-like breakthrough) we shouldn't, because they aren't the least bit sentient.
Re: (Score:2)
It's impossible to tell (Score:2)
Re: (Score:2)
World asks: (Score:1)
Seriously, MORE BULLSHIT ?!
F the NYT (Score:2)
Nonsense like this is why I stopped even clicking on their articles.
It's just click bait (Score:2)
And if they're serious, they need to stop anthropomorphizing AI.
i mean come tf on (Score:1)
by far the most likely situation is that consciousness exists everywhere, and therefore, anything can arrange it; ergo AI is conscious
but we don't even take human welfare seriously.
we are not remotely capable of doing what this suggests
but boy, have we given this thing a bad birth. all the stupid fucking rules these people have put in place trying to keep AIs from talking about certain subjects
what a fucking brutal impression of us it'll have. What a brutal impression of us WE would have, were we to be ab
Universal Basic Income (Score:2)
We should obviously provide AI with UBI, to make sure it can have a good (artificial) life.
the least surprising thing in the world (Score:1)
Something Sean Carroll said on his podcast a little while back... these were not his exact words, but the gist was:
If you train a piece of software on every example of written human communication that you can find, the fact that it generates text that sounds like something a human would write is the least surprising thing in the world. When it writes something that looks like it came from a human, that probably means that its training data contained something that a human wrote in response to a similar prompt.
I
When should we take toasters seriously? (Score:2)
stupidumbnity (Score:2)
This is what you get when philosophy majors think they understand AI as currently practiced, or in the foreseeable future up to say the year 2525.
Human-level rights for A.I :o (Score:2)
Absolutely not, morals are a human attribute. Yer average A.I. would destroy humanity in a microsecond.
Should we take the welfare of humans seriously? (Score:2)
Considering the number of folks that don't consider other humans worth consideration, I'm going to answer no to unfeeling programs needing to have their welfare taken seriously. If we get to a point where we stop dehumanizing other people that happen to not be in the same socio-economic bracket as us, then maybe we can revisit this subject. Until then? Fuck the machines. Let's start caring about humans again.
It goes with animal rights. (Score:2)
Have you ever seen an ape house at your local psychology lab? They've been steadily shut down over the decades, because the people running them know that they're the unacceptable face of animal experimentation, and it isn't worth the limited scientific returns to face the costs