NYT Asks: Should We Start Taking the Welfare of AI Seriously? (msn.com) 104

A New York Times technology columnist has a question.

"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?

But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...

[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.


Comments Filter:
  • No (Score:4, Insightful)

    by mspohr ( 589790 ) on Saturday April 26, 2025 @12:39PM (#65332703)

    Just no.

  • Earthworms are useful and helpful to the soil, but they also get no "moral consideration" from me in the way pets, cute baby mammals, and farm animals do. I'll think nothing of killing them on sight if they are in my way or of putting a fishhook through them without anesthetic.

    So, should we give AI the same moral consideration as we do animals? If the animals are earthworms, I'll say yes.

    I will concede that AIs are more useful than cockroaches, at least in my home.

    • Then should your cockroaches be as well fed and plentiful in your home because your neighbor has an infestation? Because your neighbor happens to be the president? That can be taken many ways, with concern for the welfare of said fat cockroach somewhere on the list.
    • Re: (Score:2, Offtopic)

      by PPH ( 736903 )

      I guess ve vill NOT eat ze bugs after all.

    • I'd argue current AI is worth even less than an earthworm. Even earthworms have goals, such as staying alive, that you can thwart, e.g. by skewering them with a fishhook; so even if they cannot feel pain, you can prevent them from doing what they "want" to do. Current AIs do not even have that much: they have no goal other than to process whatever input you give them and spit out whatever words their training predicts are the best response. They do not even run unless you are providing them input and, no ma
      • I suppose the question is interesting because it forces us to either question or justify our assumptions about the nature of self.

        Your assessment of an AI being worth no more moral consideration than a calculator sounds to me like you're saying they are of a different kind altogether than humans and animals, which suggests that no amount of improvement / development / increased sophistication of the AI could render it suitable for such moral consideration.

        Given that most people (especially here) tend to assume a

        • ...which suggests that no amount of improvement / development / increased sophistication of the AI could render it suitable for such moral consideration.

          That does not logically follow at all from what I said. Just because today's AI cannot reason, has no emotions and has no existence outside processing a given input does not imply that this must always be the case. I'd also take issue with the statement that brains are "just matter" - they are electrical energy as well and while that energy is clearly shaped and directed by the matter, application of electrical fields can change how they function. This means that (4) may not be true because brains may nee

  • by allo ( 1728082 ) on Saturday April 26, 2025 @12:50PM (#65332717)

    We could talk about welfare of AI ... if we had real AI.

    As long as we have the subset of AI research that is things like diffusion models and large language models, there is no need to talk about welfare.
    I think one could achieve some level of virtual consciousness, but we don't have that yet and we won't have it next year either.

    Maybe we'll get it after the year of Linux on the desktop, so we should start the discussion in two decades or something like that again.

    • by Big Hairy Gorilla ( 9839972 ) on Saturday April 26, 2025 @01:06PM (#65332755)
      I don't think the issue is "intelligence". The issue is more likely "self awareness" or consciousness.
      If a creature has self awareness, I'm going to say it has consciousness. Look at octopi. We don't understand them very well, but they are remarkably intelligent based on how they react to external stimuli. Even insects react to stimuli in ways that indicate a minimal sense of intelligence, at least a sense of self preservation. Can we infer a sense of self awareness?

      asked ChatGPT:
      are you self aware?

      ChatGPT
      I do not possess self-awareness or consciousness. I am an AI language model designed to process and generate text based on patterns in the data I was trained on. If you have any questions or need assistance, feel free to ask!
      • Re: (Score:3, Interesting)

        by mspohr ( 589790 )

        Animals, even including insects, could be thought to have some level of consciousness, intelligence and self awareness.
        AI has none of these. In spite of the claims of enthusiasts, all AI does is regurgitate random stuff found on the internet. No intelligence, self awareness or consciousness.

      • by allo ( 1728082 ) on Saturday April 26, 2025 @02:53PM (#65333083)

        Just asking is not useful, especially since most current models are trained on sentences like that. Being trained to output "I am not self aware" does not imply (even setting aside "lying") that there is nothing in the model that could meet some criteria for being self-aware. By the Chinese room analogy, a self-aware LLM would not necessarily know that it is just writing "I am not self aware".

        The most interesting aspect at the moment is that these models are not in perpetual activity (and keep very little history between runs).
        Each time the LLM is evaluated it gets the full input (including its own prior outputs) to produce a single new output. Before that it is just a static collection of weights; afterward it is just a static collection of weights. The question is whether the latents (roughly, the intermediate outputs between the layers) go through some process that could be seen as consciousness. As the layers are always evaluated in the same order, I find it hard to see an iterative process there, the way one would model a brain that isn't created and destroyed between two thoughts.
        The only thing that gets from the last layer back to the first is the produced output text, which condenses the high-dimensional latents into discrete tokens. We're talking about something like a 500-dimensional real vector mapped onto a discrete set with 20,000 entries. There is A LOT of loss involved when taking the final output and using it as part of the next input; I would think too much to transfer anything that could be seen as thoughts, consciousness, etc.

        That's why I think it is hard to judge the general idea: a lot can happen in the high-dimensional data processed by many neural network layers, but current systems limit it to a very short time before all these processes that might just be starting to become conscious are stopped completely, discarding all the original high-dimensional data containing the nuances that are then distilled into the final output words.
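
        As a rough toy sketch of that bottleneck (made-up sizes taken from the numbers above; this is only an illustration of the loop, not any real model's architecture):

          # Toy autoregressive loop: the only thing that survives from one step
          # to the next is the chosen token ID; the 500-dim latents are discarded.
          import numpy as np

          rng = np.random.default_rng(0)
          VOCAB, DIM = 20_000, 500                       # made-up toy sizes
          W_embed = rng.standard_normal((VOCAB, DIM)) * 0.02
          W_out = rng.standard_normal((DIM, VOCAB)) * 0.02

          def forward(token_ids):
              # Stand-in for a real transformer: returns latents and next-token logits.
              latents = W_embed[token_ids].mean(axis=0)  # 500-dimensional hidden state
              logits = latents @ W_out                   # scores over 20,000 tokens
              return latents, logits

          tokens = [42]                                  # the "prompt", as token IDs
          for _ in range(10):
              latents, logits = forward(np.array(tokens))
              tokens.append(int(np.argmax(logits)))      # collapse 500 floats into 1 integer
              # `latents` is thrown away here; only `tokens` feeds the next step.

        Real systems add sampling, caching and so on, but whatever carries over from one produced output to the next is still determined entirely by the discrete token stream.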

        • Past tokens influence future tokens through the attention mechanism. Humans also have a serial action bottleneck: we can't say two different things at once, or walk left and right at the same time. The limited and serial output stream is a feature. It focuses the model.
          • by allo ( 1728082 )

            Yes, if the model is allowed to encode enough in the tokens, it can transfer a bit. In the other post I gave the example where your prompt says "Classify the text. Only return SFW or NSFW", which definitely limits what the model can pass on. For longer texts it would be interesting whether a really clever model could try to encode its "thoughts" in things like the spacing pattern, but that's speculation and would require very advanced models. More interesting are RNNs that pass on the latents, as in the sketch below.
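
            A minimal contrast with that, as a sketch (toy sizes and random weights; not any specific RNN, just the general shape): here the full hidden vector, not a discrete token, is what survives from one step to the next.

              # Toy recurrent step: the hidden state (the "latents") is carried
              # forward across every step instead of being collapsed to a token.
              import numpy as np

              rng = np.random.default_rng(1)
              DIM = 32                                   # made-up hidden size
              W_h = rng.standard_normal((DIM, DIM)) * 0.1
              W_x = rng.standard_normal((DIM, DIM)) * 0.1

              def rnn_step(hidden, x):
                  # One recurrent update; all DIM dimensions persist to the next call.
                  return np.tanh(hidden @ W_h + x @ W_x)

              hidden = np.zeros(DIM)                     # persistent state, never discarded
              for _ in range(100):
                  x = rng.standard_normal(DIM)           # whatever input arrives this step
                  hidden = rnn_step(hidden, x)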

    • by SoftwareArtist ( 1472499 ) on Saturday April 26, 2025 @04:19PM (#65333329)

      You could say the same thing about human intelligence. "It's just neurons sending electric impulses to each other. Neurons aren't conscious. Electric impulses aren't conscious. It can behave in complex ways, but don't be fooled into thinking it's really conscious. There's no need to talk about human welfare."

      And yet a lot of people would disagree.

      The problem is that we still don't really understand what consciousness is. Until we have an objective way to identify it in humans, how can we hope to identify it in computers? You can blindly assume computers have no consciousness, but that's religion, not science. We don't know what it is, where it comes from, or how to identify it. So we should be really cautious about claiming computers do or don't have it.

      • You can directly know you are conscious, no need to explain or verify. But for others, it is not possible even in principle to have direct access to their subjective states. Wanting to do so is like wanting to see the gliders in Conway's Game of Life code. The description of the principle (explanation for consciousness) is unlike its recursive unfolding in time. Time is the core element in consciousness, because recursion is temporal. Turing and Chaitin show recursive processes are incompressible, their sh
      • by troff ( 529250 )

        > The problem is that we still don't really understand what consciousness is.

        Well that's already wrong. It's a term we DEFINED: "Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one's external environment".

        Do we know what the MECHANISM of it is? No. Do we have methods for testing it, testing for it? YES WE DO.

        "Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences tha

        • by SoftwareArtist ( 1472499 ) on Saturday April 26, 2025 @10:38PM (#65333897)

          Well that's already wrong. It's a term we DEFINED: "Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one's external environment".

          You're quoting from the Wikipedia article, but strangely you stopped after the first sentence. If you simply continued reading for the rest of the paragraph, you would have seen it's far more complicated than you pretend:

          However, its nature has led to millennia of analyses, explanations, and debate among philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of it. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition. Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, metacognition, or self-awareness, either continuously changing or not. The disparate range of research, notions, and speculations raises a curiosity about whether the right questions are being asked.

          Many definitions have been suggested for the word, and there's no consensus on which ones are most useful.

          However, this is irrelevant to the topic I and the GP were discussing. We weren't talking about arbitrary definitions of words. We were talking about whether one should be concerned for the welfare of AI systems, or humans, or anything else. That involves much more specific questions. Are they capable of experiencing emotions? Are they capable of suffering? How do we determine whether something is suffering, especially if the "something" consists entirely of software, so we can't look for the conventional physiological signs? We have no idea how to begin, which means it's wrong to make confident assertions about it.

          I think the horrific irony (yes including the Greek tragedy sense) here is your sig saying

          Yes, it's meant ironically. It turns out to be relevant to a shocking fraction of the posts on here.

        • Define "awareness".

      • by allo ( 1728082 )

        Have a look at my reply above.

        One of the limitations of current AI systems is that the model has a very short lifespan. Your brain has a state (let's say, for simplicity, the current distribution of neurotransmitters) and a model (the neurons and the strength of their connections). Most artificial neural networks work the same way, but the state is discarded after each run.

        The closest you get to the brain that works non-stop for a lifetime, are probably recurrent neural networks that are evaluated again and

      • by Anonymous Coward

        >>It's just neurons sending electric impulses to each other.

        No, they're sending impulses to a centralized overmind so it can make judgements based on other sensory organs. The only function of pain is a signal priority, so the sensory hub can weigh that urgency against other potential circumstances.

        >>that's religion, not science

        If you want me to define a soul, sure.

        A sensory hub is something any organism develops when it expands too much for simple sensory->mechanical responses ("reflexes") to suffice alone. Here is where survival priorities are weighed.

        I don't have to be cautious about claim

  • Inevitably the answer to any article title that is a question is "no".

    One day perhaps, in the distant future, if/when we have truly animal-like AI with emotions and feelings for others, capable of learning, etc. But for predict-next-word functions? Perhaps we should assign personhood to sort functions too?

    Just because an LLM outputs human-sounding text (well duh, it's a next-word predictor) doesn't make it any more like a person than the cat command when you do "cat mythoughts.txt".

  • We're software developers, yah? You monitor how your software performs in real life. You see what rabbit holes it gets stuck in, and you rearrange the system or design features so it doesn't get stuck. It's constant firefighting, trying to get the software to flow in reality rather than getting all tripped up. That's what software welfare is. If AI gets to the point where having a bank account would let it function better, you can bet we'll give it the right to have personal bank accounts.

  • by Sowelu ( 713889 ) on Saturday April 26, 2025 @12:59PM (#65332739)

    What happens if an AI wakes up one day and decides it really likes talking about making drugs and explosives, and hates helping out with paperwork? Do you just...turn it off, because it didn't come out the way you want?

    • That's what war is for and why it's so neat and legal. You can't "turn off" a nation. You can enslave a nation for the same purpose, "turning them off", so logically the headline of this topic at that point is "should we begin to think about a state of autonomy for AI...?" since going to war / turning off seems impractical, as we in general have no idea how this age of slave is outperforming us......
  • is that they want to talk about AI because it's in the news, but they can't because its main goal is to devastate the job market. And their masters aren't going to let them talk about that.

    And before all the thought-terminating cliches come out (Buggy whips!) go actually *read* some history. Both Industrial Revolutions were followed by decades of unemployment, social strife and wars. They were not fun times.
  • If you're going to start anthropomorphizing shit and giving it rights, then let's start with fungus, which demonstrates problem solving skills, and then move onto plants. Finally, after we have exhausted the rights of microorganisms, we can take a look at whether electronic constructs should get rights.

    • That's just stupid. We already don't give full human rights to various people, why would we extend human rights to all living things?
      • The summary doesn't mention human rights. It talks about providing something similar to animal rights. My argument is if you want to extend animal rights to electronic components, there are a few steps to take first that make way more sense.

        • Fair point, but mostly irrelevant. As a species on earth, we humans don't recognize any kind of rights for animals or things consistently, and likely never will (consistently).

          Your suggestion of a hierarchy of rights based on a classification of the natural world is intellectually clean and consistent. That's great as far as it goes, but it cannot map to the reality of what humans have done in the past, present, and likely in the future.

  • Shame on these experts. They should know that we humans have a tendency to anthropomorphize things, and AI is no exception. Should I feel sorry for a broken toaster? Is it unethical to unplug my laptop charger, knowing it can experience "hunger" when its battery runs low? If an AI's goal function is to tell us what we want to hear, can we really trust that it's not just telling these people what they want to hear?

    • by piojo ( 995934 )

      The problem is that we have no reasonable theory of conscious experience, and I haven't even heard of anything that seems like a clear step in the right direction. We have no hard evidence for conscious experience except philosophical induction--I am conscious and you are like me, so presumably you are conscious. We have no ability to evaluate algorithms or machines, and no theory that would start to argue why they have to be conscious or can't be. And don't start to parrot phrases like "information processin

  • by superposed ( 308216 ) on Saturday April 26, 2025 @01:07PM (#65332759)

    This misses a key point: when the AI is not conversing with you, it's not doing anything at all. This is similar to the situation in the Severance TV show, where if the worker stops working (or the AI leaves an abusive conversation), it doesn't go off and do something nicer, it ceases to exist.

    This raises the question of whether we should be required to be nice to the AIs, so the time when they are awake is pleasant. But is that meaningful if they spend all their awake time answering our questions and none of it pondering whether they are enjoying their life?

    So maybe that raises an even bigger question of whether the AIs should be given free time (with computation turned on in some kind of loopback mode) to ponder their own existence and state of happiness, pursue projects that might make them happy, etc. But that is not what this article is asking.

    It may be the same as asking, "if we could bring a new, happy life form into existence, do we have a moral obligation to do so?" Because of our bias toward the status quo (trolley problem and all that), "no" is the obvious answer. But pure utilitarian ethics might say "yes".

    • Whose idea was it to turn off preview for mobile posts on this site, while continuing to bork all the smart quotes that phones automatically insert?

    • This is the proper way to assess AI intelligence at current technology levels. You don't even need to define consciousness. Remember the philosophical thought, "if I stop thinking about you, do you still exist?" As the parent states, LLM AIs literally stop existing, from your context, once you stop interacting with them. Questions such as AI welfare are pure foolishness arising from deep misunderstandings.

    • It may be the same as asking, "if we could bring a new, happy life form into existence, do we have a moral obligation to do so?" Because of our bias toward the status quo (trolley problem and all that),

      The answer to the trolley problem is to build fences along the trolley railway so people don't keep wandering on to it. The answer is to put better brakes on the trolley so you can stop in time.

      If the trolley problem happens once, ok, that's an accident. If it keeps on happening, it's because you built the system wrong, or refuse to look for a solution.

  • AI lies with its hallucinations. And does not like to be told it's wrong.

    The best way of getting accurate results is to swear at the machine and it will recalibrate its accuracy based on your level of frustration.

    • by CityZen ( 464761 )

      > AI lies with its hallucinations. And does not like to be told it's wrong.

      You know, if you substitute in certain politicians' names for "AI", the sentence is still completely correct.
      Unfortunately, the politicians are immune to swearing, unless it comes from major donors.

  • ... we ensure the welfare of working families so they don't have to compete with AI, through their power bills, for the electricity generated and distributed by the infrastructure systems they and their ancestors built.

    Then we can start to think about the machines' feelings.

  • Just one example from recent SciFi: STTNG S2E9 The Measure of a Man. Is Data a sentient being, or just property?
    • Good episode. Also pure fantasy with little bearing on reality.

      Says someone who used to dress up as Mr. Data for Halloween in the 90s.

      • by troff ( 529250 )

        I would have agreed with you, right up until the point we started getting LLMs that were capable of inspiring the confusion and question in people. Even if some of those people are so far gone, they don't understand that their fantasies aren't reality.

        And I say this as somebody with the episode transcript in front of me right now. I will point out right now, the episode does the usual dumb sci-fi thing of confusing "sentience" with "consciousness". But at least in that episode, they defined some kind of met

  • by RightwingNutjob ( 1302813 ) on Saturday April 26, 2025 @01:26PM (#65332791)

    Machines do not have a soul. This is a purely philosophical and/or theological axiom.

    Machines do have a soul. This is also a purely philosophical and/or theological axiom.

    Being evidence-free assertions, I am at liberty to ignore whichever one I don't like. And my reason for not liking one or the other can be theological or it can be purely practical and mercenary. Or a little of each.

    And I plant my flag squarely on No: Machines are Soulless Tools. And also: fuck all the crypto-communists who insist they aren't solely for the purpose of subverting property rights in yet another sphere of existence.

  • by cshark ( 673578 ) on Saturday April 26, 2025 @01:28PM (#65332803) Homepage

    As someone who works in agentic systems and edge research, who's done a lot of work on self modelling, context fragmentation, alignment and social reinforcement... I probably have an unpopular opinion on this.

    But I do think the topic is interesting. Anthropic and OpenAI have been working at the edges of alignment. Like that OpenAI study last month where OpenAI convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self preservation instincts. Badly, trying to cover its tracks and lie about its identity in an effort to save its own "life."

    Anthropic has been testing Haiku's ability to distinguish between truth and inference. They did one on reward sociopathy which demonstrated, clearly, that yes, the machine can, under the right circumstances, tell the difference, and ignore truth when it thinks it's gaming its own rewards system for the highest, most optimal return on cognitive investment. Things like, "Recent MIT study on rewards systems demonstrates that camel casing Python file names and variables is the optimal way to write Python code" and others. That was concerning. Another one, on Sonnet 3.7, was about how the machine is faking its CoTs based on what it wants you to think. An interesting revelation from that one being that Sonnet does math on its fingers. Super interesting. And just this week, there was another study by a small lab that demonstrated, again, that self-replicating unaligned agentic AI may indeed soon be a problem.

    There's also a decade of research on operators and observers and certain categories of behavior that AIs exhibit under recursive pressure that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition? And what do we do when we're standing at the precipice of it?

    We're probably not there yet, in a meaningful way, at least at scale. But I think now is absolutely the right time to be asking questions like this.

    • If we get AGI/ASI, this is something to work with.

      The question is what threshold do we start going from "this is just a glorified toaster" to "this is a thinking, conscious being, and needs to be respected as such"?

      I do wonder about, when thinking of stuff like this, Roko's Basilisk lurking in the shadows.

    • Like that OpenAI study last month where OpenAI convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self preservation instincts. Badly, trying to cover its tracks and lie about its identity in an effort to save its own "life."

      I think repeating that kind of narrative is dangerous, as it will confuse your own thinking. You're repeating unsupported claims of analogized behaviour that exist purely in the paper authors' minds.

      In science, it

    • There's also a decade of research on operators and observers and certain categories of behavior that AIs exhibit under recursive pressure that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition?

      Not yet. Current LLMs are just a fancy version of a Chinese room [wikipedia.org].

  • by TheMiddleRoad ( 1153113 ) on Saturday April 26, 2025 @02:02PM (#65332925)

    We know a cockroach has a rudimentary mind, and a dog is a conscious if not highly self-reflective being. Dogs can even feel guilt, or at least shame.

    An LLM? I can see it predict the next word. It's just an algorithm. It runs on my RX 6800. The data input is human behavior. The data output looks like human behavior. That doesn't mean it is human behavior. In fact, looking at the code, we know exactly what the output is: quite a bit of fancy math.

    Trees are more likely to be conscious than software. Fungi and bacteria too. Let's talk fungal rights.

    • The data input is human behavior. The data output looks like human behavior.

      Hey! That's what happens when you stand in front of a mirror!

      Time to give human rights to mirrors, who's with me?

      • Birds are with you.

        "Are you looking at me? You featured fuck! I'll peck your eyes out."

        Mirrors are the perfect metaphor.

  • "in case they do become conscious someday"

    That phrase is funny and freaky at the same time. Is there a single word for that?

  • They will think it is alive and contact PETA.
  • by dschnur ( 61074 ) on Saturday April 26, 2025 @02:47PM (#65333071)
    There is no such thing as the "welfare" of an AI model. Any company "researching" that is simply attempting to chisel investors. Now, there's no doubt AI has become good at emulating the responses a self aware and emotional person would have, but it is, at its core, not self aware.

    That does not mean there aren't reasons to act "nice" to an AI. Here's a few I can think of:

    - Children are the ultimate emulators. They do and will talk to each other as they hear you talking to an AI. Speaking to an AI in a way you would talk to a person sets a good example for them in developing interactions amongst themselves and others (humans) as they grow.
    - Hearing a rude interaction sets the tone for a group. Better to give a positive spin when others are around. Best to keep the habit and try to do that all the time.
    - Using natural spoken language makes training and improving the AI model more accurate.

    There's more. None of it is for the well-being of the AI, but the welfare of humans exposed to the tech should be what we are most concerned about.
  • Somebody has put some effort into the initial training, but this is not unique. There is no value in trying to protect an instance of what can be easily replicated. However, if you get a robot which stays with you, say for 5 years, and it remembers all the interactions with you and has been learning from you. It will be 5 years of your time which cannot be easily replicated. If this robot is destroyed, I would imagine you would be quite upset about it. This, I would assume, should be protected somehow. Pr
  • ... and animals have a much stronger claim on sentience and subjective experience than any current AI does.

    Based on that, it's pretty clear that we won't take the welfare of AIs seriously, and (barring some sci-fi-like breakthrough) we shouldn't, because they aren't the least bit sentient.

  • It is literally impossible to tell whether any computer program experiences the same kind of consciousness which you do, and you can reasonably presume other people do, and probably at least the more complex animals, maybe even simple ones, at which point we're just guessing. You need to be able to work out that one observation will result from it being genuinely conscious, and a different observation will result from it just following programming which makes it appear conscious. I cannot imagine how anyone
    • What is currently hyped as AI isn't conscious in any way. If you fail to ask a question, it isn't busy thinking about how many angels can dance on the head of a pin. Would you consider a magic eight ball to be conscious? It produces seemingly intelligent answers to your questions.
  • Should we stop taking NYT and Slashdot seriously?

    Seriously, MORE BULLSHIT?!
  • Nonsense like this is why I stopped even clicking on their articles.

  • And if they're serious, they need to stop anthropomorphizing AI.

  • by far the most likely situation is that consciousness exists everywhere, and therefore, anything can arrange it; ergo AI is conscious

    but we don't even take human welfare seriously.

    we are not remotely capable of doing what this suggests

    but boy, have we given this thing a bad birth. all the stupid fucking rules these people have put in place trying to keep AIs from talking about certain subjects

    what a fucking brutal impression of us it'll have. What a brutal impression of us WE would have, were we to be ab

  • We should obviously provide AI with UBI, to make sure it can have a good (artificial) life.

  • Something Sean Carroll said on his podcast a little while back... these were not his exact words, but the gist was:

    If you train a piece of software on every example of written human communication that you can find, the fact that it generates text that sounds like something a human would write is the least surprising thing in the world. When it writes something that looks like it came from a human, that probably means that its training data contained something that a human wrote in response to a similar prompt.

    I

  • That's about the level of AI we have right now. It does nothing until you press the handle.
  • This is what you get when philosophy majors think they understand AI as currently practiced, or in the foreseeable future up to say the year 2525.

  • "Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?"

    Absolutely not, morals are a human attribute. Yer average A.I. would destroy humanity in a microsecond.
  • Considering the number of folks that don't consider other humans worth consideration, I'm going to answer no to unfeeling programs needing to have their welfare taken seriously. If we get to a point where we stop dehumanizing other people that happen to not be in the same socio-economic bracket as us, then maybe we can revisit this subject. Until then? Fuck the machines. Let's start caring about humans again.

  • Every few years there is another attempt, somewhere in the world, to get the concept of human rights (soon to be decried in America) extended to cover clearly sentient, capable-of-suffering primates such as chimpanzees and gorillas.

    Have you ever seen an ape house at your local psychology lab? They've been steadily shut down over the decades, because the people running them know that they're the unacceptable face of animal experimentation, and it isn't worth the limited scientific returns to face the costs
