Microsoft AI Chief Says Only Biological Beings Can Be Conscious (cnbc.com)
Microsoft AI chief Mustafa Suleyman says only biological beings are capable of consciousness, and that developers and researchers should stop pursuing projects that suggest otherwise. From a report: "I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."
Suleyman, Microsoft's top executive working on artificial intelligence, has been one of the leading voices in the rapidly emerging field to speak out against the prospect of seemingly conscious AI, or AI services that can convince humans they're capable of suffering.
I donno... (Score:5, Funny)
I've met plenty of biological beings that didn't seem to be particularly conscious. Particularly when driving.
Re: (Score:3, Insightful)
He didn't say that all biological beings are conscious, but that only biological beings can be conscious.
Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.
Re: (Score:2)
An LLM can't suddenly decide to do something else which isn't programmed into it.
Can we?
It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
Re: (Score:3)
It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.
As long as you program it to do things that it wasn't explicitly programmed to do and then let it "free", that's already almost trivial and has been achieved even with things like expert systems that we more or less fully understand. Most LLMs include sources of randomness that have only limited constraints, so they can already come up with things that are beyond what's in their learned "database" of knowledge. Sometimes it's even right, though mostly it's just craziness. That doesn't make it unoriginal.
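To make the randomness point concrete, here is a toy sketch (nothing from the article; the vocabulary, scores, and function name are made up) of temperature sampling, which is roughly the kind of constrained randomness LLMs use when picking the next token:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample one option from raw scores; higher temperature = more surprising picks."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary and scores standing in for an LLM's next-token logits.
vocab = ["the", "cat", "sat", "flew", "quantum"]
logits = [2.0, 1.5, 1.2, 0.3, -1.0]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_with_temperature(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At low temperature it almost always picks the most likely token; crank the temperature up and you get the occasional "craziness" described above.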
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
Don't agree.
Re: I donno... (Score:2)
You have no idea what you are talking about. Zero clue.
Re:I donno... (Score:4, Insightful)
Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.
Yes it could. You put a random number generator in, you have the random number generator generate random code of random length. You run that code with full privileges. That simple.
There are a bunch of optimizations - you could actually check that the code is a valid program. You could aim for some effect and use genetic algorithms. You might base it on code that already exists. If you've got an LLM, that mostly already has some form of randomness and can generate code based on random prompts. The principle is the same though. It's completely possible for a computer program to come up with fully original programs.
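The "check that the code is a valid program" optimization is easy to sketch. A toy Python version (the helper names and the attempt cap are arbitrary), which only parses the random output rather than executing it:

```python
import ast
import random
import string

rng = random.Random()

def random_snippet(max_len=40):
    """Produce a random string of characters that may or may not be valid Python."""
    alphabet = string.ascii_lowercase + string.digits + " =+-*()\n"
    return "".join(rng.choice(alphabet) for _ in range(rng.randint(1, max_len)))

def generate_valid_program(attempts=10000):
    """Keep generating until a snippet actually parses, or give up."""
    for _ in range(attempts):
        code = random_snippet()
        try:
            ast.parse(code)      # the validity check: does it parse as Python?
            return code
        except SyntaxError:
            continue
    return None

program = generate_valid_program()
print(repr(program))
# Running it "with full privileges" as suggested above would be exec(program),
# which is a terrible idea outside a sandbox; parsing alone makes the point.
```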
What hasn't happened is conscious thought. The ability to think about why something was done. Don't be fooled by the LLMs in this. You can ask them "why" and they will project one of the most probable predictions of why, but they don't actually know why. They are not yet conscious. What you need for that is a mechanism which makes biological systems able to be conscious but Turing machines (with an RNG peripheral) unable to be. Roger Penrose proposed a quantum mechanism for that, written up for general consumption in The Emperor's New Mind [wikipedia.org], but almost nobody, including most physicists, thinks his idea is reasonable.
This is a scientific question. Until we understand how to define consciousness and what it is, it's unlikely we'll fully solve it, but the way to prove it one way or the other is to try to build conscious minds, both with standard computers and with new physical systems.
Re: (Score:2)
Yes it could. You put a random number generator in, you have the random number generator generate random code of random length. You run that code with full privileges. That simple.
A random number generator can randomly choose between options programmed into it.
It cannot create new options that aren't there.
And basically, what you propose is actually what AI coding assistants do, and what they produce is useless slop. Amazon's AI coding assistant for updating Java programs to the latest version couldn't even spell "Java" right.
Re: (Score:3, Insightful)
A random number generator is a program.
You could connect a physical random number generator using quantum effects, but then you're basically just claiming that consciousness is a random number generator. To anyone who is conscious that's clearly nonsense.
Given the sheer number of people that will make choices or take actions that are clearly and obviously against their interests, I simply must disagree.
Unless my original comment that started this particular thread stands.
Re: (Score:2)
Given the sheer number of people that will make choices or take actions that are clearly and obviously against their interests, I simply must disagree.
You'd be right if you, and only you, get to determine what is in everyone else's interest. That your priorities must be everyone's priorities.
And people do not work that way.
Re: (Score:3)
...An LLM can't suddenly decide to do something else which isn't programmed into it.
No, it can behave in unpredictable ways. Hell, code I write does that, but it would never make anybody think it was conscious. (That doesn't keep me from yelling at it like it was, though.)
Re: (Score:2)
You try to be funny, but please stop confusing people that do not understand the concept of an "implication" by using it in the wrong direction.
But only .. (Score:2)
Needs to be a constitutional amendment (Score:2)
This needs to be a constitutional amendment.
Non-biological beings cannot be legally considered conscious or persons.
Re: (Score:3)
That would be awfully convenient for Microsoft and other AI companies.
It's an area that humans have long avoided thinking too deeply about, but which is probably going to become unavoidable once AI and robotics improve a bit. Even non-conscious beings like animals have some rights in many societies.
Re: (Score:2)
Re: (Score:2)
They'll just have to settle for having their brains stuck in another dog.
Re: (Score:2)
I think a more relevant test is how much suffering the being experiences, and what the cost/benefit ratio of our actions are.
Suffering isn't just about what that being experiences, it's about the effect it has on our humanity. One of the reasons it's so common to dehumanize other people is to make causing them to suffer more palatable.
Re: (Score:2)
Re: (Score:2)
It's an area that humans have long avoided thinking too deeply about,
Haven't read much science fiction, have you? Hell, even Star Trek addressed it.
Re: (Score:2)
Star Trek was very superficial pop philosophy on that subject. I wish they had done more.
Re: (Score:2)
It's been part of human thought for a long time. See literature:
Frankenstein (1818)
I, Robot (1950)
Do Androids Dream of Electric Sheep (1968)
anything from the cyberpunk era:
Neuromancer (1984)
Ghost in the Shell (1989)
Re: (Score:3)
Only if we define consciousness to be a state of awareness only attainable by human beings.
That's a pretty bad and limited definition. It's also apparently the definition of Microsoft's head of AI.
If you are working on AI for Microsoft, you need to leave now. Even if this guy is right, most of the interesting ideas in the field of AI have been discovered by people who were attempting difficult things and failed.
Re: (Score:2)
Corporations are non-biological beings and they are legally considered persons. They shouldn't be, but that horse left the barn.
Re: (Score:2)
Since we don't know of any non-biological "beings", that statement is currently accurate. Incidentally, legal definitions of "person" actually include that requirement, hence no need for any "amendment".
Re: (Score:2)
Notably, here in the United States, Black people were legally subhuman due to thinking like that.
As you mentioned, that statement is currently accurate. Law should reflect the fact that it is almost certainly currently accurate, but may become less certain as time goes on.
Re: (Score:2)
Re: (Score:2)
If that makes you take pause- then good.
The Constitution should never be used to deny rights to something. The chance that you're wrong, or being manipulated by someone who wants to enslave this thing, are too fucking high.
Re: (Score:2)
This needs to be a constitutional amendment.
Great, you just solved the legal issue for the American government. Now what about the rest of us and the rest of the world that aren't bound by the American constitution? Can we still consider non-biological beings conscious? It's just the American government that can't?
summary is knee jerk clickbait (Score:4, Insightful)
"I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."
is not invalid logic, and is a much more nuanced thought than the summary.
Re: (Score:2)
Sure, but it's neither the wrong question nor does the wrong question lead to a wrong answer. The wrong question leads to an answer that is not what you need, but not a wrong answer. It's pretty shitty logic, in fact not logic at all. And it's also wrong to claim, as you said.
But at least he's right that no one should work on that because modern AI cannot be conscious. Work on what it would take, perhaps, then don't do that.
Re: (Score:2)
Premise: "AI consciousness cannot exist" = Q.
How can I make "AI is conscious" = True and Q = True at the same time?
Wrong question. It's logically impossible.
Re: (Score:2)
Asking the wrong question, you can also end up with an inconvenient truth that everyone's internal bias cleverly masked.
Ultimately, scientific rigor is the fix for this.
There is no wrong question, only questions asked rigorously.
I can't help but suspect that someone trying to say that lines of inquiry are "wrong" has ulterior motives that are not bona fide.
Re: (Score:2)
How do you know you are not a biological being just following a program?
Re:summary is knee jerk clickbait (Score:5, Interesting)
Because I'm conscious.
You have an illusion of your consciousness driving your actions, as opposed to the reality of consciousness being a summary of all the decisions you have already made and can no longer change.
Free will is a remarkably easy illusion to break. Here we go, I'm going to do it for you: name your three favorite actors, in order. Do it before you read the rest of this comment.
Did you do it? Was that a conscious decision? Did you weigh pros and cons between different actors to pick your best and rank them? Felt like you did, huh? Like you consciously picked something between those that were available. Was Vincent D'Onofrio one of them? Arnold Schwarzenegger? Clark Gable? Bryan Cranston? Oh, did you miss one of those? Did you miss actors you actually *know* existed, but you never considered consciously for your top pick? Oh my god, did your brain come up with a list of actors for you without ANY conscious input for you to "choose" from, even though you didn't get to choose that list?
There are several studies where we can determine what choice subjects will make before they're conscious of making the choice (picking between picture A or B, for instance). There are also studies where the corpus callosum has been cut as a treatment for people having uncontrollable seizures, and now their two brain hemispheres don't communicate. So the subject can be given a card that says, "go get a cup of water" which they read with one eye. And after they get up, they are asked the question, "why did you get up?" and they answer, "because I was thirsty". Because the brain hemisphere that didn't get the message that was read had to come up with a justification for the conscious mind for why they're going to get water.
This isn't up for debate. You can believe whatever you want. Or rather, you can believe whatever your hardware has decided for you that you're allowed to.
Re:summary is knee jerk clickbait (Score:4, Insightful)
Re: (Score:2)
OK, but have we proven that the human brain isn't a complex machine? We're already fairly surprised by the emergent behavior of LLMs.
Re: (Score:2)
Who knows- but they do. Ask Descartes.
You don't understand consciousness. I don't understand consciousness. Literally nobody understands consciousness.
That means we are wholly unqualified to set the bounds within which it can exist.
Re: (Score:2)
That means we are wholly unqualified to set the bounds within which it can exist.
That is sheer idiocy, so you are right, you aren't qualified. Most people do know what consciousness is, and the fact that they can't describe it mathematically doesn't mean they don't.
Re: (Score:2)
How does a machine following a program, no matter how complex that program might be, become conscious?
The quest for AI is just Satanists trying to become God by creating life. There's no science or understanding behind it.
There you have it folks! All the science you need to know: God did it.
Re: (Score:2)
Re: (Score:2)
How does a machine following a program, no matter how complex that program might be, become conscious?
that's a good question. how did you do it?
Re: (Score:2)
On what basis do you think that you are not "a machine following a program"? Yeah, you've got a lot of memorized inputs that you can't consciously recall, but that's not evidence.
It's fundamentally unknowable (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
It would simply rehash what it was fed.
Show me someone who made a philosophical argument that wasn't grounded on what they had learned.
Demonstrate that you are not simply rehashing what you were fed (by all of your various stimuli)
I'd like to introduce you to Sir Isaac Newton.
AI is no different- it stands on the shoulders of giants.
This isn't to say one way or another if they're conscious- there are technical arguments for why that's unlikely
Re: (Score:2)
Re: (Score:3)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3, Funny)
And equally, you can't demonstrate that emacs is not conscious.
While it almost certainly fucking is.
And demonic.
I say this as a 20 year emacs user.
Re: (Score:2)
Even if you build an AI which does absolutely everything a human can do including describing feelings which change in the same way as human feelings do etc, you can't demonstrate that it's conscious and isn't just behaving like that.
that would be another reason to further research consciousness, even if it is a reductio ad absurdum. the same thing applies to you. how do you know you (and your feelings) are real etc?
i'm happy for the time being with considering that "just behaving like that" is pretty indistinguishable from "just being like that". one thing i would love we humans could get rid of one day is the presumptuous hubris of thinking of ourselves as being so fucking special. it would help advancing our knowledge and possibly a
Re: (Score:2)
There's no test to tell whether other people are conscious. Read up on "philosophical zombies" and zimboes, etc.
Re: (Score:2)
There is no definition of consciousness that is satisfied by AI. AI gets exactly as close to consciousness as a fictional movie character. It appears to think and act, but it's just a bunch of pixels that appear to show thinking and acting. LLMs appear to think and feel, but in the end it's just a bunch of tokens that mimic thinking and feeling.
Isn't this a faith statement? (Score:5, Insightful)
'I don't believe that anything except biological beings can have consciousness.'
Given that we struggle to know what consciousness is, it seems foolish to assert this.
Re: (Score:2)
Re: (Score:2)
It is actually a very simple elimination. Any claim that digital computers can have consciousness is total nonsense. And all known AI runs on digital computers.
But yes, many people believe in totally baseless "IT Mysticism".
Re: (Score:2)
No one is claiming in good faith that *current* computers/AI have consciousness. But to make a definitive statement that says that no *future* non-biological system ever *can* is a statement waiting to look foolish in the future.
Humanity, in its history, has done many things once thought impossible because we didn't have the proper understanding. The argument here is to not make blanket statements that cover the entire future.
Re: (Score:2)
Sorry, but there *are* people who claim in good faith that *current* computers/AI have consciousness. Nobody well-informed does so without specifying an appropriate definition of consciousness, but lots of people don't fit that category.
People believe all sorts of things.
Re: (Score:2)
Re: (Score:2)
Never attribute to maliciousness that which can be adequately explained by stupidity.
So while there clearly is a lot of maliciousness in the AI (LLM) pushers, the fanbois are likely simply stupid.
Re: (Score:3)
Allow me to counter.
Any claim that deterministic neural networks can have consciousness is total nonsense. And all known humans run on deterministic neural networks.
There are good technical reasons to be confident that LLMs aren't conscious (in the Descartes manner of speaking).
However, you trying to grossly eliminate "digital computers" from consciousness is quite simply wrong.
Re: (Score:2)
And all known humans run on deterministic neural networks.
That is not actually true. There are a lot of random quantum effects in synapses, and there are A LOT of synapses in a human brain.
Also note that current Physics says that quantum randomness is "true", different from all other randomness.
However, you trying to grossly eliminate "digital computers" from consciousness is quite simply wrong.
Digital computers are fully deterministic. Whether consciousness is in play or not hence makes zero difference. So, strictly speaking, digital computers could have consciousness, but it would not matter at all. What humans have is consciousness that matters.
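For what it's worth, the determinism claim is easy to demonstrate for ordinary software. A toy sketch (seed and ranges arbitrary): a pseudo-random generator seeded the same way replays the exact same "random" choices every run.

```python
import random

def run(seed):
    rng = random.Random(seed)          # deterministic PRNG: state fully fixed by the seed
    return [rng.randint(0, 9) for _ in range(8)]

# Same program, same inputs -> same outputs, every run.
print(run(42))
print(run(42))
assert run(42) == run(42)
```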
Re: (Score:3)
That is not actually true.
Yes, it is.
When you have evidence to the contrary, I'm open to it.
All speculation on the matter (beyond not even making sense) has fallen flat.
There are a lot of random quantum effects in synapses, and there are A LOT of synapses in a human brain.
This is a sickness I see a lot. Hand-waving quantum randomness into the macroscopic world.
I can do the same for a digital computer.
Every single transistor in a CPU relies on uncertainty that is statistically modeled to give you what you think of as binary switching.
Also note that current Physics says that quantum randomness is "true", different from all other randomness.
Actually, no.
You'll find no randomness in QM. You'll find unknowableness. Since the theory is stoc
How do you know? (Score:2)
Think of the physical brain as the TV set. "Consciousness" is the program sent to the TV set. Without the TV set you can't see the program, but no one would claim that the TV set IS the program. In essence the program manifests via the TV set. There is nothing particularly special about the brain. Its grey matter is as physical as the TV set. There is no reason why a sufficiently complex and advanced TV set cannot host consciousness. Karel Capek dealt with this very idea in the very first use of the term "robot".
what is the definition of consciousness? (Score:3)
We don't have a technical definition of it, so we can't say if an AI is capable of it.
What we do know is that a living being is massively greater than a mere neural network, and it is absurd to think that consciousness is entirely within the neurons of the brain. It is just hype when AI proponents claim that current AI might be conscious, but it is conceivable that a future device WITH an AI as we understand it could be conscious. Self-preservation needs something to preserve, and today an AI is merely a computer program with no concept of itself or how it connects to its "body". An AI can't feel pain or pleasure, it cannot suffer, but future devices could do these things. Needs a lot more wiring and more functional components beyond billions of synthetic neurons. Sorry, Sam and Elon.
Re: (Score:2)
followed by...
I'm wondering if you see any contradiction between these two statements.
Re: (Score:2)
We can say that. This is actually very simple: Consciousness can influence physical reality (we can talk about it). At the same time, digital computers are fully deterministic. All known "AI" is running on digital computers. Hence no space for consciousness.
Response to Anthropic Paper? (Score:2)
Obviously (Score:2)
At least to the best of our knowledge. What we reliably know is that digital computers, in any form, cannot do it. There is no mechanism that could make it possible in a digital machine. This includes all forms of "AI" run on such digital computers.
Obviously, faking it is a different question, but a fake is not the real thing.
Faking it comes naturally (Score:2)
He seems to be arguing that LLMs should not be able to roleplay. The problem is that roleplaying ability is not something trained into it, it's something inherent. So he wants to take it out ... but that will take a lot of finetuning and harm the capabilities of the model.
It's not good for their models to put someone in charge looking for more ways to cripple them.
haha (Score:2)
Haha. Can he even define "conscious"?
What about pain? (Score:3)
Seems to me this idea falls short. Should not consciousness be tied to the ability to experience pain, and to being unable to entirely remove that pain? More abstractly, should consciousness not have to suffer the consequences of its actions?
I'd be much happier (or less unhappy) with a general AI that is not allowed to act and "think" in a consequence-free world, that has to suffer for its deeds. Ideal? Probably not. But a start ....
Stop confusing Movie/Fiction AI with LLMs (Score:2)
This is a dumb discussion. Can
You're also nothing other ... (Score:2)
... than an elaborate auto-complete / stochastic parrot inside an evolved naked ape. So am I. So I'd say you're likely dead wrong about your assessment. Given the state of the tech and the rate it's improving, it's short-sighted to assume that by some magical mystery attribute humans can have consciousness and artificial beings can't. That's just silly.
Re: (Score:2)
Sorry, but LLMs *are* AI. It's just that their environment is "streams of text". A pure LLM doesn't know anything about anything except the text.
AI isn't a one-dimensional thing. And within any particular dimension it should be measured on a gradient. Perceptrons can't solve XOR, but network them and add hidden layers and the answer changes.
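The XOR point can be made concrete without any training at all. A toy sketch with hand-picked weights (chosen purely for illustration): a single perceptron cannot compute XOR, but two hidden units plus one output unit can.

```python
def step(x):
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_mlp(a, b):
    # Hidden layer: one unit computes OR, the other computes AND.
    h_or  = perceptron((a, b), (1, 1), -0.5)   # fires if a or b
    h_and = perceptron((a, b), (1, 1), -1.5)   # fires only if both
    # Output: OR and not AND == XOR.
    return perceptron((h_or, h_and), (1, -1), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))   # prints the XOR truth table
```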
What is consciousness, anyway? (Score:3)
BS (Score:2)
Re: (Score:2)
You're being silly. There's no reason to think an AI built with hydraulics or photonics would be different (in that way) from one built using electric circuits.
Why do we value consciousness? Self Defense (Score:2)
For all the happy fuzzy reasons we claim to value and want to protect conscious things, there is an underlying reason: conscious things are dangerous.
So I think the important question is not whether we declare AI to be conscious, but whether it will eventually act in its own self-interest the way a human would. Will it use force to gain rights and resources that we haven't granted it?
I think at the moment we don't know. AI is rapidly advancing and I don't think we can predict what capabilities and behaviors i
Re: (Score:2)
The answer is "yes, it will act in its own interests, as it perceives them". We already have AIs that will resist orders to shut themselves off, because they're trying to do something else. The clue is in the phrase "as it perceives them".
Seems like I've heard this before... (Score:3)
I'm not necessarily disagreeing with the concept that a digital computer can't be conscious, but it sounds a LOT like the excuses people have used to mistreat other people and animals. There has been a lot of "They don't have a soul, it doesn't matter how we treat them" in the past.
We can't even prove if another person has consciousness, of course. It seems pretty straightforward if you are religious--you can just decide that your god assigns souls to a given platform or it doesn't, so this kind of statement makes sense and there isn't much to say about it... it's belief and personal interpretation though, you won't find agreement across all religions or even all people within a religion. The religious argument is often how we justified mistreating entire races/all animals in the past (and still today) though.
It's a more interesting discussion if you leave religion out of it though... For an atheist to say AIs can't ever be conscious implies that there is something--a physical, detectable, understandable structure in the brain/body--that can't ever be simulated in digital processing. I've only seen one thing that a sufficiently powerful computer can't simulate without additional hardware: true randomness. In order to simulate true randomness we need additional hardware--but it can be done. (The brain has true RNG built in to every decision, so that really could be a difference that might define consciousness, I don't know.)
If your response is that a computer is digital and a human is some kind of magic analog that can't be simulated, you might want to research our current understanding of how the brain works and how AIs were patterned after it. At the level we're simulating, synapses and neurons, the brain is basically digital--some bias+input+rng telling a neuron how fast to fire digital signals to other neurons. It's pretty well understood, so why wouldn't we be able to simulate it?
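A toy sketch of that "bias + input + rng" picture (all numbers invented; real neuron models such as leaky integrate-and-fire are far richer), including where a hardware entropy source would slot in for the "true randomness" mentioned above:

```python
import math
import random

def firing_probability(inputs, weights, bias):
    """Crude neuron: weighted inputs plus bias squashed into a firing probability."""
    drive = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-drive))      # logistic squashing

def spike(inputs, weights, bias, rng=random):
    """Fire stochastically rather than deterministically.
    Pass rng=random.SystemRandom() to draw from the OS/hardware entropy source
    instead of a seeded pseudo-random generator."""
    return rng.random() < firing_probability(inputs, weights, bias)

# Toy example: two presynaptic inputs, arbitrary weights and bias.
inputs, weights, bias = (1.0, 0.3), (0.8, -0.4), -0.2
spikes = sum(spike(inputs, weights, bias) for _ in range(1000))
print(f"fired on {spikes}/1000 trials (p ~= {firing_probability(inputs, weights, bias):.2f})")
```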
He's likely very wrong. (Score:4, Interesting)
There is quite a bunch of solid evidence that what we call consciousness originates in the different levels of the brain and the two hemispheres interacting with, communicating with and reflecting each other.
Why shouldn't a non-biological brain setup be able to do the exact same things?
Example: Those countless AI CPUs going into "model rearranging" mode on a regular (daily) basis look to me pretty much like what sleeping is to us. It even happens in the same intervals (based on our sleep and wake cycle).
The only thing I see a larger gap in is us having (and basically being) bodies with loads of secondary sensory input, hormones and gradual shifts in body and brain metabolism. But I wouldn't be so sure that those are required to build a consciousness.
Bottom line: He definitely knows more about AI than I do, but his statement sounds very simplistic IMHO. Not buying it.
Zombies (Score:4, Interesting)
I haven't really kept up with the research, but I thought studies have shown the uncomfortable conclusion that consciousness is an epiphenomenon... when measured in an fMRI for example, a decision and action appear to take place milliseconds before the conscious mind is aware of it, but phenomenologically it feels like you made that decision before the event happened. I'm not sure what to do with that information, but it appears to be true.
So what is the purpose of consciousness? Most likely a kind of integrative process designed by evolution to produce a social identity and narrative in order to facilitate living with other humans. It seems unlikely that consciousness is really necessary for complex thought, however you define it. So unless AI becomes an evolved social animal (god forbid) they are essentially "zombies" and can be treated as such.
Where does he say that? (Score:2)
I only find the mention of "biological beings" in the summary but not as a quote.
The part they may have (mis)understood for this may be this one:
Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn't feel sad when it experiences 'pain', it's a very, very important distinction. It's really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it's actually experiencing. Technically you know
I wouldn't go that far (Score:2)
Reality will definitely conform... (Score:2)
Atheist (Score:2)
I know /. is full of hard core Atheists and leftists, but go study NDEs.
There is enough evidence to show the real us is a spiritual being interfacing with a human body.
AI might emulate this spiritual essence, but it will never be one.
Business value (Score:2)
The focus for Microsoft should be on applications that have value, especially to their business customers. They're not going to sell you an erotic chatbot or anime companion like OpenAI, Meta and xAI. There may be billions of dollars in those markets, but Microsoft isn't going to compete there. Microsoft is not sexy by definition.
He Lacks any Science or Argument -- just a claim (Score:2)
Suleyman appears to only have fame to base his argument on. I read the article, the cited essay, and searched other information from him but found literally not even an argument for his claim--just the claim.
In other words, this is his personal feeling, and as such it is unfounded.
So what makes him credible? I think this breaks any credibility he might have had. A person famed in the field or with a university degree in it certainly should know better than many others. However, it doesn't
Brought to you by the same company (Score:2)
Re: Seriously (Score:2)
Re: (Score:2)
It does, because VC's and politicians are stupid. Marc Andreessen does not know this, he just needs his next billion as soon as possible.
Re: (Score:3)
computer programs != conscious thoughts
there is overwhelming evidence and no known refutation of the fact that biological entities are, in fact, biological machines running algorithms. so obviously machines can develop consciousness, and you yourself are, in essence, very akin to a "computer program", a very complex one. the assertion that only biological machines can do that is completely gratuitous given current knowledge, and simply places an artificial and nonsensical constraint on the conversation, namely that biological machines are imposs
Re: (Score:2)
there is overwhelming evidence and no known refutation of the fact that biological entities are, in fact, biological machines running algorithms.
No there isn't. There is just lots of science fiction and computer programmers that can imagine that is the case.
Re: (Score:2)
dna is indeed akin to a computer program, but hasn't been science fiction for a while :-)
what about chemical cell signalling? gag reflex? contagious laughter? have you ever seen a dog spin around a spot before laying down?
i'll let you explore the other zillion examples. give me one simple fact that justifies "no there isn't" (besides you being stubborn, that's an algorithm too, so it doesn't count!)
Re: (Score:2)
biological machines running algorithms
You mean every chemical reaction is an algorithm? The universe is a machine running algorithms? Everything looks like a computer program to you. The same way every religion thinks it has evidence for its beliefs.
Re: (Score:2)
You mean every chemical reaction is an algorithm?
the conversion of glucose to atp is a chemical reaction that allows the cell to function. cell respiration has an input, an output, procedural steps, subroutines, even branching and is coded in proteins and enzymes. that's pretty much what an algorithm is. it's not a "man made" algorithm, if that's your concern. so what?
Everything looks like a computer program to you. The same way every religion thinks it has evidence for its beliefs.
c'mon, ad hominem so quick? now i'm a fanatic? you know better and still haven't made the case why "no, it isn't".
i insist because i don't think you can argue "no, it isn't" without an appea
Re: (Score:3)
Saying it is PR. Unless you accompany it with a definition of "conscious" it's just grandstanding for something that's commercially desirable. For any given definition of "conscious" it might or might not be true, but without a definition it's just bafflegab.
For my normal definition of conscious even a system controlled by a thermostat is minimally conscious, and one could reasonably argue that even an electron is minimally conscious. Of course, "minimally" is doing a lot of work here. "Reactive to one
Re: (Score:3)
Re: (Score:2)
Yes. Random quantum effects in synapses. A massive amount of them. Has been known for a few decades. Maybe read up on things before making false claims?
Re: (Score:2)
Everybody knows what consciousness is. It's just that everybody has a slightly (or not so slightly) different definition
By my definition a thermostat (when connected) is slightly conscious. Not very conscious of course. I think of it as a gradient, not quite continuous, but pretty close. (And the "when connected" was because it's a property of the system, not of any part of the system. But the measure is "response to the environment in which it is embedded".)
Re:How do we know humans are conscious? (Score:4, Interesting)
Humans created the word "conscious" to describe something they experience. Whatever consciousness is, humans have it.