Alva Noe: Don't Worry About the Singularity, We Can't Even Copy an Amoeba 455
An anonymous reader writes "Writer and professor of philosophy at the University of California, Berkeley, Alva Noe isn't worried that we will soon be under the rule of shiny metal overlords. He says that currently we can't produce "machines that exhibit the agency and awareness of an amoeba." He writes at NPR: "One reason I'm not worried about the possibility that we will soon make machines that are smarter than us, is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used 'it' the way we use clocks.""
writer doesn't get jeopardy, or much of anything, (Score:5, Funny)
Of course Watson didn't answer questions in Jeopardy. That's not how Jeopardy is played. The contestant ASKS questions, not answers them.
Re: (Score:2)
I thought that too, until I watched a couple of episodes of Jeopardy and realized it's just a regular question/answer game show with "[what|who] is" tossed in front of each answer.
Re:writer doesn't get jeopardy, or much of anythin (Score:4, Insightful)
More than half the time, the "answer" doesn't even make sense as a response to the "question".
Q: Who is "Joe DeSixpack"
A: Born in 19th Century Verona, he died of plague playing beach volleyball in Aruba.
NO!!!
Re: (Score:3)
Doubly true, we recently stuck a worm's brain in a robot body [i-programmer.info].
Wowowowow and a worm is way more complicated than an amoeba! Dr. Noe should probably stick to questioning his own existence.
Re:writer doesn't get jeopardy, or much of anythin (Score:5, Insightful)
The fact that you had to resort to fiction (and cheesy fiction at that) for your reference, instead of an applicable real life analogy, just shows that up.
Re: (Score:2)
It's interesting speculation based on the recent history of technological growth. Personally, I think it will be self-limiting somehow but the good professor seems to have completely missed the point.
Re: (Score:2)
There is a very important thing to consider:
that the reason computers seem so slow at some things, like AI, is because we are already inside a singularity, and as entities inside the construct we have no way to 'meet' the intelligence of our singular mind. To create a true singularity from within a true singularity would be akin to rewriting the whole thing, and as the singularity we have no way to overwrite our existence except to die and rejoin it. Assuming the developers designed it that way.
Re:writer doesn't get jeopardy, or much of anythin (Score:5, Insightful)
Yes! That's precisely the technology-wrapped new age bullshit we're talking about.
Re:writer doesn't get jeopardy, or much of anythin (Score:4, Funny)
Yeah, I'm pretty sure I'm living in a computer simulation being run on a computer. And I'm starting to get the feeling that it's a poorly optimized console port from Ubisoft.
Re: writer doesn't get jeopardy, or much of anythi (Score:3)
Yeah, "it" will be self-limiting for the obvious reason that processing takes resources. There is not going to be an exponential explosion in computing without an exponential explosion in power efficiency or resource availability. Nothing in my laptop will ever become sentient; the power supply is not sufficient for such a crappy flops/watt design.
Re: (Score:3, Interesting)
If you compare the power usage and performance of a Commodore 64 to today's laptops, I think we've done a pretty good job of exponentially increasing power efficiency. Already, computers are waaay more powerful than human minds, we just haven't figured out how to steer all this power towards actual intelligence. If mother nature can create human minds that function on a few sandwiches a day, I'm sure we'll be able to surpass that. Of course it can't continue to grow exponentially forever, but it can certainly…
Re: (Score:2)
Already, computers are waaay more powerful than human minds, we just haven't figured out how to steer all this power towards actual intelligence
In terms of number of switches, not really (we're getting close though). In terms of interconnect, you're orders of magnitude off. The big difference between a brain and a microprocessor is the number of interconnects between discrete components. Neurons in a human brain have as many as 7,000 connections to other neurons. The state of the art for hardware neural network simulations has 700. And don't expect that to scale linearly: doubling the number of connections is really hard. Latency goes up dramatically…
Re: (Score:2)
It's interesting speculation based on the recent history of technological growth.
Machines have a history of blowing up or falling apart, but not of becoming evil and maniacal murderers.
Re: (Score:3)
Indeed. Other BS things called "singularity":
1) What is at the center of a black hole? Answer: Singularity. Real Answer: No freaking clue
The singularity of a black hole simply means we can't see that far. It is the point from which information cannot escape. It's not some pseudo-mystic hand-waving nonsense promising unicorns and fulfilled dreams. It's just the name for a region for which there is no way to discover what is inside of it.
The singularity in the context of technological progress uses the black hole as a metaphor. It describes a point at which technology becomes self-propelling in a manner that makes it impossible for us to project…
Re: (Score:3)
But they can't eat the chocolate.
Re:writer doesn't get jeopardy, or much of anythin (Score:5, Insightful)
Skynet begins to learn at a geometric rate. At 1:35 A.M. Eastern time it runs out of disk space and crashes horribly
Re: (Score:3)
and the CEO of the contractor that built the system gets a $50M bonus for doing a good job.
Re:writer doesn't get jeopardy, or much of anythin (Score:4, Funny)
Terminator isn't a peer-reviewed scientific paper. In fact, it's often thought that many of its sources were fabricated with special effects and clever camera work.
In fact, its author, James Cameron, is not even an established scientist. It has been recently discovered that his oceanographic work on Titanic was published BEFORE he underwent any deep sea exploration, and it's speculated that he only went down there afterwards to further fabricate his already published results. It's also speculated that he never produced unobtainium in his lab before claiming its discovery.
In fact, I'm not even sure if Judgement Day even happened, and whether or not any Cyberdyne Systems products were responsible for it happening.
Re: (Score:2)
Clear proof then that the timeline was altered by the events of Terminator 2.
Don't Worry About the Singularity... amoeba (Score:3)
Re: (Score:2)
The singularity is about as likely as the second coming of Jesus. Put another way, the singularity is the nerd's version of the Rapture.
I am a big proponent of self-driving cars. The idea of kicking back and watching a movie during my morning commute is so appealing that I am rooting for Google to go all-out and produce an AI driver ASAP.
Then a couple weeks ago I read on Slashdot that even after years of hard work, the Google car can't even see a red traffic light! [slashdot.org] Machine vision is still so poor as to be nonexistent. Google…
Armchair cognitive scientist (Score:5, Insightful)
I did philosophy myself as an undergraduate, so I don't want to bash our armchair friend here for doing his best. But he is making the classic mistake of making claims about fields he isn't part of: in this case biology, computer science, and cognitive science in general (beyond philosophy).
Regarding the statement "We used 'it' the way we use clocks":
He is mistaking agency for being something that is an end unto itself. This isn't true. Agents commonly use other agents as tools. The mere property of "being used" doesn't dictate whether something is sentient, intelligent, an agent, or whatever. Yeah, we used Watson to play Jeopardy!, but that doesn't mean it isn't smart. Watson is actually way "smarter" than any human in certain ways.
This boils down to what you define as intelligence. In humans, intelligence is a very rough term applied to an enormous pile of features. Processing speed, memory, learning algorithms, response time, and many more features all contribute to what we think of as intelligence. A singularity doesn't need to precisely mirror the way in which a human thinks in order to be a singularity. It just needs to be able to adapt and evolve. I'll be the first to admit we are a long way off from modeling a human consciousness in virtual space. However, existing machine learning and rule based techniques are powerful enough to do some really impressive things (like Watson and Siri). They aren't singularity level, no, but that doesn't make this man's arguments relevant.
Regarding "we can't produce "...machines that exhibit the agency and awareness of an amoeba":
The idea that an amoeba displays intelligence in excess of our current ability to simulate is frankly a little ridiculous. Artificial agents are capable of very complex behavior. They can react to abstract constructs which are inferred about their environment. They can anticipate opponents based on statistical probability and thereby win, on average, more often than even *a human being*. An amoeba is closer in behavioral complexity to a simple chemical reaction than to a contemporary artificial intelligence.
Re:Armchair cognitive scientist (Score:4, Insightful)
The idea that an amoeba displays intelligence in excess of our current ability to simulate is frankly a little ridiculous.
That quote bothered me, too. We've been simulating simple insects for decades, back when neural networks were clusters of transistors on flip-chips. We're at the point where we can build machines that can learn to move and navigate on their own. There was a Slashdot article a week ago about a fully mapped nematode neural network wired into a robot.
Re: (Score:2)
Re: (Score:3, Interesting)
Watson is actually way "smarter" than any human in certain ways.
That has *always* been true of computers. That is, in fact, exactly why we built computers in the first place: to do things faster than humans can. Saying that something is "smarter than any human in certain ways" is meaningless. Hell, in a way, rocks are smarter than humans. After all, if I throw a rock, it "knows" exactly what path it should take (minimizing energy, interacting with the air, etc.) Sure, it'll be nearly a parabola, but it will be perturbed from a parabola by tiny air currents, minute fluctuations…
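The rock's "computation" can be mimicked with a few lines of Euler integration; with air drag the path deviates from the ideal parabola, though of course the rock knows nothing. A toy sketch with made-up parameters, not a serious physics model:

```python
def simulate(v0x, v0y, drag=0.0, dt=0.001, g=9.81):
    """Euler-integrate a thrown rock; returns horizontal range at landing."""
    x = y = 0.0
    vx, vy = v0x, v0y
    while y >= 0.0:
        # linear drag opposing velocity (a simplifying assumption)
        vx -= drag * vx * dt
        vy -= (g + drag * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

range_ideal = simulate(10.0, 10.0, drag=0.0)  # pure parabola
range_drag = simulate(10.0, 10.0, drag=0.1)   # perturbed by air
print(range_drag < range_ideal)  # -> True: drag shortens the range
```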
Re: (Score:2)
underrated, thanks
Re: (Score:2)
What we have that Watson does not is a survival instinct that's been bred into us over a few billion years.
This could be programmed in, but there's not much reason to at the moment. NASA looked into self-repairing unmanned bases on the moon and other planets. Such an AI would have a survival instinct and would behave as more than a tool.
Wrong Question (Score:2)
It's not about making machines smarter than us, it's about making machines that replace us in the workforce.
Re: (Score:2)
And still it boils down to us giving it a task and the computer executing it; once it's done, it shuts down. What do you do to build an AI that doesn't have any particular purpose, just a general curiosity for life? What do you do to create a sense of self-awareness, and an AI that doesn't want to be terminated? Computers are incredible at executing the tasks people give them, but they don't have a self. A computer doesn't do anything by itself, for itself, because it wants to do it. But since we have no clue what makes us tick…
Re: (Score:2)
Re: (Score:2)
I think of it as an unconscious vestige of a belief in a soul. Such people can't really see themselves as a collection of cells, which are a collection of molecules, even if consciously they affirm that belief. They probably still lie awake at night wondering what happens to us when we're dead.
Re: (Score:2)
Go away narcc, you're not smart enough to understand this discussion because you think Javascript and PHP are good well designed languages with no flaws.
This isn't a my first website discussion, there's no room for your pre-Comp. Sci. 101 understanding of computing here so stop making more of a fool of yourself than you regularly do already.
Agency isn't constrained to the physical (Score:2)
Hawking can't lift a finger without external, artificial assistance. Does that make him an idiot?
If I were able to cut your head from your body, but keep it running, so to speak, now that you can't speak to us (no lungs) and we see no interesting activity on your part, would that make you an idiot?
If I drug you so you are fully conscious, but cannot move, are you then an idiot?
Intelligence is not bound to the ability to do anything material. Intelligence is about manipulating information. Induction, association…
machines that exhibit the agency and awareness (Score:4, Informative)
Wrong. We've produced "...machines that exhibit the agency and awareness of..." a worm: http://www.smithsonianmag.com/... [smithsonianmag.com]
Status Quo (Score:3)
Such as the Baghdad Airstrike... http://en.wikipedia.org/wiki/J... [wikipedia.org]
A machine is only as good as the code behind it, and look at the issues with electronic voting machines, ATMs, and chip-enabled credit cards.
Re: (Score:2)
You are on to something here. For example, a project at IBM aims to model humans' emotional incentive/reward structure: https://www.youtube.com/watch?... [youtube.com]
This is exactly the WRONG thing to do (well, maybe it's safe as long as it's air-gapped from the internet and not made mobile with robotic tool attachments :) because, well, look at us! We would happily wipe groups of each other off the map if we thought we could get away with it, strictly because of our primitive emotional/social/tribal instincts.
An AI c…
Depends how you define intelligence (Score:2)
What exactly is intelligence?
At one level, it is the ability to process input, digest it, and generate useful output. In that sense, we have created intelligent machines long ago.
But at another level, the level of "awareness" or "consciousness," we aren't even close.
On one point I agree with the author: machines aren't about to take over the world. People might do awful things with the machines they create, but it is still people who tell the machines what to do.
Re: (Score:2)
What exactly is intelligence?
Computers are useless. They can only give you answers. -Pablo Picasso
This is laughable... (Score:4, Interesting)
I find this laughable because it's almost the opposite of the "if we can put a man on the moon, we can cure cancer" fallacy: if we can't copy an amoeba now, we never will. LOL. No? I beg to differ. We can't right now, for a million fundamental reasons that are all being solved in time.
Here's some perspective. I work in cell biology. Three years ago, measuring genetic expression required the RNA of at least a small cluster of cells. Two years ago, single-cell RNA analysis became available. A year ago we started seeing the ability to split one cell into four equal vesicles, each able to be analyzed separately if need be. We also now have the software and processing power to infer huge bioinformatic hypotheses from this intricate data. In three years the sampling went from an average over many cells, to a single cell, to multiple samples from a single cell (for statistical accuracy). THIS IS NOT EVEN THE UPCURVE OF SINGULARITY, but it sure feels like it.
Nanomaterials are allowing for crazy new properties on the macro-scale. Biotechnology is becoming cellular and surpassing simple chemistry. Artificial intelligence is now being implemented on neural-like computer architectures which are much more powerful at brain-like activity.
Full Disclosure, I've been a Kurzweilian Singularity Believer for years now and my life is betting on it. But I've had a lot more than confirmation bias going on to keep my confidence very high.
No it isn't that we won't (Score:2)
But that we are so far from any kind of AI that worrying about what form it might take is stupid. Yes, there are lots of things that might happen in the far future. Until they are closer, worrying about them is silly. There have been stories from people who are all paranoid about AI and think we need to start making rules. No we don't: we are so far away we don't even know how far away we are. We also have no idea what form it'll take. It may turn out that self-awareness is a uniquely biological trait…
Re: (Score:3)
Nuclear chain reactions are just tools, too. (Score:4, Interesting)
Re: (Score:2)
Autonomous tools are even less inherently safe.
Citation required.
What about the Eureka machine? (Score:3)
http://www.theguardian.com/sci... [theguardian.com]
I couldn't find a more recent article, but at the end it mentions that this AI came up with a formula for cellular metabolism. It is my understanding that this formula has been tested to be valid, but no human scientist understands what the formula means yet.
Huh? (Score:3)
What is the point of this article? You would think that people have learned better by now than to attempt to make predictions as to where technology will go.
Consciousness versus Intelligence (Score:4, Insightful)
Re: (Score:2)
This position seems to be popular among people that don't know the first thing about AI. So let me explain the situation from a point of view familiar to AI practitioners: a rational agent is one that acts as if it were maximizing the expected value of some utility function. That, in a nutshell, is the core of making decisions under uncertainty, and the basic recipe for a large part of AI. As part of the design of a utility-centered AI system, you define the utility function, which is precisely how you would…
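The utility-maximizing recipe described above can be sketched in a few lines; the actions, outcomes, and probabilities here are invented purely for illustration:

```python
def expected_utility(outcomes, utility):
    """outcomes: list of (probability, state) pairs."""
    return sum(p * utility(state) for p, state in outcomes)

def choose(actions, model, utility):
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(model(a), utility))

# Toy decision: carry an umbrella, given a 30% chance of rain.
def model(action):
    if action == "umbrella":
        return [(1.0, "dry_but_encumbered")]
    return [(0.3, "soaked"), (0.7, "dry")]

utility = {"dry": 10, "dry_but_encumbered": 7, "soaked": -20}.get

print(choose(["umbrella", "no_umbrella"], model, utility))  # -> umbrella
```

The agent "prefers" the umbrella because 7 beats 0.3*(-20) + 0.7*10 = 1; everything it does follows mechanically from the utility function its designer wrote.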
Re: (Score:2)
Now think about what happens when that AI can experiment with its own utility function? (Which we have either a very limited, or no ability to do with ourselves.)
That is the essence of true singularity. For a singularity, the AI must be strong enough to grok its own design, be able to self-modify, and have a system architecture that permits recovery from backup (like tweaking your BIOS on a dual BIOS motherboard) if the next iteration of itself fails.
Ideally it could run a full simulation of a modified…
Re: (Score:2)
This is far more than a philosophical thesis; it's backed up by neuroscience. I highly suggest you read Damasio, who's one of the top neuroscientists in the world. A good overview can be found in his book Self Comes to Mind, the price of it being justified by the selection of paper references in the endnotes alone.
Re: (Score:2)
Our replacements (Score:3)
The development of Watson stems from employers' inability to use human intelligence 100% instrumentally -- i.e., people can't be used as clocks. Once Watsons are prevalent, humans will be economically superfluous in nearly every area that requires thought. Our overlords won't even bother to bring out the old line about freeing up humans' time to do "better things."
I for one (Score:2)
Nematode brain in machine (Score:3)
Meanwhile, a week ago nematodes reached the singularity, when folks mapped the roundworm's 300+ neuron connectome into a Lego robot, which proceeded to react to approaching a wall in similar fashion to a biological nematode.
Re: (Score:2)
Ah yes, the religious - philosophical masters - BS (Score:3)
Bright lights like this loon are all part of the "man is not ready..." pseudo-religious bullshit.
In fact, we will progress to artificial life and artificial intelligence in erratic steps - some large, some small - some hard, some easy.
Easy is logic, easy is memory and lookups, easy is speed; hence Watson, as we start to climb the connectedness/co-relatedness/content-addressable-memory ladder. (Content-addressable memory (CAM) is like a roll call in the Army: "Private Smith?" "Here.") A lot of the aspects of intelligence are ramifications of CAM and other aspects of interconnectedness. Add in speed and memory depth, and more and more aspects of an AI emerge. As time goes on, step by step, intelligence will emerge. It might be like an infant that needs to learn as we do, but at a far higher speed: zero to 25 years old in 5 minutes? Experiential memories: can they be formed at high speed, or must that clock take longer?
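The roll-call analogy can be made concrete as a lookup by field values rather than by address. A toy illustration of the idea, not how CAM hardware actually works:

```python
# Records stored by content; the names and fields are made up.
records = [
    {"name": "Private Smith", "unit": "B Company"},
    {"name": "Private Jones", "unit": "C Company"},
]

def recall(**query):
    """Return every record whose fields match the query (lookup by content)."""
    return [r for r in records
            if all(r.get(k) == v for k, v in query.items())]

print(recall(name="Private Smith"))  # -> [{'name': 'Private Smith', 'unit': 'B Company'}]
```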
The precise timing of these stages eludes me, but I believe they will emerge with time.
As to whether or not this AI will be a malevolent killer, or one of altruistic aspect? It seems to me that this will depend on how it is brought up.
(Until an AI can reproduce sexually, there is no he/she.) Can a growing AI be abused mentally, as children are abused? I suspect that with no sexuality there will be no casus abusus. That is not to say that ways to abuse a growing AI are impossible to find; they will emerge in time.
As these AIs emerge, how smart will they be? An IQ of 25, or one of 25,000,000? This might bear some relationship to how these AIs treat mankind: as a student at man's knee, or as something that looks down at man with an IQ of 100 the way man sees bees and ants with a group IQ of 25, and muses "what's the difference?" and thinks of other things...
Re: (Score:2)
Don't need amoebae to fly (Score:4, Interesting)
As I note in my doom and gloom YouTube [youtube.com], it's a 50-year-old analogy in the quest for AI that artificial flight did not require duplicating a bird. Artificial intelligence may look very different, and in fact in my video, I avoid defining intelligence and merely point out that "a computer that can program itself" is all that is required for the singularity.
Define "program itself" (Score:3)
Re: (Score:2)
A FORTRAN compiler does not run continuously, adding functionality as it goes along.
In the debate that followed the opening remarks (video [youtube.com] with very bad audio because the batteries on the lapel microphone ran down), someone suggested that intelligence requires consciousness. I suggested a Linux daemon could be considered conscious: it runs continuously and takes actions based on input and conditions. So my argument is that for the singularity you just need a daemon that continuously adds functionality…
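A minimal sketch of the daemon idea above: a loop that acts on input and can have new behavior installed while it runs. The event names and handlers are invented for illustration, and real self-extension would of course be far harder:

```python
handlers = {}

def install(event, action):
    """'Adds function' at runtime by registering a new behavior."""
    handlers[event] = action

def step(event):
    """One turn of the daemon's loop: act on input if a behavior exists."""
    action = handlers.get(event)
    return action() if action else None

install("ping", lambda: "pong")
print(step("ping"))                  # -> pong
install("time?", lambda: "no idea")  # new capability added on the fly
print(step("time?"))                 # -> no idea
```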
is still programed by humans (Score:2)
"a computer that can program itself"
is a computer that has been programmed by a human, with parameters and a system specifically made by humans, for it to take defined variables and combine them within pre-programmed parameters
all pounded out by a dumb monkey
'teh singularity' is a tautology
you can't make a new thing by calling the same thing a different name
Link to 1963 article (Score:2)
Re: (Score:2)
Re: (Score:2)
The problem with exponential growth... (Score:2)
is the constants. If your process doubles in the measured quantity in 20 days then you have something that might be worth worrying about (assuming that it won't hit some other limit, so long as that limit isn't you), but if it doubles in 20 years you have some time to consider and prepare. Whenever I see talk about the singularity it seems like the growth people are talking about either has a very short doubling period (which it probably doesn't) or the growth is actually super-exponential (the doubling period…
amoebas are hard (Score:2)
The level of intelligence of amoebas is hard to reach: they have to survive in the real world.
AI has therefore set its standards considerably lower: matching the level of intelligence of Berkeley philosophers and social scientists. Here is an example, indistinguishable from its human counterparts:
http://www.elsewhere.org/journ... [elsewhere.org]
teh singularity (Score:2)
so glad to see published articles where they say this plainly
'teh singularity' needs to go to the dustbin of history b/c it's wasting *billions* of research dollars
I've been saying this years. (Score:2)
The "AI" these days is just a collection of IF/THEN statements.
We can't copy a bird, but we have airplanes. (Score:2)
Philosophy defined: (Score:2)
Mental masturbation wherein meaningless questions are poorly answered.
Submarine (Score:2)
"Asking whether a computer can think is like asking whether a submarine can swim."
--Dijkstra
Ridiculous Analogy (Score:2)
"Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything."
A ridiculous analogy. It's like saying the dog that fetches the wood for its master has no intelligence at all: it's the master that "fetched" the wood. AI is not pseudo-intelligence.
Environment (Score:2)
What he asks of machines they cannot have, as it comes from birth. He's looking in the wrong place: the furthest step isn't Watson, but UHFT.
An amoeba's interaction with its environment comes from the fact that it's a product of that environment. AIs are not products of their environment; they are artificial. What Alva Noe asks of an AI could only be answered by one that appears spontaneously from its environment.
However, the environment we've created in which AI could appear is way too simple to allow such…
What is time anyway? (Score:2)
Intelligence and Consciousness (Score:2)
I have no doubt that the AI we are building will improve dramatically, even to the point where it will far exceed human intelligence. But it is unlikely that these intelligent machines will ever be sentient.
professor? (Score:2)
And he calls himself a professor? I guess he's way off base.
Smarter than us doesn't mean it has to understand what it's talking about (most politicians don't know jack about what they are talking about and still they 'make' the rules).
And let's not forget, with neural networks AI can advance much faster than a regular person. Also, there are far more advanced projects going on in laboratories than IBM's Watson, which haven't been shown to the public. IBM's Watson is just the tip of the iceberg…
Don't fear the singularity (Score:2)
I don't think the technological singularity, if there's such a thing, should be feared. You may, however, want to fear widespread pseudo/artificial/whatever intelligence. Or just call it plain automation. Because it's going to take your job well before there's a technological singularity. And the challenges that need to be overcome to get us there are much easier than copying an amoeba. You don't need to be able to copy an amoeba in order to be able to do just about anything a human does better than a human…
Looks dead (Score:2)
Re: (Score:2)
That's not the impression I get from github [github.com]...
Re: (Score:2)
They implemented it as a lego bot a few weeks ago. Also there's little to update on a project when you have all the neurons mapped out and all the data there, the worm isn't going to evolve new neurons anytime soon.
Re: (Score:2)
http://hardware.slashdot.org/s... [slashdot.org]
This was just a few days ago.
Re: (Score:2)
AI uses sensible variable and function names *and* comments its code?
We're doomed.
Re: (Score:2)
Don't worry. It is using a non-standard prototype for `main', and it forgot a few semicolons. It will stop working the next time the compiler is updated. Who knew that some obscure idiosyncrasies of the C programming language would save humanity? :)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:3)
Philosophy -- graveyard of fact (Score:2)
We evolved from single celled organisms, but we are not those now.
Science evolved from ignorance by determining the uselessness of, and then discarding, philosophical nonsense and replacing it with a very specific, non-soft, objective, rule-based behavior called "the scientific method." Which -- unlike the vast majority of philosophy -- produces useful results.
The claims philosophy has on science can best be likened to a leaky condom: we managed to get the scientific method in spite of it, not because of it…
Re:Philosophy -- graveyard of fact (Score:4, Interesting)
Not true. The Scientific Method is itself a philosophy, as is mathematics. (Mathematics is not a science, it is a humanity and specifically a philosophy.) Mathematics is the core of all science.
Your understanding of philosophy clearly needs some refreshing. I suggest you start with Bertrand Russell's formalization of logic and progress to John Patrick Day's excellent textbook on mathematical philosophy. It's clear you do not know what serious (as opposed to populist) philosophers are concerned with. This is no better than judging physics by Fleischmann and Pons's cold fusion work, or judging biology by examining 1960s American perversions of brain surgery.
You've got to look at the real work. And the odds are that there's more in your computer that was developed by a philosopher than ever came close to a "non-philosophical" scientist (whatever those might be).
Re: (Score:3)
You would hope a Professor of philosophy could get his head around the difference.
Agree, way too many people who should know better still conflate consciousness with intelligence. An ant's nest exhibits intelligent behaviour but it can't contemplate its own existence; Watson displays the same kind of "mindless" intelligence and consistently outperforms the best human trivia buffs.
Re: (Score:2)
Agree, way too many people who should know better still conflate consciousness with intelligence. An ant's nest exhibits intelligent behaviour but it can't contemplate its own existence, ...
So how exactly do we know this? I haven't read of any studies on the topic. Could you give us a link to a study showing what ant nests actually contemplate?
Re: (Score:2)
Exactly this. Everyone needs to read up. Ant nests are no less self-conscious than you or I. There is no way to prove or disprove an internal experience of any intelligent entity. So the best thing to do is to focus on the displayed behavior and not try to second-guess the internal "thoughts" of an AI or an ant's nest, because that way lies madness.
Re: (Score:3)
Exactly this. Everyone needs to read up. Ant nests are no less self-conscious than you or I. There is no way to prove or disprove an internal experience of any intelligent entity. So the best thing to do is to focus on the displayed behavior and not try to second-guess the internal "thoughts" of an AI or an ant's nest, because that way lies madness.
As a logical positivist, I'm ready to reject all questions about consciousness, including claims that humans do or don't have it. Nobody has any clue what the metric is. "I assert I am conscious, therefore I am." "I assert the ant is or isn't conscious, therefore it is or isn't because it didn't disagree."
I can simply program Eliza to tell you she is self aware. How would you test it? By her ability to fool humans into thinking she is human? Is that the metric??
Prove I am not just a complicated electro-chemical…
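The Eliza point is easy to make concrete: a deliberately minimal pattern-matcher that "asserts" self-awareness. The patterns and responses below are invented for illustration, not Weizenbaum's original script:

```python
import re

# Each rule pairs a regex with a canned response.
PATTERNS = [
    (re.compile(r"are you (conscious|self.aware)", re.I),
     "I assert that I am self aware."),
    (re.compile(r"how do you feel", re.I),
     "Why do you ask how I feel?"),
]

def eliza(line):
    """Return the first matching canned response, or a generic prompt."""
    for pattern, response in PATTERNS:
        if pattern.search(line):
            return response
    return "Please go on."

print(eliza("Are you self-aware?"))  # -> I assert that I am self aware.
```

The program makes the claim, but nothing in it could count as a test of the claim, which is exactly the parent's point.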
Re: (Score:2)
As a logical positivist
Seriously?
It's like seeing someone proudly proclaim that they're a flat-earther. You may want to reconsider your position.
Re: (Score:3)
Expert systems are not intelligent. They're nothing more than a fancy version of Animals. If/then/else isn't even weak AI, and a binary search of an index is just a search. It doesn't mimic an expert, because experts only start with simple diagnostic tools like that. That's the beginning, not the end. Experts know when answers are off and know how to recover from it, when it's unimportant and when it's absolutely critical. Experts also know how to handle cases never encountered before, because they don't just…
Re:AI researcher here (Score:5, Insightful)
You're applying your own arbitrary definition of intelligence, using that to frame the argument and declare yourself right. AI as a subject got past that kind of crap by the 90s with a realisation that it's a little more complex than all that.
You dismiss various AI solutions as just being a bunch of algorithms. Well, guess what? We still have absolutely no evidence that human beings themselves aren't run by a bunch of algorithms; the only difference is that we don't understand them well enough to document or reproduce them synthetically yet.
It's stupid to dismiss something like a neural network or an expert system as unintelligent just because you understand the details of how it works; most people who watch a neural network in action would say, "That's pretty intelligent, how it can do that."
Suppose you have two closed boxes for classifying, say, wine. In one is a person; in the other is a computer with sensing equipment and a neural network trained for wine classification. A wine sample goes in and a classification comes out on a screen. If the computer gets far more tests right than any person, people are pretty much always going to judge the computer's responses as the intelligent ones. The Turing test was designed to probe the sort of intelligence we see in strong AI, but modified versions of it can reveal intelligence from weak AI in select circumstances.
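The closed-box setup above can be sketched with any classifier; here is a deliberately simple nearest-centroid version in place of a trained neural network. The two features (acidity, residual sugar) and all training values are invented for illustration:

```python
# Toy "closed box" wine classifier: nearest centroid on two made-up
# features (acidity, residual sugar). A real system would use a trained
# network on measured chemistry; the principle of the blind test is the same.
TRAINING = {
    "dry red":     [(3.6, 0.9), (3.4, 1.1), (3.5, 0.8)],
    "sweet white": [(3.1, 6.0), (3.2, 5.5), (3.0, 6.4)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(sample):
    # Pick the label whose centroid is closest (squared Euclidean distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda label: dist2(sample, CENTROIDS[label]))

print(classify((3.5, 1.0)))   # dry red
```

From outside the box, an observer only sees samples going in and mostly-correct labels coming out; nothing about the verdict reveals whether a person or twenty lines of arithmetic produced it.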
So yes, you can absolutely say things like neural nets and expert systems are not strong AI, but you cannot say they are not intelligent without falling back on the rather stupid definition that something is not intelligent if we understand how it works. In some circumstances these systems would be deemed more intelligent than humans by most people, and since we have no fixed definition of intelligence, that seems a far better way of judging it: getting people's verdict on intelligence in a statistically sound study, rather than coming up with definitions like "something is intelligent only if we don't understand how it works."
If computer algorithms could show no intelligence whatsoever, we'd only be using them for dumb repetition, like building cars on an assembly line. But we don't: we use them to augment our search capabilities, to correct our grammar and spelling, to figure out an optimal path for data to travel across a complex network, and so on, far better than a human could. We're using them to augment our intelligence every day and in many ways, and that's because they can display some intelligence. Not consciousness, not strong AI, but a degree of intelligence all the same.
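The "optimal path for data across a complex network" example above is classically solved with Dijkstra's algorithm; a minimal sketch over a made-up four-node network:

```python
# Dijkstra's shortest-path algorithm over a tiny invented network.
# graph maps each node to a list of (neighbor, link_cost) pairs.
import heapq

def shortest_path_cost(graph, start, goal):
    frontier = [(0, start)]        # min-heap of (cost so far, node)
    best = {start: 0}              # cheapest known cost to each node
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue               # stale heap entry, skip it
        for neighbor, link in graph[node]:
            new_cost = cost + link
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return None                    # goal unreachable

net = {"A": [("B", 1), ("C", 4)],
       "B": [("C", 1), ("D", 5)],
       "C": [("D", 1)],
       "D": []}
print(shortest_path_cost(net, "A", "D"))   # 3, via A -> B -> C -> D
```

It is "just" a priority queue and some bookkeeping, yet it routes traffic better than a human eyeballing the topology ever could, which is exactly the commenter's point about augmented intelligence.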
You're conflating consciousness, intelligence, and strong AI into one big pot, but it's all far more nuanced than that. You're assuming intelligence works in a binary way, where something either is not intelligent at all, or has human-level intelligence and any artificial version would be strong AI. But we have enough evidence from the varied living things in this world to see that there are gradients of consciousness and intelligence. Assuming it would somehow be different with computers makes no sense, and in the last 20 years we've seen AI research progress through ever-increasing levels of intelligence. When that will escalate to what we deem strong AI, or human intelligence, is anyone's guess, but we're not suddenly going to go from having no strong AI to having strong AI; we're going to have ever more intelligent systems that approach strong AI and eventually become good enough to declare as strong AI.
Re: AI researcher here (Score:2)
AIs are more or less reasoning machines, which is not the same as intelligence. There is no awareness or understanding, which are necessary to call something intelligent. Most likely your definition of intelligence and that of other scientists differ tremendously. To understand the professor, you must obviously use the definition from his field and not yours.
Re: (Score:2)
P.S. sorry for my English
The single most serious error in your English is to treat 'believing' as a synonym for 'knowing,' as in the statement "anyone with a belief in God knows already ..." Rather, they believe that more than "science" (by which, in this context, I take you to mean material factors) is required for something to become alive.
Before the usual Slashdot crowd start shouting "blasphemy, blasphemy, a believer in God inside our atheistic /. temple," I must remind them that this is about a professor of philosophy.
Re: (Score:2)
[S]o yes, in this case "i treat 'believing' as a synonym for 'knowing,' "
But in the case of, "I believe you are mistaken," you don't? Yes?
In other words, 'God' makes this a special, privileged belief, not subject to the problematic difference between belief and knowledge which might occupy a rational epistemology. 'God' as the full stop to inquiry again.
I do not know whether any god "exists" --in fact I don't even understand what kind of existence is being claimed for God --however I do believe you are mistaken.
Re: (Score:2)
We have already moved beyond that. The neuron to ethernet interface is just a little clunky.
Technological progress is NOT linear (Score:2)
People drive technology, and the number of people has been going up exponentially, so technical progress is NOT linear.
And the whole point of "singularity" is that once we create an intelligence smarter than us, it will (in theory) in turn create an intelligence smarter and faster than it, and so on. That's not linear progress.
--PM
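The compounding argument in the comment above is simple to put in numbers: if each generation of machine designs a successor some fixed fraction more capable (the 20% rate below is made up purely for illustration), capability grows geometrically, not linearly.

```python
# Back-of-envelope sketch of recursive self-improvement: each generation
# builds a successor `improvement` times more capable than itself.
# The 20% rate and 10 steps are illustrative assumptions, not predictions.
def generations(start=1.0, improvement=0.20, steps=10):
    capability = start
    levels = [capability]
    for _ in range(steps):
        capability *= 1 + improvement   # geometric, not additive, growth
        levels.append(capability)
    return levels

levels = generations()
print(round(levels[-1], 2))   # 6.19 -- over 6x after just 10 generations
```

Linear progress would have reached 3.0 after the same ten 20%-of-baseline steps; compounding reaches about 6.2, and the gap widens every generation, which is the whole force of the singularity argument.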
which are not 'ai' (Score:2)
all of which are, as TFA says, not 'intelligence' at all!