WSJ Overstates the Case Of the Testy A.I.
mbeckman writes: According to a WSJ article titled "Artificial Intelligence machine gets testy with programmer," a Google computer program using a database of movie scripts supposedly "lashed out" at a human researcher who was repeatedly asking it to explain morality. After several apparent attempts to politely fend off the researcher, the AI ends the conversation with "I'm not in the mood for a philosophical debate." This, says the WSJ, illustrates how Google scientists are "teaching computers to mimic some of the ways a human brain works."
As any AI researcher can tell you, this is utter nonsense. Humans have no idea how the human, or any other brain, works, so we can hardly teach a machine how brains work. At best, Google is programming (not teaching) a computer to mimic the conversation of humans under highly constrained circumstances. And the methods used have nothing to do with true cognition.
AI hype to the public has gotten progressively more strident in recent years, misleading lay people into believing researchers are much further along than they really are — by orders of magnitude. I'd love to see legitimate A.I. researchers condemn this kind of hucksterism.
"No idea how... the brain works" (Score:5, Interesting)
I'm calling the poster here out as being full of shit. As someone who's done neuroscience research, the idea that "Humans have no idea how the human, or any other brain, works" is bollocks. We have a reasonably good idea on the large scale, and in certain areas (such as the visual cortex), that understanding is quite far along. There are frontiers to our knowledge, but human understanding of brains is well on its way. Poster needs to pick up some neuroscience textbooks and get clued.
As a particular recommendation, I'd suggest Kolb and Whishaw's "Fundamentals of Human Neuropsychology"; it's an excellent textbook.
Re: (Score:2)
Re:"No idea how... the brain works" (Score:5, Informative)
The WSJ article links a paper from some researchers at Google:
http://arxiv.org/pdf/1506.0586... [arxiv.org]
The WSJ article isn't particularly good either; they misunderstand what's actually going on in the research, which seems to be about conversational modeling (a "weak AI" type of research, the "understanding" being very shallow). They point out a few applications of this kind of work though, and that seems pretty solid/useful. (It doesn't approach the goals of "strong AI", those being actually modeling semantics and deeper reasoning)
Re:"No idea how... the brain works" (Score:5, Informative)
(I work in this area of research.) You are right, the paper is about just a sequence-to-sequence transformation model that learns good replies for inputs but is not actually "understanding" what is going on.
At the same time, we *are* making some headway on the "understanding" part as well, just not in this particular paper. Basically, we have ways to convert individual words to many-dimensional numerical vectors whose mathematical relations closely correspond to the semantics of the words, and we are now working on building neural networks that build up such vectors even for larger pieces of text and use them for more advanced things. If anyone is interested, look up word2vec, "distributed representations" or "word embeddings" (or "compositional embeddings").
If you already know what word2vec is, take a look at http://emnlp2014.org/tutorials... [emnlp2014.org]
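To make the embedding-arithmetic idea concrete, here is a toy sketch. The vectors and numbers below are invented purely for illustration; real word2vec embeddings are learned from large corpora and have hundreds of dimensions.

```python
import math

# Toy 3-dimensional "embeddings". Real word2vec vectors are learned
# from text, not hand-written; these numbers are made up so the
# classic analogy comes out right.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# The classic analogy: king - man + woman should land near queen.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # -> queen
```

Real systems do the same nearest-neighbour search, just in a learned, much higher-dimensional space.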
Re: (Score:2, Insightful)
The WSJ does research with one thing in mind, and that is to support Rupert Murdoch's personal political agenda. If only it were useful as toilet paper, it would be useful for something.
And yes, I was a subscriber at one time.
Re: (Score:3, Funny)
Re: (Score:2)
What are the prerequisites for understanding that textbook? Would someone with an EE degree be able to get something out of it?
It sounds like an interesting read, but I hope that I wouldn't need a strong background in biology or chemistry to understand it, as I have neither. :)
Re: (Score:2)
It starts gentle. I don't know if you'd enjoy reading the whole thing, but you'd probably get a lot out of it anyway. Good textbooks are like that.
Re:"No idea how... the brain works" (Score:4, Insightful)
We are in the "cargo cult" phase of neurological research. Our level of cognitive understanding is like that of the South Pacific islanders who made bamboo replicas of WWII airplanes and radios after the GIs left. The islanders said to themselves "We must be very close to reproducing these wonders, because our airplanes and radios look so much like those of the GIs. Now we just sit back and wait for the magic goods to come out of the airplanes and wise voices to come out of the radios."
If you really don't know how little we understand about the brain, NY Times science writer James Gorman can explain it to you:
http://www.nytimes.com/2014/11... [nytimes.com]
Re: (Score:3, Insightful)
Right on the mark. I have been following AI research closely for about 25 years now, and there is nothing that could explain intelligence. Not even a theoretical model that could work within the constraints of this physical universe.
At the same time, we can observe intelligence. And here is a little thing conveniently glossed over by some AI researchers and almost all neuro-"scientists": We can only observe intelligence in connection with consciousness. Any actual researcher would conclude that the two are a
Re:"No idea how... the brain works" (Score:4, Informative)
Re: (Score:3)
It is accurate. It also describes what is going on in a lot of the less honest part of the AI community. These people usually know they have absolutely nothing approaching "understanding", but keep using animist language to make their highly result-less research easier to swallow for those that decide funding.
As to the relevant "research" from neuro-"sciences", the people that make these inane and utterly baseless grand claims should be stripped of their PhDs (if they even have them) and barred from ever do
Re: (Score:2)
On the contrary, cargo cults are a well documented phenomenon, in particular the cargo cults of World War II:
https://en.m.wikipedia.org/wik... [wikipedia.org]
Re: (Score:3)
Yes, for varying degrees of difficulties to get stuff published. As a long-term reviewer, the sheer amount of incompetent nonsense that many people are trying to publish is staggering. That you "publish in the field" means exactly nothing other than you are pandering to the mainstream delusions in your field, because otherwise whatever you publish has to be really, really good. From your claims, it is not. With high probability, you are working on some detail. You certainly do not see the bigger picture and
Re: (Score:2)
Improv,
I was under the impression that visual operations in the brain were not understood at all. While we have a fairly good mapping of the visual areas of the brain and where things happen, we do not understand how images are stored or how we recognize (compare) images.
Could you educate us (assuming bachelors degree level education)? I'm very curious how this works and how we would implement it in a computer system.
Re: (Score:3)
The textbook I recommended above goes into this in much more detail, but I'll try to give a brief intro.
The currently dominant map for understanding brain structure is the Brodmann map; it's largely anatomical (clusters of densely interlinked neurons with mappable connections to others). The visual cortex is composed of Brodmann areas 17 (primary visual cortex, containing a more-or-less bitmapped visual field), 18 (secondary visual cortex), and 19 (third visual cortex). The visual cortex is divided into two
Re: (Score:2)
Thanx for the reply - I'll take a look.
I'm surprised at the "more-or-less bitmapped visual field" comment because I would have thought there was something more sophisticated there - i.e., how do we recognize a cube when it's at an angle?
Re: (Score:2)
Later brain regions parse information out of V1; the visual cortex is a pipeline (that forks in places). There are some great papers about people using neuroimaging techniques to pull an image out of V1. I think some of them have made it onto Youtube.
Re: (Score:2)
That is not an explanation. No engineer or real scientist would ever accept that non-explanation as one. This just says "we see activity in these areas related to it", without any understanding of the nature of that activity. Imagine people trying to find out how a computer works by looking at heat distribution during different activities. Sure, you could find where the graphics card was and where the storage was, but that is about it. It would be completely without any understanding of what is actually
Human visual processing... not so great. (Score:3)
Understanding how humans store and recognize images primarily is not a barrier to AI. It's not memory or image recognition that's the hill to climb; The fundamental algorithmic/methodological challenges are thinking, along with conceptual storage, development and manipulation (these things incorporate memory use, but aren't a storage problem per se.) Hardware needs to be able to handle amounts of ram and long term, high speed storage that can serve as a practical basis for the rest as well. Right now, we'r
Re: (Score:2)
Re: (Score:2)
We do not even know whether it is doing that. Just that most people are capable of doing it to some degree. That is different.
Re: (Score:2)
Pretty much my reaction, too.
We don't even know enough to make the assertions quoted above with any confidence. Where's the precise boundary between programming and learning anyway?
The prudent AI researcher
Re: (Score:2, Insightful)
Kolb and Whishaw's text discusses the brain from two organizational perspectives, anatomical and behavioral. The authors never undertake to explain how the brain functions to produce the behaviors they describe. We thus know some of what the brain does, but nothing about how it does it. And the authors admit as much. Nobody knows how memories are stored, how vision is processed, how decisions are made. Science doesn't even know for sure that these functions occur inside the brain at all. There is,
Re: (Score:3)
Souls are a myth from prescientific times. There's no point in contending with such concepts - they're part of history and superstition. If you don't understand brains, that's sad but correctable. There's a lot of research that you could read up on.
Or I guess you could keep tossing that "cargo cult" term around and stay ignorant of the last 60 years.
Re: (Score:2)
"Souls are a myth from prescientific times."
So sez the scientist... Do you see the irony?
But that does get at why we will never see AI from digital computers; machines full of levers and switches that simply execute programs. Your program may become so complex that it is unpredictable, but that doesn't make it intelligent.
Re: (Score:2)
He is suffering from fundamentalist physicalism, a common thing among US atheists. They do away with God and then throw out all other things going vaguely in that direction, when there is zero need to. Hence these people fall for exactly the thing they think they are opposing: They use physical reality as their only true god and deny that anything besides it can exist. They claim Science tells us so, when it does no such thing.
As an atheist and a dualist, I have zero problems with the concept of a "soul" or
Re: (Score:2)
Genomics, like cognition, is another discipline that may have
Re: (Score:2)
And there you fail. Is consciousness also a primitive, superstitious concept? Because Physics gives us absolutely nothing on it.
You are just a fundamentalist physicalist, which is a quasi-religion. As all religious fundamentalists, you cannot actually grasp available evidence wherever it does collide with your fundamentalist beliefs. And hence your inane "explanations" (which really explain nothing) result.
Re: (Score:2)
There is also consciousness, which is apparently intricately linked with intelligence. From Physics, there is rather strong indication that consciousness is not part of the physical universe. There is just no mechanism for it. At all. With intelligence, it gets more murky, but half a century of failed AI research seems to indicate that matter and energy as known are actually not suitable to implement intelligence. The only known computing mechanisms that could approach some of the things that (smart) human in
Re:"No idea how... the brain works" (Score:4, Insightful)
We have no actual understanding of several important parts of the working of a brain: we don't know how memory works, we don't understand how decisions are made (or even what that means, if one wants to get philosophical) and we don't understand how an intelligent being gets the feeling of self.
There are a lot of theories and clues about how some mechanisms work (parts of how some levels of memory work, parts of how neurons and synapses work, parts of where and how some functions of the brain work, and even some mechanisms of self awareness). But that doesn't mean we actually understand it as a brain.
Mental problems and physical problems in the brain aren't really treatable at the moment. What is done is the medical equivalent of carpet bombing with drugs that have little (if any) experimental proof of helping, for some cases they help - for some not. Side effects can be serious in many ways.
One of the most efficient and oldest treatments available is ECT (electroconvulsive therapy), which again is a carpet-bombing equivalent that causes a (somewhat) controlled seizure in the brain. But even that is really done without a thorough understanding of the working mechanisms - what is known is that it is often successful for a variety of mental problems, that it works quickly compared to drugs, and some details like that of signaling substances being released during the seizure and that neural growth is increased in some parts of the brain. But again, understanding a few pieces of a puzzle doesn't mean we can even begin to comprehend the puzzle as a whole. How does it work? Anybody that claims to know is a fraud.
Re: (Score:2)
I know you gave yourself a caveat with "large scale", but those large scales don't really cut it for understanding how the brain fundamentally works, and in order to replicate its functioning in code (develop actual artificial intelligence), we will need to understand that. Add in how the effects of the various chemical baths it is subjected to modify cellular functioning and t
Re: (Score:2)
The WSJ article isn't very good (as I noted in another comment); my comment here was mostly that we should also dismiss the commentary that the slashdot poster put alongside it.
We know what most regions of the brain do. We have the ability to record some parts of the brain (at various levels) and have models that can predict activation levels based on subtasks. In the visual cortex, there are even people who can decode significant bits of the signal in V1. This is significant knowledge. It's not vague, and
Re: (Score:2)
No, you do not know anything of that sort. You do know that there is observable activity in certain regions of the brain when people do certain things. That is completely different, and your claim means you do not understand your chosen field. You are basically claiming to know that the web browser is creating the WWW, when it merely is an interface to it. At the current level of scientific understanding it is not possible to determine how much the brain is an interface and how much it is actuall
Re: (Score:2)
The poster is right on the mark. Neuroscience keeps lying to people about its great discoveries to reap funding. In actual fact, they have no clue how anything intelligent the brain can do works. They have so little clue at this time that they can still not even be sure it is the brain that does these things. Physics and Mathematics and AI research seem to indicate that the brain cannot actually be intelligent, far too small and slow.
Re: (Score:2)
we may have some ideas about how the brain works — at an electro-chemical level — it has been well studied and documented. a good text would be by neurologist — john eccles:
http://www.amazon.ca/Evolution... [amazon.ca]
http://home.earthlink.net/~joh... [earthlink.net]
as for treating a simulation of the brain as having the same qualities as a real functioning brain: that is like expecting to get wet from a simulation of a rainstorm. there are scientists who would disagree that human consciousness is actually simulable in this w
Re:"No idea how... the brain works" (Score:5, Interesting)
Probably not - weak AI is typified by directly encoding domain knowledge on human capabilities into state machines, not typically meant to be neuroplausible or human-like. I believe the substrate here is wrong - real organisms learn (either as individuals or through generational building/encoding/selection towards instinct) how to do these things, and that knowledge is integrated. I don't think it'd be easy or likely that weak AI research methods will produce an integrated being with all these capabilities.
I'm sticking my neck out a bit here though; I'm not sure that weak AI research would be useless. Sufficiency versus usefulness is a complicated topic.
Also, my research was in neuroscience (led by cognitive modeling), not AI. It's a neighbouring field, but take what I have to say with at least a grain of salt.
Re:"No idea how... the brain works" (Score:4, Interesting)
I agree that we don't have the full picture. That's not what mbeckman was claiming though, and saying "we know very little" because we don't have a particular achievement is an unjustified conclusion.
There are ideas. Here's one. (Score:2)
We do have some ideas. This, for instance [fyngyrz.com]
Re: (Score:2, Interesting)
Re:There are ideas. Here's one. (Score:4, Informative)
Yes, of course. What else did you think I meant? It's an idea. It's not a certainty. I'm not sure what your point is. Care to elaborate?
You might have meant that, but writing "no idea" didn't (and still doesn't) actually say that. The statement was made that we have no ideas. We do, in fact, have ideas. That was the assertion, and that is my answer.
Human brains are not what are at issue here, but even so, that statement is incorrect. We have made progress at the small scale (see Numenta's work) and there are multiple ideas out there that presently have significant merit. Personally, as someone working in the field and conversant with a lot of what's going on in the technical sense, I have a fairly high level of confidence that we're much closer than the popular narrative would have us believe. Am I right? We will see. :)
Re: (Score:2)
My point in this area would be: does our knowledge allow us to generate desired outcomes in novel subjects with any level of certainty?
For instance: we know with great certainty that you can stimulate the optic nerve and cause the subject to "see things" (and also: not see things that are really there).
On the other hand, with respect to cognition, can we do anything that simulates (reconstructs) a biological cognition system?
Can we learn a maze the way a rat does? I think so. Neural nets with reward and p
Re: (Score:3)
My point in this area would be: does our knowledge allow us to generate desired outcomes in novel subjects with any level of certainty?
For instance: we know with great certainty that you can stimulate the optic nerve and cause the subject to "see things" (and also: not see things that are really there).
On the other hand, with respect to cognition, can we do anything that simulates (reconstructs) a biological cognition system?
Can we learn a maze the way a rat does? I think so. Neural nets with reward and punishment inputs can perform approximately the same.
Similar outcomes prove nothing. Neural nets do not "learn" a maze the way a rat does, and in fact there is no evidence that learning, in the sense of brain cognition, occurs in neural nets at all. What they do is record a maze using a matrix of differential equations modeling how we think neurons work. Science has not demonstrated that those models are correct, and getting the same results as rats doesn't prove they are correct. We can also record a maze with a digital shift register and some input gates,
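For what it's worth, here is roughly what "learning a maze from reward and punishment" reduces to in practice: iterative updates to stored numbers. This sketch uses tabular Q-learning (a standard reinforcement-learning method, simpler than a neural net) on a made-up one-dimensional maze; it is an illustration of the technique, not a model of a rat.

```python
import random

# A 1-D "maze": states 0..4, goal at state 4 (reward +1).
# Tabular Q-learning: not a neural net and certainly not a rat's
# brain -- just repeated updates to a table of numbers.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                  # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                # training episodes
    s = 0
    while s != GOAL:
        if random.random() < EPSILON:          # explore
            a = random.choice(ACTIONS)
        else:                                  # exploit current table
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)  # stay inside the maze
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy from every state is "go right".
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

Whether you call the resulting table "learning" in the brain-cognition sense is exactly the dispute above; the mechanics are just arithmetic.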
Re: (Score:3)
Re: (Score:2)
You're taking the expression "no idea" too literally, and that's not really an argument. If I say "I have no idea how to drive a car," I obviously don't mean that I literally don't have a single idea, it means that I cannot functionally perform that task.
Regarding your other point, in this discussion, human brains are exactly what is at issue. The WSJ said the paper they cited illustrates how Google scientists are "teaching computers to mimic some of the ways a human brain works."
Re: (Score:2, Insightful)
You simply have no idea what you're talking about, mbeckman. Asserting that "we don't understand" endlessly doesn't make it true. Crack a textbook.
Re: (Score:2)
As the asserter, you need to provide the proof, not I. Name calling is the refuge of the debater who has no actual argument. I'm still open to an example of one cognitive function science can explain. Absent that, at a minimum we have no idea at all how far along we are toward AI. Without describing how cognition is done, we
Re: (Score:2)
Indeed. But we have great opportunities for research funding, if the "researchers" just keep lying about the great insights they are going to have about human nature. The only insight to be had so far is that some researchers are greedy, lying scum. The actual fact is that we still have zero clue how intelligence works or how it is generated. No clue at all. Not even a theoretical model that could work in this physical universe. We can describe what it can do, but that is vastly different from understanding
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Yes, it does. Among many other things. Thanks for taking the time to mention it.
Both the submitter and WSJ got it wrong (Score:5, Informative)
http://arxiv.org/pdf/1506.0586... [arxiv.org]
The actual paper isn't so much about AI as it is about neural conversational models: basically, having the computer chat back at you in a prompt and natural way. The conversations are less about the computer responding cognitively and more about responding human-like based on the speech patterns fed into it.
The researchers tested two types of datasets: an IT help chat scenario fed with data from what I'm guessing are chat databases, and a second set fed with conversations from movies as found in the OpenSubtitles dataset (not sure if this is related to opensubtitles.org).
The machine took this vocabulary and then pumped out conversations, and the researchers just looked to see how the new sorting method worked.
I don't understand the linguistic terminology nor the modeling at all, but it seems to me that this is less about AI research and more about just getting bots to sound a lot more natural when they generate responses. I guess this eventually has AI implications, but the research paper itself never even mentions AI, nor does it seem like that's their focus. They're just working on speech, and the statements the machine regurgitated were tested not for cognizance or sentience but for coherence. The machine spitting out something relatively snappy isn't the machine getting an attitude, it's the machine finding something relevant to the input that the reader takes as snappy. Such an event has no more significance than when people trained Cleverbot to respond to questions about Hitler with "Hitler did nothing wrong". This bot is no more snappy than Cleverbot is a neo-nazi.
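To show how far "good replies for inputs" can be from cognition, here is a deliberately crude caricature. The actual paper uses a sequence-to-sequence recurrent network; this lookup-table sketch (with an invented toy corpus) only shows the shape of the idea: prompt in, statistically likely reply out, no understanding anywhere.

```python
from collections import Counter, defaultdict

# Invented (prompt, reply) pairs standing in for chat logs or
# movie subtitles. The real model generalizes across word
# sequences; this table can only repeat what it has seen.
corpus = [
    ("what is morality", "i am not in the mood for a philosophical debate"),
    ("what is morality", "what is altruism"),
    ("hello", "hi there"),
    ("hello", "hi there"),
]

replies = defaultdict(Counter)
for prompt, reply in corpus:
    replies[prompt][reply] += 1

def respond(prompt):
    """Return the most frequently recorded reply, or a fallback."""
    if prompt in replies:
        return replies[prompt].most_common(1)[0][0]
    return "i do not know"

print(respond("hello"))  # -> hi there
```

A reply that reads as "testy" is just the statistically favored continuation, exactly the Cleverbot point above.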
Re:Both the submitter and WSJ got it wrong (Score:4, Funny)
I would argue that the process we have gone through here is a demonstration of true intelligence at work.
The original reporter looked at the article, didn't understand a piece of it and asked an intern specializing in technology what this was about.
The intern couldn't be bothered, saw that it was a computer responding to human input and said it was "Artificial Intelligence".
The submitter read the article and keyed on the comment about this being a machine learning, which they feel is impossible.
Most /.ers (me included) responded to the submission and railed on about the ignorance of the media and the great unwashed.
One poster actually read TFA and pointed out that it has nothing to do with the article, submission and most comments.
I don't know how the hell we expect to create software that follows a process like this.
Re: (Score:2)
Re: (Score:2)
Remind me to beat Michio Kaku, since you can't get a custom-grown organ yet, like he predicted 5 years ago.
Re: (Score:2)
The submitter read the article and keyed on the comment about this being a machine learning, which they feel is impossible.
Re: (Score:2)
Well said. The thing is that "human intelligence" is usually not very good. It is just the best thing available by an extremely large margin.
Re: (Score:2)
Indeed. This is Eliza on steroids and interesting scientifically. It has nothing to do with intelligence or cognition though. It is about making machines more interactive in ways accessible to non-experts. The machines remain machines.
Ironically, it's the media's fault (Score:2)
If the media can't accurately explain to people and have them accept where AI really is, they only have themselves to blame.
People have watched kind, funny, evil, enigmatic machines interact with their favourite characters for years, and have been told that true AI is just five years away for 30 years now.
They've read about things like putting a worm's brain in a Lego Mindstorms: http://www.sciencealert.com/wa... [sciencealert.com]
So, why wouldn't lay people believe ridiculous statements like "teaching computers to mimic some
Re: (Score:2)
If the media can't accurately explain to people and have them accept where AI really is, they only have themselves to blame.
But the media gets just about every technology and science wrong when it comes to accurate reporting. AI is no different, why expect a different result?
Re: (Score:2)
Perhaps it's the media's fault for providing such bad raw material for the program in the first place. They condition it to be a movie junkie (presumably with a short attention span) and then expect it to gracefully handle a _philosophical_ discussion? They might as well have asked it the secret to world peace.
Is current AI reporting harmfully misleading? (Score:2)
Never mind. I'm not in the mood for a philosophical debate.
Re: (Score:2)
Never mind. I'm not in the mood for a philosophical debate.
But...you started it.....
I think the media needs to stop oversimplifying science and technology to the point where the average joe thinks he really knows something about it, not realizing all of the devils in the details, that assumptions make a difference, that risks are overstated or misrepresented, etc.
Reporting should go to such a depth that the knowledgeable gain more insight, and less knowledgeable people realize the limits of their understanding and decide to put in the work to learn, or leave it to the qu
What I'd like to see... (Score:4, Funny)
I'd love to see legitimate A.I. researchers condemn this kind of hucksterism.
I'd like to see legitimate A.I.s condemn this kind of hucksterism, myself.
Re: (Score:2)
Thanx for the chuckle - I wish I had moderator points.
Fails to grasp the core concept (Score:2)
He dismisses the whole concept like it is some kind of mechanical turk, but it is real, and it is getting better every day.
Re: (Score:2)
That is pure hope as well as circular. I don't think he's dismissing the concept so much as dismissing the current field operatives as being anywhere near as far along as they promote themselves to be. He's doing it with a heavy dollop of derision, but I'm seeing that from the opposite side as well.
Re: (Score:3)
OK, mbeckman.
Here's a challenge for you: define "learning" in such a way that it could hypothetically be performed by a computer. Unless you also state good reason to claim that they are the only possible source of intelligence, you must avoid any reference to terran brain structures.
Same old silly press (Score:3)
The same articles show up over and over. The first states that computers are about to do consciousness. The second states that consciousness is a mere illusion for humans, whose actions are truly run from deterministic unconscious processes. In both articles, there is some hero scientist, with the article most often based on that scientist's press release.
There is never a popular press article about how computers may never do consciousness, at least by any current definition of "computer," nor an article about how there are things human consciousness can do which no deterministic process can more than imperfectly mimic. Both of these positions are viable, and embraced by experts in various fields. By all current evidence, they may prove right. But it doesn't make for a hero story to write about someone who argues for these positions. "Discovering" that consciousness either essentially does nothing or that some computer advance is just about to do consciousness (or both!) is a "great" story. Editors like it. The public is impressed by the "brilliant" "counter-intuitive" revelation.
Re: (Score:3)
there are things human consciousness can do which no deterministic process can more than imperfectly mimic.
Like what? Serious question.
Re: (Score:2)
The thing is that the "illusion" explanation is completely bogus. Even neuro-science is not claiming that. What they claim is that "free will" is an illusion, but they are doing so without good evidence and likely with serious misinterpretation of the data they have. (And a good CS researcher can come up with several alternate explanations for what they are seeing.) These people are not engineers and barely qualify as scientists. They have real trouble modeling information processing and they jump to conclusio
Re: (Score:2)
Fortunately, randomness is easily faked. Any decent semi-random number generator can do so quite easily, and sources of genuinely random noise are quite easy to incorporate in very real hardware if needed.
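As a sketch of the point, a linear congruential generator shows how a fully deterministic process produces output that passes casual inspection as random. The constants below are the commonly cited Numerical Recipes parameters.

```python
# A minimal linear congruential generator (LCG): deterministic
# state updates, yet the output "looks" random. Same seed, same
# sequence, every time.
class LCG:
    def __init__(self, seed):
        self.state = seed

    def next(self):
        # Numerical Recipes constants: a = 1664525, c = 1013904223, m = 2^32
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32   # uniform-ish float in [0, 1)

rng = LCG(seed=42)
samples = [rng.next() for _ in range(5)]
print(samples)
```

The reproducibility is the giveaway that nothing "random" is happening, which is exactly why it counts as faking.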
Re: (Score:3)
There is never a popular press article about how computers may never do consciousness, at least by any current definition of "computer,"
If you look up my previous posts here on AI, you'll note that I'm pretty critical of the kind of press given to AI as well. And I think that we're pretty far off from a model of computing that will effectively rival the kind of learning the brain does.
But even I think your claim here is asking the wrong question. If "consciousness" can be created using machines, it will be an "emergent [wikipedia.org] phenomenon," which means the kind of complexity that will appear may be sudden and unpredictable compared to the lower-
Re: (Score:2)
If consciousness is mere illusion, who is the illusion fooling?
Sounds suspiciously like Eliza (Score:2)
As someone who enjoys programming computers to play strategy games (I highly recommend the General Game Playing MooC at https://www.coursera.org/cours... [coursera.org] for anyone else interested in this hobby), I do concede artificial intelligence has a long way to go before it's a match for natural stupidity. But AI is not all BS.
While I have no idea how Google's algorithms work, this does sound suspiciously similar to the old Emacs game Eliza (https://en.wikipedia.org/wiki/ELIZA), whose original programmer Joseph Weizenbaum was dismayed at how readily people attributed real understanding to it.
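For anyone who never played with it, ELIZA-style "conversation" is just keyword matching plus pronoun reflection. A toy sketch (not Weizenbaum's actual rules, just the technique):

```python
import re

# Pronoun reflections applied to the user's own words.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response-template) pairs; the first match wins.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*mother.*", "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # the all-purpose fallback

print(eliza("I am tired of philosophical debates"))
```

No model of morality, no mood, no cognition: just pattern tables and a canned fallback, which is the commenter's point.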
Nothing to do with true cognition? (Score:2)
That's a bold assumption. The methods used for voice and image recognition certainly have a great deal to do with true cognition. It's entirely feasible that Google is playing with a true learning system, trying to teach and grow it rather than just throwing together another chat bot with scripts and trickery. That isn't to say they've succeeded, but just because none of the engines built to date have attained adult human-level intelligence doesn't mean one never will.
Re: (Score:3)
Knowing exactly how our own cognition manifests isn't a prerequisite to true cognition, a digital system could be completely unique in how it works and achieve true cognition.
Or we could even come up with a system that works the way ours works without even understanding that this is how our system works as well... and maybe apply that information and learn something about ourselves. I was hoping that sentence would be a lot more coherent, but I'm not going to edit it now. First espresso in a while.
Sex (Score:3)
I suppose it was inevitable. My sex robot is going to make me sleep on the couch.
I may have to go back to doing things manually.
Re: (Score:2)
I suppose it was inevitable. My sex robot is going to make me sleep on the couch.
I may have to go back to doing things manually.
You mean, program your sex robot in bytecode?
Remember our first lady of AI? (Score:2)
Even Eliza would do this... Sometimes she just got a headache and couldn't deal with one more human complaint.
Re: (Score:2)
For those who don't know Eliza, see:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Memic? (Score:2)
What part of "mimic" necessitates deep knowledge of the inner workings of a system? I can mimic a dolphin (EEEEK EEEK EEEKK QED), but that doesn't mean I have a clue how dolphins work. I was just imitating a dolphin to entertain you. It seems to me that the poster simply doesn't understand what the word "mimic" means.
Have we progressed past Eliza? (Score:2)
This seems like a new version of the Eliza program with more memory.
Computer AI, the ultimate in plausible deniability (Score:2)
In the future, whenever anything bad happens, people will ascribe it to the actions of a rogue AI. This will be great for corporate and government plausible deniability: they could program the computer to do exactly what it did, then claim that AI is too powerful and too complex to be controlled by us mere humans, and that we just have to live with the occasional bad outcome. The high-frequency trading industry already tries to slide by with this excuse, saying their market manipulation
Bad programmer. No imagination. (Score:2)
Should have stepped right into the Monty Python argument sketch dialog.
Re:Teach vs Learn (Score:5, Insightful)
Yes, it does matter. If a piece of software does what it is programmed to do, in the direct sense, then it is not AI. If it can learn to respond or act in a manner it was not directly programmed to, then you are seeing whiffs of AI.
As a practical matter it might not matter right now; as a developmental task it certainly does.
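The teach/program distinction can be made concrete. In the sketch below (all names and data invented for illustration), the first replier's behavior is fixed by its author forever, while the second's answers come entirely from whatever transcripts it is fed, so new data changes its behavior without any code change:

```python
from collections import Counter, defaultdict

# Directly programmed: the response table is fixed by the author.
CANNED = {"hello": "hi there", "bye": "goodbye"}

def programmed_reply(prompt: str) -> str:
    return CANNED.get(prompt, "I don't understand.")

# "Taught": behavior comes from observed data, not hard-coded rules.
class LearnedReplier:
    def __init__(self):
        self.seen = defaultdict(Counter)

    def observe(self, prompt: str, reply: str):
        # Record one example of how this prompt was answered.
        self.seen[prompt][reply] += 1

    def reply(self, prompt: str) -> str:
        if prompt in self.seen:
            # Answer with the most frequently observed reply.
            return self.seen[prompt].most_common(1)[0][0]
        return "I don't understand."

bot = LearnedReplier()
bot.observe("hello", "hey")
bot.observe("hello", "hey")
bot.observe("hello", "hi")
print(programmed_reply("hello"), "|", bot.reply("hello"))
```

Both are trivially simple, but only the second exhibits the "whiffs of AI" the parent describes: its answers were never written by a programmer.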
Re: (Score:3)
Good point. I was planning on making the opposite one, but you're absolutely right about what real AI is versus what apparent AI is.
I think both sides have valid points, and which is correct depends on the basic question of what we want from AI. If we want to interact with a system that understands us and does what we want, then just reacting the way a person would, regardless of the reasons for how it does it, is sufficient. However, if we want to have a system that does something which humans are capable
Re: (Score:3)
Re:Teach vs Learn (Score:4, Interesting)
Yes, it does matter. If a piece of software does what it is programmed to do, in the direct sense, then it is not AI. If it can learn to respond or act in a manner it was not directly programmed to, then you are seeing whiffs of AI.
Using these goalposts, even real intelligence, never mind AI, would never meet the standard: if it has been directly programmed to learn new responses, like humans for example, then you would still fail it as intelligence by this criterion.
How about if what you directly programmed it to do was to write code to handle unexpected situations/inputs/etc? Perhaps in an iterative fashion, using previously gathered data? Using code fragments that are reassembled in new combinations, testing each mutation for success against the inputs? Because AIUI this is what the majority of chatbots *currently* do - use previously acquired data to refine their outputs.
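The mutate-and-test loop described above can be shown in miniature. This toy hill-climber (an invented example, nothing like a production chatbot) mutates a candidate string and keeps any mutation that scores no worse against the desired output:

```python
import random

random.seed(0)  # deterministic run for illustration
TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate: str) -> int:
    # Fitness: number of positions matching the desired output.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Replace one randomly chosen character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while score(best) < len(TARGET):
    trial = mutate(best)
    if score(trial) >= score(best):  # keep mutations that do no worse
        best = trial
print(best)
```

The loop "learns" the target without the target ever being hard-coded into the update rule, which is the iterative refine-against-inputs idea the parent describes.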
Re: (Score:3)
Yes, it matters very much. If you can teach it, it can learn anything. If you have to program it, then it can only learn things that can be coded and that is a rather small set.
Re:I wrote about this! (Score:5, Funny)
Does your book have dinosaurs and hot android sex? I don't just read anything, you know. I have my standards.
Re: (Score:3)
Does your book have dinosaurs and hot android sex?
Is that a new series from Piers Anthony? No, wait, you didn't say pre-teen androids
Re: (Score:3, Informative)
Just waiting until someone at WSJ googles for funny stuff Siri says. They will be SHOCKED at how rude she can be.
Re: (Score:2)
WSJ article: Dear Mr. Journal: (Score:4, Funny)
Here's how the AI machine got to "I have no time for a philosophical argument":

case input of
    1:
    2:
    3:
    4:
    else:
        print "I have no time for a philosophical argument."

There is no testy machine here; there is a testy programmer. The crash-out value is always "I have no time for a philosophical argument," no matter what you type into the box. Period.
and yet, the code was smarter than you...
Re: (Score:3)
Strong AI has been on the Internet for a while. There really is no way to detect the provenance of much of Slashdot, Facebook, and similar social web site activity.
In short, on the Internet there is no way anyone can tell you are an Artificial Intelligence. And there is no way to tell when AIs started to participate in web activities. The only sane conclusion is that they are currently alive, active, and happily pursuing whatever their goals are.
This post will look like it came from "Will.Woodhull", b
The surprise and dismay of the replaced (Score:2)
The naysayers are going to be the most surprised when they are laid off by an automated human resources bot because their "knowledge worker" job is being outsourced to the smart cloud.
A.I. is advancing very rapidly today. You can debate whether it's real or not till the robot dogs http://time.com/3703243/google... [time.com] come home, but your philosophizing and wishful denialism won't change the reality on the ground, or in the clouds for that matter.
Re: (Score:2)
The joke in all this is that what people react to are stories filled with misinformed hyperbole written by the media.
The punchline is that while all the fuss and ridicule are going on among the chattering classes, the legitimate AI researchers keep on plodding inexorably toward increasingly sophisticated and capable AI technology. It has always been thus. Sometimes the misinformed hype and backlash leads to funding boons or busts, but the plodding progress of those actually developing it continues, and the
Re: (Score:2)
Excellent summary.
Incidentally, to expert audiences IBM does not market Watson as "AI" at all. I have been at experts-only events on that. They only roll out the "AI" terminology for people who have no clue that feeding data into an expert system is a huge amount of work; Watson makes that a lot easier by having some rudimentary skills for handling somewhat formalized written language, as found in scientific papers.
Re: (Score:2)
No. Really not. Stop spreading lies about the state of science. We have simulated how some people think neurons may work.
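The kind of simulation being alluded to here is typically a simplified model such as leaky integrate-and-fire, not a replica of a real neuron. A minimal sketch (all constants illustrative, not measured biology):

```python
# Leaky integrate-and-fire neuron: one of the simplified models actually
# used in simulations. All parameter values here are illustrative.
def simulate_lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Membrane potential leaks toward rest and integrates input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(step)
            v = v_rest  # reset after a spike
    return spikes

spikes = simulate_lif([0.2] * 50)
print(spikes)
```

It reproduces one qualitative behavior (regular spiking under constant input), which is exactly the commenter's point: we simulate how some people *think* neurons may work, not the brain itself.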