AI Supercomputing

When Will My Computer Understand Me? (143 comments)

Posted by timothy
from the it-already-does dept.
aarondubrow writes "For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software, with mixed results. Enabled by supercomputers at the Texas Advanced Computing Center, University of Texas researchers are using new methods to more accurately represent language so computers can interpret it. Recently, they were awarded a grant from DARPA to combine distributional representation of word meanings with Markov logic networks to better capture the human understanding of language."
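The "distributional representation of word meanings" mentioned in the summary is, at its core, the idea that a word is characterized by the contexts it occurs in. A minimal sketch of that idea (the toy corpus and window size here are invented for illustration; real systems use huge corpora and dimensionality reduction):

```python
from collections import Counter
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_vector(word, window=2):
    # Count words appearing within `window` positions of each occurrence.
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" occur in similar contexts, so their vectors end up
# closer to each other than either is to "mat".
print(cosine(context_vector("cat"), context_vector("dog")))
```

This captures similarity of usage, not meaning per se, which is roughly the limitation the later comments in this thread argue about.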
This discussion has been archived. No new comments can be posted.

  • Why? (Score:1, Offtopic)

    by AG the other (1169501)

    Should a computer understand us when we can't understand each other?

    • by Stumbles (602007)
      Exactly.

      It would be like a monkey fucking a football.

    • by riverat1 (1048260)

      I was going to say, your computer trying to understand you is probably like a man trying to understand a woman. Not likely to happen any time soon.

      • by fyngyrz (762201)

        No, this is just about turning a set of sonic symbols into the equivalent text symbol(s). It's not about understanding in the sense you mean. It's not AI. It's a multi-d form of pattern recognition with contextual cues.

        There's no AI yet. First of all, we don't know what I is. If it comes soon, it'll be an accident. Which is perfectly reasonable in terms of "could happen", but probably not likely.

        • by riverat1 (1048260)

          Of course I know that, I was going for funny. I think it'll take getting computers a lot closer to the complexity of the human brain (and maybe something totally different than the current digital computers) before AI really starts working.

        • First of all, we don't know what I is.

          Absolutely. I've not seen anything at all that I would consider intelligent come from a computer and yet some people act like Watson is almost a fully fledged AI.

          Like you say, maybe we'll accidentally create one by meshing enough of these supposedly intelligent systems together but I doubt it.

          I've always thought that we are missing something fundamental from Physics that can account for why I actually feel alive. This idea that consciousness just emerges as a result of complex interactions seems simp

    • by Anonymous Coward

      Should a computer understand us when we can't understand each other?

      Why should a computer understand us when most of us don't understand a computer?

    • Saw this article and the one about PRISM, thought for a moment that it said:
      "When Will My Government Understand Me?"

      And no, Offtopic is not what this is.

      • by idunham (2852899)

        Argh. "this" == OP ("Why should a computer understand me when we can't understand each other?")

  • Maybe.. (Score:5, Insightful)

    by houbou (1097327) on Friday June 07, 2013 @07:29PM (#43942001) Journal
    Instead of trying to build computers that can understand us, we should be building computers that can learn based on stimuli. If a computer can somehow see and hear, at the very least, it could capture this information and then, over time, develop algorithms to make sense of these things. You know.. the code it would generate could then be used ... Anyways, sounds crazy, but, to me, it makes more sense that way. After all, we didn't just 'communicate' instantly, we learned over time.
    • by Dr. Tom (23206)

      Where do you think the probabilities for the Markov nets come from? They are learned from examples.
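For the curious, "learned from examples" here mostly means counting: a maximum-likelihood estimate of a first-order transition table. A sketch (the toy training sequences are invented for illustration):

```python
from collections import defaultdict

# Toy training sequences; real systems learn from large corpora.
sequences = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["the", "cat", "ran"],
]

# Count observed transitions between adjacent words.
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1

# Normalize counts into maximum-likelihood transition probabilities.
probs = {
    prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for prev, nxts in counts.items()
}

print(probs["the"])  # P(cat|the) = 2/3, P(dog|the) = 1/3
print(probs["cat"])  # P(sat|cat) = 1/2, P(ran|cat) = 1/2
```

Markov logic networks learn weights over logical formulas rather than a bare transition table, but the "probabilities come from data" principle is the same.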

    • by stms (1132653)

      That is exactly what neural networks are attempting to do. It's just that the first thing we're teaching these neural networks to understand is us, specifically language/writing. Which, when you think about it, is the most logical place to start. Humans have already been organizing information into writing for millennia, and computers are already hooked up to the biggest archive in history (the internet). There's a lot of useful information for computers to start learning with and it's probably the easiest way for us

      • by houbou (1097327)
        I think it should be more generic.. instead of trying to understand us, it should try to just learn by experience.. At first, it's mostly about what they can see, hear, obviously 'read' although that is where the internet comes in with the vast amount of data. But in the end, communication isn't just communication, it's an experience. Maybe that's the way to do it for machines. Building something which can gain from experience or should I say, let it experience events and things and let it learn and diss
        • Again this is what a hierarchical hidden markov model does, it's the closest simulation of our own neuronal network. It learns by experience just like we do, just at a much faster pace.
        • Like this? [youtube.com]
        • by stms (1132653)

          Sorry, I didn't make this very clear. What I was saying was that neural networks are designed to learn just like humans, so they learn just as generally as humans. Language is just the most logical place to start teaching robots, for numerous reasons.

      • by Stumbles (602007)
        There are lots of people with neural networks and about all they have learned in life is how to wipe their ass. But I guess it would be OK for a computer to learn that task. It will come in handy when I'm old and wearing adult diapers.
        • by ultranova (717540)

          There are lots of people with neural networks and about all they have learned in life is how to wipe their ass.

          Which, all joking aside, is actually a computationally difficult task - you have to simultaneously maintain balance, map coordinates from skin surface to joint positions, control pressure, recognize paper tearing, and recognize when the task is done or, alternatively, get more paper (which requires visual object recognition, mapping from retina to 3D space to joint positions, rudimentary understand

      • Re:Maybe.. (Score:4, Interesting)

        by iggymanz (596061) on Friday June 07, 2013 @10:31PM (#43943229)

        I've watched the AI folk fart around with those things for over 25 years; they've nothing to show.

        Even my preferred hobby of symbolic AI has gotten mostly nowhere in the last 30 years.

        Let's just make certain animals smarter and call it a day. What could go wrong?

        • In 25 years, transistors have gotten around 100,000 times smaller. In 1988, the fastest computer was the Cray-2. It had 32 MB (!) and could achieve 250 MFLOPS (!). The Tianhe-2 just exceeded 30 PFLOPS. That's 120 million times faster than the Cray-2. I think the available computational resources will make a difference at some point.
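Taking the parent's own figures at face value, the ratio checks out:

```python
cray2 = 250e6      # 250 MFLOPS, the parent's figure for the Cray-2
tianhe2 = 30e15    # 30 PFLOPS for the Tianhe-2
print(tianhe2 / cray2)  # 120,000,000 - i.e. 120 million times faster
```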
          • by iggymanz (596061)

            no real breakthroughs other than that extra power gives us bad realtime spell checking and autocomplete.

            1988's fastest would be the Cray Y-MP, with 333 MFLOPS, 512MB RAM and also a 4GB solid state disk for fast near-line storage.

            So it was something like 1/30 the floating-point power of the machine I'm sitting at now, with 1/32 the RAM and 1/25 the SSD.....hey, not too shabby for 25 years ago.

    • Even if they did understand us at some level, would a computer care? I'm seriously asking this question, because the closest thing to a human brain is that of a monkey or ape, yet we act and interact with the world in completely different ways. Even our desires and expectations are different. In fact, compared to a computer with advanced AI, we would have better luck trying to talk with dolphins. Whatever becomes of AI, it's not going to be HAL 9000. I'm pretty confident of that.

    • Instead of trying to build computers that can understand us, we should be building computers that can learn based on stimuli.

      Then the computers will learn not to understand us, because the task is not possible/pointless. How does that help?

    • by Livius (318358)

      You mean the way human children learn language? Not enough buzzwords.

  • by alen (225700) on Friday June 07, 2013 @07:30PM (#43942009)

    It was on Star Trek only because tv and movies are dialogue driven media. But in reality voice limits input

    Take the Siri sports example
    Ask for your team scores
    Get scores
    Open app for detailed sports news

    Or just open the app and get the scores and news in one step. Same with any other data. Modern GUIs can present a lot more data faster than using voice to ask for the data

    • by fuzzyfuzzyfungus (1223518) on Friday June 07, 2013 @07:38PM (#43942067) Journal

      But let's say, um, hypothetically and all, that a... ah... friend happened to have recordings of a few hundred million people's phone calls and needed a giant computer to be able to interpret them....

      • Hey, smart pants, I want you to understand two things: it's an absolutely necessary tool to fight terrorism, and it didn't happen, so just forget about it.

        On a different note, we are going to severely punish whoever leaked that PowerPoint presentation -- which for him/her is highly classified, but for you (once again) doesn't actually exist.
    • by R3d M3rcury (871886) on Friday June 07, 2013 @07:53PM (#43942191) Journal

      Of course, that depends on what's going on.

      (While wearing my bluetooth headset and working on my car)
      "Siri, How'd the Patriots do?"
      "They beat the Jets 52-10."
      "Woohoo!"

      Or stop working on my car, dig for my cellphone and either launch an app for sports scores (which I have to have on my phone) or launch Safari and search (ie, type) "Patriots Jets" and hope that Google is clever enough to figure out what I want and will put it on the search results.

      I agree that if I want to know the details of the game--number of butt fumbles, interceptions, and what-not--I'm going for the App. But just to get quick answers, voice is far more convenient.

    • Totally depends what you're doing. I can tell Siri "Remind me to call my mom when I get home", and she does it. If I were to input this without voice, it would require me to open up menus to the reminder app, tell the system who I'd like to call, that I'd like a location-based reminder, and what that location is (though I'm not sure iOS can do this without Siri). Even if there were a macro for it, it wouldn't be any faster than asking Siri outright by voice.

      There are absolutely things that are easier to do by hand, but voice certainly has advantages.

    • by swillden (191260) <shawn-ds@willden.org> on Friday June 07, 2013 @11:14PM (#43943471) Homepage Journal

      But in reality voice limits input

      Only if you have to talk to it like you're giving input to a computer.

      Imagine instead that you're talking to a person, and not just any person, but a person who has the world's knowledge at his fingertips and knows you as well as a highly competent personal assistant. Rather than asking for your team scores, you'd say "Giants?" and you'd get back the most interesting points (to you) about the game. Follow that with "anything else?" and you'd get a rundown on the rest of the sports, focusing on the parts that most interest you.

      Voice input with contextual awareness, understanding of the world, and personalization will blow away anything else in terms of input speed, accuracy and effectiveness.

      Modern GUIs can present a lot more data faster than using voice to ask for the data

      You're conflating two issues here. One is input, the other is output. Nothing is likely to ever be as efficient as voice for input. I'm a pretty fast typist and not a particularly fast speaker, but I talk a lot faster than I type, even on a nice full-sized keyboard. Output is a different issue. Text and imagery has much higher information bandwidth than voice. However, you can't always look at a screen, so being able to use voice output at those times is still very valuable.

      Even now, I find my phone's voice input features to be extremely useful. Earlier today I was a little late picking up my son from karate. While driving, I told my phone "call origin martial arts". Now, I don't have an address book entry for Origin, in fact I've never called them before. But my phone googled it, determining that the intended "Origin Martial Arts" is the one near my home, and dialed the phone number for me. That's just the most recent example, but I use voice queries with my phone a half-dozen times per day because it's faster and easier than typing or because I'm doing something that doesn't permit me to manipulate the phone a lot.

      Voice is the ultimate input mechanism for most humans. Right now it's pretty good (especially if you use Google's version of it; Siri is kind of lame), and it's going to get much, much better.

      • by ultranova (717540)

        Voice input with contextual awareness, understanding of the world, and personalization will blow away anything else in terms of input speed, accuracy and effectiveness.

        Temporal neural copy? The computer takes a snapshot of your mind, then evolves it in time while feeding input from the game, and when it's done it gets folded back into your neurons, thus effectively having you perform two things at the same time. Or 20. Or read every page of Wikipedia at the same time, for that matter. And the real fun beco

    • The problem has nothing to do with voice. Even typing in free-form questions or even worse, trying to tell your computer to do something with just an English (or other natural language) command is still way off.

    • by master_p (608214)

      It wouldn't work like that. You'd simply ask for

              latest news and scores for team X

      and the computer would fetch the info you want.

  • by Joe_Dragon (2206452) on Friday June 07, 2013 @07:35PM (#43942045)

    voice recognition will need to be a lot better

    • by Gertlex (722812)

      Considering how bad a lot of customer service phone bots are at understanding the word, "yes"... this!

    • by peragrin (659227)

      My favorite is saying something properly and then coughing, gagging, etc and see what pops up in response. sometimes it can't figure it out but sometimes a fart will dial your work for you.

  • by Anonymous Coward

    Instead of translating a human natural language to an interpretation in binary space, why not construct a conlang that sits in the middle? No expressions with double interpretations as in natural language, but also no command-line sentences that mimic for-loops and the like.

    Take the best of both worlds.

  • by msobkow (48369) on Friday June 07, 2013 @07:54PM (#43942203) Homepage Journal

    ... as soon as men understand women.

    • This old chestnut? Really? My dad used to peddle this bullshit to me when I was kid, and I didn't buy it then either.

      I understand my mother, my wife, my daughter, my female coworkers and friends as well as I understand all male analogues throughout humanity. Those men and women who are somewhat limited in their capacity to understand people shouldn't a) project those limits onto other men and women and b) perpetuate the bullshit that it's some inherent insurmountable gap between monolithic halves of human
      • by narcc (412956)

        This old chestnut? Really? My dad used to peddle this bullshit to me when I was kid, and I didn't buy it then either.

        How long has that joke been going over your head?

        Gender is not a monolith, and treating it as such leads to discriminatory indictments lobbed carelessly in both directions (I'm looking at you, feminists).

        Oh, you're one of those people. Why am I not surprised?

        • Oh wow, nothing but vague dismissal and no substance. Good job.
          • by Anonymous Coward

            There's no substance in the GP's, but you haven't done any better in your original post. All you've done is express your opinion.

      • by msobkow (48369)

        I doubt very much that you do.

        People don't understand other people's thoughts. At best they have a mental map that roughly corresponds to the gist of what they're saying and which triggers a patterned response thought in their heads.

        While what I said was very tongue-in-cheek, it's also true. Even couples who love each other spend a large part of their time essentially shrugging their shoulders and thinking "Whatever" while going along with the situation or demands in order to avoid an unnecessary fig

        • by femtobyte (710429)

          I doubt very much that you do.

          What does the general lack of understanding-the-other have to do specifically with women? The grandparent poster claimed to understand the women in his life as well as their "male analogues," not to have any superhuman telepathic ability. Yes, understanding other people is hard --- but not on account of their particular genitalia.

          • by msobkow (48369)

            Ever hear of having a sense of humour? Must everything be dry and boring for the pedantic?

        • by ultranova (717540)

          People are not logical in their communications. They're fragmented and riddled with assumptions about culture, phrasing, and slang.

          Which is very logical, as it allows for shorter [wikipedia.org] messages and thus more efficient communication. Indeed, it's impossible to communicate without making any kind of assumptions about the receiver.

          Getting a voice recognition system to deal with accents is far from trivial, but even that is trivial compared to getting a system to grasp concepts from around the world.

          True. It's prob
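Shared assumptions really do buy shorter messages in the information-theoretic sense: if sender and receiver share a probability model, the expected code length can drop to that model's entropy, whereas without shared assumptions every symbol costs log2(N) bits. A sketch with an invented four-word distribution:

```python
import math

# Invented word distribution, assumed shared by sender and receiver.
probs = {"the": 0.5, "cat": 0.25, "sat": 0.125, "mat": 0.125}

# With the shared model, an optimal code averages the Shannon entropy.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Without shared assumptions, every word costs log2(N) bits.
uniform = math.log2(len(probs))

print(entropy, uniform)  # 1.75 vs 2.0 bits per word
```

The more the model assumes about the receiver (culture, phrasing, slang), the further below the uniform cost the messages can compress, which is the commenter's point.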

    • by fabio67 (1372097)
      if I want someone to understand me, I buy a dog
    • by houghi (78078)

      Although obviously funny, there is a lot of truth in it.

      Language is not the most straightforward way of communicating. Then there are also interpretations and expectations. e.g. "Does my ass look fat in this?" requires an enormous amount of knowledge about many things.
      First there is gender: is the person male or female? Then there is the knowledge of whether you yourself are male or female.

      Next there is the relationship and what you want from that relationship in the short and long term. So to understand what you must a

    • by Krneki (1192201)

      Meh, I have more faith in a computer. At least once they understand us we can finally get a Google translator for woman -> man.

  • by Colonel Korn (1258968) on Friday June 07, 2013 @07:56PM (#43942209)

    I'm struck by how much more accurate and responsive Dragon Naturally Speaking was in 1999 on my Pentium 2 than is Siri on my iPhone 5 and Apple's cloud servers today. Maybe it's a microphone problem, but in that case why was the $4.99 tiny microphone from Radioshack in 1999 better than the microphone in my iPhone 5 today?

    • I use Dragon 10 (same era?), and am continually amazed by how accurately it transcribes my voice (Midlands English, of all dialects!). I use it on a regular basis to dictate documents and to voice-write* recorded audio.

      *also known as "parroting", this is simply using DNS or other speech recognition software trained to your voice, a decent mike for recording (the best ones are the headset ones that settle the microphone close to the mouth) and headphones so you can hear the original audio. You repeat the aud

  • by Kjella (173770) on Friday June 07, 2013 @08:01PM (#43942251) Homepage

    Other people don't understand WTF you're talking about either, they're just better at faking it.

  • by mugnyte (203225) on Friday June 07, 2013 @08:03PM (#43942279) Journal

    Each time I've researched NLP solutions, the full sensory experience is ultimately found to play a role in full context and meaning. This begins in a very tight locale and expands outward, or hops around locations/times as part of context.

    Instead, when most solutions attempt to pick a "general corpus" of a language, they pick such a general version of the language that contextual associations are difficult to follow for any conversation. Even in the most ubiquitous vocabulary, such as national broadcast news, there are assumptions that point all the way back to simplistic models of our experiences via sight/hearing, taste/smell, touch/movement and planning/expectation. Even in our best attempts, nothing such as metaphor or allusion is followed well, and only the most robotic, formal language is understood. This interaction is certainly nothing "natural".

    I don't believe NLP problems will be (as easily) solved until we begin to solve the "general stimulus" for input, storage, searching and recall across the senses that humans have - their true "natural" habitat that language is describing. So that when apple goes from "round" to "red" to "about 4in" to "computer" to "beatles" to "not yet in season here" to "sometimes bitter" to "my favorite of grandma's pies", etc - and onward, like potential quantum states until the rest of the conversation collapses most of them - we may be able to get a computer to really understand natural language. This isn't possible in just the manipulation of pieces of text and pointers.
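The sense-collapse described above can be caricatured as scoring each candidate sense of a word against the conversation so far. A toy sketch (the sense inventory and association sets are invented for illustration; real disambiguation is far harder):

```python
# Hypothetical sense inventory: each sense of "apple" carries a set of
# associated words, standing in for the experiential knowledge the
# parent post argues is needed.
senses = {
    "fruit":    {"round", "red", "bitter", "pie", "season", "grandma"},
    "computer": {"mac", "keyboard", "screen", "software"},
    "band":     {"beatles", "records", "label"},
}

def rank_senses(context_words):
    # Score each sense by overlap with words seen in the conversation,
    # highest score first.
    scores = {s: len(assoc & context_words) for s, assoc in senses.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_senses({"grandma", "baked", "a", "pie"}))
# "fruit" outranks the others once the conversation mentions pie and grandma.
```

The parent's argument is precisely that the association sets themselves cannot be bootstrapped from text alone; they come from sensory experience.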

    • by mugnyte (203225)

      Having re-read the article, this "dictionary without a dictionary" is a frozen-in-time corpus, which won't be able to *converse* with people because it's built from written text, which is dramatically different. Now, if this body of statistical word association was tied into just the language of a single town, and everyone's spoken conversations in that town for the past 10 years, then it might be easier for those particular people to use this tool, but still far from using "natural" language.

    • by swillden (191260)
      Do you actually need general stimulus input? I don't think so. I think what you describe can also be achieved by providing the system with a general knowledge map so that it understands all of those things and the relationships between them. Even better if you can then personalize the knowledge map, strengthening and weakening nodes and vertices based on what the human knows and doesn't know.
      • by swillden (191260)

        strengthening and weakening nodes and vertices

        Er, I meant edges and vertices, of course.

      • by mugnyte (203225)

        I think we're talking about two different things. NLP breaks down into the "hard AI" problem for any sufficiently complex conversation.
        See my other response above in this thread.

  • by Culture20 (968837) on Friday June 07, 2013 @08:05PM (#43942291)
    When computer scientist guys understand what it means to understand. Go read some epistemology books. You'll understand.
    • by Anonymous Coward

      I read some, and now I really don't understand ... but at least now I know dozens of abstruse ways to say so.

  • by Anonymous Coward

    When will my computer understand me?

    I am sorry, but I do not know when Mike's uterus will unhand you.

  • in the likeness of a man's mind. -Orange Catholic Bible

  • by Okian Warrior (537106) on Friday June 07, 2013 @08:17PM (#43942371) Homepage Journal

    When will your computer understand you? Not for a while.

    Speech recognition is a part of AI, to the extent that the computer understands what you're saying. Sure, programs like Siri or ELIZA can put words together, but only so long as we can anticipate the form and context of the question. Siri only knows about the things it has been programmed to do, which is (unfortunately) not nearly as much as we expect an intelligence to do.

    AI has languished for about 60 years now, mostly because it is not a science. There is no formal definition of intelligence, and no roadmap for what to study. As a result, the field studies everything-and-the-kitchen-sink and says: "this is AI!".

    Contrast with, for example, Complexity [wikipedia.org]: a straightforward definition drives a rich field of study, producing many interesting results.

    In this particular misguided example, they are using Markov logic networks, even though the human brain does not make the Markov assumption [wikipedia.org](*). We have no definition for intelligence, and the model they work on is demonstrably different from the only real-world example we know of. This may be interesting mathematical research, but it isn't about AI.

    Not to worry - most AI research isn't really related to AI.

    This is why your computer doesn't understand you, and won't for quite some time.

    (*) Check out Priming [wikipedia.org] and note that psychologists have measured priming effects three days (!) after the initial stimulus.
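For readers unfamiliar with the term: the Markov assumption says the next state depends only on the current one, so longer history is discarded by construction, which is exactly what long-range priming effects contradict. A toy sketch (the transition table is invented):

```python
# Invented first-order transition table: next-word distribution given
# only the current word.
transitions = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
}

def next_word_dist(history):
    # The Markov assumption in one line: only the last word matters.
    return transitions[history[-1]]

# Two very different histories ending in the same word yield identical
# predictions; a priming effect from days-old context cannot be modeled.
assert next_word_dist(["I", "saw", "the"]) == next_word_dist(["feed", "the"])
print(next_word_dist(["feed", "the"]))
```

Markov logic networks are richer than this plain chain (they attach weights to first-order logic formulas), but the conditional-independence assumption the commenter objects to is the same.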

    • AI has languished for about 60 years now, mostly because it is not a science. There is no formal definition of intelligence, and no roadmap for what to study. As a result, the field studies everything-and-the-kitchen-sink and says: "this is AI!".

      You're assuming that AI is supposed to mean something like HAL 9000. The overwhelming majority of AI researchers are just trying to figure out good ways to solve much smaller problems. A tiny minority are trying to model some behavioral or cognitive phenomenon. Only cranks and con artists are trying to make something like HAL 9000.

      Some things AI researchers have been doing are being adopted for commerce and industry. And that appears to be accelerating.

      • by narcc (412956) on Friday June 07, 2013 @08:54PM (#43942641) Journal

        Yes, computationalism is long dead. Now, can we stop using the term AI? Keeping the term around serves only to further confuse the general public and decision-makers both public and private. I'd go as far as to say that the continued misuse of the term is precisely what has kept the cranks and con artists in business!

    • by Baldrson (78598) *
      Okian Warrior asserts: "There is no formal definition of intelligence, and no roadmap for what to study"

      Yes there is. It's defined by a field called "Universal Artificial Intelligence [hutter1.net]" and the roadmap says what to study.

  • by hcs_$reboot (1536101) on Friday June 07, 2013 @08:19PM (#43942387)
    Is the current "lack" of power of computers an excuse for not being able to make a "clever" computer? In other words, is the main problem computer power, or is it the design of the algorithms that run on the computer (power vs. method)? Hard to say until someone realizes that clever computer, but the recent "history" of electronic devices leads me to think the problem is the method (algorithm).
  • When I say nothing is bothering me, it means something is actually bothering me.

  • I'll repeat what I said in a related thread:

    "Larry Page's advisor at Stanford, Terry Winograd, wrote a book with Fernando Flores in 1987 titled Understanding Computers and Cognition.
    It is a profound critique of the mental representation approach, based on biological and philosophical considerations. A must read for anybody interested in the AI field."

  • Language is not precise and computers like precision. The same words can mean entirely different things depending on context, where the speaker is from, how they say the words, etc. Furthermore, language evolves at a very rapid rate, new words are created on a daily basis. We're used to language and the vagueness that it implies but that translates very poorly to computer logic and it will never be perfect because it relies on variables that the computer will never know.
  • Currently, there are not even any convincing theories of how strong AI could be implemented. That indicates this is >50 years in the future, but it could also be >1000 years or never. There may be fundamental physical limits at play here. All the people promising this based on NLP databases, Markov models, etc. are lying and usually know it.

    • Currently, there are not even any convincing theories how strong AI could be implemented.

      And there never will be until someone comes up with a definition of "strong AI" that we can all (or at least most of us) agree on.

      • by gweihir (88907)

        That is not going to happen either. The term "strong AI" was coined to distinguish it from all the scientists who marketed their decidedly non-AI things as "AI" in order to produce grants. However, in honest company, "strong AI" simply refers to "AI, but not side-results from AI research". For example, IBM Watson is not AI, but a side-result. To experts, IBM does not claim Watson is AI, only a result from AI research. To the general public, this may sound a bit different.

  • I theorized about AI back in 2002. I figure natural language is straightforward if you describe it in a 3D imagination. My old page [goodnewsjim.com] I'm not really tempted to get into AI as a solo project myself, though, as it would be over a lifetime of coding, and there is no profit in it until you have it completed. What point is there in being intelligent, hard working and broke?
  • Switch to an easier to understand, universal language (both easier for computers and humans). Simple solution. Have everyone learn it. You're trying to put a rectangular prism into a round hole. Until we have a rectangular hole, use a cylinder...
  • The real problem with language recognition is context. When we talk, our spoken words contain half of what we mean. The rest depends on external parameters, from our body language, to the time and place at the moment.

    So, unless a computer can understand the same context, there is not gonna be serious language recognition, as we see it in sci fi.

  • Ziggy says there is an 85.45% chance of that being true
  • ...how well do you understand your computer? A relationship is a two-way street, you know.

  • and thank God for that.

  • I can't imagine how this technique could go beyond a graph of word associations to an understanding of their meaning. The graph seems to be entirely self-referential.

    Humans do it the other way - if they are asked about the relatedness of words, they infer it from the meanings they attribute to the words.
