Why Hal Will Never Exist
aengblom writes "Researchers at the University of Maryland's
Human-Computer Interaction Lab are suggesting what many of us have already
guessed. The future of human-computer interaction won't be through speech--it
will remain visual (they explain why). The
Washington Post is running a story
about the researchers and how they think we will get computers to do what we want. The article is a fascinating read and is joined by a great
video clip (RealPlayer or QuickTime)
of the researchers and their methods. The Post is holding an online
discussion with the researchers tomorrow. Also check out PhotoMesa,
the lab's software program that helps you browse images on a computer. (Throw a directory
with 1,000 high-res files at this thing and you can justify that pricey new
computer you bought)."
Meet the machines half-way... (Score:5, Insightful)
Of course, these are programming languages, but I don't see why some highly structured, relatively unambiguous language couldn't be constructed to talk to computers.
The success of the Palm Pilot can be traced, in my view, to the fact that it didn't strive for full handwriting recognition (like, say, a Newton). Instead, it required the human to meet it half-way. You get decent accuracy and speed for a small investment in learning.
We accept these compromises in many of our dealings with computers. I don't understand why people aren't promoting a similar compromise in voice communication.
Re:it's all the same.... (Score:2, Insightful)
and you can't say two things at the same time... (Score:2, Insightful)
Single Modality? (Score:5, Insightful)
The dubious argument about interfering with memory is pretty weak, and I would love to hear a good memory expert in psychology comment on it. Even if it's strictly true, it only applies when one is interrupting some particularly "vocal" activity, like writing or reading. There are plenty of times I'm using the computer when I'd rather speak to it than move my eyes or my hands.
This researcher seems to have latched onto a single modality instead of considering what we use day to day to communicate with each other, a combination of many communication forms.
I know I don't roll my eyes or gesture to ask someone to pass the salt... unless my mouth is full.
problem... (Score:2, Insightful)
i doubt i'll be telling my computer to do that vocally since anybody can italicize a word with their keyboard/mouse faster. telling a computer to fetch data for you (through colloquial SQL queries, if such a thing exists) is what i believe to be one such application of voice commands...
show me all the stocks that rose in price more than 30 percent between January and April
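A "colloquial SQL" front end would ultimately have to bottom out in an ordinary query. Here is a minimal Python sketch of what the quoted request reduces to, using a made-up price table (the tickers, prices, and schema are all invented for illustration):

```python
# Roughly the SQL target of the spoken query:
#   SELECT ticker FROM prices WHERE (apr - jan) / jan > 0.30

# Hypothetical price data: ticker -> (January price, April price)
prices = {
    "ACME": (10.00, 14.50),    # +45%
    "GLOBEX": (50.00, 52.00),  # +4%
    "INITECH": (8.00, 11.00),  # +37.5%
}

def rose_more_than(prices, pct):
    """Return tickers whose price rose more than `pct` percent."""
    return [t for t, (jan, apr) in prices.items()
            if (apr - jan) / jan * 100 > pct]

print(sorted(rose_more_than(prices, 30)))  # ['ACME', 'INITECH']
```

The hard part, of course, is not the query itself but reliably mapping the free-form sentence onto it.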
the problem with human/computer interaction research these days is the way researchers seem to insist on applying new ways of interacting with the computer to work on old applications. example: italicizing a word (old application) through vocal commands (not so common way of interacting with a computer system).
if anything, a computer that's able to understand voice commands should be able to determine whether or not to italicize a word for me because of the way i emphasize my words (through dictation, for example). applications such as italicization of a word are only useful to people when they want to see information (stored through speaking or typing) on a screen. going back to the data query, a computer can either give me the data that i had asked for (through voice commands) on a screen (with optional italicization) or something easier to digest, like the return set being read to me in majel barrett's voice.
peace.
Re:Meet the machines half-way... (Score:3, Insightful)
I think there are some very good applications of speech technology, but it's not going to replace the keyboard and mouse. Speech technology works best when you need to do one thing while directing the computer to do something else. Like handfree mode on cell phones. My guess is that it will find its way into cars before it reaches desktops (if it reaches desktops at all).
Re:Single Modality? (Score:3, Insightful)
Re:Single Modality? (Score:2, Insightful)
I think he's right about graphical sliders and giving weight to search criteria... imagine putting in keywords and then weighting them with a slider from 0-100 and getting instant feedback on how your manipulations affected the search. Very 'analog' in some ways...
Amazing, wish I'd thought of it myself. I'm willing to bet it will be implemented soon, just because it has been talked about now.
any thoughts?
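The slider-weighting idea above is easy to prototype. Here is a rough Python sketch (the documents, keywords, and scoring scheme are all made up) of re-ranking results every time a slider moves:

```python
def weighted_score(doc, weights):
    """Score a document by summing the slider weight (0-100) of each
    keyword it contains; a crude stand-in for a real ranking function."""
    words = set(doc.lower().split())
    return sum(w for kw, w in weights.items() if kw in words)

docs = [
    "speech recognition on the desktop",
    "visual search interfaces with sliders",
    "speech and visual interaction combined",
]

# Current slider positions the user is dragging
weights = {"speech": 80, "visual": 30, "sliders": 10}

# Re-rank on every slider move, giving the instant feedback described above
ranked = sorted(docs, key=lambda d: weighted_score(d, weights), reverse=True)
for d in ranked:
    print(weighted_score(d, weights), d)
```

Hooking this to actual slider widgets is just a matter of re-running the sort in the slider's change handler.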
I don't really agree (Score:5, Insightful)
He is of course right about that. However, if you add AI to the mix, the computer will be able to take initiative and have some level of understanding about what you are saying. Hal was more than just speech recognition, it was more like a very clever secretary.
Say you need to go some place and need a plane ticket and a hotel and directions for getting around. This is the kind of stuff you would let a secretary do for you, and a good one wouldn't bother you with trivialities. You definitely would not want to sit next to him/her and provide detailed directions on where to look, compare prices and so on, because that is the stuff that takes time and the main reason you're delegating the work.
An intelligent computer would have enough to go on given a pretty vague request like "hey, I need to go there and there for conference X, book me a plane and a hotel". Assuming you've worked together for some time, it should have enough information to figure out most of the details (like window or aisle seat, smoking/non-smoking hotel room, price range for hotels, etc.). And it can always ask for additional information, either verbally or non-verbally, depending on where you are and what you are doing. It could actually call you on your cell phone and ask, but it could also send an email or an instant message.
IMHO we are at least decades away from building such systems; all of the basic techniques needed to accomplish this are still immature (although very useful already).
MS is often loathed for unleashing Clippy onto this world, but Clippy was the result of extensive research into usability and human-computer interaction by MS. It was rushed to market and a genuine pain in the ass (mostly because of its lack of intelligence), but the concept of some AI program watching what you are doing, intervening, and offering you useful options is not bad.
Re:Speaking is just plain messy... (Score:2, Insightful)
Wrong (Score:4, Insightful)
> He's convinced our eyes will do better than our voices at helping us control the digital machinery of the 21st century.
It's really very simple. There are two sides to HCI: computer->human and human->computer. Visual stuff is great for computer->human communication, but not for human->computer communication. Or to put it another way, the eye is a higher-bandwidth input port than the ear, but the eye is no use for output. We cannot effectively communicate our needs to a computer by drawing pictures. Simple as this is, it is not widely understood, which is why every so often some twit produces an abortive attempt at a "visual programming language". It's also why purely visual interfaces are fundamentally less powerful than command-line interfaces.
I'm not convinced visual methods always win for computer->human either. Even though our eyes are higher bandwidth than our ears, we are not used to processing purely visual information in a cumulative way. With language, the information content of a message can grow exponentially with its length.
Many people are brainwashed by that crap about a picture being worth a thousand words. Draw me a picture of "misguided".
Finally... (Score:5, Insightful)
Sorry, but I put no stock in this at all, and I'll tell you why (of course, that's why we all get on our soap boxes here). I can't do voice dictation at all. I suck at it. I had IBM's ViaVoice for a while and I couldn't write anything that way.
Does that mean this guy is right? Of course not. Most people in my parents' generation can barely type, because they didn't have to growing up. Now almost every kid and young adult in the U.S. can type quite well. Why? Practice.
My uncle used to use a dictaphone (he was a U.S. senator) to dictate all of his speeches. He had no problem. Why? Practice, of course. He had no problem thinking and talking at the same time. It's just what he was used to. He couldn't type worth a damn.
I don't put much stock in people telling us what the future will bring. Look at all the brilliant people who were telling us that all these dot coms were the future. Poof, they're gone. Look at all the brilliant people that said we'd never cross the oceans, fly, go to the moon. Sorry, but a lot of smart people are wrong, quite often!
This guy is dealing with people who haven't grown up doing voice dictation and are used to typing. The human brain (and I can point to about a million studies to back this up) is quite adaptable. That's one reason why we're here and the Neanderthals aren't. Our brains are amazingly flexible; they can sometimes re-learn tasks that have been lost due to damage. They are especially adaptable in young people. Get a voice interface that children can deal with, and I guarantee you that that generation of kids will grow up speaking to computers. We typists will struggle and fumble, and feel "old" for not being able to pick it up as easily as they do.
But then that's just me on my soapbox. I could be wrong, but so could this guy.
Voice interfaces in movies are just for show (Score:3, Insightful)
There's no real need to speak to your computer, and the only reason they do it in movies and TV shows (ST comes readily to mind) is to allow the viewer to better follow what is going on.
Personally, I'm waiting for the direct computer-brain-visual-nerve interface.
Problems with the article (Score:4, Insightful)
Second, speech is like a command line: it is largely modeless if it is done right. That's the big attraction; that's what most of the posters here are saying: they want to be surfing/gaming/whatever and be able to say "computer, do this" without interrupting what they are doing. In short, they want to use speech as a low-bandwidth auxiliary channel. When I am in my car, I would love to be able to say to my MP3 player "Neo: play Rock-Boston-all" so that I can keep my eyes and most of my attention on the road. However, that is VASTLY different from putting most of my attention on a phone conversation while paying half-assed attention to the car I am tailgating.
Third, speech is a very low bandwidth output compared to other solutions: when I am typing, I have the bandwidth to change case, activate/deactivate bold (in a word processor - pity Mozilla cannot be instructed to insert a <b> on a ctrl-b) or whatever. Trying to do that with speech just wouldn't work because speech doesn't have the "out of band" channels of CTRL, SHIFT etc. Sure, you COULD try to use inflection or non-speech sounds, but then the processing gets to be even worse. (Although it would be fun to hear a Perl programmer speaking a program using Victor Borge's phonetic punctuation....)
In short, this article makes the same mistake most articles on user interaction make - it assumes there is some uber-interface, and all other interfaces are inferior. Wrong - speech where speech works, 2D where 2D works, 3D where 3D works, haptic where haptic works, etc. I wouldn't want to drive my car with a joystick, and I wouldn't want to code with a steering wheel.
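The "Neo: play Rock-Boston-all" style of command above is exactly the kind of thing a rigid, meet-the-machine-half-way grammar handles well. A minimal Python sketch (the command format and vocabulary are invented for illustration, and a real system would sit behind a speech recognizer):

```python
import re

# A deliberately rigid grammar:  <name>: play <genre>-<artist>-<selection>
COMMAND = re.compile(r"^Neo: play (\w+)-(\w+)-(\w+)$")

def parse(utterance):
    """Parse one spoken command; return a structured action or None."""
    m = COMMAND.match(utterance)
    if not m:
        return None
    genre, artist, selection = m.groups()
    return {"action": "play", "genre": genre,
            "artist": artist, "selection": selection}

print(parse("Neo: play Rock-Boston-all"))
# {'action': 'play', 'genre': 'Rock', 'artist': 'Boston', 'selection': 'all'}
```

Because the grammar is so constrained, the recognizer only has to distinguish a handful of tokens, which is precisely the Palm-Pilot-style trade-off discussed earlier in the thread.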
Playing with Voice Recognition (Score:3, Insightful)
This eventually got kind of annoying, and I pulled it off that system. I don't regret for a second playing with it. It taught me some valuable lessons about the arena of voice recognition.
1. I don't want to talk to my computer. You'd have to try this for a while to see for yourself, but the process is exhausting compared to just typing and clicking on stuff.
2. I never realized how much people tend to slur words used in context, but pronounce them properly by themselves. In the training session where this app learns your voice, I found that I say "Open File" differently when reading it than when I'm just saying it aloud.
3. Context is critical. For a person to determine the true meaning of words there's all kinds of voice inflection, and body language that needs to be read. I'm not sure I'd want to see a computer that smart!
Personally, I don't see a huge problem with the whole desktop metaphor interacting with a keyboard anyway. It may have a lot to do with those folks that honestly don't wish to use a computer, they just want a machine to think for them. I would think anyone who does tech support might appreciate what I mean here.
Bottom line, the only audio I want my computer to ever deal with is music playing in the background.
Voice-operated pianos, computerphobic executives (Score:5, Insightful)
So why the "voice command" fantasy in the first place?
When the PC revolution was just starting to take off, most people had not learned to type in high school. Typing was considered a skill for secretaries, who, of course, were poorly paid, low in social rank, and referred to as "girls."
For many years, computer technology did not penetrate the higher corporate levels because directly handling machines was considered beneath the dignity of an executive. "I don't have time to learn to use that gear, I have people to do that for me," was the typical attitude. Execs would have their secretaries print out all their email for them, dictate replies, and have their secretaries keyboard them back in.
This changed when the young MBA's started arriving with their computer spreadsheets.
Most people, even wealthy people who can afford chauffeurs, drive their own cars, and most people now operate their own computers... Time to retire the whole "voice interface" concept, except for people with special needs.
Why Hal Will Never Exist (Score:2, Insightful)
Re:When you think about it... (Score:2, Insightful)
Star Trek (Score:2, Insightful)