Technology

Human-Computer Interfaces From 2003 to 2012

Roland Piquepaille writes "My favorite forecaster, Gartner, is back with a new series of predictions about the way we'll interact with our computing devices. Here is the introduction. 'Human-computer interfaces will rapidly improve during the next decade. The wide availability of cheaper display technologies will be one of the most transformational events in the IT industry.' Not exactly a scoop, is it? But wait, here is a real prediction. 'Computer screens will become ubiquitous in the everyday environment.' Ready for another prediction? 'Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based.' Check this column for a summary."
  • by Zog The Undeniable ( 632031 ) on Friday December 13, 2002 @02:52PM (#4882422)
    It has the crappiest usability and the highest per-byte costs of any form of communication since Morse code telegraphy, but it's wildly popular. Amazing.
  • by og_sh0x ( 520297 ) on Friday December 13, 2002 @02:54PM (#4882441) Homepage
A technology prediction that predicts the radical changes in human interaction previously predicted won't happen overnight. Non-sensationalist predictions of the future? Wow. The irony would be if there were suddenly a major breakthrough in speech recognition and he turned out to be wrong.
  • by SoVi3t ( 633947 ) on Friday December 13, 2002 @02:54PM (#4882448)
Consider plans for a lot of gaming consoles (Sony is interested in Hive technology, for instance) to become integrated with your household. I can see your entire home being hardwired into a single PC, so you can just go room to room, turn on any TV or monitor, and play whatever games you own, watch any TV shows or movies, or surf the web. I can't see all this still operating with even advanced mouse and keyboard technology.
  • Gartner is useless (Score:4, Insightful)

    by geophile ( 16995 ) <jao@NOspAM.geophile.com> on Friday December 13, 2002 @02:55PM (#4882459) Homepage
    Here, in one sentence, is everything that's wrong with Gartner: ... more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability). ...

    Let's break it down:
    • Mindless extrapolation of the obvious: "... will remain keyboard- and mouse-based."
    • Authoritative sounding numbers pulled out of the air: "... more than 95 percent ... 0.6 probability ..."
    • Sheer idiocy: "... 95 percent (by volume in gigabytes) ..." (If it's a percentage, then why does the unit matter?)

  • by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Friday December 13, 2002 @02:56PM (#4882469) Journal
... and most of that will be generated by the application marking up your "hello, wordprocessor" with all sorts of XML tags, headers, default styles, embedded info, etc. ...

    My prediction: I see, you see, we all see ASCII. Yep, plain text will still be there.

  • Displays (Score:5, Insightful)

    by zapfie ( 560589 ) on Friday December 13, 2002 @02:57PM (#4882476)
'Human-computer interfaces will rapidly improve during the next decade. The wide availability of cheaper display technologies will be one of the most transformational events in the IT industry.' Not exactly a scoop, is it?

More of one than you think. I don't think he's talking about your monitor. In almost all consumer electronic devices, know what the most expensive component usually is? Yup, it's the display. Reduce the price of that, and all of a sudden those consumer devices have a lot more to work with. More screens, better screens, enhanced power, cheaper price, etc. If we can reduce the cost of the display significantly, it can only mean good things for consumer electronics.
  • by MrCode ( 466053 ) on Friday December 13, 2002 @03:02PM (#4882517)

    Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability).


    I think this is fine. All the effort to "perfect" speech and handwriting recognition, while worthwhile from an academic standpoint, is not really necessary.

    I personally can type much faster than I could dictate, and most definitely faster than I can write by hand. I'm not even a "real" touch typist either.

Can anyone imagine dictating a long paper, or, forget it, a complex program? You would go hoarse and/or insane before three pages were done.

    Therefore effort in speech recognition should focus on perfecting the simple command interfaces ("Computer, turn on the kitchen lights") instead of trying to perfect dictation. Speech recognition should be used to enhance keyboard based interfaces, not supplant them. Many times typing is the best way to input the data.
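    A minimal sketch of that command-style approach: match a small, fixed command grammar rather than attempt open-ended dictation. The pattern, rooms, and devices below are invented for illustration, not taken from any real system.

    ```python
    import re

    # A toy fixed-grammar command matcher. Recognizing "turn on the
    # kitchen lights" against a known grammar is a far easier problem
    # than transcribing arbitrary dictation.
    COMMAND = re.compile(
        r"(?:computer,\s*)?turn (on|off) the (\w+) (lights|fan|heater)",
        re.IGNORECASE,
    )

    def handle(utterance: str) -> str:
        """Map a recognized utterance to a device action, or reject it."""
        m = COMMAND.match(utterance.strip())
        if not m:
            return "not a known command"
        state, room, device = m.groups()
        return f"{room} {device} -> {state.upper()}"

    print(handle("Computer, turn on the kitchen lights"))  # kitchen lights -> ON
    print(handle("Please take a memo"))                    # not a known command
    ```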
  • Re:Re push vs pull (Score:5, Insightful)

    by Angst Badger ( 8636 ) on Friday December 13, 2002 @03:08PM (#4882581)
    Their prediction that almost all data will be "push" instead of "pull" sounds way off to me.

    It sounds off because it is. "Push" is one of those stillborn ideas that marketroids insist on resurrecting every few years, like the impending death of the PC, the ascendance of subscription-everything, thin clients, household automation, and so on.
  • Is this true? (Score:2, Insightful)

    by addaon ( 41825 ) <addaon+slashdot.gmail@com> on Friday December 13, 2002 @03:17PM (#4882664)
    more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based

    Is this even true today? I doubt it's true of my own work. I own a digital camera. I don't take many pictures; I'm not very photogenic. I figure I take about 50 pictures a month... let's call that one a day, to be conservative. 1600x1200x8, uncompressed (I use a raw format that sends 8-bit intensity data for each pixel, as each pixel in a digital camera is only one color), comes out to very close to 2MB per image.

    In a given day, I also spend about 8 hours sitting in front of my computer. I type at ~60 words per minute (never said I was fast), coming out to about 160kB/day. Now, I don't use my mouse too much, since it hurts my wrist. But even if it sends 4-byte updates 300 times a second when I'm not moving it at all, that comes out to 35MB a day... hardly a realistic number, but let's run with it.

    So my total keyboard/mouse input is 36MB a day, at an absurd maximum (I do stop for breath occasionally), while my non-keyboard/mouse input is 2MB/day, at a rather absurd minimum. And just with those numbers, I have (slightly) less than 95% of input being keyboard/mouse based.

    I know a lot of people who take more pictures than me. One person taking 10 pictures a day is enough to offset 9 people who take none. A few people use speech recognition... that's relatively high-bandwidth input. And I'm sure at least one person in ten thousand has a digital video camera...

    So, does anyone think this 95% number is true even today?
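    For the curious, here is a minimal Python version of the back-of-envelope arithmetic above. Every figure is the commenter's own assumption (one raw photo a day, 60 wpm for 8 hours, a deliberately absurd mouse rate), not a measurement.

    ```python
    # Camera input: one 1600x1200 raw image per day at 8 bits per pixel.
    camera_bytes = 1600 * 1200 * 1            # ~1.9 MB/day

    # Keyboard input: 60 words/min * ~5 chars/word, 8 hours/day.
    keyboard_bytes = 60 * 5 * 60 * 8          # 144,000 chars ~= 144 kB/day

    # Mouse input, absurd ceiling: 4-byte updates at 300 Hz, all 8 hours.
    mouse_bytes = 4 * 300 * 3600 * 8          # ~34.6 MB/day

    kbm = keyboard_bytes + mouse_bytes
    total = kbm + camera_bytes
    print(f"keyboard/mouse share: {100 * kbm / total:.1f}%")
    # -> ~94.8%, i.e. (slightly) less than 95%, even with the mouse
    #    figure inflated far beyond anything realistic.
    ```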

  • by jmichaelg ( 148257 ) on Friday December 13, 2002 @03:17PM (#4882665) Journal
    "Prediction is very difficult, especially if it's about the future."
    ~Niels Bohr

    Unfortunately, I can't vouch for the quote. John Perry Barlow circulated it a few years back, and when I asked him where he found it, he couldn't remember. So perhaps if Bohr didn't say it, he should have.

  • Re:Re push vs pull (Score:3, Insightful)

    by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Friday December 13, 2002 @03:19PM (#4882677) Journal
    Agreed, except on the "thin client" thingee; not the way that the powers-that-wanna-be had it, where your thin client connects to their server. More like you have one or more servers, and several specialized thin clients around (PDA, PVR, smartPhone, email reader, mp3 jukebox, game box, etc).
  • Re:Displays (Score:3, Insightful)

    by John Whitley ( 6067 ) on Friday December 13, 2002 @03:32PM (#4882770) Homepage
    It torques me off that "human-computer interfaces" is used to mean "displays". Propagation of a bad meme. Don't get me wrong; technology improvement can be useful. But far and away the greatest challenge and opportunity for improvement of the user experience is to improve the design and usability of consumer electronics.

    There are many current and old examples showing that good design can work with here-and-now technology quite well. Bad design will take that 39-cent display with SuperDuper-VGA resolution and turn it into a glossy usability nightmare.

  • by overunderunderdone ( 521462 ) on Friday December 13, 2002 @03:42PM (#4882834)
    Sheer idiocy: "... 95 percent (by volume in gigabytes) ..." (If it's a percentage, then why does the unit matter?)

    I'm pretty sure I agree, but I'm trying to be charitable and come up with a reason why this makes sense... To be fair, he could be saying it that way to make it clear exactly what is being measured. 95% by volume (in bits, bytes, etc.) is different from 95% by time. For instance, it might be that in 2012 we spend 40% of our computer input time using the stylus on our PDA and 20% of our time using voice recognition to talk to our home entertainment system, but still produce 95% of the input by *volume of data* using keyboard and mouse.
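    A toy illustration of that time-versus-volume distinction; the devices and data rates below are made up purely to show how 40% of the time can coexist with ~99% of the gigabytes.

    ```python
    # name: (hours of use per day, input rate in bytes per second)
    devices = {
        "keyboard+mouse": (4.0, 1200.0),  # fast, high-volume input
        "PDA stylus":     (4.0, 5.0),     # slow handwriting
        "voice commands": (2.0, 10.0),    # short spoken commands
    }

    total_hours = sum(h for h, _ in devices.values())
    total_bytes = sum(h * 3600 * r for h, r in devices.values())

    for name, (hours, rate) in devices.items():
        share_time = 100 * hours / total_hours
        share_volume = 100 * (hours * 3600 * rate) / total_bytes
        print(f"{name:14s} time {share_time:5.1f}%   volume {share_volume:5.1f}%")
    # keyboard+mouse: 40% of the time, but ~99% of the data volume.
    ```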
  • by ferreth ( 182847 ) on Friday December 13, 2002 @03:43PM (#4882841) Homepage Journal
    Gartner's words sound like PHB (Pointy Haired Boss) fodder to me.

    Here's a real prediction: integration of devices will result in the replacement of single-use items such as PCs, TVs, cell phones, and PDAs with portable and fixed units that have multiple functions. Consumers will buy "multi-media consoles" capable of several functions that are more flexible and cheaper than individual components. Wireless networking will be the standard communication method between devices, given the cost of adding wiring to a house and the flexibility of putting your console anywhere. As a result, the lines between media types will blur: 'television' as we think of it now will cease to exist with the advent of services that allow you to watch programming at the press of a button rather than on a schedule. You will read, listen to music, and shop, all from the same console. Integration will make the price of a large console about the same as a current mid-range PC, so consumers will buy several units in a family setting. Portable units will allow you to take your shows/music/information with you, and still use all the features your big console has while within network service range.

    Barriers to adoption of such integrated devices will come mostly from the companies that control the current media types, as they will be concerned about losing their current revenue streams. The companies that successfully come up with new payment schemes that are both profitable to the company and palatable to the consumer will end up breaking down those barriers, eventually getting to the point where you can subscribe to any service from your integrated console.
  • by kfg ( 145172 ) on Friday December 13, 2002 @03:51PM (#4882898)

    Ubiquitous.

    Ubiquitous to the point that your very idea of "consumer electronic devices" is obsolete.

    The existence of light-emitting and electrically conductive liquid polymers that air-cure is going to be completely transformative. Both displays and electronic circuits are going to be printable on anything you can feed through your inkjet printer.

    Think about that for a minute. *ANYTHING* you can get to feed through a printer (or anything you can adapt a printer to print on) can be both a display and an electronic circuit providing driver and logic functions.

    Think of all the things that are printed right now. Now think of all of them having "embedded" display and logic functions.

    Like your paper placemat at the diner. And yes, they are even working on being able to provide *power*, self contained, in that paper placemat.

    Your computer monitor will be pretty cool too. It could be nothing more than a sheet of 1/8" Lexan with the pixels printed on it. In fact, that same sheet of 1/8" Lexan could be your entire PDA or tablet PC if your data storage requirements aren't too great. Or it could be a sheet of polyethylene film you can fold up and put in your pocket.

    All that will be pretty cool.

    But it's the paper placemat thing that will be transformative. *Anything* can be a simple logic and display device. *Anything* can be a consumer electronic device.

    Like Junkmail. Ready to get your free AOL *device*?

    KFG
  • by dprice ( 74762 ) <daprice@nOspam.pobox.com> on Friday December 13, 2002 @04:14PM (#4883115) Homepage

    All the 'gigabytes' and 'probability' numbers Gartner puts in their reports are there to give the reports a sense of legitimacy. They make their money off of people in suits at big corporations who spend big bucks on outside consulting. The suits love to have meetings with PowerPoint slides with lots of figures, and they get a lot of those figures from consultants. The public figures Gartner reports are just a summary of a more detailed report that corporations can purchase to fatten their presentations and corporate strategies.

    The 10 year figures probably don't mean much since they are long forgotten by the time one could validate the prediction. Much like weather forecasts, the predictions shift over time as the real date approaches, and those predictions tend to get more accurate as the time to the prediction shrinks.

    What I'd love to see is a post-mortem on the predictions from all these consulting companies like Gartner. I wonder if someone keeps a record of predictions that Gartner made ten or more years ago and compares them to what really happened. My suspicion is that most of their long-term predictions are junk, but they produce the figures since corporations want to pay for those figures.

  • volume? (Score:3, Insightful)

    by Chris Canfield ( 548473 ) <slashdot.chriscanfield@net> on Friday December 13, 2002 @04:19PM (#4883143) Homepage
    Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability).

    Umm... MP3's? Video Recorders? Cameras? -C

  • by sapped ( 208174 ) <mlangenhoven.yahoo@com> on Friday December 13, 2002 @04:28PM (#4883209)
    The other problem with speech based systems, whether it is input or output, is that they simply would not work in today's "cube styled" offices. The noise levels would make everybody insane within weeks.
  • by Nicolai Haehnle ( 609575 ) on Friday December 13, 2002 @05:27PM (#4883541)
    Some interesting ideas, but there are problems.
    I'm in no way a HID designer or anything, but while reading your comment I tried to pay closer attention to the way I read it. Turns out that I (think I) try to get a hierarchical overview of the comment.

    In other words, we don't read text in a linear fashion, letter by letter. Instead we first look at the general outline (paragraphs), then we begin at the first line, picking out individual words. Sometimes we might look back half a line to rescan a word and get the context right.
    Note that there _are_ scrolling displays, often seen in public places (and the <marquee> tag). Now of course those are sub-optimal anyway because of their slow speed, but you should be able to observe how the eye jumps around trying to obtain context on them.
    So I don't think moving the text on the display is a particularly good idea. There are related things one could try, though. For example, text in smaller columns could make it easier to jump to the start of the next line.

    Auto-compressing text to (selected) acronyms is a very interesting idea. It obviously needs to adapt to the user, but it's definitely worth a try.
    The more radical changes like gestures etc... could theoretically yield real efficiency gains, but they're all so difficult to learn...
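    A minimal sketch of that acronym-compression idea. The phrase table here is invented for illustration; as the comment notes, a real system would have to learn it per user.

    ```python
    import re

    # Phrases the reader is assumed to already know, mapped to short forms.
    KNOWN_PHRASES = {
        "human-computer interface": "HCI",
        "speech recognition": "SR",
        "as soon as possible": "ASAP",
    }

    def compress(text: str) -> str:
        """Replace known phrases with their acronyms (case-insensitive)."""
        for phrase, acronym in KNOWN_PHRASES.items():
            text = re.sub(re.escape(phrase), acronym, text, flags=re.IGNORECASE)
        return text

    print(compress("Speech recognition needs a better human-computer interface."))
    # -> "SR needs a better HCI."
    ```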
  • by dfay ( 75405 ) on Friday December 13, 2002 @05:38PM (#4883604)
    I predict that 99% of predictions will be made by other organizations than Gartner by 2013. (0.7 probability.) Gartner's worth will have been reduced to 0.01% (by volume in $1000's of USD) because no one will be interested in their stupid attempts at reading the tea (maybe pot) leaves.
  • by loosenut ( 116184 ) on Friday December 13, 2002 @06:40PM (#4883965) Homepage Journal
    One change that already exists is to have computers read the text out loud. Unfortunately, while most people can speak much faster than they can type (or write), it is doubtful that most people can listen faster than they can read. One reason is that spoken language, with its elided sounds and lack of spelling, is less informationally dense than written language. Thus it is faster for a person to speak than to spell, but slower for him or her to listen than to read. While computer reading is a boon for people with certain disabilities, it does not speed up how fast data flows from computer to person.

    While your conclusion is sound, I disagree with the statement that speech is less informationally dense than the written word. Think about how many bytes are required to represent this text. Then read it out loud and record it at a low bit rate. It requires vastly more information to store as audio.

    Anybody who has ever tried to carry on a conversation over email is aware of the limitations of that medium. You don't have the subtle expressions, the fluctuations in speech timing and volume. THAT is information.
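    A quick sketch of the storage comparison being made here: the same sentence kept as plain text versus recorded as low-bitrate voice audio. The speaking rate and bitrate are assumed round numbers, not measurements.

    ```python
    sentence = "You don't have the subtle expressions or the timing of speech."
    text_bytes = len(sentence.encode("utf-8"))

    words = len(sentence.split())
    speaking_wpm = 150                  # assumed ordinary speaking pace
    seconds_spoken = words * 60 / speaking_wpm
    audio_bitrate = 16_000              # 16 kbit/s, a low "voice" bitrate
    audio_bytes = seconds_spoken * audio_bitrate / 8

    print(f"text : {text_bytes} bytes")
    print(f"audio: {audio_bytes:.0f} bytes ({audio_bytes / text_bytes:.0f}x larger)")
    # The audio copy is two orders of magnitude larger for the same words,
    # which is the "data vs. information" point the replies argue over.
    ```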
  • predictions (Score:3, Insightful)

    by Barbarian ( 9467 ) on Friday December 13, 2002 @06:46PM (#4884016)
    I seem to remember reading predictions in PC Mag in 1995 that by 2005 we'd still have the mouse and keyboard but would mostly communicate by voice, speaking naturally; that computers would be smart, with "smart agents" doing a lot of the work for us; and that we would all log in with fingerprints or retinal scans. We're nowhere near being there by 2005. Computers are a lot faster, but fingerprint and retinal scans don't seem to be that popular, and "smart agents" turned out to be not so smart. Anyway, at least this article gets the "mostly by keyboard and mouse" part right.
  • by dpuu ( 553144 ) on Friday December 13, 2002 @08:27PM (#4884616) Homepage
    There is a difference between data and information. I read your post and draw the opposite conclusion. The fact that the audio file is bigger than the text file (for the same information) suggests that the text file has a much greater information density.
  • by TastySiliconWafers ( 581409 ) on Friday December 13, 2002 @11:59PM (#4885341)
    Reading speed is really not the big problem of human-computer interaction. Vision is a human being's highest bandwidth channel. Ordinary people typically read at least 150 words per minute and it's not unusual for an experienced speed-reader to read over 1000 words per minute. Computer output is not the problem. The low bandwidth channels that need augmentation are the ones necessary for data input (voice, finger movement). Even the best typists can't enter data as fast as they can read it. Voice recognition, assuming we can bring the accuracy up to near 100%, would be a big improvement, but humans still can't talk as fast as they can read.
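    A crude way to put numbers on that input/output asymmetry, converting the words-per-minute rates mentioned above into effective bits per second (assuming the usual 5 characters per word and 8 bits per character; the 80 wpm typing figure is an assumed value, not from the comment).

    ```python
    CHARS_PER_WORD = 5
    BITS_PER_CHAR = 8

    def wpm_to_bps(words_per_minute: float) -> float:
        """Convert a words-per-minute rate to effective bits per second."""
        return words_per_minute * CHARS_PER_WORD * BITS_PER_CHAR / 60

    channels = {
        "reading, ordinary (150 wpm)":      150,
        "reading, speed reader (1000 wpm)": 1000,
        "speaking (~150 wpm)":              150,
        "typing, good typist (~80 wpm)":    80,
    }
    for name, wpm in channels.items():
        print(f"{name:34s} ~{wpm_to_bps(wpm):6.0f} bits/s")
    # Output (reading) comfortably outruns input (typing, speaking),
    # which is why the input channels are the ones needing augmentation.
    ```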
  • by jesterzog ( 189797 ) on Saturday December 14, 2002 @04:19AM (#4886067) Journal

    I was quite disappointed by this article -- I don't know if ZDNet is providing the whole thing, but overall it was very short. It also missed one of the main development areas that I think is important, which is a whole lot more ubiquitous computing.

    The article doesn't really predict anything except the continuation of the same old stuff that's already happening. "Computer screens will become more convenient." This is hardly a big surprise. Neither is the amazing prediction that speech synthesis will be used more as it gets better. These things are boring -- they're essentially saying that what we already have will get better. Well duh!

    On the other hand, there aren't any genuinely interesting predictions, because everything in the article is already obvious. What about clothes that sense how dirty they are and indicate to a washing device how [much] to wash them? For that matter, what about clothes that adapt to downloaded designs and properties so a user doesn't have to buy new ones to look different? What about intelligent feedback audio systems that aren't speech related? What about intelligently using vibrations and other kinetic methods to convey information so people's eyes aren't distracted?

    These are just off the top of my head, and they're the sorts of things that not everyone can come up with easily. For one thing, they actually require some genuine investigation and research to predict, if they can be predicted at all. A few decades ago, a computer was a building-sized juggernaut; almost nobody predicted that computers would end up on desktops and in everyday devices. That would have been an interesting prediction.

Say "twenty-three-skiddoo" to logout.

Working...