Technology

Human-Computer Interfaces From 2003 to 2012

Posted by michael
from the year-end-blather dept.
Roland Piquepaille writes "My favorite forecaster, Gartner, is back with a new series of predictions about the way we'll interact with our computing devices. Here is the introduction. 'Human-computer interfaces will rapidly improve during the next decade. The wide availability of cheaper display technologies will be one of the most transformational events in the IT industry.' Not exactly a scoop, is it? But wait, here is a real prediction. 'Computer screens will become ubiquitous in the everyday environment.' Ready for another prediction? 'Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based.' Check this column for a summary."
This discussion has been archived. No new comments can be posted.

  • HID!! (Score:2, Funny)



    Is there a HID with the RoboCop spike on the horizon?
  • by .sig (180877) on Friday December 13, 2002 @01:48PM (#4882390)
    So he's predicting that things will pretty much stay the same, with just the usual slow progress.

    Pretty wild ideas there, I hope he doesn't try to patent the keyboard and mouse or something.....
  • by Drakonian (518722) on Friday December 13, 2002 @01:49PM (#4882393) Homepage
    Many Slashdot readers don't like Microsoft!

    It is estimated that this will not change by the year 2012.

    • by Anonymous Coward
      In 2012, you will have to pay Microsoft for moving your mouse: 3 cents per foot. This is in addition to 2 cents for each keystroke (modifier keys are free.. but don't even think about writing a "Control-key morse-code input device" that'll get you slapped with the DMCA2).

      Don't complain though, you get 5 free DVD-ROM ejections and insertions per month, after that it's only $0.99 .. what a deal!

      I love microsoft! They sell me Freedom(tm)!
    • In case y'all haven't read the news... [theonion.com]
  • Oh darn. (Score:4, Funny)

    by m_chan (95943) on Friday December 13, 2002 @01:50PM (#4882398) Homepage
    Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability).

    I guess robot love dolls won't be on the market until 2013. (99.4 probability)
  • by volume in Gigabytes

    And what about in MB or KB?
    The same??
    Ok, that was just to use a buzzword, I understand better now!

  • by trentfoley (226635) on Friday December 13, 2002 @01:50PM (#4882403) Homepage Journal
    ...when you pry my qwerty keyboard from my cold, dead, carpal-tunneled hands.
  • Media Labs (Score:4, Informative)

    by VoidEngineer (633446) on Friday December 13, 2002 @01:50PM (#4882406)
    Check out the University of Chicago's Computing Cluster & Cybercafe and MIT's Media Lab [mit.edu] for more information about human user interfaces. This article is behind the times with regard to stuff that's already been produced in the laboratories.
    • Re:Media Labs (Score:4, Interesting)

      by KKin8or (633073) on Friday December 13, 2002 @02:11PM (#4882604)
      Being produced in labs and being used by the general public are two very different things. Not only do the labs have to produce it and test it, they then have to sell the idea to someone. When one of these fancy new interfaces first goes to market, they'll probably be pretty expensive, since it's unlikely it'd be mass produced yet. For a large chunk of the general public to actually start using a spiffy new interface, enough tech hounds have to shell out dough for the early ones for the manufacturer to bother mass producing, and thus lowering the cost of, the new gadget. Plus it has to have a large enough benefit over existing interfaces that people are actually willing to take the leap to pay for and try it (or at least enough people to make it "trendy").

      Take the mouse, for example. According to this article [ideafinder.com], the mouse was invented in 1968. And it didn't become popular until the Mac came out in 1984. That's 16 years of obscurity before general adoption. Granted, there wasn't really any general widespread use of computer technology in that 16 years, so these days it'd be a good bit less. Still, people are really slow to switch away from something familiar that "works".


    • Okay, there are innovations in user interface. But how likely is it that any of them will become widely adopted in the near future?

      Light pens seemed like the Next Big Thing in I/O devices 20 years ago... how many practical applications do they have today?
  • Re push vs pull (Score:5, Interesting)

    by tomhudson (43916) <barbara.hudson@NOSpAM.barbara-hudson.com> on Friday December 13, 2002 @01:51PM (#4882413) Journal
    Their prediction that almost all data will be "push" instead of "pull" sounds way off to me.

    Some of the problems with push technology

    1. Piggy-backing of spam, unwanted data, etc.
    2. Security in general
    3. Consumers have already made it clear they don't want it
    4. Wasted bandwidth
    5. Wasted time filtering out the unwanted stuff in the feed
    The rest of the story was also pretty ho-hum. Nothing to see there ... move along ... why this is news is beyond me. Oh - right, today's Friday, and we've got to set up a bunch of stories to be repeated Monday ... :-)
    • Re:Re push vs pull (Score:5, Insightful)

      by Angst Badger (8636) on Friday December 13, 2002 @02:08PM (#4882581)
      Their prediction that almost all data will be "push" instead of "pull" sounds way off to me.

      It sounds off because it is. "Push" is one of those stillborn ideas that marketroids insist on resurrecting every few years, like the impending death of the PC, the ascendance of subscription-everything, thin clients, household automation, and so on.
      • Re:Re push vs pull (Score:3, Insightful)

        by tomhudson (43916)
        Agreed, except on the "thin client" thingee; not the way that the powers-that-wanna-be had it, where your thin client connects to their server. More like you have one or more servers, and several specialized thin clients around (PDA, PVR, smartPhone, email reader, mp3 jukebox, game box, etc).
    • Perhaps not so much wasted bandwidth, if much of the push data is multicast.
    • The lame "push tech" of the internet past was, and still is, a complete failure because no one wants to view a constant stream of ads over their limited bandwidth. Pull is much more efficient for sipping content over a straw.

      You're forgetting, however, that the distribution of video over cable (with immense analog bandwidth) to a TV would qualify as "push" as well. TV is so immensely popular that the average American watches it almost as much as he sleeps! If you throw in a TiVo or other set-top box, you've got push content through a computer. The odd thing about this is that TiVo is designed to make a strictly push technology more pull-like :) I think that shows that the ultimate goal is PULL, even on a TV.

      Push's place is where technology isn't good enough to allow for pulls. This is the reason push of video content via analog or digital means will outpace pull for some time. After that time, I'll start watching TV again.
      • Using a TiVo is no different from using a VCR, in terms of whether it's push or pull. In this case, it's neither. It's "broadcast", and the recipients pull out what they want from the stream.

        TV-on-demand is also a "pull" technology - you specify what you want, and it's delivered to you.

        Push is good for the vendor and lousy for the client - which is why we've seen so many attempts at push technology, and why clients still resist it.

  • by Zog The Undeniable (632031) on Friday December 13, 2002 @01:52PM (#4882422)
    It has the crappiest usability and the highest per-byte costs of any form of communication since Morse code telegraphy, but it's wildly popular. Amazing.
  • Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based.

    I think 94 percent will be mouse generated (e.g., the new "Hello, WordProcessor" document would have several KB of different font styles, markup, colors, and all that jazz (all mouse based), and only a couple of dozen bytes of text (via keyboard).

    S

    • ... and most of that will be generated by the application marking up your "hello, wordprocessor" with all sorts of xml tags, headers, default styles, embedded info, etc. ...

      My prediction: I see, you see, we all see ASCII. Yep, plain text will still be there.

      • My prediction: I see, you see, we all see ASCII. Yep, plain text will still be there.
        Good grief... If the majority of the Computing Universe hasn't standardized on Unicode [unicode.org] by 2012 I will have no hope for Humanity...
        • Well, you might want to consider a few things:
          1. Legacy data, apps, source code
          2. The trend for english to become the "standard" language world-wide
          3. Inertia
          Keeping the post on-topic: the article said, basically, that things aren't going to change much. Which is really a non-news item. ASCII will be around, we'll be typing on qwerty keyboards, and clicking with mice. :-)
          • 2. The trend for english to become the "standard" language world-wide
            That's the part I'm worried about.

            Unicode is backwards-compatible with ASCII, so the legacy/source code argument is irrelevant. There are already compilers available (such as Vector Pascal [pascal-central.com]) that interpret Greek, Cyrillic, Katakana and Hiragana. Heck, by 2012 I want to be able to code in Klingon [unicode.org]!
  • by og_sh0x (520297)
    A technology prediction that predicts that the radical changes in human interaction previously predicted won't happen overnight. Non-sensationalist predictions of the future? Wow. The irony would be if there were suddenly a major breakthrough in speech recognition and he turned out to be wrong.
  • by sanpitch (9206) on Friday December 13, 2002 @01:54PM (#4882445)
    How about having a computer for a secretary? DARPA is funding [eetasia.com] an "enduring personalized cognitive assistant." The system will be able to "reason, use represented knowledge, learn from experience, accumulate knowledge, explain itself, accept direction, be aware of its own behavior and capabilities as well as respond in a robust manner to surprises."
  • Digital Paper (Score:3, Interesting)

    by 9Numbernine9 (633974) on Friday December 13, 2002 @01:54PM (#4882447)
    E-Ink or digital paper
    Maybe it's just me, but I can't see this becoming a reality anytime in the near future.

    Firstly, there is a certain tactile "feel" to writing on actual paper that would be very difficult to replicate - and if it feels too different, I suspect people won't adopt it.

    Secondly, cost - could this be brought down to a price that would be economically feasible? If it's not as cheap as paper, it isn't gonna happen.

    That's not to say that I wouldn't like to see it introduced; we could all have our workplace documents on those little pads, similar to the ones in Star Trek, and I'm all for anything that will stop the slaughter of forests - I'm just highly pessimistic. The author seems to be of a "more of the same" persuasion as well. Maybe someday, but I don't think we'll see it in the next ten years.
    • Re:Digital Paper (Score:4, Informative)

      by cybermace5 (446439) <g.ryan@macetech.com> on Friday December 13, 2002 @02:18PM (#4882676) Homepage Journal
      I think it's just you. Demonstrable E-Ink displays already exist, how long do you think it will take to refine them?

      And why do we have to exactly duplicate the feel of paper? E-Ink is supposed to duplicate the flexibility and static display capabilities of paper, while adding digital versatility. The feel of writing on paper is learned, not instinctive.

      Finally, why does it have to be as cheap as paper? It's much better than paper, it has many more uses, but it makes no sense to feed E-Ink into a laser printer or to hang it next to your toilet. Digital ink keeps you from having to buy paper all the time.
  • Consider plans for a lot of gaming consoles (Sony is interested in Hive technology, for instance) to become integrated with your household. I can see your entire home being hardwired into a single PC, and you can just go room to room, turn on any TV/monitor, and play whatever games you own, watch any TV shows or movies, or surf the web. I can't see all this still operating with even advanced mouse and keyboard technology.
  • Gartner is useless (Score:4, Insightful)

    by geophile (16995) <jao@NOsPaM.geophile.com> on Friday December 13, 2002 @01:55PM (#4882459) Homepage
    Here, in one sentence, is everything that's wrong with Gartner: ... more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability). ...

    Let's break it down:
    • Mindless extrapolation of the obvious: "... will remain keyboard- and mouse-based."
    • Authoritative sounding numbers pulled out of the air: "... more than 95 percent ... 0.6 probability ..."
    • Sheer idiocy: "... 95 percent (by volume in gigabytes) ..." (If it's a percentage, then why does the unit matter?)

    • to add to your mindless extrapolation of the obvious:... "They should identify a clear return on investment before engaging in implementations. "

      looks like someone spent a minute or two analyzing the dot-com era of the late '90s, eh?
    • by goon america (536413) on Friday December 13, 2002 @02:11PM (#4882602) Homepage Journal
      (If it's a percentage, then why does the unit matter?)

      To come up with their predictions, analysts sit around and huff paint thinner until they lose consciousness. Once in a full state of dementia, fully developed predictions appear in rounded pod form from the brilliant, corpulent, snake-like ether of the true ultrafied space-time ribbons, at which point the analyst must delicately pluck them from the mind-hive before they can be sold to the public. Sometimes it comes out in both percents and gigabytes.

      It's not a perfect system.

    • Sheer idiocy: "... 95 percent (by volume in gigabytes) ..." (If it's a percentage, then why does the unit matter?)

      The unit still matters.

      For example, he wanted to be sure you knew he wasn't talking about information measured by "volume, in liters."
    • by Traa (158207) on Friday December 13, 2002 @02:15PM (#4882636) Homepage Journal
      As much as I think the article was a little light on interesting details, let's not get carried away by ridiculing Mr. Gartner.

      If you can't figure out from the article that these statements and numbers are part of a bigger document then I'll do it for you:

      Mindless extrapolation of the obvious: "... will remain keyboard- and mouse-based."
      Try the same sentence without the "keyboard- and mouse-based" part. It doesn't work.

      Authoritative sounding numbers pulled out of the air: "... more than 95 percent ... 0.6 probability ..."
      One of many phrases that are probably pulled out of a document where those numbers are explained. Blame ZDNet for leaving out the link to the original work by Mr. Gartner.

      Sheer idiocy: "... 95 percent (by volume in gigabytes) ..." (If it's a percentage, then why does the unit matter?)
      Same as above. There are numbers that go with these phrases. The numbers are in gigabytes (duh) and the blame lies with the reporter Alexander Linden for not referring to the original document. The dork prolly just cut and pasted without looking at the content.

      Now if someone could be so good as to find us the complete works of Mr. Gartner.
    • In other news, Gartner Group buys keyboard and mouse manufacturing companies.
    • Sheer idiocy: "... 95 percent (by volume in gigabytes) ..." (If it's a percentage, then why does the unit matter?)

      I'm pretty sure I agree, but I'm trying to be charitable and come up with a reason why this makes sense... To be fair, he could be saying it that way to make clear exactly what is being measured. 95% by volume (in bits, bytes, etc.) is different from 95% by time. For instance, it might be that in 2012 we spend 40% of our computer input time using the stylus on our PDA and 20% of our time using voice recognition to talk to our home entertainment system, but still make 95% of the input by *volume of data* using keyboard and mouse.
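The by-volume vs. by-time distinction is easy to make concrete. A quick sketch — every hour count and byte rate below is invented purely for illustration, not taken from Gartner's report:

```python
# Hypothetical 2012 input mix -- the hours and byte rates below are made up
# purely to show how "share by time" and "share by volume" can diverge.

inputs = {
    # method: (hours per day, bytes of input generated per hour)
    "keyboard/mouse": (4.0, 20_000),  # many small keystroke/click events
    "stylus on PDA":  (3.0, 5_000),   # sparse recognized strokes
    "voice commands": (1.0, 2_000),   # short recognized phrases, not raw audio
}

total_time = sum(hours for hours, _ in inputs.values())
total_bytes = sum(hours * rate for hours, rate in inputs.values())

for method, (hours, rate) in inputs.items():
    by_time = 100 * hours / total_time
    by_volume = 100 * hours * rate / total_bytes
    print(f"{method:15s} {by_time:5.1f}% by time  {by_volume:5.1f}% by volume")
```

Under this made-up mix, the keyboard and mouse get only half the user's time yet over 80 percent of the bytes - presumably the sense in which the 95-percent figure is meant.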
    • by dprice (74762)

      All the 'gigabytes' and 'probability' numbers Gartner puts in their reports are there to give the reports a sense of legitimacy. They make their money off of people in suits at big corporations who spend big bucks on outside consulting. The suits love to have meetings with Powerpoint slides with lots of figures, and they get a lot of those figures from consultants. The public figures Gartner reports are just a summary of a more detailed report that corporations can purchase to fatten their presentations and corporate strategies.

      The 10 year figures probably don't mean much since they are long forgotten by the time one could validate the prediction. Much like weather forecasts, the predictions shift over time as the real date approaches, and those predictions tend to get more accurate as the time to the prediction shrinks.

      What I'd love to see is a post-mortem on the predictions from all these consultant companies like Gartner. I wonder if someone keeps a record of predictions that Gartner made ten or more years ago and compares them to what really happened. My suspicion is that most of their long-term predictions are junk, but they produce the figures since corporations want to pay for those figures.

  • Interfaces? (Score:3, Funny)

    by grub (11606) <slashdot@grub.net> on Friday December 13, 2002 @01:55PM (#4882461) Homepage Journal

    .. but will these new interfaces work with my flying car?
  • by mark_lybarger (199098) on Friday December 13, 2002 @01:56PM (#4882462)
    By 2008, retinal imaging and augmented reality will become available in mobile devices (0.6 probability).

    i've been on the mobile subway devices of NYC and D.C., and let me tell you... the reality there is extremely augmented. normally, i've found peak augmentation to occur around 4:20 in the afternoon for some reason.
  • by NineNine (235196)
    I didn't see anything mentioning the burgeoning ubiquity of people reverting to the CLI. Somehow I'm not surprised.
  • Displays (Score:5, Insightful)

    by zapfie (560589) on Friday December 13, 2002 @01:57PM (#4882476)
    'Human-computer interfaces will rapidly improve during the next decade. The wide availability of cheaper display technologies will be one of the most transformational events in the IT industry.' Not exactly a scoop, is it?

    More of one than you think.. I don't think he's talking about your monitor. In almost all consumer electronic devices, know what the most expensive component usually is? Yup, it's the display. Reduce the price of that, and all of a sudden, those consumer devices have a lot more to work with. More screens, better screens, enhanced power, cheaper price, etc... if we can reduce the cost of the display significantly, it can only mean good things for consumer electronics.
    • Re:Displays (Score:3, Insightful)

      by John Whitley (6067)
      It torques me off that "human-computer interfaces" is used to mean "displays". Propagation of a bad meme. Don't get me wrong; technology improvement can be useful. But far and away the greatest challenge and opportunity for improvement of the user experience is to improve the design and usability of consumer electronics.

      There are many current and old examples showing that good design can work with here-and-now technology quite well. Bad design will take that 39-cent display with SuperDuper-VGA resolution and turn it into a glossy usability nightmare.

      • ...SuperDuper-VGA resolution and turn it into a glossy usability nightmare.

        Agreed on the point that a better user experience should not be linked to a better display, but hopefully displays will steer clear of that path. What I'd like to see is screens with better visibility indoors and outdoors, cheaper (power-wise) backlighting, more accurate color reproduction, better viewing angles, better durability, etc. Unfortunately, you may be right about the "glossy SuperVGA nightmare".. anyone who has spent a fair bit of time with a GameBoy Advance knows what a nightmare that display is. Great color and resolution, but good luck trying to SEE the damned thing.
    • by kfg (145172) on Friday December 13, 2002 @02:51PM (#4882898)

      Ubiquitous.

      Ubiquitous to the point that your very idea of "consumer electronic devices" is obsolete.

      The existence of light-emitting and electrically conductive liquid polymers that air-cure is going to be completely transformative. Both displays and electronic circuits are going to be printable on anything you can feed through your inkjet printer.

      Think about that for a minute. *ANYTHING* you can get to feed through a printer ( or anything you can adapt a printer to print on) can be both a display and electronic circuit to provide driver and logic functions.

      Think of all the things that are printed right now. Now think of all of them having "embedded" display and logic functions.

      Like your paper placemat at the diner. And yes, they are even working on being able to provide *power*, self contained, in that paper placemat.

      Your computer monitor will be pretty cool too. It could be nothing more than a sheet of 1/8" Lexan with the pixels printed on it. In fact, that same sheet of 1/8" Lexan could be your entire PDA or tablet PC if your data storage requirements aren't too great. Or on a sheet of polyethylene film you can fold up and put in your pocket.

      All that will be pretty cool.

      But it's the paper placemat thing that will be transformative. *Anything* can be a simple logic and display device. *Anything* can be a consumer electronic device.

      Like Junkmail. Ready to get your free AOL *device*?

      KFG
    • Re:Displays (Score:3, Funny)

      by dmorin (25609)
      In almost all consumer electronic devices, know what the most expensive component usually is?

      The Windows license?

      :)!

  • Alternate prediction (Score:4, Interesting)

    by Tsar (536185) on Friday December 13, 2002 @01:59PM (#4882498) Homepage Journal
    'Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based.'

    By volume in gigabytes? Call me a contrarian, but I'll bet videocameras will exceed keyboard input by that standard. Wanna test that notion, Gartner? Point your text editor at a file, and I'll fire up my webcam recorder. Ready? GO!
    • I'll bet videocameras will exceed keyboard input by that standard

      I'm guessing that such things don't fall into the category of "human-to-computer information input".
    • A webcam is not human-to-computer communication, at least not in the sense that Gartner is talking about. What he means is simply that discrete information input such as text or choosing from a menu will continue to be mostly based on mouse and keyboard rather than speech, handwriting or gestures.

    • Yes, but many more people use keyboards and mice for input than webcams, and more often.

      1000 people touch-typing and/or dragging the mouse cursor across the screen will easily generate more bytes than 1 person with a webcam.

      Gartner's prediction (and mine) is that this is not likely to change much very soon.

  • by MrCode (466053) on Friday December 13, 2002 @02:02PM (#4882517)

    Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability).


    I think this is fine. All the effort to "perfect" speech and handwriting recognition, while worthwhile from an academic standpoint, is not really necessary.

    I personally can type much faster than I could dictate, and most definitely faster than I can write by hand. I'm not even a "real" touch typist either.

    Can anyone imagine dictating a long paper, or forget it, a complex program? You would go hoarse and/or insane before 3 pages were done.

    Therefore effort in speech recognition should focus on perfecting the simple command interfaces ("Computer, turn on the kitchen lights") instead of trying to perfect dictation. Speech recognition should be used to enhance keyboard based interfaces, not supplant them. Many times typing is the best way to input the data.
    • Agreed. And which is quicker anyway: saying "Computer, turn on the lights" or flipping a switch?

      Now if they could develop one that responded to "Computer, where did I leave [ the remote | my glasses | my brains for wasting time reading another Gartner prediction ]", that would be something :-)

    • I personally can type much faster than I could dictate, and most definitely faster than I can write by hand. I'm not even a "real" touch typist either.
      In a perfect speech recognition system, dictation rates would be at the levels of today's microcassette recorders: well over 100 words a minute.

      More obvious cases are when a user is walking, driving, etc., when speech input becomes overwhelmingly advantageous. Even for devices with small form factors.

      Imagine in the future, if I want to issue commands to my watch "fetch me headlines from slashdot and read level 5 comments from the first 6 stories"*.

      (*) granted, if the watch interprets your request as "first *sex* stories", you'd have a different brand of entertainment :) S

      • More obvious cases are when a user is walking, driving, etc., when speech input becomes overwhelmingly advantageous. Even for devices with small form factors. Imagine in the future, if I want to issue commands to my watch "fetch me headlines from slashdot and read level 5 comments from the first 6 stories"*.
        You shouldn't be using a computer while driving. That's way worse than a cellphone (illegal in some states) or watching a movie/TV while driving (illegal in almost all states).

        Walking down the street *might* be an advantage, but I for one (and I know I'm not alone on this) would be wholly embarrassed to walk down the street talking to my computer and would also be irritated by people who would do it (just like I'm irritated by people who walk around talking on their cellphones). Plus there are some safety issues, although smaller than those while driving.

        Also, I think it would be really weird to have a computer talk back to me (not to mention a little inconvenient, how many times do you actually read entirely through a webpage? Not very often. Usually you just skim it.) and I would much rather interface with the computer using a semi-transparent glasses display.
    • by Theaetetus (590071) <.moc.liamg. .ta. .todhsals.suteteaeht.> on Friday December 13, 2002 @02:36PM (#4882799) Homepage Journal
      Can anyone imagine dictating a long paper, or forget it, a complex program? You would go hoarse and/or insane before 3 pages were done.

      "Computer, go to slashdot-dot-com.... no, slashdot-dot-com... no, slash!...
      Oh, fine, just show me some porn."

      -T

  • I was amazed... (Score:4, Interesting)

    by craenor (623901) on Friday December 13, 2002 @02:06PM (#4882569) Homepage
    When I bought a Canon EOS-1 Camera and it could focus on different areas inside the viewing area, depending on where my eyes were trained.

    The question is...how long before this technology makes its way into mainstream computers, or something like it.

    Wouldn't it be nice to just look at the monitor, blink twice and have the folder open. Careful where you look though!
  • Is this true? (Score:2, Insightful)

    by addaon (41825)
    more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based

    Is this even true today? I doubt it's true of my own work. I own a digital camera. I don't take many pictures; I'm not very photogenic. I figure I take about 50 pictures a month... let's call that one a day, to be conservative. 1600x1200x8, uncompressed (I use a raw format that sends 8-bit intensity data for each pixel, as each pixel in a digital camera is only one color), comes out to very close to 2MB per image. In a given day, I also spend about 8 hours sitting in front of my computer. I type at ~60 words per minute (never said I was fast), coming out to about 160kB/day. Now, I don't use my mouse too much, since it hurts my wrist. But even if it sends 4-byte updates 300 times a second when I'm not moving it at all, that comes out to 35MB a day... hardly a realistic number, but let's run with it. So my total keyboard/mouse input is 36MB a day, at an absurd maximum (I do stop for breath occasionally), while my non-keyboard/mouse input is 2MB/day, at a rather absurd minimum. And just with those numbers, I have (slightly) less than 95% of input being keyboard/mouse based.

    I know a lot of people who take more pictures than me. One person taking 10 pictures a day is enough to offset 9 people who take none. A few people use speech recognition... that's relatively high-bandwidth input. And I'm sure at least one person in ten thousand has a digital video camera...

    So, does anyone think this 95% number is true even today?
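For what it's worth, the parent's arithmetic can be replayed directly using the same assumed rates the post states (60 wpm typing, a deliberately absurd 300 four-byte mouse updates per second for 8 hours, and one raw 1600x1200 8-bit image per day):

```python
# Replay of the parent post's back-of-envelope estimate, using its assumptions.

desk_seconds = 8 * 3600                 # 8 hours at the computer

keyboard_bytes = 60 * 5 * 60 * 8        # 60 wpm * 5 chars/word * 60 min * 8 h
mouse_bytes = 4 * 300 * desk_seconds    # 4-byte updates, 300/s, all day
camera_bytes = 1600 * 1200 * 1          # one raw 8-bit 1600x1200 frame per day

kb_mouse = keyboard_bytes + mouse_bytes
share = 100 * kb_mouse / (kb_mouse + camera_bytes)

print(f"keyboard+mouse: {kb_mouse / 1e6:.1f} MB/day")      # -> 34.7 MB/day
print(f"camera:         {camera_bytes / 1e6:.1f} MB/day")  # -> 1.9 MB/day
print(f"keyboard/mouse share: {share:.1f}%")               # -> 94.8%
```

So even with the mouse traffic inflated to an admitted maximum, keyboard/mouse lands just under the 95% mark, exactly as the post concludes.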

  • by jmichaelg (148257) on Friday December 13, 2002 @02:17PM (#4882665) Journal
    "Prediction is very difficult, especially if it's about the future."
    ~Niels Bohr

    Unfortunately, I can't vouch for the quote. John Perry Barlow circulated it a few years back and when I asked him where he found it, he couldn't remember. So perhaps if Bohr didn't say it, he should have.

    • Not Niels Bohr (Score:3, Informative)

      by infolib (618234)
      Here in Denmark the quote is usually attributed to Robert Storm Petersen. [latinamerican-art.com]
      On the other hand, this page [lundskov.dk] (in Danish) says that it originated in a Danish parliamentary debate of the period 1935-39. This is according to the memoirs of the politician K.K. Steincke. He doesn't remember who said it, though. (Basic political instinct, I suppose.) It has also been attributed to Markus M. Ronner (whoever that is).

      Niels Bohr was apparently the first Dane to bring the expression abroad, and hence he has received credit.
  • by sawilson (317999) on Friday December 13, 2002 @02:25PM (#4882716) Homepage
    As an esteemed predictionaire of sorts, with full
    backing of the predictionationization society, here
    are my predictions for the next decade:

    #1 Algebra won't be hard someday

    #2 Grass will mow itself

    #3 The Aliens people have encountered will be
    revealed to be the "geek" or "dork" aliens. The
    Jock aliens stay back on marklar and get laid and
    drink. They are much bigger and stronger.

    #4 Trendy computer users will start doing
    "case piercing" and the truly EXTREME will try
    out hard drive piercings. They will be made of
    steel at first, but aluminum will become the rage.

    #5 Wireless wires will be invented to replace the
    wired wires.

    #6 The "tornado in a can" will become "the can"
    in your bathroom. Flushing dead goldfish will
    never be boring again.

    #7 Top ten lists will transmogrifimorphicate into
    top 7 lists.
  • Paradigm shift (Score:3, Interesting)

    by Daetrin (576516) on Friday December 13, 2002 @02:26PM (#4882729)
    At first I was disappointed with this prediction of another 10 years of video screens (of various kinds) and keyboard-and-mouse input. However, look at the alternatives he's suggesting: handwriting and voice recognition.

    I really couldn't care less about those modes of input. Can you imagine everyone in the office talking to their computers at once? And it wouldn't really help that much for programming or data entry, the tasks that a lot of computers get used for. As for handwriting, my hand starts to hurt after about five minutes of writing on paper, and I usually give up and open Notepad. And that's not even considering that my handwriting sucks and would be about ten times as difficult to process as "normal" handwriting.

    What this guy really isn't saying much about is direct optical feeds (i.e., beaming visual information onto your retina, or inserting false visual signals further upstream in your nervous system) and direct mental input, whether in the form of reading the synapses in your brain or recording your motions as you type and gesture in the air.

    That's the kind of technology that will cause a major shift in the way we use computers, and it is so different from our current modes of interaction that you can't really extrapolate from here to there. I'm sure scientists during the 40s and 50s were predicting great advances in vacuum tubes (the science fiction authors certainly were, at least) that never materialized, or at least were never utilized, because of the development of the microchip.

    I have no idea whether those kinds of technologies will be fully developed in the next ten years, but I don't think this guy has any better an idea than the rest of us.

  • No more screens (Score:2, Interesting)

    by Un pobre guey (593801)
    Here is my prediction, and you can throw it back in my face 10 years from now:

    By 2012, computer displays as we now know them (LCD, CRT) will have been relegated to inexpensive embedded systems. Bleeding-edge office information devices will function by tracking the user's movements and speech, as well as her manipulation of common objects in her work environment; these objects will serve the same purpose as graphical icons do today.

    The computer screen will have been subsumed into dynamic surface markings and other detectable changes in the objects in her environment. These will have reflective (as opposed to backlit) display surfaces where information can be encoded in textual, graphical, color, or texture attributes, and sometimes some degree of 3D physical configuration change. They will range from writing surfaces that resemble paper, cards, packaging materials, and other document-like entities, to instrument or appliance control panels and communications devices.

    User interactions with these items can produce changes in both the displays and the underlying data repositories. Moving them, rearranging their relative locations, adjusting them, speaking into them, and other as yet unforeseeable interactions will effect the state changes that embody the user's day-to-day tasks. Think of a cube with an environment of intelligent interactive devices that visibly and audibly change as work gets done. The devices themselves will also be communicating and interacting as needed.

  • I love it!

    All prophets should use probabilities. That way, nobody can ever prove you wrong.

    Say "The world will end on 1/11/2003 with probability 0.6" Suppose it doesn't end, so what? Someone's going to come back and say "Ha! It didn't end! The probability couldn't have ever been higher than 0.3!"

    Suppose you say "Buy Acme Widget stock. It will go up 120% in the next 6 weeks, probability 0.8." People buy it. It goes up 120% in the next 6 weeks. They get rich. Are they going to come back and say, "Well, yeah, SURE, it did that, but you said the probability was 0.8 and it was really only 0.7"?
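    Nitpick: a single probabilistic prediction can't be proven wrong, but a forecaster's track record can still be scored in aggregate. A minimal sketch of the standard Brier score (the example forecasts below are made up):

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.

    forecasts: list of (probability, outcome) pairs, where outcome is 1
    if the predicted event happened and 0 if it did not. Lower is better;
    an uninformative coin-flipper saying 0.5 every time scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A forecaster who says "probability 0.6" for events that then happen
# about 60% of the time is well calibrated:
calibrated = [(0.6, 1), (0.6, 1), (0.6, 1), (0.6, 0), (0.6, 0)]
print(brier_score(calibrated))  # 0.24
```

    So Gartner's "0.6 probability" hedges are individually unfalsifiable, but across enough of them the firm could, in principle, be graded.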
  • Interface Idea (Score:5, Interesting)

    by spurton (634014) on Friday December 13, 2002 @02:36PM (#4882795)
    After watching Minority Report, I liked the idea of using gestures to interface with your computer. However, having to wave your hands around like that would get tiresome real quick. The most time-consuming part of getting things done on a computer (aside from the software) is having to go back and forth between the mouse and keyboard. Even with keyboard shortcuts, it is unavoidable.

    I started thinking of other ways to use the same type of gesture interface, but with your hands only. No keyboard, no mouse. Muscle movement memory is very efficient; it only takes a few repetitive movements to get used to a static environment. Have you ever stuffed envelopes? You get pretty efficient in no time. The reason a keyboard and mouse are not like this (mostly the mouse) is that their position is always different: your hand has to find it. A keyboard is much better because once you get used to the layout, your hands pretty much stay in the same place.

    So how does a gesture-based interface fit into this? What I envisioned was using only your fingers to do the gestures. To change tools, say from cursor movement to keyboard, you could use finger movements, or a combination of two fingers moving in a direction as a switch, or even lifting your hand higher. This interface would not require you to touch anything. Your hands could be anywhere and in any position; the hardware would monitor your finger and hand movements. You could be standing and resting your hands on your legs while doing it.

    Imagine your hands are resting on a hard surface and you are typing. There would have to be tactile feedback, like the little dots on F and J on most keyboards that tell you where you are. Maybe a range-of-motion field gets established in relation to your hand positions at that time. The hardware would also have to provide this tactile feedback, like sleeves on your fingertips or gloves. Once the area is set, it would be easy to get a feeling for where the keys were. Tactile feedback to register a key-click would be important.

    When you need to switch to a pointer, you make a gesture with your fingers or hand(s) and fingers. Or you designate a position above the set keyboard space as the pointer, like moving your index finger up two inches above the keyboard field and using it as a pointer. I know it wouldn't be as simple as that; it is just a starting point. I can also imagine that if it were done correctly you could basically haul ass moving through windows, multitasking, etc.

    The current issues are that we have a set area for our keyboard and mouse; leave that area and we lose our interface. People move around, we use laptops, and we like to keep our interface setup consistent when we switch computers. The mouse is never exactly where we expect it to be and is too far away from the keyboard. The position of a keyboard and mouse on a table in front of us is not always the most ergonomic or comfortable. Gesture interfaces are better because gestures are easier to remember. They can eliminate having a single area for an interface. They are more configurable, and you can keep their configuration consistent on any computer you use. It is more comfortable being able to put your hands anywhere and still be able to work. You could possibly customize the tactile feedback to suit your taste. Gestures can also signify complicated tasks to be performed in an application.
  • Over the last several years, keyboards have taken many evolutionary steps. We've got ergo-keyboards, enhanced keyboards, laser-projected keyboards, etc.

    We may very well still be using keyboards 10 years from now, but they'll probably differ at least a bit from the ones we use now.

    We're not going to get rid of the old QWERTY (or for some odd few, the DVORAK) until perhaps we can plug into ourselves, or until that one-handed keyboard comes around.

    Personally... I'd like to plug myself in, but viruses would really suck then.
  • I think they're a bit late with that one. For a while now I've been spending every day sitting in front of a computer display...

    Oh, wait...
  • by TastySiliconWafers (581409) on Friday December 13, 2002 @02:38PM (#4882810)

    Display technology has vastly improved. I'm now just waiting for the price to come down on IBM's T221 LCD so I can have one on my desktop. We purchased one at my workplace and it just blew me away. It is the first display I have ever seen that can be reasonably compared to quality laser printing on paper for its rendering of sharp, crisp, readable text. 9.2 million pixels in the thing and NOT ONE OF THEM IS DEAD. Yep, none, nada, zilch.

    As far as interaction goes though, I doubt we're going to see much improvement. Programmers do a terrible job of UI design and a lot of companies are just too cheap or ignorant to hire professional user interface designers or else provide in-depth training for whoever is doing the UI design regarding usability issues. Most companies are also too cheap to do real usability testing. They might test out the new UI on the guy three cubicles away, but he's hardly representative of your customers. Until that changes, human-computer interaction is not going to improve.
  • by ferreth (182847) on Friday December 13, 2002 @02:43PM (#4882841) Homepage Journal
    Gartner's words sound like PHB (Pointy Haired Boss) fodder to me.

    Here's a real prediction: Integration of devices will result in the replacement of single-use items such as PCs, TVs, cell phones, and PDAs with portable and fixed units that have multiple functions. Consumers will buy "multimedia consoles" capable of several functions that are more flexible and cheaper than individual components. Wireless networking will be the standard communication method between devices, given the cost of adding wiring to a house and the flexibility of putting your console anywhere. As a result, the lines between media types will blur; 'television' as we think of it now will cease to exist with the advent of services that let you watch programming at the press of a button rather than on a schedule. You will read, listen to music, and shop, all from the same console. Integration will make the price of a large console about the same as a current mid-range PC, so consumers will buy several units in a family setting. Portable units will let you take your shows/music/information with you, and still use all the features your big console has while within network service range.

    Barriers to adoption of such integrated devices will come mostly from the companies that control the current media types as they will be concerned about losing their current revenue streams. The companies that successfully come up with new payment schemes that are both profitable to the company and palatable to the consumer will end up breaking the barriers until eventually getting to the point where you can subscribe to any service from your integrated console.
  • by shunnicutt (561059) on Friday December 13, 2002 @02:51PM (#4882902)
    Cheaper display technologies will surely shake up how we interact with our information, but I think that everyone is missing something very important.

    Prognosticators have been chasing this dream of a paperless office for decades now, with very little realization. Indeed, some researchers have indicated that we like paper because it lends itself to spatial organization of information -- you're likely to remember where you left a paper document even long after you've last used it.

    With cheap displays, we can make small, portable displays -- sort of like Microsoft's failed eBooks, but you get to view whatever information you want, whether from your own library or on the net.

    And get this -- these would be cheap enough that you could have a small collection and sit down at your desk and leverage your brain's built-in spatial organization strengths. And when you don't need that information anymore, just call something else up.

    Many people use multiple monitors. This would be like multiple monitors that you can stack, reorganize or just toss into your outbox.

    I don't know if the designers of Star Trek:TNG had this sort of thing in mind, but in that series and every one since then, you'll see characters sitting at a desk surrounded by a mess of these little things.

    Interface design, speech and handwriting recognition, sure. But just being able to move data around in real space is going to be very comfortable for us.
  • The article seems to be slashdotted. Here's the text:

    The future of computer interfaces
    provided by Gartner

    Human computer interfaces will not get worse during the next decade. In fact, they will get better. This is really important, so listen up.

    Development will be slow, so not much will happen at first, but later on something will come out and you'll be like "whoa!" Something cool might happen really soon with displays because I read in PC magazine something about Tablet PCs, which seem kinda new. And we saw an article in Popular Science about new OLED screens which seem pretty good, so we'll probably see those for sale some time far in the future. But you can already buy the Tablet PCs, so those will probably catch on sooner, we think.

    Analysis

    More and more people use computers, so we think that probably this will continue for a while. Because computers need displays, we feel that as we get more computers, we'll probably get more displays too. That means they'll become ubiquitous, like McDonalds.

    Products will also not get worse, but better! That means they will be cheaper, easier to use and more powerful. This may come as a surprise, but it's true! In particular, we're pretty sure computers will advance on the following fronts:
    • Input devices - like mice and speech recognition. These will get better, probably.
    • Output devices - like screens. These will get better, if that whole OLED thing happens. We dunno.
    • Advanced interaction metaphors - well, we needed three bullets, so we made up one. Not only will we have input and output devices, but there will be a synergy between the two. We call this advanced interaction.

    Prediction

    More computers! More screens! Crazy research like OLED and Speech Recognition may or may not be big time in 2020, but for now they are a niche.

    Also, we noticed that most people still use CRTs, but more and more people use LCDs 'cause they're better. Probably this will be true when the OLEDs come out too, 'cause they're even better.

    All that crap about digital ink and flexible paper? And those little eye screens like the borg have? Those will probably happen sometime by 2012, when no one will remember this article and we'll be off the hook.

    Prediction

    Remember that thing about advanced interface metaphors we talked about? Well we thought of some:
    • Personalization - one day, your computer will assemble pages of information from a variety of sources, personalized for you! You might have a notification about new mail, appointments for the day, news from your favorite web sites and weather all on one page. Imagine that!
    • Taxonomies or "knowledge maps" - we don't know what this means, but they are big words, so we'll slip it in between these other two bullets to sound fancy.
    • Search - imagine being able to type in key words or little bits of information and have the computer find lots of relevant documents for you!
    • Active alert - people are getting pissed at whiny computers' "upgrade this" and "blah blah overheated" and whatnot. This would be a good thing for Microsoft to work on.

    Thus endeth the suggestions and analysis of Gartner group. Printed versions of this document with pretty charts for power point presentations are available for $299.
  • The voice recognition software we have today will never catch on in the workplace unless everyone is given their own private office. Cubicles won't cut it.

    How are you supposed to dictate a message to /. if you are at work "working"?
  • Dave (Score:2, Funny)

    by inerte (452992)
    Input interfaces will enable computers to sense their environment and the identity of their users, and personalize interactions appropriately.

    PC: Dave, I feel so cold. Did you let the windows open?

    Dave: Ohhh... poor boy... here, I will get a blanket for you.

    PC: Dave, no... do like you did last time, overclock my AMD...
  • volume? (Score:3, Insightful)

    by Chris Canfield (548473) <slashdot&chriscanfield,net> on Friday December 13, 2002 @03:19PM (#4883143) Homepage
    Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based (0.6 probability).

    Umm... MP3's? Video Recorders? Cameras? -C

  • by AdamBa (64128) on Friday December 13, 2002 @03:29PM (#4883213) Homepage
    This is a little article I wrote a while ago called "Can We Improve Computer-to-Human Bandwidth?" which I haven't done anything with...so I might as well post it here:

    --------------- begin article --------------

    I bet I can guess something about you: right now you are reading something on your computer screen. The text is shown on a display set near eye level, probably in black text on a white background, or white text on a black background. You read all the text that is visible on your screen, then you press a key or click a mouse button to scroll down to see more text.

    Was I right?

    Since the early days of computing, fifty years ago, that is the way data has been transmitted from computers to people. The improvements have been quite modest, involving sharper displays, more readable fonts, better choice of foreground and background colors, and so on.

    In the same time period, there have been many attempts to improve how data flows the other way, from people to computers. Different keyboard layouts have been designed. Voice recognition may be just around the corner. The mouse has changed how data is input, possibly not speeding it up for power users, but enabling a whole new class of users to communicate with a computer at all.

    Data flow in the other direction has remained the same, an exact simulation of reading text on a printed page. Yet computers are much more powerful than a printed page. Is it time to take advantage of this? How could this be done?

    Certainly the real limit on how fast people can read is how fast they can process the underlying information. But some part of a reader's brain is occupied with deciphering the text on the screen. For some dense texts that percentage will be trivial, but for many others it won't be, so the question becomes how much of that can be removed, getting people closer to their theoretical limit.

    One change that already exists is to have computers read the text out loud. Unfortunately, while most people can speak much faster than they can type (or write), it is doubtful that most people can listen faster than they can read. One reason is that spoken language, with its elided sounds and lack of spelling, is less informationally dense than written language. Thus it is faster for a person to speak than to spell, but slower to listen than to read. While computer reading is a boon for people with certain disabilities, it does not speed up how fast data flows from computer to person.

    A more radical idea would be to reconsider why the text stays still and the user's eyes move. Why not scroll the text so the eyes can stay still? Of course, the computer would have to adjust the scroll rate for different users. Your hands aren't doing much of anything when you are reading, so I could imagine reading text that scrolls by with one hand on the mouse, the left button slowing the scroll rate and the right button speeding it up.

    What about changing how the text itself is displayed? It's risky to get too far away from the familiar, because everyone has a lifetime of training in reading printed text in books. Still, you can speculate. What if different parts of speech were color-coded on the fly, or displayed in different fonts, or in a slightly different location on the line? What if the computer compressed certain words as they appeared (such as compressing George W Bush to GWB -- the reverse of a trick that writers use: typing frequently-used phrases in shorthand, then replacing them later, or letting Word's auto-correct feature do it for them)? This may be disconcerting at first, but it may turn out that, with practice, this can improve the transmission speed for people who need to quickly digest a lot of information coming at them from their computer.
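    The auto-compression trick could be prototyped in a few lines. A minimal sketch (the abbreviation table here is invented; a real system would presumably learn it from the user's reading history):

```python
import re

# Illustrative abbreviation table -- these entries are made up.
ABBREVIATIONS = {
    "George W Bush": "GWB",
    "human-computer interface": "HCI",
}

def compress(text, table=ABBREVIATIONS):
    """Replace frequently-used phrases with short forms on the fly."""
    for phrase, short in table.items():
        # Case-insensitive match, literal phrase (no regex metacharacters).
        text = re.sub(re.escape(phrase), short, text, flags=re.IGNORECASE)
    return text

print(compress("George W Bush spoke about the human-computer interface."))
# GWB spoke about the HCI.
```

    The interesting (and hard) part is the one the article flags: picking abbreviations per reader so the compressed text stays instantly decodable.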

    Moving beyond text, consider the fact that a sign language translator can keep up with spoken language, and is also limited in speed by the need to move hands and arms around. One of the advantages of sign language is that location within space can be used to convey information; for example a room can be laid out visually and then movement within that room conveyed by changing where the signs are shown. Could computers use a similar trick on the screen to speed up how fast information is displayed? It could be a lot of work to learn how to interpret this, just as learning sign language is a lot of work, but the payoff could be worth it.

    The main thing is to get out of the mindset that static text on a screen is necessarily the best way to present information. Once that assumption is shattered, interesting ideas should follow.

    ---------------- end article ---------------

    - adam
    • From reading your post, it occurred to me that a good addition to scroll bars might be the ability to assign a constant "scrolling speed" that you could start off, read along with, and pause or resume when you liked.

      I know from watching computer logs and other text scroll past on a screen that you can make sense of a LOT of information scrolling past very quickly.

      It would be interesting to see how annoying it would be to have a browser start scrolling automatically as soon as a page was loaded, or if it would be of use...
    • Some interesting ideas, but there are problems.
      I'm in no way an HID designer or anything, but while reading your comment I tried to pay closer attention to the way I read it. It turns out that I (think I) try to get a hierarchical overview of the comment.

      In other words, we don't read text in a linear fashion, letter by letter. Instead we first look at the general outline (paragraphs), then begin at the first line, picking out individual words. Sometimes we might look back half a line to rescan a word and get the context right.
      Note that there _are_ scrolling displays, often seen in public places (and that tag). Of course those are sub-optimal anyway because of their slow speed, but you should be able to observe how the eye jumps around on them, trying to obtain context.
      So I don't think moving the text on the display is a particularly good idea. There are related things one could try, though. For example, text in smaller columns could make it easier to jump to the start of the next line.

      Auto-compressing text to (selected) acronyms is a very interesting idea. It obviously needs to adapt to the user, but it's definitely worth a try.
      The more radical changes, like gestures etc., could theoretically yield real efficiency gains, but they're all so difficult to learn...
  • by dh003i (203189) <dh003i@@@gmail...com> on Friday December 13, 2002 @03:38PM (#4883272) Homepage Journal
    Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based. (0.6 probability)

    60% probability? Are you nuts? How about 100%? I can consistently and constantly type at 100 words per minute, but I certainly wouldn't want to talk that fast. I doubt I could, and even if so, it would hurt my throat after a while.

    Writing pads are nice, but again, I -- and most other people -- can type a lot faster than we can write.

    Other forms of inputting data into computers will remain niche at best. Voice recognition will be used to quickly convert professors' lectures into documents, and handwriting recognition will be used to convert hand-written notes into documents.

    However, no one will be writing 10-page papers by hand or speaking them. Could you imagine it?

    "While I was, umm, 6x backslash, going to the park and um, 8x backslash, I saw a..."

    In short, it's not going to happen. Outside of planned presentations, people speak in a manner that is specifically for dialogue and that does not make much sense on paper, except in a dialogue.
  • by axxackall (579006) on Friday December 13, 2002 @04:12PM (#4883451) Homepage Journal
    I think the file folder structure is obsolete. Hierarchy does not describe the real world well. That's why we use symbolic links. However, symbolic links have lots of limitations.

    What if I want to swap a symbolic link with the primary inode? What if I want to inherit many custom-defined attributes? What if I want multiple inheritance -- several equal parent folders, not just one parent with second-class symlinks?

    I agree with the prediction about taxonomies and knowledge maps.
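    A taxonomy layer doesn't need new filesystem support to prototype. A minimal sketch of "several equal parents" as tags over file ids (all names here are illustrative):

```python
from collections import defaultdict

class TagStore:
    """Files with any number of first-class parent categories,
    instead of one primary path plus second-class symlinks."""

    def __init__(self):
        self.tags = defaultdict(set)   # tag name -> set of file ids
        self.files = {}                # file id -> custom attributes

    def add(self, file_id, attrs, *parents):
        self.files[file_id] = attrs
        for tag in parents:            # every parent is an equal link;
            self.tags[tag].add(file_id)  # nothing to "swap" later

    def under(self, *parents):
        """Files reachable from ALL the given categories."""
        sets = [self.tags[t] for t in parents]
        return set.intersection(*sets) if sets else set()

store = TagStore()
store.add("report.pdf", {"owner": "me"}, "work", "2002", "taxes")
store.add("photo.jpg", {}, "2002", "family")
print(store.under("2002", "work"))   # {'report.pdf'}
```

    Querying by intersecting tags is roughly what the "knowledge map" prediction amounts to: navigation by attributes rather than by a single path.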

  • by timothy (36799) on Friday December 13, 2002 @05:04PM (#4883760) Homepage Journal
    well-weighted, machined edge, free-spinning, finger-friendly, LED illuminated, multi-purpose jogwheels.

    The Griffin powermate is a cool-looking device (I just ordered one, have not yet had a chance to play with it), and I hope will meet that description pretty well -- I am curious (and pessimistic, but willing to wait) about its free-spinny-ness ... I want something I can give a spin, have it keep going for a while, and have it stop (within reason) only when I drop my fingers again to arrest the spin.

    I'd prefer a spinning jog wheel to a mouse wheel for the same things that mouse wheels are used for right now.

    More importantly, I'd like a jogwheel for both playing and editing sound and video. In Mplayer, for instance, rather than the arrow keys + space bar (though those are fine), I'd rather be able to tap a jogwheel for pause / play, roll it forward for fast motion, roll it backwards for backwards fast motion, etc.

    I'd like the GIMP to be jog-wheel improved, too, so any operations which have a slider could be activated by the jogwheel instead.

    Multiple reconfigurable jogwheels would make video editing more fun, too -- say, one for standard audio track volume, one for added voice over or music track, one for moving around in the video stream itself. (For which a real video mixing board would be nice too, but less useful for other things).

    Another example of using several jogwheels might be this (and I'm thinking of the way the powermate works, as I understand it -- there's the wheel itself of course, and a single "button" which is to say that the whole assembly acts like a mouse button when pressed down):

    In Mozilla, have a triplet set up for
    1) scroll up / down current page; button might
    2) scroll sideways through all open tabs
    3) open and scroll down the bookmarks file

    Idea: For all these things, a small and bright LCD display on the base of the wheel would be cool, so it's easy to keep track of its current function.
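    The mode-switching behavior is easy to sketch in software: the button cycles through functions, and rotation dispatches to whichever is current. (Mode names and handlers below are made up; a real driver would emit input events instead of strings.)

```python
class JogWheel:
    """Toy model of a reconfigurable jogwheel with a press button."""

    def __init__(self, modes):
        self.modes = modes       # list of (name, handler) pairs
        self.current = 0         # index of the active mode
        self.log = []            # actions performed, for demonstration

    def press(self):             # button press: cycle to the next mode
        self.current = (self.current + 1) % len(self.modes)

    def turn(self, ticks):       # positive = clockwise rotation
        name, handler = self.modes[self.current]
        self.log.append(handler(ticks))

wheel = JogWheel([
    ("scroll page", lambda t: f"scroll {t}"),
    ("cycle tabs",  lambda t: f"tab {t}"),
    ("bookmarks",   lambda t: f"bookmark {t}"),
])
wheel.turn(3)      # scroll down 3 ticks
wheel.press()      # switch to tab cycling
wheel.turn(-1)     # one tab to the left
print(wheel.log)   # ['scroll 3', 'tab -1']
```

    The per-mode LCD label in the comment above would just display `self.modes[self.current][0]`.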

    Also, playing breakout-style games with a mouse is just lame! Think jogwheel = Atari paddle :)

    Are there any truly superlative jogwheels I should know about? A few old video games had good ones, but I don't remember their names ...

    timothy
  • NIME (Score:4, Interesting)

    by RobPiano (471698) on Friday December 13, 2002 @05:19PM (#4883832)
    Some of the most exciting new interfaces come from music.

    New interfaces in Musical Expression will be in Montreal this year.

    Check it out at http://www.nime.org

    Rob
  • by SkewlD00d (314017) on Friday December 13, 2002 @05:22PM (#4883847)
    So where are the Star Trek terminals? What's the instruction cycle length on those suckers? How come they don't have to reboot their computers every 10us? Damn TV technology: 99% bullshit, 1% interesting concepts.
  • predictions (Score:3, Insightful)

    by Barbarian (9467) on Friday December 13, 2002 @05:46PM (#4884016)
    I seem to remember reading predictions in PC Mag in 1995 that by 2005 we'd still have the mouse and keyboard, but would mostly communicate by voice, speaking naturally; that the computers would be smart, with "smart agents" doing a lot of the work for us; and that we would all log in with fingerprints or retinal scans. We're nowhere near being there by 2005. Computers are a lot faster, but fingerprint and retinal scans don't seem to be that popular, and "smart agents" turned out to be not so smart. Anyway, at least this article gets the "mostly by keyboard and mouse" part right.
  • oh great! (Score:4, Funny)

    by PyroX_Pro (579695) on Friday December 13, 2002 @08:09PM (#4884789) Journal
    "hearing the text of his or her e-mail read aloud while riding in a car"

    This is just what I need, as if road rage isn't already a problem...

    [sys] enlarge you penis now! this new medicine will..
    [me] DELETE!
    [sys] lose 100 lbs in 5 days!
    [me] DELETE!!!
    [sys] hot and sexy webcam sluts want your..
    [me] DELETE!
    [sys] mr. obertoneryan wants you to help him get his money out of africa...
    [me] OH FOR THE LOVE OF GOD!
  • by jesterzog (189797) on Saturday December 14, 2002 @03:19AM (#4886067) Homepage Journal

    I was quite disappointed by this article -- I don't know if ZDNet is providing the whole thing, but overall it was very short. It also missed one of the main development areas that I think is important: a whole lot more ubiquitous computing.

    The article doesn't really predict anything except the continuation of the same old stuff that's already happening. "Computer screens will become more convenient." This is hardly a big surprise. Neither is the amazing prediction that speech synthesis will be used more as it gets better. These things are boring -- they're essentially saying that what we already have will get better. Well duh!

    On the other hand, there aren't any interesting predictions because they're all already obvious. What about clothes that sense how dirty they are and indicate to a washing device how [much] to wash them? For that matter, what about clothes that adapt to downloaded designs and properties so a user doesn't have to buy new ones to look different? What about intelligent feedback audio systems that aren't speech related? What about intelligently using vibrations and other kinetic methods to indicate information so people's eyes aren't distracted?

    These are just off the top of my head, and they're the sorts of things that not everyone can come up with easily. For one thing, they actually require some genuine investigation and research to predict, if they can be predicted at all. A few decades ago, a computer was a building-sized juggernaut -- almost nobody predicted that they would end up on desktops and in everyday devices. That would have been an interesting prediction.
