Human-Computer Interfaces From 2003 to 2012
Roland Piquepaille writes "My favorite forecaster, Gartner, is back with a new series of predictions about the way we'll interact with our computing devices. Here is the introduction. 'Human-computer interfaces will rapidly improve during the next decade. The wide availability of cheaper display technologies will be one of the most transformational events in the IT industry.' Not exactly a scoop, is it? But wait, here is a real prediction. 'Computer screens will become ubiquitous in the everyday environment.' Ready for another prediction? 'Through 2012, more than 95 percent (by volume in gigabytes) of human-to-computer information input will remain keyboard- and mouse-based.' Check this column for a summary."
GyroMouse (Score:1, Interesting)
Re push vs pull (Score:5, Interesting)
Some of the problems with push technology
Related: DARPA funds "cognitive assistant" (Score:4, Interesting)
Digital Paper (Score:3, Interesting)
Firstly, there is a certain tactile "feel" to writing on actual paper that would be very difficult to replicate - and if it feels too different, I suspect people won't adopt it.
Secondly, cost - could this be brought down to a price that would be economically feasible? If it's not as cheap as paper, it isn't gonna happen.
That's not to say that I wouldn't like to see it introduced; we could all have our workplace documents on those little pads, similar to the ones in Star Trek, and I'm all for anything that will stop the slaughter of forests - I'm just highly pessimistic. The author seems to be of a "more of the same" persuasion as well. Maybe someday, but I don't think we'll see it in the next ten years.
Alternate prediction (Score:4, Interesting)
By volume in gigabytes? Call me a contrarian, but I'll bet videocameras will exceed keyboard input by that standard. Wanna test that notion, Gartner? Point your text editor at a file, and I'll fire up my webcam recorder. Ready? GO!
I was amazed... (Score:4, Interesting)
The question is...how long before this technology makes its way into mainstream computers, or something like it.
Wouldn't it be nice to just look at the monitor, blink twice and have the folder open. Careful where you look though!
Re:Media Labs (Score:4, Interesting)
Take the mouse, for example. According to this article [ideafinder.com], the mouse was invented in 1968. And it didn't become popular until the Mac came out in 1984. That's 16 years of obscurity before general adoption. Granted, there wasn't really any general widespread use of computer technology in that 16 years, so these days it'd be a good bit less. Still, people are really slow to switch away from something familiar that "works".
My predictions. (Score:1, Interesting)
Chip makers will get a clue and start offering cooler chips. The refrigeration wars begin. By the end, all chips are cooled to below-freezing temperatures, and geeks are left stating, "It doesn't matter at all anymore."
The falling cost of immense broadband in Europe and Asia (but not the US) gives rise to a new spin on an old game - muds. Graphical muds (e.g., EQ, DAoC) are able to be run by non-corps. The first few take immense skill and talent, causing people to say, "Woah." Soon, thanks to open source, it becomes impossible to find these original games among all the clones started by 14-year-olds who thought the administrators of the originals were a bunch of nazis.
Speaking of graphics, NVidia is wiped off the face of the planet when their latest graphics card malfunctions and goes critical. Engineers fail to activate the safety coolant systems, and the card explodes in a glorious mushroom-shaped cloud of nuclear energy. And users of ATI will rejoice.
Paradigm shift (Score:3, Interesting)
I really couldn't care less about those modes of input. Can you imagine everyone in the office talking to their computers at once? And it wouldn't really help that much for programming or data entry, the tasks that a lot of computers get used for. As for handwriting, my hand starts to hurt after about five minutes of writing stuff on paper, and I usually give up and open up Notepad. And that's not even considering that my handwriting sucks and would be about ten times as difficult to process as "normal" handwriting.
What this guy really isn't saying much about is direct optical feeds (i.e., beaming visual information onto your retina, or inserting false visual signals higher upstream in your nervous system) and direct mental input, whether in the form of reading the synapses in your brain or recording your motions as you type and gesture in the air.
That's the kind of technology that will cause a major shift in the way we use computers, and is so different from our current modes of interaction that you can't really extrapolate from here to there. I'm sure scientists during the 40s and 50s were predicting great advances in vacuum tubes (the science fiction authors certainly were at least) that never materialized, or at least that were never utilized, because of the development of the microchip.
I have no idea if those kinds of technologies will be fully developed in the next ten years or not, but I don't think this guy has any better an idea than the rest of us.
No more screens (Score:2, Interesting)
By 2012 computer displays as we now know them (LCD, CRT) will have been relegated to inexpensive embedded systems. Bleeding edge office information devices will function by tracking the user's movements and speech, as well as manipulation of common objects in her work environment. They will serve the same purpose as graphical icons do today.

The computer screen will have been subsumed into dynamic surface markings and other detectable changes in the objects in her environment. They will have reflective (as opposed to backlit) display surfaces where information can be encoded in textual, graphical, color, or texture attributes, and sometimes some degree of 3D physical configuration changes. These will range from writing surfaces that resemble paper, cards, packaging materials, and other document-like entities, to instrument or appliance control panels and communications devices.

User interactions with these items can produce changes in both the displays and the underlying data repositories. Moving them, rearranging their relative locations, adjusting them, speaking into them, and other as yet unforeseeable user interactions will effect the state changes that embody the user's day to day tasks. Think of a cube with an environment of intelligent interactive devices that visibly and audibly change as work gets done. The devices themselves will also be communicating and interacting as needed.
Interface Idea (Score:5, Interesting)
Displays Improved, Interaction Probably Not (Score:4, Interesting)
Display technology has vastly improved. I'm now just waiting for the price to come down on IBM's T221 LCD so I can have one on my desktop. We purchased one at my workplace and it just blew me away. It is the first display I have ever seen that can be reasonably compared to quality laser printing on paper for its rendering of sharp, crisp, readable text. 9.2 million pixels in the thing and NOT ONE OF THEM IS DEAD. Yep, none, nada, zilch.
As far as interaction goes though, I doubt we're going to see much improvement. Programmers do a terrible job of UI design and a lot of companies are just too cheap or ignorant to hire professional user interface designers or else provide in-depth training for whoever is doing the UI design regarding usability issues. Most companies are also too cheap to do real usability testing. They might test out the new UI on the guy three cubicles away, but he's hardly representative of your customers. Until that changes, human-computer interaction is not going to improve.
Think outside the box! (Score:4, Interesting)
Prognosticators have been chasing this dream of a paperless office for decades now, with very little realization. Indeed, some researchers have indicated that we like paper because it lends itself to spatial organization of information -- you're likely to remember where you left a paper document even long after you've last used it.
With cheap displays, we can make small, portable displays -- sort of like Microsoft's failed eBooks, but you get to view whatever information you want, whether from your own library or on the net.
And get this -- these would be cheap enough that you could have a small collection and sit down at your desk and leverage your brain's built-in spatial organization strengths. And when you don't need that information anymore, just call something else up.
Many people use multiple monitors. This would be like multiple monitors that you can stack, reorganize or just toss into your outbox.
I don't know if the designers of Star Trek:TNG had this sort of thing in mind, but in that series and every one since then, you'll see characters sitting at a desk surrounded by a mess of these little things.
Interface design, speech and handwriting recognition, sure. But just being able to move data around in real space is going to be very comfortable for us.
My random thought on the subject (Score:5, Interesting)
--------------- begin article --------------
I bet I can guess something about you: right now you are reading something on your computer screen. The text appears on a display set near eye level, probably as black text on a white background, or white text on a black background. You read all the text that is visible on your screen, then you press a key or click a mouse button to scroll down to see more text.
Was I right?
Since the early days of computing, fifty years ago, that is the way data has been transmitted from computers to people. The improvements have been quite modest, involving sharper displays, more readable fonts, better choice of foreground and background colors, and so on.
In the same time period, there have been many attempts to improve how data flows the other way, from people to computers. Different keyboard layouts have been designed. Voice recognition may be just around the corner. The mouse has changed how data is input, possibly not speeding it up for power users, but enabling a whole new class of users to communicate with a computer at all.
Data flow in the other direction has remained the same, an exact simulation of reading text on a printed page. Yet computers are much more powerful than a printed page. Is it time to take advantage of this? How could this be done?
Certainly the real limit on how fast people can read is how fast they can process the underlying information. But some part of a reader's brain is occupied with deciphering the text on the screen. For some dense texts that percentage will be trivial, but for many others it won't be, so the question becomes how much of that can be removed, getting people closer to their theoretical limit.
One change that already exists is to have computers read the text out loud. Unfortunately, while most people can speak much faster than they can type (or write), it is doubtful that most people can listen faster than they can read. One reason is that spoken language, with its elided sounds and lack of spelling, is less informationally dense than written language. Thus it is faster for a person to speak than to spell, but slower for him or her to listen than to read. While computer reading is a boon for people with certain disabilities, it does not speed up how fast data flows from computer to person.
A more radical idea would be to reconsider why the text stays still and the user's eyes move. Why not scroll the text so the eyes can stay still? Of course the computer would have to adjust the scroll rate for different users. Since your hands aren't doing much of anything when you are reading, I could imagine reading text that was scrolling by with one hand on the mouse, with the left button slowing down the scroll rate and the right button speeding it up.
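The two-button rate control imagined above can be sketched as a tiny state machine. Everything here (the class name, default rates, step size) is invented for illustration, not part of any real reading tool:

```python
class AutoScroller:
    """Sketch of the idea above: the text scrolls on its own, and the
    reader's mouse buttons nudge the rate instead of dragging a scrollbar."""

    def __init__(self, lines_per_sec=2.0, step=0.5, min_rate=0.5, max_rate=20.0):
        self.rate = lines_per_sec   # current scroll speed
        self.step = step            # how much each click changes the rate
        self.min_rate = min_rate
        self.max_rate = max_rate
        self.position = 0.0         # line offset into the document

    def left_click(self):
        """Left button slows the scroll down, clamped to a minimum."""
        self.rate = max(self.min_rate, self.rate - self.step)

    def right_click(self):
        """Right button speeds it up, clamped to a maximum."""
        self.rate = min(self.max_rate, self.rate + self.step)

    def tick(self, dt_seconds):
        """Advance the viewport by rate * elapsed time; a real UI would
        redraw the text at the new offset."""
        self.position += self.rate * dt_seconds
        return self.position
```

A real reader would call tick() from a timer loop and repaint the text; the point is only that two buttons are enough to steer a continuous scroll.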
What about changing how the text itself is displayed? It's risky to get too far away from this because everyone has a lifetime of training in reading printed text in books. Still, you can speculate. What if different parts of speech were color-coded on the fly, or displayed in different fonts, or in a slightly different location on the line? What if the computer compressed certain words as they appeared (such as compressing George W Bush to GWB - the reverse of a trick that writers use: typing frequently-used phrases in shorthand, then going back and replacing them later, or letting Word's auto-correct feature do it for them)? This may be disconcerting at first, but it may turn out that with practice it can improve the transmission speed for people who need to quickly digest a lot of information coming at them from their computer.
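The on-the-fly compression trick could be prototyped with a simple phrase table. The table entries and function name here are made up for the sake of the example; a real system might learn the table from the user's reading habits:

```python
import re

# Hypothetical shorthand table -- entries invented for illustration.
SHORTHAND = {
    "George W Bush": "GWB",
    "human-computer interface": "HCI",
}

def compress(text, table=SHORTHAND):
    """Replace each frequently-used phrase with its shorthand form,
    longest phrase first so overlapping entries resolve predictably."""
    for phrase in sorted(table, key=len, reverse=True):
        text = re.sub(re.escape(phrase), table[phrase], text)
    return text
```

Run in reverse, the same table gives the writer's trick the article mentions: type the shorthand, expand it later.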
Moving beyond text, consider the fact that a sign language translator can keep up with spoken language, and is also limited in speed by the need to move hands and arms around. One of the advantages of sign language is that location within space can be used to convey information; for example a room can be laid out visually and then movement within that room conveyed by changing where the signs are shown. Could computers use a similar trick on the screen to speed up how fast information is displayed? It could be a lot of work to learn how to interpret this, just as learning sign language is a lot of work, but the payoff could be worth it.
The main thing is to get out of the mindset that static text on a screen is necessarily the best way to present information. Once that assumption is shattered, interesting ideas should follow.
---------------- end article ---------------
- adam
Keyboard won't be superseded (Score:3, Interesting)
60% probability? Are you nuts? How about 100%? I can consistently and constantly type at 100 words per minute, but I certainly wouldn't want to talk that fast. I doubt I could, and even if so, it would hurt my throat after a while.
Writing pads are nice, but again, I -- and most other people -- can type a lot faster than we can write.
Other forms of inputting data into computers will remain niche at best. Voice recognition will be used to quickly convert professor's lectures into documents, and hand-writing recognition will be used to convert hand-written notes into documents.
However, no one will be writing 10-page papers by hand or speaking them. Could you imagine it?
"While I was, umm, 6x backslash, going to the park and um, 8x backslash, I saw a..."
In short, it's not going to happen. Outside of planned presentations, people speak in a manner which is meant specifically for dialogue and which does not make much sense on paper.
Electron beams are bad for you (Score:2, Interesting)
I find it interesting that he predicts the death of CRT screens so far away. I see that as happening right now.
New idea for scrolling... (Score:3, Interesting)
I know from watching computer logs and other text scroll past on a screen that you can make sense of a LOT of information scrolling past very quickly.
It would be interesting to see how annoying it would be to have a browser start scrolling automatically as soon as a page was loaded, or if it would be of use...
Re:A Solution Looking For a Problem (Score:3, Interesting)
Walking down the street *might* be an advantage, but I for one (and I know I'm not alone on this) would be wholly embarrassed to walk down the street talking to my computer and would also be irritated by people who would do it (just like I'm irritated by people who walk around talking on their cellphones). Plus there are some safety issues, although smaller than those while driving.
Also, I think it would be really weird to have a computer talk back to me (not to mention a little inconvenient, how many times do you actually read entirely through a webpage? Not very often. Usually you just skim it.) and I would much rather interface with the computer using a semi-transparent glasses display.
taxonomies vs file folders (Score:3, Interesting)
What if I want to swap a symbolic link with the primary inode? What if I want to inherit many custom-defined attributes? What if I want multiple inheritance - several equal parent folders, not just a parent with second-class s-links?
I agree with the prediction about taxonomies and knowledge maps.
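The distinction above between "equal parent folders" and "second-class s-links" maps onto hard links versus symlinks in a Unix filesystem. A quick demonstration (POSIX only; the file names are invented):

```python
import os
import tempfile

def link_demo():
    """Create a file, a hard link, and a symlink, delete the original,
    and report which survivors still resolve."""
    with tempfile.TemporaryDirectory() as d:
        original = os.path.join(d, "original.txt")
        with open(original, "w") as f:
            f.write("data")

        hard = os.path.join(d, "hard.txt")
        os.link(original, hard)     # same inode: a true second parent
        sym = os.path.join(d, "sym.txt")
        os.symlink(original, sym)   # points at a path, not an inode

        same_inode = os.stat(original).st_ino == os.stat(hard).st_ino
        os.remove(original)
        # exists() follows symlinks, so a dangling symlink reports False
        return same_inode, os.path.exists(hard), os.path.exists(sym)
```

The hard link keeps the data alive after the original name is gone; the symlink dangles, which is exactly the second-class status the parent comment complains about.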
I hope there are more jogwheels, too (Score:3, Interesting)
The Griffin PowerMate is a cool-looking device (I just ordered one, but have not yet had a chance to play with it), and I hope it will meet that description pretty well -- I am curious (and pessimistic, but willing to wait) about its free-spinny-ness.
I'd prefer a spinning jog wheel to a mouse wheel for the same things that mouse wheels are used for right now.
More importantly, I'd like a jogwheel for both playing and editing sound and video. In Mplayer, for instance, rather than the arrow keys + space bar (though those are fine), I'd rather be able to tap a jogwheel for pause / play, roll it forward for fast motion, roll it backwards for backwards fast motion, etc.
I'd like the GIMP to be jog-wheel improved, too, so any operations which have a slider could be activated by the jogwheel instead.
Multiple reconfigurable jogwheels would make video editing more fun, too -- say, one for standard audio track volume, one for added voice over or music track, one for moving around in the video stream itself. (For which a real video mixing board would be nice too, but less useful for other things).
Another example of using several jogwheels might be this (and I'm thinking of the way the powermate works, as I understand it -- there's the wheel itself of course, and a single "button" which is to say that the whole assembly acts like a mouse button when pressed down):
In Mozilla, have a triplet set up for
1) scroll up / down the current page
2) scroll sideways through all open tabs
3) open and scroll down the bookmarks file
Idea: For all these things, a small and bright LCD display on the base of the wheel would be cool, so it's easy to keep track of its current function.
Also, playing breakout-style games with a mouse is just lame. Think jogwheel = Atari paddle.
Are there any truly superlative jogwheels I should know about? A few old video games had good ones, but I don't remember their names.
timothy
Comment removed (Score:4, Interesting)
Volume estimates... (Score:1, Interesting)
I wonder whether the author was aware that typical keyboard use is probably well below 120 words/minute. Let's say they type 4 chars a second. That's 4 bytes/s.
If you instead try speaking to your computer, your soundcard is probably sampling at 44.1 kHz, 16-bit mono, even though it might later downsample to something like 8 kHz to make the data manageable and more in line with what the software thinks is needed to recognize what you say.
Still, one second of human speech would become ~88 KB of data. That's roughly 22,000 times (!) as much data per second as the average user can input while typing.
Am I the only one who sees this "prediction" as meaning either "fewer than one in 22,000 people will ever speak to their computers" (goddamit machine from hell, may you and Micros~1 go the same way when I run you over!) or "basically, every other form of input has been so flawed that it won't count"? A single user's voice input would equal the input of 22,000 ordinary users pounding their keyboards.
Just to toss some more fuel on the fire, how many times a second do you think your joystick is polled if you play a game supporting a joystick? How much data do you think that is giving your computer as input...
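The comment's arithmetic checks out. Spelled out, using the comment's own assumed figures rather than any measurement:

```python
# Typing: the comment assumes 4 characters/second, one byte each.
typing_bytes_per_sec = 4

# Speech: 44.1 kHz capture, 16-bit (2-byte) samples, mono.
sample_rate_hz = 44_100
bytes_per_sample = 2
audio_bytes_per_sec = sample_rate_hz * bytes_per_sample  # 88,200 B/s, ~88 KB/s

# Ratio of raw voice data to raw keystroke data.
ratio = audio_bytes_per_sec // typing_bytes_per_sec      # 22,050
```

By Gartner's by-volume-in-gigabytes yardstick, one person talking outweighs tens of thousands of people typing, which is why measuring input share in gigabytes says so little.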
OpenSource opportunities to redefine the UI (Score:2, Interesting)
Right now there is no standard for any of these areas. There are no expectations on the marketplace for the look & feel. There is nothing to copy from the world of Unix or from the Mac or from Parc or from Windows. This new field is totally wide open.
Will the free software community step up and demonstrate creative leadership in humane, truly empowering, open approaches to these new UI opportunities? Or, since it's not a chance to either bash Microsoft or promote Linux, will the free software world sigh, yawn, scratch its collective butt and then complain ten years later that corporations are controlling the world's access to these crucial software technologies?