Technology

Making Computing More Human-Centered

buzzdecafe writes "Interesting article in InfoWorld about the future of interface technologies, e.g. pointing your finger to move files around, etc. The story focuses on MIT's Project Oxygen, which aims to make computing more anthro-centric. (Check out the Visual Interaction stuff.)" We've written about Project Oxygen before.
  • The question is whether it's better to point your finger to move something on your screen or to use the mouse. Using your finger requires more effort and arm movement (which could tire some people out), while using the mouse only requires a flick of the wrist... which to me would be 10x faster than pointing with my finger...
    • which to me would be 10x faster than pointing with my finger...


      well, not 10x, but it would be less tiring, and with the mouse pointer you can select smaller objects a lot more easily than with your relatively fat finger...
    • by prnz ( 33032 )
      A mouse works well while sitting at a desk with a ~20in. monitor displaying at most 2K x 1.5K resolution, but try increasing the screen size by a factor of 10 or 100 so that it takes up an entire wall. Now using a mouse requires a lot more effort to move the pointer all the way across (and speeding up pointer movement comes at a cost in accuracy). It's much easier to point at what you're looking at, especially if the pointer follows your finger movement.

      How about 3D? It's difficult to map a mouse that moves in a 2D plane into a 3D environment. 3D joysticks and other controllers really aren't that much better. People already know how to point to objects around them, and that's the whole point of the project: use the communication methods people already know instead of forcing them to learn new, mostly unintuitive ones. Just because typing and point-and-click are easy to learn and work fine with today's computers doesn't mean there's no room for improvement in HCI while everything else gets bigger and better.

      Paul
      • He has a point. My new 21-inch monitor at 1600x1200 is starting to tire my wrist out. I'm thinking about reorganizing my desktop to keep everything I click on in one area.

        A friend who worked for Amazon.com said they have all the taskbars on the left side of the screen there. Supposedly this decreases the amount your mouse will travel and thus increases productivity.

        Makes sense to me, as I close most tasks via the taskbar rather than the little "X".

        • If you're getting tired at 1600x1200, then maybe you should turn up the mouse sensitivity... either that, or use the settings where the mouse moves slowly for precise clicking if you move it less than 2 mm or so (user configurable) and otherwise speeds up... so an inch of movement on the mouse = 2 inches on the screen, but 2 inches of movement on the mouse = 5 inches on the screen, depending on speed... this is how I do it
        • mouse acceleration works... it goes slow when you go slow, but if you speed it up quickly, it moves across the screen really fast with a small flick of the wrist
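
          For what it's worth, under X you can get exactly that behavior with xset; a minimal sketch (the numbers are just a starting point to tune):

            # accelerate the pointer 3x, but only once it moves more than
            # 4 pixels per event; small, slow motions stay 1:1 for precision
            xset m 3 4

            # show the current acceleration/threshold settings
            xset q | grep -A1 "Pointer"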
    • If we're not careful, we might accidentally make Americans exercise.
    • Bet me. I already point with my finger to move files. First I point it at the 'm' key, then the 'v', then the space, etc... Yeah, I'm being facetious, but the keyboard IS a human-centered input device, and it is certainly faster than a mouse, and probably faster than pointing your finger to where you want something to move, as you pointed out.
  • Sounds a lot like Minority Report.

    I really don't think we're ready for that; computers aren't really fast enough. When you have fingers moving around, the interface must be moving REALLY fast. I mean, you can't wait for windows to open or applications to load... it would be really frustrating.
  • by eyegor ( 148503 ) on Friday June 21, 2002 @09:11PM (#3747632)
    As an Anthro-American, it's high time we had better access to them new-fangled corm-puters. I bought an Apple Macintosh, took it home, and all it did was sit there looking snide.
  • What if (Score:5, Funny)

    by ctj ( 210968 ) <christopher DOT ... ey DOT ac DOT nz> on Friday June 21, 2002 @09:12PM (#3747634)
    What if you gave your computer the finger? Would it get upset and crash? :-)
  • This (in particular, speech processing) is the place. Other than improved graphics / simulation (which is mostly dependent on the video card, in any case), user-independent speech processing and AI are the only things I can think of that will need the processing power that is coming down the track.
  • by bunyip ( 17018 ) on Friday June 21, 2002 @09:14PM (#3747641)
    I've been following Project Oxygen for a couple of years; I seem to remember an article in Scientific American about it some time back.

    Anyway, despite the big dollars spent at MIT on it, I'll probably see it on the next generation cell phone from Nokia or Ericsson. Some of the ideas are pretty cool, I can't wait.

    In other words, I believe that a lot of fundamental research will happen here (I live in the US for the moment), but that engineering and delivery will be elsewhere.
  • by shmuc ( 70684 )
    Use your finger to move the mouse cursor, you must.
  • by Knife_Edge ( 582068 ) on Friday June 21, 2002 @09:18PM (#3747657)
    The idea that all machine interfaces can be simplified to the point where they are intuitive enough for any untrained human to use seems questionable to me. The problem is, computers as tools allow abstract tasks to be performed that were not even possible before they existed. They aid humans not only in performing such tasks, but present ways of thinking about tasks that were not possible before. This may or may not have any equivalent at the level of hand gestures or speech. The machines may force us to adapt to some degree, but I would like to point out that at this time humans are vastly more adaptable than even the most sophisticated information technology.
    • You bring up a lot of great points, but I don't think that the original premise, that all interfaces need to be simplified, is correct. I feel that the real question is how to design these interfaces to match the (ever-evolving) context of end-users.

      Computer pioneer Doug Engelbart said it better: if ease of use were the only criterion, we'd be riding tricycles instead of bicycles. However, this should not be an excuse to make difficult-to-use interfaces! We have to look at deeper issues, including usability, learnability, reliability, and expertise, as well as fundamental ones like flow, aesthetics, user experience, and quality-of-life, when designing systems.

      After all, the point of building all these tools and gadgets is so that they work for us, and not the other way around.
        • You're right, but I think what most people in this community are afraid of is the continual insistence that clicking through a bunch of menu items is easier than (and absolutely must replace) simply typing 'make'. I personally feel beleaguered by interfaces that make me use a mouse for every task simply because a mouse is More Intuitive (TM).
          • I personally feel beleaguered by interfaces that make me use a mouse for every task simply because a mouse is More Intuitive (TM).

          The only "intuitive" interface is the nipple. After that, it's all learned. — Bruce Ediger

            • That whole nipple thing only refers to its most basic and rudimentary function... obtaining sustenance as an infant.

              I ask all /.ers out there: do you know how to use a nipple effectively as an adult? It can be a much more intricate exercise, with a multitude of variations of interaction. Personally I find it very intuitive, but I've heard from many female friends that not all guys have this 'intuition' as adults.

            Remember that the nipple is a sensitive interface and that it is definitely subject to 'gestures' and 'clicks' not to mention 'click and drag' elements as well as plenty of 'multitasking'. Also there are several types of 'input devices' which will open up your options and provide more precise manipulation. I suggest avoiding the 'command line' as it is a 'blunt' interface in this context and your partner will appreciate a little more subtlety.

        • people in this community are afraid of is the continual insistence that clicking through a bunch of menu items is easier than (and absolutely must replace) simply typing 'make'

          I don't know about that, I just press F7 (Visual Studio).
  • by shoppa ( 464619 ) on Friday June 21, 2002 @09:20PM (#3747665)
    While these new user interfaces are great for those who want to further enhance the Point-and-Drool [tuxedo.org] experience, I don't see how they generalize to the degree of expression you get from more traditional command-line interfaces.

    I mean, renaming a file to a new directory by pointing your finger is fine if you just want to rename one file. But to suggest that this is an improvement over the command line if you've got thousands of files to shuffle around is completely ignoring the computer's ability to do mind-numbing repetitive jobs quickly and accurately. Instead it's insisting that a human interact at every mind-numbing repetitive step. This is not progress, people!

    • Instead it's insisting that a human interact at every mind-numbing repetitive step...

      Only if you don't also have the option of a command-line interface. There's nothing wrong with adding more potential features for those that want them--as long as they can also be disabled for those that don't want them.

      It's the lack of this last option that has embittered you, probably. Being forced to use kindergarten-style interfaces. What's needed is a sophistication-level adjustment, starting with Command Line Only or even Just the Bits and ranging all the way up through My Very Own Computer Helper(TM).

    • In a mouse-based file manager, you don't move thousands of files one at a time. You highlight all the thousands, and move the group. Why should handwaving be any different?

      Similarly, I would expect to point my finger three times: "Starting with this file *jab*, and ending with that file *jab*, move them all over to there *negligent evil overlord wave*."

      • I believe the person was referring to using a pattern (a shell glob, loosely called a 'regular expression') to copy or move a selection of files out of a heterogeneous directory... ie: copy all .jpg files in /pub to /usr/jpg/

        cp /pub/*.jpg /usr/jpg/

        (the * means match every file whose name ends in .jpg)

        that's the simplest way to grab all the .jpg files in the /pub directory and copy them to /usr/jpg.

        moving them all isn't much more difficult though it requires more effort because it is more dangerous in that it can seriously break your system if applied incorrectly.

        Of course real regular expressions are much more powerful than that and can do things like matching all .jpgs except the ones with a ._ in them and/or those with a number in the name, etc., and then renaming them all by looking at their creation date and numbering them sequentially, for example. It won't help you pick out the best images for an ad campaign, but it will help you make a well-organized thumbnail collection.
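
        A rough sketch of that kind of batch job (the names here are invented; ls -tr sorts oldest-first by modification time, which is the closest thing Unix keeps to a creation date, and the simple loop assumes no spaces in the filenames):

          n=1
          # skip names containing "._" or any digit, oldest first
          for f in $(ls -tr *.jpg | grep -v '\._' | grep -v '[0-9]'); do
              mv "$f" "thumb-$n.jpg"    # number them in date order
              n=$((n+1))
          done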

        BTW I've been using OS X for two years now... since public beta/(alpha) and I'd like to thank the 20 years of Unix developers for their contributions to CS and the awesome power those contributions have given the world over the mysterious byte.

        Before OS X I was a point-and-click addict and thought that it was just amazing to be able to change the colors of my illustration by using a color picker instead of having to redo the design with markers.

    • I mean, renaming a file to a new directory by pointing your finger is fine if you just want to rename one file. But to suggest that this is an improvement over the command line if you've got thousands of files to shuffle around is completely ignoring the computer's ability to do mind-numbing repetitive jobs quickly and accurately.

      Finger gesturing interfaces are not meant for tasks like renaming files. You're right. If someone were to try to implement that, it would be laughable. But personally, if we start using finger pointing gestures to control computers, I would hope that the mere notion of filenames becomes irrelevant if not antiquated. Spatial gestures for spatial problems, text descriptions for text problems.

      Now, use a command-line interface to sort a thousand stock photographs according to whether they're appropriate for your latest advertising campaign, or to determine if a polyketone molecule could theoretically wiggle through a hole in a proposed crystalline lattice. Massively complicated code or a few seconds of human mindpower? I've got your finger gesture right here.

      • No, but a combination of speech and finger pointing could be useful.

        'Pull up directory tree'.

        'Move all files in directory' 'to directory' 'and transform from gif to png' -> Computer moves the files from dir1 to dir2 and runs them through a gif-to-png filter (rename file, change contents).

        Aside from noise levels in offices, this could be a quick way of getting stuff done. Not code though. Saying the names for various brackets is way too time consuming -- even if you go with square, round and squiggly rather than proper names.
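
        At bottom, that second spoken command could translate into an ordinary shell loop; a minimal sketch, assuming ImageMagick's convert is installed and using made-up directory names:

          for f in dir1/*.gif; do
              convert "$f" "dir2/$(basename "$f" .gif).png"   # change contents and extension
              rm "$f"                                         # the 'move': drop the original
          done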
        • Not code though. Saying the names for various brackets is way too time consuming -- even if you go with square, round and squiggly rather than proper names.

          Usually my co-workers know quite well what I mean when I tell them what to type at the command line or in an editor, from the context, even if I leave out the punctuation. That is what I would expect from the ideal machine, too. A simplistic example: if I tell you (in a URL context):

          "virtual com port 8080 query c g i <pause> i d equals 10 level equals 3"
          (might be a bit off, because I am not a native English speaker, but you get the idea)

          you would probably know that the correct URL should be

          http://virtual.com:8080/query.cgi?id=10&level=3

          Of course, the context is important. If I say the same while you are sitting in front of a C++ program, your interpretation would be a bit different. ;-)
    • The Unix ideal is small programs doing simple tasks. If you want something more complex you string them together. The command line works great for that and is arguably better than using a GUI to do the same sort of things, i.e. move a bunch of files, run them through a perl wringer, dump output to the printer, etc. all at once.
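
      A sketch of the sort of pipeline that paragraph describes (the filenames are invented):

        # pull the totals out of every report, sort them, and send to the printer
        cat reports/*.txt | perl -ne 'print if /^TOTAL/' | sort | lpr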

      Some tasks don't fit the model of simple steps. There's more to life than just shuffling files around. How about writing a novel? Touching up a photograph? Building a UT map? Give me a GUI with point-and-drool over edlin any day.

      The point of a good interface is not to require a human to interact at every step, but to ONLY require human interaction when absolutely necessary: at the point a decision needs to be made. There's a lot of work that goes into an interface like that, whether it be a mass of linked scripts and batch files or an app that compresses the tedious work of drawing a Jurassic Park dino into a single button (sorry, 3DSMax joke). But it's worth it to the end user to be able to just point and say "Do what I want with that" (someday). Might even be worth money ;-)

      Paul
      • Building a UT map? Give me a GUI with point-and-drool over edlin any day.


        Not an Unreal map per se, but I do my 3D modeling in Rhino3D, which has, besides a kick-ass GUI, a kick-ass CLI integrated into it.

        At this point in the game (after, granted, quite a few years of use) I have memorized darn nearly every command that I use and I just type it in. Any day now I am just going to nuke the GUI button bars and keep only the four view panes, the CLI command entry bar, and the history log up.

        A much more efficient way to do 3D modeling. Want to move something? Type in the word Move. Want to draw a circle? Type Circle. Want to draw a box? Type in Box. Sphere? Sphere. Cone? Cone. Lovely system, that.

        Want to trim something? Type in trim. :) Rotate? Rotate. Scale in 2 dimensions? Scale2d. Scale in one dimension? Scale1d. Scale in 3 dimensions? Just plain old Scale will do, thank you so much.

        Sweep a curve along two rails? Sweep2. Sweep a curve along one rail? Sweep1. Make a polar array of objects? Arraypolar.

        (and much much more!)

        And of course you just type in Save when you are done. :)

        Naturally a GUI is used to actually place the objects, and an excellent GUI at that. The best as a matter of fact, it feels wholly natural and like a third arm. But without the CLI it just would not be the same. The perfect blending of the two worlds.

        Exactly how natural is this system?

        After an extended use of Rhino3D today I found myself trying to type commands into Photoshop. . . . ::sigh::
        • Naturally a GUI is used to actually place the objects, and an excellent GUI at that. The best as a matter of fact, it feels wholly natural and like a third arm. But without the CLI it just would not be the same. The perfect blending of the two worlds.

          Heh, my brother is the 3D modeler, not me (I just dabble in UT maps), so I'll bow to your superior knowledge and experience with the tools. And I agree with you: once you get the feel for an app and interface, everything becomes natural. I'm almost to that point with Codewright (still fighting some of the quirks), and the VDOS CLI built into it is much nicer than searching for a cmd box in 40 or so open windows. Although, as nice as Codewright is for source editing, I'd give it up in a heartbeat if I could just say "Fix this code so it runs on the new platform. Format it nicer. Add the comments that should have been there in the first place." And then take an early lunch. :-)

          After an extended use of Rhino3D today I found myself trying to type commands into Photoshop. . . . ::sigh::

          lol. I have the same problem with Opera and using mouse gestures in everything else.

          Paul

  • I'm still waiting for programs that keep track of how often they're run, so that when the mouse cursor approaches them the OS is smart enough to begin loading that application, based on the cursor's speed and location, how long the program usually runs, and the time frames it's run in.
    • Intelligent interfaces like you describe wouldn't be incredibly difficult to code; the problem is that they'd be unreliable. What if the interface "thinks" that you're going to attempt to load one program, when in reality you're actually moving for the program next to it? Now computer resources are being used up to preload the wrong program, slowing the time it takes to load the actual program you want to use. Everyone needs to stop hoping and waiting for the next "nifty" interface and realize that function over form is still better.

      Like someone else said, for most tasks you just can't beat the CLI. Everything else is just going to slow you down.

      What people should be waiting for is a method to make a CLI more accessible and easy to use. Problem is, I can't think of any CLI front-end that would be as quick and easy to use as the CLI itself.
      • Re, What if the interface "thinks" that you're going to attempt to load one program, when in reality...

        The ONE new feature I would like best on my computer is for it to know what I mean when I say "Stop! No! I didn't mean that!"
        • The ONE new feature I would like best on my computer is for it to know what I mean when I say "Stop! No! I didn't mean that!"

          Here's a thought - imagine a system where you use journaling and checkpointing to track *all* changes to both filesystems and program states, and give the user the ability to roll back changes arbitrarily and to great distance.

          This would definitely be useful in recovering from catastrophic user errors, and might even be implementable without having to rewrite every application in the universe (take an image of an application's processes' memory spaces, and either carefully note the state of all file descriptors (especially device handles), or wait until they're in a sane state before checkpointing).

          One of the cluster job distribution tools I've come across already does this to some degree ("condor", which can be set up to do checkpointing if desired).
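
          You can already fake the filesystem half of this with hard-linked snapshots; a minimal sketch, assuming a reasonably recent rsync and a made-up /snapshots layout:

            snap=/snapshots/$(date +%Y%m%d-%H%M)
            # files unchanged since the last snapshot are hard-linked, so each
            # checkpoint is cheap but is still a complete rollback point
            rsync -a --link-dest=/snapshots/last /home/me/ "$snap/"
            ln -sfn "$snap" /snapshots/last

            # "Stop! No! I didn't mean that!" then becomes, roughly:
            # rsync -a /snapshots/last/ /home/me/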
      • Intelligent interfaces like you describe wouldn't be incredibly difficult to code; the problem is that they'd be unreliable. What if the interface "thinks" that you're going to attempt to load one program, when in reality you're actually moving for the program next to it? Now computer resources are being used up to preload the wrong program, slowing the time it takes to load the actual program you want to use.

        The solution being for everybody to stop coding bloated crap so that things do not take so darn long to load. :)

        Seriously, now and then programmers should take time out from adding /new/ features and start optimizing the ones that they already have. Programmers that DO already do this for small freeware apps consistently turn out insanely small applications that have negligible loading times.

        Unfortunately, everybody else's applications. . . . ::sighs::

        The true wish here should be for applications that do not take so long to load that a preloading feature is necessary!
    • Spider-sense loading?

      I don't really know that that is a priority...
  • by jaaron ( 551839 )
    Personally, I don't really care if I can wave my fingers in front of my computer screen, or if the mouse follows my eye movements or something like that. I would much rather have a very efficient interface with the computer, which is why I often use just the command line (my laptop runs RedHat and I almost NEVER use X windows). It's just very efficient.

    But it's not very intuitive. And that's the tradeoff. Intuitive interfaces are usually not very efficient when you really think about interfacing with the computer system and getting a lot of work done with little effort. I don't think there's a problem with either approach. In fact, we need both. That's one reason I do like Unix/Linux -- when I need intuitive, I run X windows (okay, it's not as intuitive as a Mac, but it's better than nothing); when I need speed and efficiency, I'm on the command line writing a script or perl or something.

    Anyways, my point is, there are going to be lots of geeks who say, "heck, who needs finger pointing? I don't even use a mouse!" But that's the wrong attitude. Intuitive interfaces have their place and need to be improved upon.
    • Intuitive interfaces have their place...

      Exactly right. I'll never forget getting my hands on a Xerox Star system in about 1983. I'd used a mouse as a CAD drafter, but to use one working with text, to be able to add images, drag items, and so on--it opened a whole new world. And who can forget the impact of the first Mac? It changed the world. So could this.

      Suppose a geek prefers not to use finger-pointing. Fine. Don't use it. But they shouldn't waste their time being contemptuous of the average user's needs. Anything that empowers the individual is a step forward. Besides different levels of knowledge, people have different styles of interacting with the world--verbal, visual, etc. Let 'em all have what they need.
    • The tradeoff between efficiency and intuitiveness will probably get smaller in the future, but progress requires both faster systems and better UI design. The faster part is easy; in fact, a lot of the performance of a 1+GHz PC already goes unused except by the newer games (at least that's true on my PC).

      It's the development of better interfaces by the people who use computers day in and day out that needs to push forward. And the parent poster is right: the "CLI RuLeZ a11 d00d!" attitude of a large part of the geek community as seen right here on Slashdot isn't helping.

      I'll bet dollars to donuts (mmm...dollars) that Microsoft R&D engineers aren't sitting on their asses, content that Windows is the be-all and end-all of user interfaces. (Their Marketing might say that, but Marketing will say anything. And no, the MS R&D thing isn't just a joke...they don't really have an Apple to steal from anymore when it comes to interfaces.) They know that if they don't keep moving, someone is going to come up with a better interface, and at that point everyone else will be struggling to catch up. Project Oxygen and similar efforts are promising, but without help they might end up being to MS what Xerox PARC was to Apple (a bit of cash, but no future).

      Paul
      (That last sentence makes me sound like I'm with Project Oxygen. Nope, just interested. I _am_ working toward a Master's in HCI though.)
    • I want to be able to yell questions at my computer from the other side of the room and have it spit back an answer: "Computer, what do I have to do today?" My computer should also have a map of my area and keep a database of tasks I do frequently, where I have to do them, and how long it usually takes me to do them. That way I can just spit it a list of 15 - 20 things I have to do in a day and have it find the most efficient order for me to do them in. I should be able to plug in multiple databases so I can ask it questions like: what is the most or least densely populated place in the country, what is its average temperature, how far is it from the beach, etc. Map the 20 best places to live based on.. What are some popular electives to take with.. There will be huge advancements in the way humans use computers when they become that good a source of information.
      PS. I am currently looking for a good program to keep appointments in. TreePad is a good one I found recently for storing information and notes.
    • Intuitive depends on what you are doing at the time. For working with plain text, a command line is obviously the best. If you want to do formatted text, it gets much more complicated and unwieldy.

      Most programmers can get more done with the command line because they work with plain text for 80% of their work. Object-oriented programmers probably get less use out of the CLI, and anyone working on a full project with lots of dependencies will achieve dramatic improvements with an IDE just for organizational reasons.

      Likewise any job that requires high levels of multitasking will also see a marked improvement via the use of a visual navigation environment... it's the 'overview' effect of using a map/flowchart or similar for strategizing instead of trying to hold all the details in your head or as a list of information.

      These seem to be the two most common perspectives at work in an environment: the detailed single-item perspective and the general many-item overview. Each has a range of intuitive methodologies, some similar and some radically different. All jobs utilize these in differing proportions.

      Each tool should recognize this as well as support integration with the other tools available, such as the capability to script via the command line a particular visual interface or the ability to import or pipe the output of a command line into a visual interface for further manipulation.

      We do understand these things, as evidenced by the plethora of tools available which support this working model of detailed/overview integration. In fact it seems that this has been improving substantially over the last couple of years as fields like graphic design and programming become entwined via web design, etc.

      No conclusion until I actually read the article.

  • Personally, I'm more interested in the MUI -- Mental User Interface. The command line/language combination is hard to beat (unless you're talkin' porn, of course.)

    What I'm looking forward to is the self-aware machine with acceptable communication skills and the ability to do contextual reasoning. This would be far more useful and interesting than more visual stuff as concerns my needs.

    Of course, talkin' porn might have merit too.

  • by dmarx ( 528279 )
    The question is 'what's the market,'

    What's the market? This is a technology that will make computers more intuitive and easy to use, and this guy wants to know what the market is? I'll tell you what the market is: it's everyone who wants a computer, not a hobby.

  • The zsh command line is the easiest, fastest, most efficient interface ever devised (though home/end support would be appreciated). KDE/GNOME tie for second. Windows and Macintosh interfaces are a ways down the line.
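
    (Home/End can usually be wired up in ~/.zshrc, for what it's worth; the escape sequences vary by terminal, so these are just the common ones:)

      bindkey "\e[1~" beginning-of-line    # Home on many terminals
      bindkey "\e[4~" end-of-line          # End
      bindkey "\e[H"  beginning-of-line    # Home, xterm variant
      bindkey "\e[F"  end-of-line          # End, xterm variant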
    • The zsh command line is the easiest, fastest, most efficient interface ever devised

      I've always preferred ed [gnu.org] myself.
  • by Anonymous Coward
    This only works with a good one-mouse-button GUI model.

    When a finger is used to drag, press, select, tap, and double-tap, everything is fine on a touch-sensitive 2005 AD flat-mat computer.

    but OSes that mandatorily require multiple mouse buttons as part of the GUI (all OSes except NeXTstep and the Macintosh OS) will be left behind.

    Why?

    because as Steve Jobs (and I) predicted back in 1982... a computer will never know WHICH FINGER you used.

    Unless it knows WHICH FINGER was used to tap with, only GUIs that are based on a one-mouse-button principle can be truly integrated into these futuristic computers that are nothing more than flexible flat dinner-mats (no physical keyboard, just a temporary video overlay keyboard if needed at times).

    I have mentioned this 4 times on Slashdot, once every year or so.

    Nobody seems to understand why I keep warning people to remember to use one mouse button in designs so that we can progress.

    If you never used a Mac or NeXT, you will never understand how a GUI works well with one mouse button, so don't bother flaming this. You need to try it for a while to understand that three-, four-, or two-button mice are not good to demand as minimum GUI design principles.

    • by Anonymous Coward
      In 1985 a head-mounted ultrasonic eyeball-mouse existed for the Mac.

      It was patented, and soon the patent runs out. A human neck is VERY VERY steady and accurate. Tape or fuse a broomstick to a human head or helmet and see how non-trembling and steady a long rod is. Well, this ultrasonic pointer was a three-receptor ultrasonic headband that extrapolated what the user was looking at on the screen and MAGICALLY moved the mouse to whatever they looked at!

      It cost about 300 bucks for the Mac version, which had a cord running to the headband. They never released a cordless version. The cord was a miserable shackle: suddenly standing up while forgetting the cord, or kicking back away from the desk on a rolling chair... BOOM, you accidentally knock around everything the cord interferes with as you yank it.

      Again... a single-mouse-button interface works best, because if they ever added "strong-blink" detection for a click, or some other clamping mechanism based on jaw angle, the computer GUI would run flawlessly.

      Single-mouse-button designs allow all sorts of non-messy input methodologies.

    • by Anonymous Coward
      If you never used a Mac or NeXT, you will never understand how a GUI works well with one mouse button, so don't bother flaming this. You need to try it for a while to understand that three-, four-, or two-button mice are not good to demand as minimum GUI design principles.

      I've been working with Macs for more than ten years, and the multi-button mouse is one of the best things to come to the Mac in that time -- command-clicks, option-clicks, and click-and-hold all require more effort on a one-button mouse, and if you're doing any kind of involved design work, a one-button mouse is a fast train to RSI-ville. If you've ever used a Mac, you understand how much less intuitive [command-click] is compared to [thumbclick].

      I'm sorry, but in my experience, two buttons and a scrolling wheel are the absolute minimum if you want to talk about good interface design, and if you think the interface of the future is going to be a touch screen of any kind...well, have you ever heard the term "gorilla arm?"
  • Please tell them not to do it like how Tom does in Minority Report.. If I have to use both my hands to operate the PC, I wouldn't be able to, er.. massage myself...
    • ---"Please tell them not to do it like how Tom does in Minority Report.. If I have to use both my hands to operate the PC, I wouldn't be able to, er.. massage myself.."

      Aww, shut up and get off of autopr0n.com :-)
  • by toupsie ( 88295 ) on Friday June 21, 2002 @10:09PM (#3747816) Homepage
    Unless you can plug an I/O interface directly into my brain, you are not going to beat the keyboard for computer access: 100+ keys in approximately a 1 1/2 foot range. We mammals are very adroit with our fingers. Eddie Van Halen and Jimmy Page are perfect examples of this ability. You will never see one of our nearest mammalian cousins, the chimpanzee, do "Stairway to Heaven" without pissing everyone off at the local guitar store sound room.

    The common human can manage the 1 1/2 foot distance of a keyboard fairly well as evidenced by the number of God awful personal web pages on Geocities. Even though I use the most "gooey" Graphical User Interface, Apple's Mac OS X [apple.com], if I want to manage data, files, etc., I jump to the "Terminal" and do it through the Command Line Interface. Even with Mac OS X's speech control and IBM's Via Voice software, I can still type faster than I can talk -- in an intelligible manner.

    I always find it funny in "near future" films how complicated the input interfaces are. They are dancing their hands in a virtual space, acting like data had a form that you could grab and move. What a waste of effort. If you have to flail your arms around for 8 hours, you are going to be exhausted...but at least you will only have to buy one ticket to fly Southwest [washtimes.com]. The amount of effort required to manipulate the 100+ keys of a standard QWERTY keyboard is minimal. Though I have never had problems, I am sure the keyboard design can be improved to prevent repetitive injuries to certain users. We are all different shapes and sizes in various regions of our anatomy. It's hard to pick the "average human being" for a generic device.

    The keyboard is a powerful input device. Even with the 130-year-old QWERTY keyboard [earthlink.net], humankind has been able to create wonders -- without it, we would have never made it to the moon. Compared to the original 1872 keyboard layout by C. L. Sholes, my clear plastic keyboard that came with my Dual G4 is not much different. I know it so well, I don't think I will ever use the Dvorak keyboard [utk.edu], but my future kids might.

    • The keyboard does serve us well, but if it ever gets an update this could be it [slashdot.org]. The virtual keyboard might be hard to use at first, but it also might be able to switch instantly from one mode to another using simultaneous voice commands or (something I've been waiting on for a long time) foot pedals. Not just Shift and Ctrl but Roman/Cyrillic/Hiragana quick changes, and of course your own macro "keys" by the dozen.

      Virtual keyboard possibilities:
      • Change the size of keys to your own specs. I want that!
      • Modify their relative positions--not the alphabet keys but the others.
      • Have the arrow keys an inch in the air above the rest.
      • Have certain keys play a note when I get near them because they're the ones I always hit by accident when trying to do something else, or make them three inches off to the side or...
      • ...in a vertical invisible panel perpendicular to the "keyboard." This starts to merge with the finger-pointing idea.

      P.S. Slashdot meetup URL [meetup.com]
      • The virtual keyboard might be hard to use at first, but it also might be able to switch instantly from one mode to another using simultaneous voice commands or (what I've been waiting for a long time) foot pedals.

        The virtual keyboard is a cool idea. The foot pedals are a must. If a monkey can eat a banana using his foot, why can't I use mine to select the Shift key? The only problem with a virtual keyboard is tactile feedback. Hitting virtual space with meat space mitts might be a little hard on the brain. Touch is such an important sense.

    • Great points, but keyboards don't work as well for small mobile devices or for large wall-sized devices (like SmartBoards). Another problem with keyboards is that it's difficult to do collaborative work with them. There's an implicit one-user-at-a-time model built into the modern desktop computer.

      There's also a question of leveraging natural human abilities, such as speech, sketching, gesturing, and so on. These are the ways people communicate with one another, why can't we do this with computers?
    • by PotatoHead ( 12771 ) <doug@NoSpAM.opengeek.org> on Saturday June 22, 2002 @12:14AM (#3748168) Homepage Journal
      You are right on with this. I have been working with a design package called FreeForm that uses the new SensAble Technologies haptic input device.

      Being able to feel your model has significant advantages, but there is one subtle downside: you get *tired* quickly. One can run an MCAD or ID design package all day with a mouse and keyboard and not blink an eye. The haptic device will leave your arm sore after a few hours.

      Personally I feel we have not done near enough with audio input...

    • "I can still type faster than I can talk -- in an intelligible manner."

      No, you can type faster than your PC can understand you talking. For now.
    • I have to agree. The only improvement we could possibly make over the keyboard is intuitive voice input as seen on Star Trek.
      Until this technology is commonplace, the keyboard is king.
      A while back the UK magazine Linux Format had an article about accessibility. One of the devices they had a picture of was a huge almost horseshoe shaped wotnot. You placed the wotnot between yourself and your display and then pointed at the display through the wotnot. Sensors in the wotnot detected your finger and translated it into a position on screen. In a sense you could control the display by pointing at it. But you needed this huge unsightly wotnot. My immediate reaction was "Stop it! This is silly!"
    • Only hicks, rednecks, and other trailer trash fly on Southwest Airlines. I think Slashdot readers will be safe enough.
  • by bilbobuggins ( 535860 ) <bilbobuggins@@@juntjunt...com> on Friday June 21, 2002 @10:35PM (#3747882)
    who decides though, what is human centric?
    if you think about it, the whole reason humans are in the position they are in is our ability to adapt to things NOT 'human centric'.

    by the same token you could argue that we have developed in such a way, so well suited to our environment, that EVERYTHING is 'human centric'.
    the fact that we can pick up a stick and use it for a tool, does that make us more adaptable or does it make a stick more 'human centric'?
    given that we can interact fairly well with just about anything, how are they deciding what actions/motions are more native to humans?

    They didn't answer this either; from the project site it seemed that most of the 'improvements' focused on voice commands and having the computer do menial tasks for you, meaning less interaction, not easier interaction.

    • The key to human-centered computing is to start with real human needs, not with technology. It's about empathizing with end-users, understanding their current tools, their work practices, their tasks, and their organizational context, and then iteratively designing software to match their needs.


      As the University of Maryland's Ben Shneiderman writes, "The old computing is about what computers can do. The new computing will be about what people can do."

    • who decides though, what is human centric?

      The user.

      The article is a red herring. The future of usable interfaces lies in making the interface behave the way you want, not the way the designer wants.

      MIT made an interface where you move things with your finger. Good ... for some people. Others might feel more comfortable interacting in other ways: speech, mice, etc. etc.

      Usability research is excellent, improvements are always welcome, but it is still the software producers saying "this is the interface you have to use".

      When you can interact fully in whatever way you feel like at that moment, computers will be human-centric.
    • C'mon, are you serious or just trolling? I have a hard time believing the former, but here goes...

      Of course people _can_ adapt to many different technologies, but that doesn't mean the experience is particularly enjoyable.

      Fashioning spears out of sticks and flint is very time-consuming, but at one time it put food on the table. Nobody goes around making lithic tools anymore because they have better ways to spend their time now.

      Likewise, many people have better things to do than fumble around with arcane computer technology. Some have trouble using a mouse, which isn't a particularly intuitive device. Others just don't want to be bothered fighting the machine to get what they want done.

      While we may be adaptable, we take to some motions/idioms easier than others. That's the whole point of human-centric: making things more natural.
  • Big whoop (Score:2, Funny)

    by Alien Being ( 18488 )
    Our local weatherman can move entire hurricanes around with a wave of his hand.
  • by Art Tatum ( 6890 ) on Friday June 21, 2002 @11:38PM (#3748065)
    Or is that MIT page just an overblown dramatic way of saying, "Hey! We're doing usability testing!"
  • Good idea (Score:5, Funny)

    by PD ( 9577 ) <slashdotlinux@pdrap.org> on Friday June 21, 2002 @11:38PM (#3748067) Homepage Journal
    Somebody once said that the only intuitive interface was the nipple. Everything after that had to be learned.

    • The problem is, the closest nipple-input device I can think of would have to be some sort of rubberized micro-joystick that's manipulated with the tongue...and what about the RSIs there!?
    • Somebody once said that the only intuitive interface was the nipple. Everything after that had to be learned.

      Well, I'm on my laptop right now, and getting the hang of the nipple sure took some practice. :/
    • Somebody once said that the only intuitive interface was the nipple. Everything after that had to be learned.

      Only problem is that even newborn children need to learn how to handle the nipple, so the nipple is a learnt interface, too. What a pity; your above quote used to be one of my favourites until I learned, by becoming a father, that it's just not true. :-)

  • I still maintain faith that Rover, Shelly the turtle, and all the Microsoft Bob gang will show us the way to a new userfriendly future!
  • ...tell your computer to move a folder inside another, and just by pointing with your finger, it would happen... a natural language, multilingual conversation system that can understand and respond to normal speech... a self-configuring, decentralized wireless network... He points to electricity as a resource that works this way, in that it's ubiquitous, but also unobtrusive.

    I don't know about him, but I generally interact with electricity by putting little plugs in little sockets. The interface makes it quite difficult to accidentally do something different from what I wanted, and there's nothing left up to interpretation. Things always work exactly the same way, unless something's gone terribly wrong.

    With a good interface, you can tell a new user how it works, and the user will be able to predict everything that will happen when they do anything with it, and will be able to do the things they want to do reliably without ever doing something unintended.
  • Imagine the possibilities that this will open up for us! Go from clicking on the keyboard to actually using a writing utensil, some kind of light-based pen. I know many people who hate the layout of a keyboard and would love to be able to write instead of typing. Besides, you have more control that way: you can add in comments, cross things out, and do all sorts of things (write in the margins) a lot more easily than on a normal computer keyboard.

    And hey, instead of looking at things on a bright computer monitor that hurts your eyes, how bout we get some of that new epaper that will display things you're writing. You can have a stack of it sitting on your desk ready anytime you need to write something.

    And hey, instead of having that bulky mouse to move things around your computer (where files and folders are so easy to lose in the maze of your hard drive) we could have a large group of drawers organized in a certain way... we'll use alphabetically for now. Anytime I get finished with a piece of epaper I can put it by hand into those "file cabinets" (trademark pending biotches) exactly where I want to put it. I'll never lose a note again, since everything will be in one place and I don't even have to worry about losing everything to a hard drive crash.

    The future is now! Lets get people working on this project! I can see the end of the tunnel already! We need funding!!!

    oh wait...
      Ah, but what if you could do all those things while sitting at the strip club or the ball game in between 'plays'... how about ordering up the bio on that girl, or the stats on the player who just scored... then filing away the best parts for your scrapbook/journal, plus the video footage your eyes just recorded?

      That's what I'm talking about. Enjoy real life to the fullest and bring your office with you.... currently we only have the option of sitting in the office and living real life vicariously through cheap versions on the internet.

      This of course only applies to those of us unlucky enough to have to work for a living but lucky enough to have a job.

  • . . .but I downloaded and watched a few clips from the Project Oxygen website. From the amazing stuff I've seen coming out of MIT (especially their media lab), I expected more. . .

    The voice recognition was noticeably laggy, and in the intelligent-meeting clip, the guy has to say "computer" twice. I remember dictating my final essay for high school English using Dragon Dictate, and its accuracy and speed didn't even require me to slow down or speak deliberately.

    Also, the "sketch" demo was rather lame, even if it makes great PBS material that even computer-phobics may enjoy watching. The little 2D physics simulator looks exactly like a program (the name escapes me) that I had, again, in high school. We used to spend hours making little goldberg machines instead of working. What's new here? They've added a little pen-style pointer? That's hardly a new paradigm for human-computer interaction. With the kind of lofty language that the project oxygen site uses, I would've expected more. They claim that current interfaces are cumbersome and require us to do a lot of the work for the computer, well, having one guy wearing a headset microphone and using a keyboard to issue terse monotone commands seems pretty unnatural to me.

    • Hey, hey...this is MIT we're talking about, not some crass corporation only responsible to its stockholders. MIT students don't use crass corporate software. MIT students create the future!

      Remember, we're talking about MIT here.

      • Hey, hey...this is MIT we're talking about, not some crass corporation only responsible to its stockholders. MIT students don't use crass corporate software. MIT students create the future!

        Are you involved in the project somehow? How do you know this?
  • I can't wait to jedi mind trick one of these things.

    "These aren't the files your looking for"

    Error: move aborted: files not found

    "You can go about your business. Move along."

    Resuming job [537]: wget -r http://www.autopr0n.com
  • You've used a touch screen. Now imagine trying to edit on one. Any questions? With a low-resolution pointing device, you're forced into a system with a small number of choices at any one time. This puts you back in Menu Hell, circa 1985.

    If you've ever actually tried it, gloves-and-goggles VR sucks rocks as a way to get anything done. You can shoot. That's about it. Sculpting and air guitar have been tried, but without force feedback, they are nothing like the real world. Building anything is hopeless.

    Mice won out because you can move a little pointer precisely. Gestures with a pen also work. But gestures in free space, no.

    • Seems to me that mice are the best choice given a 'limited'-resolution device. If you were working with something that could scale to the full human interaction space (ie: your 'personal sphere' of influence) and scale in resolution, then your finger, or whatever physical pointer you used, wouldn't be so 'blunt'.

      Think about using a virtual pane of glass about a meter square for your 'screen' instead of a little laptop or monitor screen. Also, being able to 'zoom' in for precision would add a lot, especially if it were a localized zoom, more like a magnifying lens, just for grabbing vector nodes or small sections of text, etc.

      I do agree that current versions of gloves-and-goggles aren't the answer though.

      It won't really work until it's more like a holographic display, something very resolution independent, or with an extremely high maximum resolution that can start out in the middle somewhere.

      As for pointing devices other than mice... anything that requires full motion of the arm and elbow won't go far for most people... too much work. You need a pointer with some buttons that is ergonomic but can translate small ranges of motion into large selections, etc... like those old copying devices (pantographs) artists used to enlarge drawings... like the mouse does with its 'travel/speed' attribute turned up high.

      I'm thinking of a gyroscopic device that measures distance traveled and can reset its relative position to the display as needed (the way you do it manually with a mouse when you pick it up and set it back down at the beginning of your pad).

      my 2 units
        • especially if it were a localized zoom

        I saw the first implementation of that at Xerox PARC in the mid-80s. It's appeared since in a few games and CAD programs, but never caught on. It makes pointing more precise, but slower.

        I'm thinking of a gyroscopic device.

        Like the Gyromouse? [gyration.com]

        Again, it's great for first-person shooters, and for PowerPoint, but not much else.

  • I've managed to learn about the next Windows BUI (Body User Interface), straight from their ultra-secret research labs!

    First of all, the next Windows version will not recognise keyboards, mice, touchpads, etc., as they're obsolete devices that the new generation of win-users should gladly forget.

    Of course, a virtual keyboard will be included for compatibility reasons; only it will be hidden, so that the average user will never be able to find it.

    The hardware used by this interface will be a full set of position sensors that the user puts on his body, each connected to the computer via its own wire; the user will have to stand in front of a huge monitor and move his whole body to send commands.

    The software interface will be an unlimited 3D space, of course, with an avatar of each user connected to the system as well as small 3D "icons" for each available program: users will have to grab an icon to launch the program, or kick it to see an alternate menu (right-click-like), or hit it with their head to select it for other reasons.

    Common Windows commands will have their shortcuts, with a triple somersault being "shutdown", scratching your armpit being "close", and of course what on some unices is called the three-finger salute will be achieved with one (the) finger only, and will allow you to log in, log out, check for unstable programs, shut down, etc. etc.

    Resistance is futile. :)

  • MIT used to be cool. Writing Spacewar for the PDP-1 (an 18 bit computer with no stack that takes up ONLY 17 square feet!), hacking lisp, building robots.

    The more computing power you throw at MIT, the lazier they get, and the wankier their projects get.

    I think if you gave the original hackers the machine power of today they would have built Maria (Metropolis) by now.

    graspee

  • If my memory serves me well, I heard Negroponte talk about this sort of thing in the early 70's. I remember his example as an admiral directing nuclear submarines with arm gestures. Glad to see some progress.
  • It occurs to me that what makes the geek crowd so adept at using computers is the fact that we are able to communicate with the computer on its own terms. A computer is quite used to handling streams of characters, i.e. streams of discrete elements. It's not at all good at putting things together as a cohesive whole -- that's what we humans are good at. Those of us who can talk to a computer at its own level (e.g. by using the command line) are best able to manipulate it and make the best use of what it's good at. If we were to express ourselves to a computer using natural language, we'd once again be constrained to the world of large cohesive wholes by our language, rather than being able to dip into the world of small elements...

    Why would we possibly want to remove the one thing that makes a computer so incredibly powerful -- the ability to handle many small, repetitive, discrete items -- and replace it with what we're already good at, i.e. visualising entire systems? That's pointless. Use it for what it's good at... and to do that, you need to talk to it in its native language...!

    Project Oxygen looks like a few steps backwards to me ...
  • I had my first experience with a computer in 1966 - before most people here on Slashdot were born. The machine was an IBM 360 and the 'user interface' was a card punch - you handed the deck of cards to an operator when you were done creating them.

    In the many years of using computers since I have arrived at one conclusion; for me the most important thing about interfacing with a machine is minimizing the amount of 'wrestling with the machine' which I have to do in order to accomplish my task.

    I'll show you what I mean by 'wrestling with the computer'. Suppose that I want to copy all of the emails in my nsmail directory to a CD for archival purposes. I type 'burncd nsmail' to start the process from the command line in Linux. (burncd is a wrapper I have put around the 'mkisofs' and 'cdrecord' command-line programs which supplies them with the correct options for my system.)
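
    A minimal sketch of such a wrapper (the dev= address and speed here are system-specific guesses; cdrecord -scanbus will tell you yours):

      #!/bin/sh
      # burncd: master the named directory into an ISO image, then burn it
      mkisofs -r -J -o /tmp/burncd.iso "$1" &&
      cdrecord -v dev=0,0,0 speed=8 /tmp/burncd.iso &&
      rm /tmp/burncd.iso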

    Contrast that with using a cd burning program from a GUI:
    • First find the program in the start menu and select it.
    • Wait for the program to load.
    • Pull down File menu - select open.
    • Find the directory I want to copy in the tree structure. Select the directory with the mouse.
    • Select the type of cd I want to burn.
    • Etc.


    I am going to stop the GUI example here; real GUI CD-burning programs are far more complex than I want to write about. The few that I have used make burning a CD quite a lengthy and complex process from the user's point of view. I don't want to wave a mouse around pointing and clicking for 30 seconds; I want to burn a damn CD!

    The amount of time and effort that it takes to get the computer to do what I want it to do is what I mean by 'wrestling with the computer'.

    There are times when a GUI is the way to go: I would hate to try doing a PC board layout from a command line. It is easier to move chips around with a mouse than to type 'move U1 .03 inches to the right'.

    Minimum work on my part - maximum output from the computer is what is important to me as an experienced user. I want the computer to do as much work as possible - I want to do as little as possible.

    Computers are the intellectual equivalent of a fork lift; they allow me to handle far heavier intellectual tasks than I would be able to do without one. The problem with a fork lift is that you have to know what you want to pick up and move. The same is true of a computer; if you don't know what you want to do - you can't do it.

    A fork lift is a dangerous machine because it will happily amplify the strength of a fool. In a similar fashion, a fool with a computer can do tremendous damage in the intellectual world. An interface which puts obstacles in the paths of fools, while letting people who know what they are doing quickly and easily accomplish their tasks, is ideal. In a very real sense that is what I like about Unix; it doesn't impede me, but it keeps the people who don't know what they are doing from being able to do too much harm.
  • I can't help but skim over the posts that say the GUI is nothing but filler, command lines are where it's at, human-centered will be less powerful, etc, etc... But they all miss a minor point -- human-centered computers are supposed to be easier to use, thereby making them more accessible to a wider range of people. You can argue from the elitist techno-nerd side of the line all you want, but the majority of the world is still made up of people who don't want to take the time to reset the clock on their VCRs, let alone screw with an unpolished interface. And unlike catering to the techno geek, building easier-to-use computers makes more money. You doubt it? Flip on the TV and see who Gateway, Compaq, and Dell's commercials are catering to. "Dude, you're getting a Dell!" sure as hell isn't aimed at you or me. A talking cow? Really. Joe AOL doesn't care about open source. He doesn't really even give a damn about Microsoft's monopoly (other than the "MS is evil" opinion everybody has). Mozilla? Star Office? Who gives a crap!? Joe doesn't. He'll use what is easy to reach and use, unless there is something overwhelmingly excellent in another product. MS makes its stuff at the very least easy to reach by integrating everything into the OS. Ease of use is a matter of opinion, so I won't go into that, other than to say there isn't a mass defection from MS yet.

    In the grand scheme of things, Joe makes company 'X' money. You DON'T. Thus the push to make computers easier for him to use.
