Topics: Displays, GUI, Input Devices

Oblong's g-speak Brings "Minority Report" Interface To Life

tracheopterix writes "Oblong Industries, a startup based in LA, has unveiled g-speak, an operational version of the notable interface from Minority Report. One of Oblong's founders served as science and technology adviser for the film; the interface was an extension of his doctoral work at the MIT Media Lab. Oblong calls g-speak a 'spatial operating environment' and adds that 'the SOE's combination of gestural i/o, recombinant networking, and real-world pixels brings the first major step in computer interface since 1984.'" The video shown on Oblong's front page is an impressive demo.
This discussion has been archived. No new comments can be posted.


  • gorilla arm (Score:5, Insightful)

    by Anonymous Coward on Friday November 21, 2008 @01:57AM (#25842339)

    Gorilla arm.

    That is all I've got to say.

    Check the jargon file if you don't understand this.

    • Re:gorilla arm (Score:5, Interesting)

      by SanityInAnarchy ( 655584 ) on Friday November 21, 2008 @02:03AM (#25842381) Journal

      Well said... I thought this comic [] illustrated it well, also.

      • Comic is on topic (Score:5, Insightful)

        by TheLink ( 130905 ) on Friday November 21, 2008 @04:43AM (#25842987) Journal

        How's the comic offtopic?

        Back in my school days, one form of _punishment_ was being made to hold your hands up or out for many minutes. Imagine if you had to keep your arms extended for so long - talk about asking for a new set of RSI problems.

        The full 3-D gesture stuff is overrated.

        What would help me a lot more is the ability to quickly switch to a particular window I have in mind: []

        Even if you don't have all your windows maximized, it would save a fair bit of time. Alt-Tab only works well if you are switching between two windows.

        You can kind of do this on the Linux/BSD console, but it's more limited. I'm looking for something like the text console but for the GUI, where you get to pick your "working set" of 9 or so windows from however many windows you have open.
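        The "working set" switcher described above can be sketched as a tiny slot table: the user pins each of up to 9 windows to a number, and one key combo becomes one lookup instead of cycling or reading a list. This is only a sketch of the idea, not any existing tool; the window IDs are opaque strings here, and on a real X11 desktop they would come from something like `wmctrl -l` (that integration is assumed, not shown).

```python
# Hypothetical sketch of a "working set" window switcher: pin up to
# nine windows to numbered slots so a single key combo jumps straight
# to the window you already have in mind.

class WorkingSet:
    """Maps slot numbers 1..size to opaque window IDs."""

    def __init__(self, size=9):
        self.size = size
        self.slots = {}  # slot number -> window ID

    def pin(self, slot, window_id):
        # The user decides which window goes in which slot.
        if not 1 <= slot <= self.size:
            raise ValueError("slot out of range")
        self.slots[slot] = window_id

    def window_for(self, slot):
        # One key combo == one dictionary lookup: no list to read,
        # no Alt-Tab cycling. Returns None for an empty slot.
        return self.slots.get(slot)

ws = WorkingSet()
ws.pin(1, "browser")
ws.pin(2, "terminal")
```

        On a real desktop, the result of `window_for` would be handed to the window manager to raise and focus; binding the slots to, say, Win+1 through Win+9 would be left to whatever hotkey facility is available.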

        • Re: (Score:3, Informative)

          by mr_matticus ( 928346 )

          You can kind of do this on the Linux/BSD console, but it's more limited. I'm looking for something like the text console but for the GUI, where you get to pick your "working set" of 9 or so windows from however many windows you have open.

          Sounds like a combination of Spaces and Exposé fits that bill exactly. KDE already has the multiple virtual desktops, and I'm sure there's some Exposé clone for Linux out there somewhere.

          • by TheLink ( 130905 )
            But I want to directly go to a particular _window_ with a key combo NOT go to a desktop or "space".

            Going to a desktop/space or popping up a list of windows for selection is a _waste_ of time if I already know exactly which window I want.

            I suggest that most people are able to remember more than 2 windows AND also often work with more than 2 windows at a time. Alt-tab only works quickly for working with 2 windows - it is clumsy for more.

            Multiple desktops are useful for keeping the windows organized, but once
        • by teslar ( 706653 )

          What would help me a lot more is the ability to quickly switch to a particular window I have in mind: [] Even if you don't have all your windows maximized, it would save a fair bit of time. Alt-Tab only works well if you are switching between two windows.

          Another semi-solution I guess, but KDE has the option to "show Window list". I've mapped that to Window+W. It doesn't get you a subset of windows, it gets you all windows grouped by desktop, but if all the window

          • It does NOT do the trick at all, since I cannot go straight to the window I want with a single key combo.

            Why waste time reading and selecting from an easy-to-read list, when you already know which window you want?

            Worse if I still have to lift my hand and move it to the mouse, or press a few more keys - that is slow and inefficient.

            Multiple desktops are fine for keeping windows/tasks organized. But I'm talking about speeding up access to the windows you want once you already know which windows you want.

            It is
    • As a medical doctor, let me say that I am very thankful to Oblong for securing my job's future by providing a whole new crop of repetitive stress injuries.


      More seriously:
      Well, at least the display is at eye level, requiring no neck strain (unlike Microsoft's active table), and at a certain distance (with the general population getting older, presbyopia is an important factor to take into account).
      But still, this for 8 hours a day?!

  • Nice (Score:5, Funny)

    by bb84 ( 1301363 ) on Friday November 21, 2008 @02:02AM (#25842379)
    ...but until it shows me the future I won't be *too* impressed.
  • g-speak? (Score:2, Offtopic)

    by gmuslera ( 3436 )
    Ok, let's register gchat, so people will get totally lost between gchat, g-speak and Google Talk.
  • by avalys ( 221114 ) * on Friday November 21, 2008 @02:11AM (#25842431)

    Actually, I call that an extremely unimpressive demo. It is a lot of technology with little purpose. In that entire video, what are they doing? Just spinning a bunch of pictures around.

    Without a compelling application that requires that interface, it's just a big, expensive toy.

    • by plantman-the-womb-st ( 776722 ) on Friday November 21, 2008 @02:18AM (#25842465)
      Indeed, controlled with a Power Glove [] no less.
    • The entire time I was thinking "This is cool!" But I can't come up with any way it would make what I do day to day easier than the UI I have now. Typing looks to be a real bitch!
      • I imagine most of the time you'd type as you do now, just like you seldom if ever type with a mouse. Think of this as a replacement for the mouse, or maybe even an additional tool. I guess in that respect it's more like a tablet. You still use your keyboard and mouse but when you need to draw, you bust out the tablet. Maybe you'd use this a lot in Photoshop and not so much in Word.

        Then again, there was a time when the mouse didn't yet exist and when people first saw it, I'm sure they thought it was cute b
        • I see graphical artists loving this. Same with the video editing crowd. They actually did combine two video clips (the big yellow truck and the person with the snake) in the video.

          This looks almost too specialized. For everyday tasks like reading and typing, this looks like it will be too much. Then again, with all those movements no one will have flabby arms.

          Would it have the same effect on smaller screens? If all those screens were regular 19" flat screens, would it work the same? Does one need a big screen to al

        • I think this technology will eventually end up paired with some of the speech recognition technology, (like what Ford and Microsoft have put out recently), and this will do away with the keyboard (although I'd still keep a spare sitting in a drawer, just like I do now for my headless servers).
          I think sound and video editing will be a good application for this, but honestly, I think it will be a while before I walk into a local midwestern TV station and see them using this tech.
          On the other hand, I could de

    • by Iamthecheese ( 1264298 ) on Friday November 21, 2008 @02:33AM (#25842537)
      Medicine, 3-D rescue mission/fire control mission planning, biology, CAD, art, anything with complex data sets, physics, movie editing, and 3-D movie creation come to mind. The intuitive 3-D control will allow whole new interfaces.
      • by hairyfeet ( 841228 ) on Friday November 21, 2008 @05:40AM (#25843239) Journal

        In most if not all of those you mentioned, having a 3d view would barely get you halfway there. The problem is you need to be able to TOUCH, I mean really touch, to truly interact. And that is where we really suck right now. Because all of the sensory feedback devices I have seen so far, including the really high-dollar, still-in-the-testing-phase ones, really only give you soft/hard. They can't give you warm or slippery or squishy or cold or kinda bumpy... you get the idea. We humans pick up so much by touch that we simply don't realize, and when you cut us off from those sensations we can still work, but not nearly as well.

        That is why IMHO this stuff will never be more than kind of an "ooh cool" kind of expensive toy, or for really really tiny niche roles, until we can interface the brain directly. Because trying to simulate all the things we can glean from simple touch would be just too insanely expensive to ever be practical. But if we can figure out a way to send the data to the brain directly, either by some sort of implant or perhaps through sensors on the scalp, then we don't HAVE to come up with a physical way to fake all this data; we can send it to the brain directly. It would also get rid of the "gorilla arm" problem, as you wouldn't have to wave your arms like a maniac trying to work, since you could simply manipulate the data with your thoughts, or even basic eye tracking.

        Call me crazy, but I think that an interface controlled by the mind could really give us a great leap forward. Even typing this post: think of how much faster it would be if my thoughts simply appeared on the page. I guess it is because all these oversized 3d interfaces just seem like overkill, like a holodeck. I know the Star Trek fans will kill me, but let us be honest: holodecks are dumb. You are wasting all that space and energy to give ONE person a little fantasy land to play in. That is really really dumb. When I saw the Voyager episode "Equinox" I thought THAT was what a holodeck would really be like. Instead of wasting all this energy trying to create a physical simulation to interact with, just send the signals directly to the brain, where they can be experienced with minimal power required.

        Maybe it is just me, but I think this thing too is going overboard with trying to give physical interaction, when it is mental interaction that we should be striving for. But it does look like it would be fun to play with for half an hour or so, or until your arms feel like falling off, whichever comes first.

        • by baggins2001 ( 697667 ) on Friday November 21, 2008 @06:47AM (#25843505)
          Yeah, but who would have thought that people would buy teleconference rooms? I think it's a nice impressive toy, but someone with a lot of money (company money) is going to decide they need it to impress customers. I can already see someone swapping around Impress documents during a meeting. It'll happen, it'll make no sense, but it will happen.
            • Oh shit, I just thought of a real world application. Training. Where you want people to interact with something in a 3-d way. Say you want to show someone how to change a tire, without them actually changing a tire (well, something more expensive and complicated). Say someone's building widgets. They could interact with the screen without touching it and attach a database ID to it.
            Crap I hated this thing.
        • I agree with you mostly, but the one thing that I think we all need to remember in the fight between neural interfaces and large 3D toys is that once we have those neural interfaces, we will need a 3D world to work in. I've always viewed all of these toys as getting closer to that. You can't just model the world around you and call it an interface; it's extremely inefficient. All of these technologies, from 3D gestures to voice control to whatever, become extremely useful when trying to deal with vastly dime

    • Re: (Score:2, Interesting)

      by Louis Savain ( 65843 )

      Are you kidding me? This is the future interface of parallel programming, among other things. Rotate'm, push'm, pull'm, drag'm and drop'm. This technology will allow us to walk inside or fly through our programs and quickly create and/or modify them through trial-and-error. Kinda like the way an interior decorator might rearrange the furniture and colors on the walls. This is the beginning of the end of keyboards and mice and typing. Add a voice recognition interface and this shit is going to kick ass. It w

    • by TheModelEskimo ( 968202 ) on Friday November 21, 2008 @02:59AM (#25842633)
      Agreed. These people are demonstrating something almost completely useless while I use a very traditional method - text entry via keyboard - to learn programming in a console. And I'm a 3D illustrator.

      People keep harping on 3D visualization being the next big thing, but while these awkward, hammer-seeks-nail inventions come and go, simple things like the classic terminal are *increasing* in popularity, if anything. New Linux users and experienced Mac users are saying things like, "actually, I just use the terminal to do such-and-such a task; it's faster that way."
      • Yes, in real world applications this would likely be frustrating, but in games it would prod serious buttock.

        After all, games are designed to entertain, not maximise your productivity.

        I've been a programmer for five years now, in physics/graphics/biosciences/allkindsofstuff, and I can't think of a single application beyond display of datasets at conferences where this might be useful.

        As a replacement to the traditional PowerPoint/PDF conference presentation, it would likely prove entertaining, or at least m

        • by TheLink ( 130905 )
          Actually, from games like StarCraft, you can see that sustained and peak "actions per second" can be quite important. An interface that can let you increase that will be great.

          Thing is, maybe just a keyboard and _two_ mice (each with a fair number of mouse buttons) and some optional foot pedals would do far more to increase the sustained actions per second than fancy gorilla-arm stuff.

          For example for an FPS you could have movement control with one mouse and one screen/window. And weapon control with anothe
      • Re: (Score:2, Interesting)

        by rusl ( 1255318 )
        Well, I do think there are some interesting possibilities in that thing where they interact with the topography using the cut out shapes.

        However, I too was thinking about my love of the command line. Basically, as they claim, the 2D interface came along in 1984. It still has a lot to be worked out to make it useful. I do prefer point and click for many things where the command line options are just too complicated. It's easier to cut and paste 5 random files from one place to another than to find s

      • by Haeleth ( 414428 )

        simple things like the classic terminal are *increasing* in popularity, if anything. New Linux users and experienced Mac users are saying things like, "actually, I just use the terminal to do such-and-such a task; it's faster that way."

        And it's not even just Unix and Unix-like systems where the terminal's popular. Even Microsoft recognise its importance -- that's why they introduced PowerShell as an alternative to Windows' traditionally rubbish CLI.

      • Re: (Score:2, Interesting)

        by - r ( 136283 )

        Not that I post that much here, but: this is the coolest thing I've seen in ages. *Not that it applies to us as programmers*, but it does to our users. Yes, I use the terminal on my iMac for programming, but not for seeing the result. I think someone out-geeked the geeks here...

      • The terminal and 3D interfaces are complementary, not antagonistic. We should welcome 3D interfaces. Indeed, we should welcome almost anything that increases the level of communication between the user and the computer.

        Think about surfing Google Earth using a CLI. Not good, is it?

        Now imagine you surfing Google Earth with a multi-touch 3D interface (eg: [] ). Then, once you're where you want to be, being able to call up a CLI window with

      • We are great verbal communicators, but not such good plastic artists. Anyway, one of the worst realms for us is 3D; most people simply can't deal with it*, and some can understand what is going on in 3D, but surely not as fast as in 2D. That, combined with the fact that we can read and type faster and more easily than we can talk and listen, leads us directly to the console.

        Now, of course, some applications do benefit from graphical output, and a few benefit even from 2D input (way less than what MS want people to believe),

      • This isn't something that will just come and go. They've been working on this tech since the early 90's and it's been advancing rather well. Considering how well people have embraced touch screen tech the past couple years, something that had been around since when, the 60's? I think this will do quite well, but I also think it will be many years before we see it in heavy use.
    • by Xiph ( 723935 ) on Friday November 21, 2008 @04:35AM (#25842963)

      I claim that this will be great for gaming; I already want to make games for things like this, and seeing this video does nothing to remove that.

      I also think this expensive toy will be great for things that require complex data to be handled fast.
      That's what gestures are good for: complex objects, needing complex handling, instead of going into a menu->submenu->item, click.
      They're nice in the same way as keyboard shortcuts; they reduce strain, but can't be used for everything.

      Gestures are great for some things and really poor for other things.
      This system is partly a system for gestures,
      partly a system of semantics for the various gestures,
      and partly a system for using these things over an arbitrary number of screens (dig about a bit on the website).

      I think that for some uses this will be awesome; for others it won't work. Don't do programming or other text-centric things on this system.
      I have no illusion that talking will ever replace typing.
      Just like I don't think the Wii will replace me going outside to play soccer with my friends, or that an OMNIMAX will stop me going to beautiful places.

    • Imagine the system applied to interactive pr0n though...

  • by syousef ( 465911 ) on Friday November 21, 2008 @02:18AM (#25842463) Journal

    I really don't want an interface where I have to gesticulate at a computer while repeating words so the speech recognition engine picks them up correctly and moving cursors around with my eyeballs. Hell, I don't even want 3D desktops and transparent windows - take all the damn effects away, and leave me with the folder metaphor, the current UI for editing text and pictures, and a command line. These interfaces don't give me any new capabilities, and anything that requires more effort and doesn't empower the user is a waste of time. They aren't revolutionary - they're not even good sci-fi. They don't belong to the future, because the future will be built on interfaces that are MORE, not less, convenient and actually do give new capabilities. Good sci-fi is things like the Star Trek communicator (not so different from today's mobile phone, or a walkie-talkie of old), which was used to enable the characters.

    • Not everyone thought the mouse was a good input at first. This type of UI may have speed advantages as well as visualization advantages we may not completely see yet. CAD comes to mind here. But I suppose ASCII art CAD is enough for some people :)
      • Re: (Score:3, Funny)

        by syousef ( 465911 )

        Not everyone thought the mouse was a good input at first. This type of UI may have speed advantages as well as visualization advantages we may not completely see yet. CAD comes to mind here. But I suppose ASCII art CAD is enough for some people :)

        Show me speed advantages (without significant disadvantages in other areas) and I'll be pleased to accept change. In the meantime my office is enough of a nightmare without people gesticulating and yelling at their computers like Italian villagers.

        I think speed adv

        • In the meantime my office is enough of a nightmare without people gesticulating and yelling at their computers like Italian villagers.

          +1 Funny. I'm only lucky I wasn't drinking anything at the time I read this.


    • Re: (Score:3, Interesting)

      I don't foresee this technology being used on personal home computers in the near future.

      Where I do anticipate (and look forward to) seeing it is for interactive public displays. It would be a very cool interface to have for a 3d map and directory in a mall or an informative display at a museum or aquarium.

      As for home use, it could be used for family gatherings and birthday/wedding parties. Set it up with your DJ software and photos, then let your guests check out photos, pick out music to play, etc.

      Most wed

      • by Spudds ( 860292 )

        All the things you mentioned could be done with a simple touch screen interface that we have now. All you need is an intuitive interface.
        In fact, adding gestures and gloves and what-not would be a hindrance mainly because all the guests would have to figure out what gestures did what. On the other hand, everybody knows how to point and click, even if it's just with their fingers.

    • by patro ( 104336 )

      I really don't want an interface where I have to gesticulate at a computer

      Could be a great workout, though. Imagine coding with this interface. Lots of exercise. No more Mr. Fat Geek.

    • Hell I don't even want 3D desktops and transparent windows

      Translucent windows are a godsend for me. I <3 being able to pack 2-3x more information into the same screen space. I'm rather unimpressed by 3D desktops and effects like the Compiz cube.

    • by Gulthek ( 12570 )

      In other words:

      "I learned all this once, stop all further development here please."

  • cheapskates (Score:3, Funny)

    by robi2106 ( 464558 ) on Friday November 21, 2008 @02:20AM (#25842473) Journal

    you mean to say, a startup centered around high-tech advances in visual interfaces... can't afford to host their own demo? They have to go to YouTube's upscale HD version to host the content?

    Common. Get a real hosting account and a guy that knows how to embed JW to play your flash video.

    • Why would a small startup want to host its own videos when youtube will happily take the strain? The only reasons to do it yourself are a) high-availability, if you can really afford to set that up and really need it, or b) ego. Since most people don't actually care whether they get a video on youtube or direct from your site, b is mostly irrelevant here.

      p.s.: I don't think "common" means what you think it means ;)

      • by tepples ( 727027 )

        Why would a small startup want to host its own videos when youtube will happily take the strain? The only reasons to do it yourself are a) high-availability, if you can really afford to set that up and really need it b) ego.

        And c) being able to post your video to online forums that have a policy against linking to YouTube or other video sharing sites that allow swearing in user comments.

  • by Ecuador ( 740021 ) on Friday November 21, 2008 @02:23AM (#25842487) Homepage

    Yawn... another one of these. Why do I feel like I read a /. article about "Minority Report interfaces" every week? And it would be interesting if we were talking about pre-cognitive interfaces etc., instead of the useless "do your best traffic officer impression" to move some videos around.
    Yeah, IWTFV (didn't actually RTFA that came with it) and I guess it would be kind of cool for people who are not Real Geeks (TM). I especially enjoyed their "intuitive high bandwidth access to information" where they navigate this seemingly enormous 3D grid of what looks like boxes containing... the same Japanese character! Yay, what a way to navigate through 2 bytes of info! Ok, maybe it is 1kb if the boxes were not identical, but there is no way to tell at a glance, as people who have tried to use lame 3D file managers would know. That scene also brought back fond cinematic memories... It's a Unix system! I know this!

  • by wild_quinine ( 998562 ) on Friday November 21, 2008 @02:41AM (#25842571) Homepage

    Oblong calls g-speak a 'spatial operating environment' and adds that 'the SOE's combination of gestural i/o, recombinant networking, and real-world pixels brings the first major step in computer interface since 1984.

    I'm tired of hearing about all these things that will replace the mouse. The mouse will be replaced one day, but not until something comes out which is better, not merely cooler.

    This Minority Report interface will tire your arms out in less than five minutes. I'm embarrassed to admit it, but I use a computer for upwards of eight hours a day. Sometimes upwards of twelve.

    The mouse is ideal in that your fingers have precision, the feel of pointing is natural, and crucially your hand, wrist, and arm are all more or less at rest throughout the process. Sure, you move them. But you don't hold them anywhere. It's a fundamentally different type of task from minority reporting, or wii-ing, or other stupid-but-cool flailing systems.

    So no, I don't know what will replace the mouse. Something, eventually. If I knew what it was, I'd make a bloody fortune. But improving on the mouse will take a damn sight more work than making me say 'Wow', let alone 'meh'.

    • Anyone else think the mouse will be replaced by a neural interface?

      We keep the keyboard for quick and accurate input, and allow the brain to control where the pointer is on the screen. If I want to close a window, I concentrate to move the mouse to the corner of the screen, and think "click", the window closes.

      Maintains backwards compatibility with legacy apps, makes for an amazing RTS game, and uses existing (currently primitive) tech. Also requires a special hat.

  • Wow! I want one (Score:4, Interesting)

    by Prikolist ( 1260608 ) on Friday November 21, 2008 @02:43AM (#25842575)

    I want one! I will disagree with everyone here saying that it's useless. I'd trade the mouse, and pen tablet, and the joystick, and all the rest of those for this. Looks way more convenient - not to mention instinctive - to use. It's like a touchscreen but you don't have to leave greasy fingerprints all over. With this I could even actually draw on the computer, while so far any attempts with the mouse just ended up in wrist pain and frustration. And just moving the cursor, moving windows, anything... Oh, and games - this will send the Wii to an antique museum.

    • It's like a touchscreen but you don't have to leave greasy fingerprints all over.

      So you want a Light Pen? []

    • Re: (Score:2, Informative)

      by rusl ( 1255318 )
      As an artist myself I've often wanted to draw on the computer too. I've never succeeded. However, I've seen a very skilled person draw on the computer. The way to do it is this: use your right hand to carefully draw with the mouse, and keep your left hand on Ctrl+Z. It's a computer, so no matter how many times you erase you won't rub through the paper. He was really good with it, albeit his drawing style was somewhat limited - slightly gestural, if you know what that means. He would make lots and lots of marks an
    • It is a Wii for executives. Once the doors close they'll be bowling.
  • [] Sometimes the movies don't consider the ergonomic problems of "clever" interfaces.
  • I see this having huge potential in CAD & design applications. I've found spatial controllers for CAD to leave much to be desired; gestures and natural motion are a huge improvement. This paradigm of interface will all hinge on a killer app. Sure, the engineering has been done, and from what I can tell it works effectively, but there are so many brilliantly engineered ideas that are simply nothing more than that.
    Implementing a Good(tm) product and getting a market for it is a whole different story. I w
    • Ok, I know several AutoCAD users; some of them are civil engineers, others are architects. Guess what interface all of them use most of the time? The command-line one.

      Maybe, when designing very complex 3D objects, this thing will be useful. Just don't equate that with CAD, which is a way broader market, including things that don't even need graphical presentation, like circuit optimization.

  • by deodiaus2 ( 980169 ) on Friday November 21, 2008 @02:55AM (#25842621)
    Besides, I am still having a hard time operating this mouse foot pedal. It is so damn hard to get the selection of a word with my toes! Next thing you know, they'll design away my CD-RW coffee cup holder! I still miss my D parallel printer; what am I going to do with all the cheap cables I got at the discount bin at BestBuy?
  • by Anonymous Coward on Friday November 21, 2008 @03:03AM (#25842657)

    It is not just a gimmick - they have worked on the gestural language as well as translation software, and it works well. The glove is a bit of a bummer, but it is just a passive glove with spots the system can read. They already have clients - yes, big data sets, SHARED computing environments, something that is being overlooked. But it will be quite some time before we have it on our laptops; probably on our TVs before that. And, yes, it will be a better UI than the mouse or accelerometers or voice for many things. But the future is a mixed environment, not one single solution.

    • by crossmr ( 957846 )

      The gloves are a first step. Later we could implant something Bluetooth-like into the hands, or find another method of reading the hands. Remember, technology evolves... the solution we present today isn't the solution we're going to be using for 1000 years.

  • Uh huh (Score:3, Interesting)

    by bm_luethke ( 253362 ) on Friday November 21, 2008 @03:22AM (#25842721)

    OK, maybe this is the wave of the future. I will not say it isn't - but that promo didn't sell it. It looked like what they claim it to be - based on a custom Hollywood script. I want to see how I would use this in the real world - I'm not going to be standing around moving those text blocks around, nor did I really see why having that matrix of Asian-language characters (I don't know which language - I can't read any of them) in that grid would help someone deal with the massive number of characters anyway. It seems to me, since most of them are based on pen strokes, that the arrangement is - hmm - only made to be visually appealing to westerners (which I am one of).

    I had used an SGI CAVE a few years back for a few different things (well, others in the group I worked with wrote the stuff - I played with it simply because it was neat) and I see many similarities. Given that product's history, I do not see that as a Good Thing for them. In fact they seem to be a good 5-10 years behind the curve - the last time I used one was five years ago and they were already doing all this nice stuff from what I can see.

    It was really good for things that were meant to be visual. For instance, they had this really neat data set of a human (some convict that donated their body to science) and you could interact with a 3-dimensional representation of them. Their body "displayed" (or rather appeared to) in the center of the CAVE, and then you could select (using a wand whose position in the room the system kept track of) a "window" and move/drag it around and see just that slice of the body in a high amount of detail. You could lock that and have as many 2-d slices going through the body as you want.

    They also had a car wreck that you could do a similar thing - but you watched the "slice" as the wreck happened in real time. They actually crashed a car to get the data.

    There were also quite a number of specialized tasks that benefited from it and I still run into some today.

    But other than that, we pretty much played Quake on it. Why? Well, most data doesn't really need that type of visual representation. Our current screens work quite well, and you are simply adding overhead for the heck of it. Even the applications the system worked well for still did OK on a normal screen. A large monitor costs a few thousand; these systems cost a few hundred thousand. Well, you should get the picture there (and knowing that I worked in a government research lab at the time should tell you why no one cared that it was a few hundred thousand more).

    This system has the 3-d input but not the nice 3-d output that the SGI systems had, so I can't see it working any better - it is just as specialized and hardware intensive, and I bet just as expensive. Even if it isn't - is the increased productivity for those specialized applications going to be worth the cost? I also bet not.

    You will note that even a group with quite a bit of experience making true Hollywood scenes couldn't come up with better. Perfect for massive data - uh huh - and what did those wonderful arcs moving around *really* give you? You mean where you put a circle over one of the other circles and it turned yellow?

    Is there *any* reason whatsoever that the majority of that could not be accomplished with a mouse and a large LCD? Nope - so why purchase this? At least the pretty-much-failed SGI stuff had the 3-d output to go with it - and trust me, there is no experience in the world like playing Quake in a fully 3-d environment that you are freaking standing in the middle of, where the virtual gun is actually held in your hand. But then - how many are going to pay 250k for that?

    This type of thing is so 1990's and dot-com - ten years ago these guys would have been flush with cash from countless venture capitalists. Heck, their video even screams late 90's and early 00's. As it is, they had better really be able to back up the claims they make to even have a shot at it, let alone be truly successful. I didn't particularly see anyt

    • Re:Uh huh (Score:5, Informative)

      by zwei2stein ( 782480 ) on Friday November 21, 2008 @05:26AM (#25843167) Homepage

      The g-speak platform is in use today at Fortune 50 companies, government agencies and universities. Application areas include:

              * Financial services
              * Telepresence
              * Network operations centers
              * Logistics and supply chain management
              * Military and intelligence
              * Automotive
              * Natural resource exploration
              * Data mining and analytics
              * Medical imaging
              * High-touch retail
              * Trade shows and theatrical presentations
              * Consumer electronics interfaces

      Oblong delivers room-sized and single-user g-speak environments as turnkey products.

      A software development kit that runs on both Linux and Mac OS X is available. Applications are source-compatible across both operating systems and can run on ordinary desktop and laptop computers in addition to gesturally-equipped g-speak machines and clusters.

      You were saying?

      • by QuantumG ( 50515 ) *


        People manage to sell plenty of CAVE [] systems too.

      • Well,

        I can't speak for the parent poster, as maybe he didn't see this list. I did. And I still feel the same as the parent poster (and as many other posters have pointed out). Instead of showing a flashy video with a remarkably high "gee whiz" factor for the PHBs of the world, why not show some actual real-world applications? Why not show how this thing is currently being used by some of these Fortune 50 companies? At the least, present a case study or two that demonstrates the advantages of this sy

        • Chances are that they have signed NDAs and can't show real-world applications. I don't see a Fortune 50 company being comfortable with video of its internal applications being all over the internet.

          And I would not expect them to sink funds into creating "showoff" applications loaded with ideas and then let them loose on the net (for no reason).

  • by Animats ( 122034 ) on Friday November 21, 2008 @04:16AM (#25842891) Homepage

    Actually, that idea first appeared in film in Johnny Mnemonic. []

    Autodesk put considerable effort into virtual reality in the late 1980s and early 1990s. The hope was that it would make it easier to design 3D objects. It didn't. The fundamental problem is that positioning your hands precisely in free space by eye, not touch, is slow and inaccurate. It looks really cool, but it's like trying to do precision work wearing mittens. Humans are much more precise when they have a surface to work against.

    It's not a technology problem.

    • by thbb ( 200684 )

      There is a fundamental distinction between the Minority Report UI and the Johnny Mnemonic one: the first is "augmented reality": the user is not cut off from the real world; the real and the virtual blend to empower each other.
      Johnny Mnemonic's version is pure "virtual reality", with clumsy interaction techniques at best.

      Still, the idea is far from perfect:
      The machine was rather difficult to operate. For years, radios had been operated by means of pressing buttons and turning dials; then, a

      • by Animats ( 122034 )

        now all you had to do was wave your hand in the general direction of the components and hope.

        Control by hand-waving was a popular idea in science fiction from the 1930s or so. It probably first appeared in Wells' "Things to Come". The concept came from the early "electric eye" systems (just photocells) and the theremin []. A theremin really is played by waving your hands around; one hand controls pitch, and the other hand controls volume. Anybody can make screeching noises with one, but it's very diff

  • I don't quite understand why it is that people seem to roll out "Minority Report" as the ultimate in cool and useful computer interfaces.

    First, the coolness: the book may or may not be good; I haven't read it, but it is a story from 1956, and thus likely to be a long way off the mark anyway. Entertaining? Probably, knowing Philip K. Dick. But having seen that smarmy git, Tom Cruise, in the movie totally and utterly turned me off; there are very few actors in the world less convincing. Coolness simply doesn't

  • I am so sick of these demos. Next people will be showing off a new keyboard interface that lets you play music by wiggling your fingers in thin air... or write a letter by waving a virtual pencil in thin air... or drive your car by pointing in the direction you want to go and yelling "vroom!"

    People need tactile input and feedback to do anything meaningful. What is so wrong with having to touch a surface? Why not make a small, wireless glove with pressure sensors on the fingers that allows any surface to act

    • "The only non-mouse, non-trackpad surface that would be useful is some kind of trackball that could be used independently of the pointing device that allows you to rotate 3D objects"

      Hey, that would be nice. I've seen some trackballs that let you move in 3D (which an ordinary mouse does quite well, thank you) but never something that lets me rotate with any precision. That could also be quite cheap... Hey, maybe I have a business plan to write. Good bye :)

  • Not again.... (Score:3, Informative)

    by 1u3hr ( 530656 ) on Friday November 21, 2008 @06:39AM (#25843471)

    Search " minority" []

    "Minority Report"-Like Control For PC []
    On November 8th, 2008 with 138 comments
    An anonymous reader writes "A startup named Mgestyk Technologies claims that they have an affordable solution for 'Minority Report'-like PC control. They have...

    Obscura Digital Demos "Minority Report"-Like Display []
    On August 6th, 2008 with 124 comments
    Barence and other readers sent along word of a demonstration by Obscura Digital of a new technology it's dubbed a multi-touch hologram reminiscent of...

    Touch Screen Tech Comes of Age []
    On February 3rd, 2008 with 78 comments
    pottercw writes "Good summary of today's touch-screen technologies on Computerworld the obvious Apple iPhone and Microsoft Surface, plus projected...

    "Interface-Free" Touch Screen at TED []
    On October 30th, 2006 with 194 comments
    Down8 writes, "Jeff Han, an NYU researcher, has recently shown off his 'interface free' touch screen technology at the TEDTalks in Monterey. Some sweet...

    Correct me, but are all these breathless announcements still vapourware?

    I'm getting a bit tired of this bullshit. It was just a stunt; it looked cool but was completely impractical. And it's not like "Minority Report" (2002) actually invented the idea, even in the movies. Off the top of my head, the same concept was used in "Johnny Mnemonic" (1995), "Disclosure" (1994), and "The Hitchhiker's Guide to the Galaxy" (1978, radio version).

  • one that I can use without the mouse but with my fingers instead.

    one finger tap, left click
    two finger tap, right click
    one finger drag, move pointer
    two finger drag, scroll
    three finger tap, zoom in/out
    pad/screen mapping for more accurate work...
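
    The mapping above can be sketched as a simple dispatch table (a minimal illustration - the event names are invented; any real touchpad driver exposes its own API):

    ```python
    # Hypothetical dispatch table mapping (gesture, finger-count) pairs
    # to pointer actions, following the comment above. Names are
    # illustrative only, not a real driver interface.

    ACTIONS = {
        ("tap", 1): "left_click",
        ("tap", 2): "right_click",
        ("drag", 1): "move_pointer",
        ("drag", 2): "scroll",
        ("tap", 3): "zoom_toggle",
    }

    def dispatch(gesture: str, fingers: int) -> str:
        """Return the action bound to a (gesture, finger-count) pair."""
        return ACTIONS.get((gesture, fingers), "ignore")
    ```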

  • by thbb ( 200684 ) on Friday November 21, 2008 @07:37AM (#25843699) Homepage

    Using datagloves, I did quite a bit of work in 1993 to see how the sort of UIs that we see in the Minority Report could work.

    It turns out that there are 2 issues to overcome:
    - Fatigue: the gesture vocabulary had to consist only of short sequences.
    - "immersion syndrome": whatever I do can be interpreted against my will.

    By designing the gesture vocabulary so that it required alternating tense postures and relaxed aiming gestures, it was possible to overcome those issues in a pretty satisfactory way. Tension is particularly important, as it conveys intention: if you stress "Go there," people (and machines) can detect that you want something to happen, as compared to using a monotone voice.

    see Charade: Remote Control Of Objects Using Free-Hand Gestures [] published in Communications of the ACM in 1994 for more details.

    The machine was rather difficult to operate. For years, radios had been operated by means of pressing buttons and turning dials; then, as the technology became more sophisticated, the controls were made touch sensitive ... now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure of course, but meant you had to stay infuriatingly still if you wanted to keep listening to the same programme. D. Adams, The hitchhiker's guide to the Galaxy, Chap. 2. 1979.
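
    The tension-gating idea described above can be sketched as a small state machine (a hypothetical illustration, not the actual Charade implementation - all names are invented):

    ```python
    # Sketch of tension gating: gestures are only accumulated while the
    # hand posture is "tense"; relaxing the hand ends the sequence and
    # triggers recognition. Relaxed movement outside a sequence is
    # ignored, which avoids the "immersion syndrome" of unintended
    # commands. All names here are illustrative.

    class TensionGatedRecognizer:
        def __init__(self):
            self.active = False
            self.events = []

        def update(self, posture, gesture):
            """Feed one (posture, gesture) sample; return a recognized
            command tuple, or None if nothing was recognized."""
            if posture == "tense":
                self.active = True
                self.events.append(gesture)
                return None
            # Relaxing the hand closes the (short) gesture sequence.
            if self.active:
                self.active = False
                command, self.events = tuple(self.events), []
                return command
            return None  # relaxed movement with no open sequence: ignore
    ```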

  • by oDDmON oUT ( 231200 ) on Friday November 21, 2008 @08:43AM (#25844031)

    Given that the new paradigm is "Reduce, reuse, recycle", how does a multiscreen, multi-projector, multi-everything system reduce my carbon footprint?

    No, really, I'm curious.
