
Did Metro UX Elements Come From a 2009 Demo?

First time accepted submitter oso2k writes "In 2009, as reported by gizmag, Robert Clayton Miller proposed a UI that borrowed familiar iPhone gestures and translated them to a multi-tasking, data-input-rich desktop UI. It would seem, however, that Microsoft was paying attention. Elements of Miller's design appear to have been lifted for the Metro UI: dynamically sized widgets on the home screen (tiles in Metro), swipes that alternate between open, full-screened apps, a left tap for the app context menu, and a right tap for the system context menu. And in Miller's video at [5:41], it would seem Microsoft used the same or nearly the same font [4:30]." It's interesting to spot the resemblances here, but how many UI ideas have only one inventor?
This discussion has been archived. No new comments can be posted.

  • Re:Stupid jargon (Score:2, Interesting)

    by ShogunTux ( 1236014 ) on Thursday September 27, 2012 @04:30PM (#41482025)

    They're not the same thing.

    UI is user interface. This can be a CLI (command line interface), a GUI, a touchscreen, or really any way you can think of to interact with a computer. As such, conflating it with a GUI, a graphical user interface, narrows things down too much, since UI is much more general by definition. Each input method is going to be good for different things, and that needs to be taken into consideration when designing.

    For instance, a CLI is almost always going to be the most powerful input method, although it suffers from low discoverability, since you need to learn some basic commands before you can become proficient at it. The best CLIs are the ones that let you chain commands together indefinitely, even stringing them into a scripting language of their own, so that you can set up a batch of jobs with a few keystrokes. Heck, I'd classify voice command interfaces as CLIs as well, like Dragon NaturallySpeaking or even Siri: while they don't involve keyboards, they share the same strengths and weaknesses as CLIs (although voice could be seen as a fuzzier input method, much as touchscreens are to GUIs, since you lose a bit of precision in the interaction, with the recognition software having to figure out what you intend to say).
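    The "chain commands together" strength is the classic Unix pipeline. A minimal, generic sketch in POSIX sh (the data here is made up purely for illustration):

    ```shell
    # Each tool does one small job; the pipe chains them into a bigger one:
    # emit lines -> sort them -> count duplicates -> rank by count -> keep the top hit.
    printf 'red\nblue\nred\nred\nblue\ngreen\n' \
      | sort | uniq -c | sort -rn | head -n 1
    # top line reads "3 red" (uniq -c left-pads the count with spaces)
    ```

    Five single-purpose tools, one keystroke-cheap job: exactly the composability a GUI has a hard time matching.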

    UX, on the other hand, stands for user experience, which is a different concept entirely. UI only designates how someone interacts with a computer, while UX is about whether that interface is optimal for the task at hand, or even whether the interaction is consistent. So in essence, the UI designates the what, while the UX is the how.

    For instance, let's take a touchscreen interface, which is one GUI implementation, and compare it to mouse input. For starters, a touchscreen is never going to be a precision interaction method: you might be able to increase the screen size, but you'll never match a mouse without lowering the DPI of the screen drastically, which makes interaction clumsier. Likewise, a mouse is confined to a single input point, while a touchscreen can take multiple inputs simultaneously, so the mouse will never quite match a touchscreen on that front. While both are graphical user interfaces, they do not share the same user experience, which is part of why you hear complaints from people who don't like having to use a touch UI on a desktop: forcing one UI onto both input methods means that, in order not to completely suck on one of them, it has to make compromises on the other.

    Of course, some people seem to believe that designing for the fuzzier interface while providing single-input ways of doing tasks will automatically make it optimal for both (I'm looking at you, GNOME 3 and Windows 8), but this is sheer lunacy. Much as a CLI is not optimal for all cases (e.g. graphical manipulation), despite being the more powerful alternative, a touchscreen is not going to replace the old tried-and-true mouse and keyboard, which let you cram and browse through more information on one screen than a touch interface can, since touch can't handle the same precision as a mouse and needs to be fuzzier by default in order to be usable.

    So perhaps you don't care about any of this, since it at least appears you aren't in the industry, given that you dismissed all of these as names for the same thing. But at least you've now had a brief 101 excerpt of HCI (human-computer interaction), and can't claim ignorance of these terms as a defense any more. Because surely you likewise wouldn't say, "CPU? GPU? RAM? Why do we need so many names for the same damn thing? It's not like we're using desktops anymore, so what difference does the C or G make?"

  • Re:Zune circa 2006 (Score:4, Interesting)

    by im_thatoneguy ( 819432 ) on Thursday September 27, 2012 @08:59PM (#41484271)

    You've only mentioned style and appearance. This is about the function of the UI.

    Con10uum: Every window should always be open, and you just scroll left to right between them, dynamically scaling each window with pinch/zoom.
    Windows 8: Only one or two apps should ever be open, and you swap whichever one is currently on screen.
    Functional comparison: Fingers are involved in both gestures. Functionally, completely different windowing philosophies.

    Con10uum: You should click a button off to the left side of the screen to bring up the app context menu.
    Windows 8: You swipe from the bottom of the screen.
    Functional comparison: Both acknowledge the fact that applications have menus and provide a means of accessing said menu.

    Con10uum: You should click a button off to the right side of the screen to open the launcher.
    Windows 8: You should swipe from the side of the screen to reveal an onscreen button to open the launcher. You also reveal global actions such as sharing or printing the current page.
    Functional comparison: Both involve touching the right area of the screen. Seeing as there are only three usable sides to a touchscreen, it's a stretch to call this a rip-off, especially since Microsoft's explanation of "it's where your thumb is when you hold a tablet" is a perfectly good rationale and makes more sense than "because some web video that nobody saw put it there."
    Con10uum has no equivalent to Microsoft's global sharing button; in Con10uum that would be part of the application's file menu.

    Con10uum: Desktop widgets.
    Windows 8: No desktop in Metro. The launcher icons though can display extended information.
    Functional comparison: Widgets have been around for decades; every customized Windows theme included an RSS/news widget on the desktop. It's just "what you do". But functionally, a widget and a Metro tile are completely different. A widget is an enhanced part of the desktop, and widgets shipped in Windows Vista as part of the OS years before Con10uum. A tile, though, serves a dual purpose: primarily an icon, with secondary duty as a widget.
