Topics: GUI, Programming, Software, X, Technology

Gosling: If I Designed a Window System Today... (431 comments)

An anonymous reader writes "In his blog entry for the 10th of August, James Gosling (finally) publishes a short paper he wrote in 2002 entitled 'Window System Design: If I had to do it over again in 2002'. His design is to make the window system do the absolute minimum and move all the work into the client."
  • by shfted! ( 600189 ) on Friday August 20, 2004 @11:10PM (#10030333) Journal
    I'd make it opaque to keep my arch-nemesis, the Evil Yellow Face, from entering my underground command center... though my mom already complains the basement is too dark.
  • Good idea (Score:5, Insightful)

    by penguinoid ( 724646 ) on Friday August 20, 2004 @11:10PM (#10030334) Homepage Journal
    I think it is a good idea to separate the server and the client so each does its own stuff. This will increase modularity and compatibility quite a bit, IMCUO (in my completely uninformed opinion)
    • Re:Good idea (Score:4, Insightful)

      by shfted! ( 600189 ) on Friday August 20, 2004 @11:16PM (#10030359) Journal
      On the other hand, it would waste resources. By consolidating your RAM in the server, copies of the same program could reference the same pages in memory -- a very significant savings, if you have a smart OS and your users typically run the same applications. Plus, because user activity tends to be bursty (i.e. the CPU and hard drive sit idle most of the time), money could be saved by equipping the clients with less capable hardware, and/or performance could be beefed up for those bursts by having a high speed/capacity server (imagine having several times the processing power of your client machine at your disposal). Granted, this latter benefit is reduced when your users run long-running, intensive tasks.
      • by TykeClone ( 668449 ) <TykeClone@gmail.com> on Friday August 20, 2004 @11:22PM (#10030382) Homepage Journal
        Further, more money could be saved by making the clients with simple monochrome monitors (say green in color) with VT100 keyboards.

        Sounds like it's back to the future.

        • Re:Good idea (Score:4, Insightful)

          by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Friday August 20, 2004 @11:26PM (#10030401) Homepage Journal
          I was at AutoZone today. (Er, yesterday. It's after midnight.) I asked the sales clerk what their computer system was.

          He said, "It's an old piece of crap." (He works on a green dumb terminal)

          I asked him if it did the job well enough...
        • Re:Good idea (Score:3, Insightful)

          by shfted! ( 600189 )
          True, but in that case, the users would notice a decline in their computing experience, versus a potential (and very real) increase by centralising resources. Take another example: when you reboot a stand-alone client, very rarely is the program image for, say, the word processor already in RAM. Thus, when the user starts the program, he or she has to wait for the program to be loaded into RAM. Compare this to a centralised system, where another user has likely used the word processor recently, and so the p
          • Re:Good idea (Score:5, Insightful)

            by ipfwadm ( 12995 ) on Saturday August 21, 2004 @12:00AM (#10030538) Homepage
            It takes about 2 seconds for MS Word to come up on my laptop when running on batteries. When plugged in, that would presumably be a tad faster. Even if your central server can have it open in 0.1 seconds, I would bet that the network latency would make that 1.9 seconds all but go away, and 1.9 seconds isn't much of an inconvenience to me anyway. Sure, some apps take longer, but once I've started those up, they usually stay open for a long long time. Besides, we're still only talking about a few seconds of initialization time -- Visual Studio just took 4 seconds, Photoshop CS took 20. I waste more time blowing my nose.

            There's a reason nobody runs client-server. Desktop systems with fast processors are just too cheap.

            • Re:Good idea (Score:5, Insightful)

              by NtroP ( 649992 ) on Saturday August 21, 2004 @11:42AM (#10032777)
              There's a reason nobody runs client-server. Desktop systems with fast processors are just too cheap.
              Actually, we do, and very successfully. I can get an empty Microtel workstation from Walmart for $168.00, with a 17-inch monitor for another $120.00 or so. This gives me a great "thin client" for under $300.00. Sure, that's not a huge savings over, say, a $500.00 stand-alone desktop, but the savings (in a lab environment) comes down the line. With a standalone desktop I have to replace it in 4-5 years and probably at least add RAM in the meantime (think Longhorn will run well on 128MB?). At, say, $500.00 a pop for 30 workstations, you are looking at $15,000 to upgrade the lab (and a $500 standalone workstation won't last very long). I can put a whole new thin-client lab in for under $10,000, or upgrade an existing lab (either monitors or CPUs) for half that (though why I'd ever need to do that I don't know - maybe moving to flat-panel monitors or bigger CRTs?)

              The thin clients, once in place, are good indefinitely. If I need more speed or capacity, I just upgrade the server - not a whole lab of 30 workstations. The savings continue from there. With no internal moving parts, the energy consumption for the lab goes down, and the lab also stays cooler - requiring less energy again from the HVAC system. Small savings, but with 30 labs it adds up. On top of this, I don't ever have to touch the clients. They PXE-boot from a central Tao-tc Linux server [taolinux.org], which loads a small kernel and rdesktop on the client and then severs the connection. The client connects to a Dell rack-mount Windows 2003 Terminal Server or one of our Fedora LTSP terminal servers, depending on our needs.

              This means that, for any given lab, I have at most one machine to manage, install apps on, patch, secure and otherwise babysit. This saves big bucks on time, OS upgrade licenses, Patchlink licenses, antivirus licenses, etc. that I would have needed for every computer in the lab (assuming they were Windows desktops). I also have much greater reliability: if one of the servers goes down, I just change a setting on the Tao-tc box, have the lab reboot their clients, and presto, they're pointing to one of the other servers in another building and sharing its power while I re-ghost the dead server.

              We also allow our users to disconnect from their sessions instead of logging out. This means they can come back later to any of the thin clients in the building, log in, and be exactly where they left off before. This is a godsend during power outages - the servers are on UPSes, so when the power comes back on, the users reconnect to their existing sessions and no work is lost, no data is corrupted.

              Granted, the thin-client scenario doesn't work for every situation - we use high-end workstations for the CAD/CAM and video production labs. We also use dedicated workstations for those staff who need to sync Palms or use local USB devices, etc., but for "normal" staff, classroom and lab use - it rocks!

              One dual-processor 3.2GHz server with 4GB of RAM can serve over 100 clients running Office at blazing speeds. Word and Excel load "instantly". You should see the look on people's faces when I show them an empty IBM 300PL (P2 133 MHz) system net-booted to Windows, and I click on Word. It invariably blows their workstations away. And because people using the Terminal Server can't install every shiny, blinky piece of software that shows up, it STAYS fast. And saves me more money and headaches in the process.

              The best part is that our Mac OS X users can use RDP to connect to the terminal servers too - allowing them to use the Windows-only software with ease instead of forcing them to give up their Macs. In fact, we just did a week-long class on some proprietary Windows-only app in our iMac lab. With the 3-button scroll mice plugged in, they never even knew the difference; worked like a charm.

              So, yeah, you aren't going to use thin-clients for gaming and surely not at home, but in a controlled corporate or school environment, you can't beat it for ease of management, performance and cost savings.

          • Re:Good idea (Score:4, Insightful)

            by be-fan ( 61476 ) on Saturday August 21, 2004 @12:05AM (#10030558)
            Oh, BTW: the X protocol will probably outlive Xlib. XCB aims to speak the X protocol while fixing Xlib's shortcomings. If we had standardized on Xlib itself rather than on the protocol, we couldn't be replacing it with XCB today.

      • Re:Good idea (Score:3, Informative)

        by timeOday ( 582209 )
        I don't think a windowing system should be built around networking at all.

        In the common case, there is no client or server, just an app running on a PC. So don't build the assumption of networking into windowing.

        Look at X: it's built on a standardized network protocol. If you wanted, you could implement a different Xlib, even one with a different API, so long as it used the X network protocol. But that extra degree of design freedom has been a complete waste of effort, code complexity, and CPU cycles.

        • Re:Good idea (Score:3, Insightful)

          by be-fan ( 61476 )
          The problem is that in any client/server architecture, you're going to need *some* sort of protocol. How else would you do it? And you can't ditch client/server, because:

          a) History shows that it's rarely the bottleneck (e.g. fast GUIs like QNX and BeOS are client/server);
          b) There is no other good place to put it --- kernel space is too dangerous.

          So once you've defined the binary protocol between apps, it's a tiny step to make that network transparent while you're at it.
        • Re:Good idea (Score:5, Insightful)

          by nathanh ( 1214 ) on Saturday August 21, 2004 @12:51AM (#10030709) Homepage
          In the common case, there is no client or server, just an app running on a PC. So don't build the assumption of networking into windowing.

          OK. So let's run with that idea. We still have multiple clients and one set of hardware, so we need to arbitrate the access. We also need to have a common place where the clients can share information like window clip lists. Then there are issues like drag and drop, cut and paste, etc which also require inter-client communication. And how do you solve the issue of two clients seeing the mouse button being pressed, and both assuming that the click was for them?

          At some point you realise you need to have a program, somewhere, that coordinates all of the clients. Assuming this won't be the kernel, it must be another userspace program. We call this program "the X server". And because we have all these clients in userspace, and the X server is also in userspace, they need to use some form of inter-process communication. XFree86 and X.org already use UNIX sockets, one of the fastest IPC methods available. The only thing faster would be shared memory, but that's been tried before and it's more hassle than it's worth.

          Now admittedly there are some situations where the clients simply need to talk directly to the hardware. For example the client needs to upload a 3D texture or render an MPEG-2 frame. For those situations it makes no sense to send that data to the X server first. So for those situations we do have solutions that bypass the X server and go directly to the hardware. These include the DRI extension, the MIT-SHM extension and the DGA extension.
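
          For instance, the MIT-SHM path looks roughly like this from the client's side (a minimal sketch of the usual XShm calls, with error handling and the XShmQueryExtension check omitted):

          #include <string.h>
          #include <sys/ipc.h>
          #include <sys/shm.h>
          #include <X11/Xlib.h>
          #include <X11/extensions/XShm.h>

          /* Share an image buffer with the X server so pixel data never
             crosses the socket.  Assumes dpy, win and gc already exist. */
          void draw_via_shm(Display *dpy, Window win, GC gc, int w, int h)
          {
              XShmSegmentInfo shminfo;
              XImage *img = XShmCreateImage(dpy,
                                            DefaultVisual(dpy, DefaultScreen(dpy)),
                                            DefaultDepth(dpy, DefaultScreen(dpy)),
                                            ZPixmap, NULL, &shminfo, w, h);

              /* One shared segment, attached by this process and (via
                 XShmAttach) by the X server as well. */
              shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                                     IPC_CREAT | 0600);
              shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
              shminfo.readOnly = False;
              XShmAttach(dpy, &shminfo);

              memset(img->data, 0xff, img->bytes_per_line * img->height);

              /* Only a tiny request goes over the wire; the server reads the
                 pixels straight out of the shared segment. */
              XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h, False);
              XSync(dpy, False);
          }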

    • Re:Good idea (Score:5, Interesting)

      by timeOday ( 582209 ) on Friday August 20, 2004 @11:27PM (#10030404)
      I'm afraid I disagree with the idea of a minimalist windowing system - one that leaves most everything to user level libraries. This still leaves the door wide open for applications to implement various looks, various copy/paste mechanisms, and other things that annoy people.

      20 years ago it might have made sense to make this very modular since nobody knew how things would end up looking. Today, let's face it, windowing is "done." All the various libraries over X look and work very similarly, just different enough to clash. Windowing is mature, I say it's time for more integration.

      Modularity should be at the level of source code, not runtime components.

      • Re:Good idea (Score:5, Insightful)

        by Jeremi ( 14640 ) on Saturday August 21, 2004 @12:30AM (#10030637) Homepage
        I'm afraid I disagree with the idea of a minimalist windowing system - one that leaves most everything to user level libraries. This still leaves the door wide open for applications to implement various looks, various copy/paste mechanisms, and other things that annoy people.


        So you don't want a windowing system that is flexible, because people might want to take advantage of that flexibility?


        I think your reasoning is a misguided attempt to solve by technical means what is really a political/sociological problem. The proper solution is to have a strong set of UI guidelines and standard libraries that make it trivially easy to follow those standards, not to limit the capability of the system just because you don't trust people not to abuse it.

        • Re:Good idea (Score:5, Interesting)

          by timeOday ( 582209 ) on Saturday August 21, 2004 @01:18AM (#10030784)
          I would design the system soundly, and make it flexible underneath, but not push that as the main feature, or give people cause to reimplement right off the bat.

          Here's what happened with X11 as I see it. Fundamentally, it was a network protocol spec and client/server model. Then they built Xlib to implement the network protocol. Then, they ginned up the Athena widget set, sort of a quickie prototype on how one might actually start to build a UI on X. Having done that, they called it a day, leaving it for others to implement the look and feel, and basic functionality like cut & paste. As a result, for years most developers just used the (crappy) Athena widgets as-is, while some others started off in several directions making something worth using (e.g. Motif). Finally a decade or two later we have some decent Windowing toolkits built on X, and a look-and-feel morass.

          X was overly focused on the juicy technical aspects of the day (like networking) and stopped short of providing an application-ready windowing system.

          Instead, focus on delivering 1) a rock-solid, high quality API and 2) a great-looking, high performance implementation for the common case - an app running locally on a PC.

          In other words, pick a good API (e.g. GTK) and implement it over a small, relatively primitive rendering library that accesses the hardware (e.g. OpenGL).

          If people want to come along later and re-implement the API to insert a network transport layer, fine. They can write a shared object to do that, and slip it in place of the local version. Its backend might be VNC, X, whatever.

          If they want to re-implement it to look different, or have different functionality, fine. But there probably won't be a lot of motivation to do this (except maybe to default to a different skin, or make this year's buttons round instead of square, so people feel better about paying for an OS upgrade). And if you replace the default shared GUI library with something else, *all* apps will link against it and hence look the same. (Unless you want to get fancy for some reason and run them with different link paths or something).

          • Re:Good idea (Score:4, Insightful)

            by gnuman99 ( 746007 ) on Saturday August 21, 2004 @12:18PM (#10032974)
            X was overly focused on the juicy technical aspects of the day (like networking) and stopped short of providing an application-ready windowing system.

            Instead, focus on delivering 1) a rock-solid, high quality API and 2) a great-looking, high performance implementation for the common case - an app running locally on a PC.

            Common case for X? Local PC? WTF are you talking about? X was designed for UNIX servers in the days when the "local PC" didn't even exist. I'm *very* glad that X is such a flexible and bullshit-free protocol. That's why you can have different desktop environments, be it KDE, GNOME or even stuff like Blackbox.

            I have yet to crash X by passing some null value or whatever to the server. The Windows API, on the other hand, "solid" as you imply, craps out when you start passing NULLs to it. Heck, you can still crash the entire box by passing some weird numbers to the right functions!

            Sorry, I'll take the simplicity and flexibility of the protocol over any copy&paste or drag&drop "standard".

        • Re:Good idea (Score:5, Insightful)

          by shirai ( 42309 ) * on Saturday August 21, 2004 @03:18AM (#10031093) Homepage
          So you don't want a windowing system that is flexible, because people might want to take advantage of that flexibility?

          I'm afraid you have it ass backwards. An integrated system allows you the *flexibility* to do whatever you want, including a uniform interface.

          You can still do whatever you want with the interface ultimately but you would be encouraged to do it the consistent way. The encouragement would come from the fact that you wouldn't have to build standard features from scratch every time.

          For example, Windows never stopped Photoshop from implementing their proprietary windowing subsystem for their palettes and such. But I, for one, am glad that they still use standard drop down menus, minimize/maximize buttons, etc.
      • Re:Good idea (Score:5, Interesting)

        by mpaque ( 655244 ) on Saturday August 21, 2004 @01:13AM (#10030771)
        Curiously, the Mac OS X window system implements almost the exact design Jim Gosling describes in his paper.

        All drawing work is done on the client side, and the window server has nothing to do with fonts, cut/paste support or much other higher level work. The window server simply assembles the drawing buffers onto the displays (via hardware or software) and routes events, using hints about the foreground application and the visible window area to manage the task.

        A consistent look and feel is derived by providing a consistent set of high level toolkits, residing on a set of lower level drawing frameworks.

        Shared libraries make sure the needed code is readily available and resident in memory. Fonts are cached and vended as shared memory resources using Mach's virtual memory semantics. Drawing buffers also leverage Mach VM semantics.
        • Re:Good idea (Score:4, Interesting)

          by pohl ( 872 ) on Saturday August 21, 2004 @08:55AM (#10031843) Homepage
          Hmmm...I recognize you from the old comp.sys.next.* usenet hierarchy. Didn't you disappear after the acquisition to go work on creating Quartz? If so, it must be fun to be a few steps ahead of Gosling. Oh, and thank you for the working implementation that I'm using right now.
        • Re:Good idea (Score:4, Informative)

          by The Ego ( 244645 ) on Saturday August 21, 2004 @11:11AM (#10032626)
          Karma whoring: to understand who the poster is, please check this previous post [slashdot.org] of mine.

          And for a one-post description of Quartz and links to Usenet posts from "mpaque", you can see this post [slashdot.org].

          Mike's posts have always impressed me, hence the apparent fanboyism of those posts. And the more experience I gain in this industry, the more I respect this kind of professionalism in non-official communications.
      • Re:Good idea (Score:4, Interesting)

        by Hacksaw ( 3678 ) on Saturday August 21, 2004 @03:33AM (#10031128) Homepage Journal
        Today, let's face it, windowing is "done."

        This is like saying that once cars could go faster than was safe, no more innovation was needed.

        What would happen if such a windowing system appeared would be this: the GTK+ folks, the Qt folks, and some Xlib folks would port their libraries to the new system, add in a few missing things, and we'd have the same thing we have now, but faster and easier to maintain.

        It would also move important bits out of the server, like the paste buffers and so on, into plain user space, where they could more easily be standardized. Free of the legacy swamp of X, clean designs could spring forth, and innovation could happen.


        For instance, I'd love for there to be an easy-to-use clipboard stack that could hold as many clips as there is disk space, and an interface to help maintain it. Click the clip you want, second-button it into place. This would make things like document editing easier, and make using the clipboard less of an annoyance.
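
        As a sketch of what would have to live in user space for that (purely hypothetical - nothing like this exists in X today), the bookkeeping is pretty simple:

        #include <stdlib.h>
        #include <string.h>

        /* One clip: a MIME type plus an opaque blob, spooled to disk if large. */
        typedef struct Clip {
            char        *mime_type;   /* e.g. "text/plain" or "image/png"   */
            void        *data;        /* in-memory copy, or NULL if spooled  */
            size_t       size;
            char        *spool_path;  /* file backing the clip when spooled  */
            struct Clip *next;        /* next (older) clip on the stack      */
        } Clip;

        typedef struct { Clip *top; } ClipStack;

        /* Push a new clip.  The stack is bounded only by disk space because
           old entries can be spooled out instead of thrown away. */
        void clipstack_push(ClipStack *cs, const char *mime,
                            const void *data, size_t size)
        {
            Clip *c = calloc(1, sizeof *c);
            c->mime_type = strdup(mime);
            c->data = malloc(size);
            memcpy(c->data, data, size);
            c->size = size;
            c->next = cs->top;
            cs->top = c;
        }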

      • I think you are going to be proven wrong... my feeling is that there are going to be some key changes in the future with 3D acceleration. I'm expecting window systems to radically change when the 3D features of graphics cards are used. Let's face it: we have 3D cards which are not used for anything. Doesn't it seem plausible that the windowing system will start using the unused capabilities of modern video cards? Doesn't it also seem logical to have everything in 3D rather than 2D (which is old sc
  • Wait... (Score:4, Funny)

    by StevenHenderson ( 806391 ) <stevehenderson@NOspam.gmail.com> on Friday August 20, 2004 @11:14PM (#10030352)
    His design is to make the window system do the absolute minimum and move all the work into the client.

    Wait, so you mean you wouldn't require this?
    http://it.slashdot.org/article.pl?sid=04/05/04/2223237&tid=201&tid=137 [slashdot.org]
  • by zoloto ( 586738 ) on Friday August 20, 2004 @11:14PM (#10030354)

    Would be a system that is both lightweight and fast. Everything could move at the speed of a finely tuned video game. Advances in rendering pipelines and library design would be easy to accommodate. This window system design isn't particularly radical: it's more just pointing out that this is the way that X is going already, given the increasing predominance of application-side rendering libraries. Once you accept that fact and admit that it's actually the right way to go, the design falls out, simply by stripping away legacy stuff that isn't needed any more.


    So. Who's with me to create this SourceForge project? Dead serious folks, not a troll. But who has the gumption to get it started and make it run VERY fast, then after a while see what the X.org people would think of merging or using it? Eh eh?

    Let me know - use my GPG key to encrypt messages (it's the wave of the future!).

    --zoloto
    • by IamTheRealMike ( 537420 ) on Saturday August 21, 2004 @06:01AM (#10031403)
      Gosling basically described DirectFB [directfb.org] so if you like this sort of idea, go hack on that.

      However, I'd suggest talking to various people in the industry first - people tend to pick up lots of misinformation that sounds correct but actually isn't by reading random stuff on the web (and Slashdot). See the remarks about Office preloading above - doesn't happen.

      So the design of X, it turns out, isn't actually a serious bottleneck on performance. If you do profiling runs and such, you find that having everything coordinated by the X server isn't a serious speed problem, and that much larger issues are things like having to read from the framebuffer to do XRENDER blending (or it was last time I checked).

      Basically, before going "wow yeah, right on!" I suggest you do a lot of research into the design of past and present windowing systems - what sounds intuitively right often isn't.


      • See the remarks about Office preloading above - doesn't happen.


        I followed that thread with a lot of interest, and I believe the poster who said that MS is just really good at optimizing apps. I think the preloading "myth" may have to do with the shortcut to Office that appears in C:\Documents and Settings\Start Menu\Startup after installing Office. If this isn't a preloader for Office, what is it?
        • That's (iirc) FindFast and is entirely optional. It indexes files in the background a bit like updatedb does in Linux. I don't think it preloads parts of Office, it certainly doesn't need to do that.

          Basically Office starts really fast because it makes heavy use of lazy loading (only loads code just-in-time), and because Microsoft do things like reordering code and functions in the source to ensure that frequently used code resides in the same pages in memory.

          OK, I can see from the replies to my first po

  • Wow comment on X (Score:3, Interesting)

    by mjh ( 57755 ) <mark AT hornclan DOT com> on Friday August 20, 2004 @11:15PM (#10030356) Homepage Journal
    I think the most interesting part of the article was this:

    The result would be a system that is both lightweight and fast. Everything could move at the speed of a finely tuned video game. Advances in rendering pipelines and library design would be easy to accommodate. This window system design isn't particularly radical: it's more just pointing out that this is the way that X is going already, given the increasing predominance of application-side rendering libraries. Once you accept that fact and admit that it's actually the right way to go, the design falls out, simply by stripping away legacy stuff that isn't needed any more.

    I can't count how many times I hear on /. someone saying that X is too bulky, etc, etc. And here's Gosling saying (2 years ago) that X is headed in the direction of slim and lightweight.

    Am I misreading what he's saying?

    • by twitter ( 104583 ) on Friday August 20, 2004 @11:32PM (#10030426) Homepage Journal
      I can't count how many times I hear on /. someone saying that X is too bulky, etc, etc. And here's Gosling saying (2 years ago) that X is headed in the direction of slim and lightweight.

      People who complain about X being "bulky", "bloated" and all that are trolls. It was designed on slim hardware and designed flexibly.

      The real test is simply to use it. Try Feather Linux or any of the other tiny distros on some crufty old hardware and see for yourself. I've got a 90 MHz laptop with 24MB of RAM that runs X just fine, thanks to Woody, Fluxbox and other light applications. GNOME 1.4 is also snappy enough, though KDE is a little slow. X is not the problem, if there is one! Feather runs even faster on Debian testing and unstable code, and I suspect that two further years of going down Gosling's path are responsible. Of course newer hardware runs better, and I don't have problems with things like xawtv, Xine or Quake running with KDE or Window Maker on top of X.

      From where I stand, I have no idea what people are talking about when they complain about X. They never say anything specific.

    • Re:Wow comment on X (Score:5, Informative)

      by nathanh ( 1214 ) on Saturday August 21, 2004 @12:27AM (#10030631) Homepage
      I can't count how many times I hear on /. someone saying that X is too bulky, etc, etc. And here's Gosling saying (2 years ago) that X is headed in the direction of slim and lightweight. Am I misreading what he's saying?

      No. You've read him correctly. What Gosling is saying is a simplified version of the X.org roadmap.

      For example, X11 contains a font renderer. The design is really ancient: no anti-aliasing, poor kerning. Clients couldn't access the glyphs very easily, which made it impossible to do arbitrary things like stroked paths or proper printing. It kind of sucked. A number of font extensions were considered for XFree86. Any one of them would have addressed all of the existing issues, but they were heavyweight solutions.

      So in the end Keith Packard wrote a better solution. He implemented the XRender extension. This extension simply knows how to draw rows of glyphs. It also knows about alpha masks (Porter-Duff compositing). The client now turns the font (typically TrueType) into alpha-masked glyphs and sends the glyphs to the X server. If you're using a GNOME or KDE desktop with antialiased fonts, then you're using Keith's XRender extension and client-side font rendering instead of the X11 font renderer. This is only practical because the client-side libraries (e.g., libxft2) are shared.
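
      For example, a client using libxft2 does roughly the following; the glyphs are rasterised client-side (fontconfig/FreeType) and travel to the server as alpha masks via Render. A minimal sketch - the font name is just an example, and error handling is omitted:

      #include <string.h>
      #include <X11/Xlib.h>
      #include <X11/Xft/Xft.h>

      /* Draw one antialiased string into an existing window. */
      void draw_label(Display *dpy, Window win, const char *text)
      {
          int       scr  = DefaultScreen(dpy);
          Visual   *vis  = DefaultVisual(dpy, scr);
          Colormap  cmap = DefaultColormap(dpy, scr);

          XftFont *font = XftFontOpenName(dpy, scr, "DejaVu Sans-12");
          XftDraw *draw = XftDrawCreate(dpy, win, vis, cmap);

          XftColor color;
          XftColorAllocName(dpy, vis, cmap, "black", &color);

          /* Xft rasterises the glyphs here and pushes them to the server
             through the Render extension. */
          XftDrawStringUtf8(draw, &color, font, 20, 40,
                            (const FcChar8 *)text, (int)strlen(text));

          XftColorFree(dpy, vis, cmap, &color);
          XftDrawDestroy(draw);
          XftFontClose(dpy, font);
      }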

      Another interesting example of "slimming down" the X server is the Composite extension. Rather than implement a heavy compositing engine in the X server, Keith designed this extension so it simply renders the window into offscreen memory. Another extension, XDamage, tells a special client called the "compositor" when any region of the window changes. The compositor then uses the XRender extension to render the damaged region with appropriate drop shadows and/or alpha masks. Notice how the rendering is still done by the X server so it can be hardware accelerated.
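
      The skeleton of such a compositor client is surprisingly small - roughly this (my sketch; the actual XRender drawing and all error handling are left out, and a real compositor like xcompmgr tracks every top-level window rather than just the root):

      #include <X11/Xlib.h>
      #include <X11/extensions/Xcomposite.h>
      #include <X11/extensions/Xdamage.h>

      int main(void)
      {
          Display *dpy  = XOpenDisplay(NULL);
          Window   root = DefaultRootWindow(dpy);

          /* Ask the server to render every top-level window off-screen. */
          XCompositeRedirectSubwindows(dpy, root, CompositeRedirectManual);

          int damage_event, damage_error;
          XDamageQueryExtension(dpy, &damage_event, &damage_error);

          /* One Damage object per window of interest; the root window
             stands in for the full window-tree walk here. */
          XDamageCreate(dpy, root, XDamageReportNonEmpty);

          for (;;) {
              XEvent ev;
              XNextEvent(dpy, &ev);
              if (ev.type == damage_event + XDamageNotify) {
                  XDamageNotifyEvent *de = (XDamageNotifyEvent *) &ev;

                  /* Acknowledge the damage, then re-composite the affected
                     region with XRender (shadows, alpha, and so on). */
                  XDamageSubtract(dpy, de->damage, None, None);
                  /* ... XRenderComposite(...) window pixmap -> screen ... */
              }
          }
      }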

      For the future of X.org there is more of this "slimming down" being planned. Jim Gettys and Keith Packard gave a presentation [keithp.com] in July 2004 where they suggest the future of X is as an OpenGL client. They are both keen on a new design where the X server stops being the arbitrator of video hardware. Instead it becomes an OpenGL client with direct access to the video hardware through the DRM, just like every other DRI client. There is a simpler version of that paper in the short slideshow Life in X Land [keithp.com].

      • Re:Wow comment on X (Score:4, Interesting)

        by Brandybuck ( 704397 ) on Saturday August 21, 2004 @02:35AM (#10030997) Homepage Journal
        The trouble with doing everything over OpenGL is that you're subjugating X11 to the video chip manufacturers. While I understand that gamers couldn't care less about closed versus open drivers, I for one don't want to mess with proprietary drivers just to use a 2D desktop. I could be using Windows if I wanted that.

        Right now the Open Source nv and ati drivers in X.org are more than adequate for normal 2D display, but they suck for OpenGL.

        I'm not idly ranting about ideology, I'm talking about practical problems. When I bought my new computer I put a GeForce in it because everyone said NVidia drivers were the best for FreeBSD. But NVidia never bothered to update their driver for -CURRENT for six months. Six freaking months! I should be the one deciding what branch, OS and kernel to use, *not* NVidia.

        I fully understand that NVidia and ATI have proprietary intellectual property tied up in their drivers, and can't open them. But that's their problem, not mine. I'm not going to cry for them, because I don't have this problem with my ethernet card, hard drives or CPU.
  • Network bandwidth? (Score:3, Insightful)

    by daVinci1980 ( 73174 ) on Friday August 20, 2004 @11:28PM (#10030409) Homepage
    One problem is his treatment of remote windows... He suggests sending them over as video streams.

    If networking bandwidth is a problem now with the X format (which is basically just sending clicks and so forth), why does he think the response is going to be any better when sending *a huge ton of pixel data*?

    Even if you assume that you only have to transmit differences, there are still cases where the difference will be several megs. (For example, a fullscreen clear in 1600x1200x32).
    • by be-fan ( 61476 ) on Friday August 20, 2004 @11:33PM (#10030435)
      He's not suggesting sending over huge amounts of pixel data. If the app speaks OpenGL, you can ship over the OpenGL command stream. Since OpenGL was designed to support network rendering from day 1, this can be very efficient.
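
      For instance, an application (or the toolkit underneath it) can ask GLX for an indirect context, in which case every GL call is encoded into the GLX protocol stream on the X connection and executed on the server side. A rough sketch, skipping error checks:

      #include <GL/glx.h>

      /* Create an *indirect* rendering context: GL commands are serialized
         over the X connection instead of being DMA'd to local hardware. */
      GLXContext make_indirect_context(Display *dpy)
      {
          int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
          XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

          /* Final argument False requests indirect rendering;
             True would ask for a direct (local DMA) context. */
          return glXCreateContext(dpy, vi, NULL, False);
      }
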
    • RTFA? (Score:3, Informative)

      by jbellis ( 142590 )
      he also says that (a) sending pixel data is basically what the Sun Ray product does and (b) it's about as efficient as using the X protocol would be, or (reading between the lines) they wouldn't have done it that way...
  • by DataPath ( 1111 ) on Friday August 20, 2004 @11:29PM (#10030411)
    His idea of making remote connections a highly compressed pixel stream doesn't excite me - it seems less than ideal.

    I would think that you would want to stream, when possible, rendering API calls, so that you can send pixel data as pixels, vector data as vectors, and 3D surface and texture data as such.

    Maybe have a method for negotiating which rendering APIs are supported, stream those, and then render the rest as pixels and push those.

    My intuition tells me that doing so would make remote connection streaming a lot more efficient. Maybe someone with more knowledge than me can explain why this would/wouldn't be a good idea.
  • by be-fan ( 61476 ) on Friday August 20, 2004 @11:29PM (#10030417)
    As Gosling mentions, X is moving in this direction today. In a year or two, when the newest X changes are stable, the average GTK+ or Qt app will talk to the server via OpenGL. On most DRI-like setups, the route from GL to GPU looks like:

    OpenGL -> userspace command buffer -> graphics memory (DMA via Direct Rendering Manager).

    Text layout, fonts, etc., are all done client-side, and the only things the "server" sees are pixmaps and GL commands.
  • by Brian_Ellenberger ( 308720 ) * on Friday August 20, 2004 @11:53PM (#10030517)
    Maybe it's because Gosling is coming from X11 land and its sucky drag-and-drop/clipboard implementations, but this is seriously a big deal in a windowed operating system. In a windowed operating system, it should be easy to move data from one application to another---even though they are made by different companies. And not just text, either---things like pictures as well. Going beyond this, dynamic linking and embedding is a handy feature as well.

    • by Anonymous Coward
      Neither of these features is the responsibility of the windowing system. A windowing system only records events and distributes them. To the windowing system a drag and drop is a click, a mouse move, and a declick, and nothing more. All the windowing system does is alert through messages: "Hey, a click happened here", "Hey, the mouse is dragging", "Hey, the mouse was declicked." The application is responsible for knowing what those events signal. The application is responsible for interpreting the results,
  • by lawpoop ( 604919 ) on Saturday August 21, 2004 @12:05AM (#10030562) Homepage Journal
    ... I would choose a windowing system that did more work.

    Seriously, all we are talking about is modularizing the windowing system. If the WS is as simple as possible, people are going to rely on libraries and windowing toolkits to get their work done. I guess that's already happened with GTK, etc.

  • by MagikSlinger ( 259969 ) on Saturday August 21, 2004 @12:14AM (#10030592) Homepage Journal

    For my fellow Amigaites out there:

    I would build a "device driver" that did nothing more than manage the clipping lists and hand out graphic device ports. This might actually be best done at user level, rather than a device driver, using shared memory and semaphores.
    I wouldn't use signals for anything. Everything would go through a unified message queue (along with mouse and keyboard events).

    *sniff* That brings back memories. Sadly, my Amiga RKMs now support my monitor, but oh... this is so familiar. :-)

    For the rest: the Amiga had a graphics library layer that talked directly to the hardware. On top of that was built the "Layers" library which does what Gosling is talking about. It just handled clipping lists and "stacking" without any other details. On top of this layer was built the GUI.

    Also, the Amiga used a single message port to communicate with the application. You could have more message ports, but you rarely needed them. You waited politely for a message, fetched it, then acted upon it as you saw fit. All your GUI events queued up nicely in the message port.
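
    For readers who never used Intuition, the unified-queue model looks roughly like this (a hypothetical sketch; wait_for_port, get_msg and reply_msg stand in for the real Wait()/GetMsg()/ReplyMsg() calls):

    /* Every event - mouse, key, close gadget, refresh - arrives as one
       message type on one queue, and the app replies when it's done. */
    typedef enum { EV_MOUSE, EV_KEY, EV_CLOSE, EV_REFRESH } EventClass;

    typedef struct Msg {
        EventClass  class;
        int         x, y;      /* pointer position, if relevant   */
        int         code;      /* key or button code, if relevant */
        struct Msg *next;
    } Msg;

    typedef struct { Msg *head, *tail; } MsgPort;

    extern void wait_for_port(MsgPort *port);   /* block until non-empty */
    extern Msg *get_msg(MsgPort *port);
    extern void reply_msg(Msg *m);

    void event_loop(MsgPort *port)
    {
        for (;;) {
            wait_for_port(port);
            Msg *m;
            while ((m = get_msg(port)) != NULL) {
                switch (m->class) {
                case EV_CLOSE:   reply_msg(m); return;   /* tear down  */
                case EV_MOUSE:   /* track the pointer */ break;
                case EV_KEY:     /* handle input      */ break;
                case EV_REFRESH: /* redraw the damage */ break;
                }
                reply_msg(m);
            }
        }
    }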

  • On top of that (Score:5, Interesting)

    by be-fan ( 61476 ) on Saturday August 21, 2004 @12:20AM (#10030607)
    In order to get good performance out of such a simple window system, applications need to be reasonably intelligent. One thing I think this entails is getting rid of immediate-mode APIs as the standard way to draw, and make retained-mode APIs the standard way to draw. To refresh your memory:

    - An immediate-mode API is something like GL or Cairo. The app sends drawing commands, and the engine executes them immediately. If something moves and needs to be redrawn, the app must do all the work of redrawing the scene.

    - A retained-mode API is something like EVAS. Instead of submitting drawing commands, the app specifies what the scene looks like in a scene graph. The canvas library does all the dirty work of redrawing scenes efficiently when things change.

    The plight of X (which has very fast drawing, but often has brain-dead application redraw behavior) shows that no matter how fast your graphics API is, many application programmers (who usually aren't graphics programmers) will still make it look slow by writing apps that redraw the whole scene on even the smallest change. A good canvas API like EVAS fits very well with how most apps work. Canvas APIs are slower when scenes change quickly, but for most apps, most UI elements stay static. Where canvas APIs excel is in allowing simply-coded apps to demonstrate good redraw behavior, because all drawing optimization can be done in the canvas.

    Of course, for scenes which are animated and quickly changing, apps should be able to access the underlying immediate-mode API, but this should be the exception rather than the rule.
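
    As an illustration, the difference in who owns the redraw looks roughly like this (hypothetical APIs on both sides - these are not Cairo's or EVAS's actual signatures):

    /* Hypothetical immediate-mode and canvas APIs, declared only so the
       sketch is self-contained; none of these names are real libraries. */
    typedef struct Canvas Canvas;
    typedef struct CanvasObject CanvasObject;
    extern void draw_rect(int x, int y, int w, int h, unsigned color);
    extern void draw_text(int x, int y, const char *s);
    extern CanvasObject *canvas_add_rect(Canvas *c, int x, int y, int w, int h, unsigned color);
    extern CanvasObject *canvas_add_text(Canvas *c, int x, int y, const char *s);
    extern void canvas_text_set(CanvasObject *o, const char *s);

    /* Immediate mode: the app redraws everything itself on every expose,
       even when most of the scene is untouched. */
    void on_expose_immediate(void)
    {
        draw_rect(0, 0, 640, 480, 0xffffff);   /* clear the whole window */
        draw_text(10, 20, "Hello");            /* redraw all the text    */
        draw_rect(10, 40, 620, 24, 0xcccccc);  /* redraw the chrome      */
    }

    /* Retained mode: the app describes the scene once; the canvas tracks
       damage and repaints only what actually changed. */
    void update_retained(Canvas *c)
    {
        canvas_add_rect(c, 0, 0, 640, 480, 0xffffff);
        CanvasObject *label = canvas_add_text(c, 10, 20, "Hello");
        canvas_add_rect(c, 10, 40, 620, 24, 0xcccccc);

        /* Later: one property change; the canvas computes the minimal
           repaint and the app never handles an expose event at all. */
        canvas_text_set(label, "Goodbye");
    }
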
  • by 2TecTom ( 311314 ) on Saturday August 21, 2004 @12:37AM (#10030668) Homepage Journal
    ... could you take a wee break between engines and do an Id OpenGL GNU GUI?
  • Yes, but... (Score:3, Funny)

    by Phekko ( 619272 ) on Saturday August 21, 2004 @01:11AM (#10030762)
    ...what do you think of a person who only does the bare minimum?
  • by CaptnMArk ( 9003 ) on Saturday August 21, 2004 @01:13AM (#10030773)
    He is mostly right.

    One problem remains, though: by using lots of client-side libraries with their own per-client state, some efficiency is lost and startup time increases greatly.

    We are already seeing this with today's GTK and KDE programs, which already have disastrous startup times.

    [mark@silver mark]$ time xterm -e exit

    real 0m0.111s
    user 0m0.066s
    sys 0m0.007s

    [mark@silver mark]$ time gnome-terminal -e exit
    Bonobo accessibility support initialized
    GTK Accessibility Module initialized
    Atk Accessibilty bridge initialized

    real 0m0.311s
    user 0m0.203s
    sys 0m0.032s

    [mark@silver rxvt-unicode-3.3]$ time src/rxvt -e exit

    real 0m0.052s
    user 0m0.004s
    sys 0m0.003s

    The machine is Athlon XP 2500+ 1G RAM, no swap, Fedora Core 2.
  • by Madcapjack ( 635982 ) on Saturday August 21, 2004 @01:15AM (#10030777)
    If I had the chance to do it, I'd call it Lindows.
  • by Dwonis ( 52652 ) * on Saturday August 21, 2004 @01:32AM (#10030828)
    From the article:
    I would make the "window system" so minimal that it is almost non-existent. Each graphical application gets direct access to the hardware, and a window is nothing more than a clipping list and an (x,y) translation. I would build a "device driver" that did nothing more than manage the clipping lists and hand out graphic device ports. This might actually be best done at user level, rather than a device driver, using shared memory and semaphores.

    The last thing we need is a new design that allows arbitrary user programs to have read/write access to the entire screen (read-only access is bad enough). Sooner or later, we are going to start running arbitrary programs on our computers in a secure sandbox environment that is enforced by the OS (and ultimately, the CPU). What happens when some cute little game your spouse downloaded yesterday decides to make itself look like your electronic banking program? Under this architecture, how do we avoid that? Hack every display driver in existence? Trust the shared library to prevent this?
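
    For concreteness, the interface the quoted design implies might boil down to something this small (a hypothetical illustration, not code from Gosling's paper):

    /* A window is nothing more than an (x,y) translation plus a clip list. */
    typedef struct Rect { int x, y, w, h; } Rect;

    typedef struct GraphicPort {
        int    origin_x, origin_y;   /* the window's translation             */
        int    nclip;
        Rect  *clip;                 /* visible rectangles, maintained by the
                                        "window system" as windows restack    */
        void  *framebuffer;          /* direct mapping of the device          */
    } GraphicPort;

    /* The entire "window system": hand out ports and keep clip lists
       current.  Widgets, fonts, cut and paste all live in client-side
       libraries. */
    GraphicPort *ws_open_port(int x, int y, int w, int h);
    void         ws_move_port(GraphicPort *p, int new_x, int new_y);
    void         ws_close_port(GraphicPort *p);

    Notice that nothing in an interface like this says who may draw where, which is exactly the concern raised above.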

    • by mpaque ( 655244 ) on Saturday August 21, 2004 @01:46AM (#10030873)
      The last thing we need is a new design that allows arbitrary user programs to have read/write access to the entire screen (read-only access is bad enough).


      Subtle point here. The hardware the apps have access to may not be the screen, but an off-screen surface which the graphics acceleration subsystem (such as OpenGL) can draw into. The window system takes care of getting the bits drawn in the off-screen surface onto the displays.


      These surfaces can live in VRAM, or DMA addressable main memory. Lots of tricks can be done here by having the app draw at what is essentially the front end of the display processing pipeline.


      Consider for example the GL buffer-as-texture path. Apps draw into a buffer, which when flushed is treated by the window system as a texture to be applied to the app window. The whole GL pipeline can be applied, scaling or warping the texture, altering the geometry the surface is to be applied to, mixing the texture with other texture sources, and so on.

  • by RAMMS+EIN ( 578166 ) on Saturday August 21, 2004 @01:57AM (#10030902) Homepage Journal
    If I designed a window system today, it would have themeable standard widgets, and the protocol (function calls for local, some sort of RPC for remote) would only have to specify the widgets to be used, as opposed to all the drawing operations, which is what X11 does.

    Also, it wouldn't require each and every event (mouse move, click, ...) to be communicated between server and client. Rather, clients would be able to indicate which events they wish to receive for each widget (basically like onclick, onmouseover and friends in HTML).

    All this simultaneously does away with the many competing and incompatible GUI toolkits for X and the non-themeability of Windows and Aqua, and makes network transparency work without huge bandwidth requirements and sluggish responsiveness.
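
    As a sketch of what the client side of such a protocol could look like (invented names - this is not PicoGUI's actual API):

    /* The client describes widgets, not drawing operations, and subscribes
       only to the events it cares about - roughly HTML's onclick model. */
    typedef unsigned int WidgetId;
    typedef enum { EVT_CLICK = 1, EVT_KEY = 2, EVT_HOVER = 4 } EventMask;

    /* Over a socket these would be small fixed-size requests; locally,
       plain function calls.  All hypothetical. */
    extern WidgetId ws_create_button(WidgetId parent, const char *label);
    extern WidgetId ws_create_textbox(WidgetId parent, int columns);
    extern void     ws_subscribe(WidgetId w, EventMask events);

    void build_login_form(WidgetId window)
    {
        WidgetId user = ws_create_textbox(window, 32);
        WidgetId ok   = ws_create_button(window, "Log in");

        /* Pointer motion over these widgets never crosses the wire;
           the server only reports what we asked for. */
        ws_subscribe(user, EVT_KEY);
        ws_subscribe(ok, EVT_CLICK);
    }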

    It's worth pointing out that this window system exists in the form of PicoGUI [picogui.org]. Sadly, the site is currently down.

    By the way, what is it about OpenGL that makes it so suitable for acceleration, yet it's horribly slow when implemented in software?
    • I must say I still prefer the idea of a "heavy" windowing system/manager, mainly for the benefits it gives to network transparency. For example, imagine several clients connecting from several different machines and/or user accounts. Under X11 with GTK+/QT/whatever, the different widget sets appear differently, and can appear differently depending on user settings. I like the sound of Fresco [fresco.org] - all widgets are rendered by the server. Under this sort of system the differences between GTK+, QT, etc would simp

  • by Doc Ruby ( 173196 ) on Saturday August 21, 2004 @02:30AM (#10030986) Homepage Journal
    I'd make the windows group by process tree, and offer frames of grouped windows, panes of subwindows, and visual pipes for STDIN/OUT/ERR among them - all embeddable among one another. I'd save window geometries in an OS DB. I'd strictly define the windowing system as merely the presentation layer, independent through an API to a logic layer, in turn independent of a data layer. And I'd write the entire windowing layer in OpenGL.
    • by Jherico ( 39763 )
      Computers are general purpose machines designed to process just about anything. Each processor design has its advantages and disadvantages in this world of generalized computing. How hardware optimization applies in this case is hard to explain, so let me take an example from another field, cryptography. The DES algorithm has several steps for encrypting a given block of text. Two of them involve an arbitrary reshuffling of bits. For instance, in a given 64-bit block, bit 2 might be put in bit 5's posi
  • Bad idea (Score:5, Insightful)

    by haraldm ( 643017 ) on Saturday August 21, 2004 @03:08AM (#10031068)
    This concept kills the concept of thin clients and X terminals - which are in far more widespread use than most people think.

    Letting the app take care of its own window borders is a bad idea as well. This is one of the worst parts in M$ Windows - once an app hangs, there is no way of closing or minimizing a window or simply of getting it out of the way. It's way better to have this handled by a separate process.

    • This concept kills the concept of thin clients and X terminals

      It doesn't kill the concept of thin clients. You render to a server's back-buffer and transmit it over the network to the client. Then you proxy mouse and keyboard events from the client back to the window on the server. It is non-trivial, but definitely possible. From the article:

      I think that a more viable solution in the long run would be to replace the X protocol with a very simple pixel copying protocol that uses the user-level rendering li

      • Re:Bad idea (Score:3, Insightful)

        by Sax Maniac ( 88550 )
        If your app doesn't respond to commands, you kill the process. I mean really, how often do you expect your applications to hang?

        All the time. Let's say I do a mass-scale operation in Finale that's going to take a lot of time - extracting all 25 parts from a score and grinding them all out to disk. It's going to take a few minutes, during which the application window goes dead.

        Sure, I could kill the process, but that wouldn't give me the desired results, would it?

        It sure would be nice to be able to minimize

      • Re:Bad idea (Score:3, Insightful)

        by argent ( 18001 )
        You render to a server's back-buffer and transmit it over the network to the client.

        Bitmap scraping. Been there, done that, got the bad rendering, latency, crummy feedback, etcetera. By making heroic efforts and badly compromising the user experience you can actually make it more network efficient than X, but you completely blow the feedback you need for end-user efficiency.

        The usual scenario is that when you click on an object the application's idea of what the UI looks like is completely different from
    • Re:Bad idea (Score:3, Insightful)

      by julesh ( 229690 )
      This is one of the worst parts in M$ Windows - once an app hangs, there is no way of closing or minimizing a window or simply of getting it out of the way.

      Huh? If an app hangs in MS windows, I find clicking on the window's close button results in an "Application not responding -- do you want to kill the process?" dialog box popping up. Whereas X tends to cope really badly with hung clients, generally requiring you to use an entirely different command (e.g. "kill window" rather than "close window", altho
    • Re:Bad idea (Score:4, Informative)

      by mpaque ( 655244 ) on Saturday August 21, 2004 @11:18AM (#10032668)
      Letting the app take care of its own window borders is a bad idea as well. This is one of the worst parts in M$ Windows - once an app hangs, there is no way of closing or minimizing a window or simply of getting it out of the way. It's way better to have this handled by a separate process.

      Annoying, isn't it? The trick here is not to let the apps draw to the visible frame buffer, which requires all this visible region locking, but instead to have the app draw to a buffer (in off-screen VRAM or main memory, addressable by the window system). The window system is then responsible for placing the content on-screen.

      So, how does that help? The app always has a place to draw, and the separate window system process always has control over moving the bits onto the display. This means that a window manager can always order the window out, or move the window aside, without the cooperation of the application. In one implementation, the draggable areas used to move the window are registered with the window manager, so the app need not even be involved in moving the window.

      One of the more interesting possibilities here comes into play when the window system is implemented atop a powerful engine such as OpenGL. In this case, the window buffers can be treated as texture sources and applied using the various texture combiner paths, along with scaling, filtering, and various transforms, all applied after the application has rendered its content.

      This allows the window system to be extended in a variety of ways without changing one line of the application's code. The windows can be minimized quite literally by adjusting the transformation matrix, or by playing with transparency, without the cooperation of the application. One could transform the window contents down to icon size, and composite the content with an iconic badge, producing a minimized icon representing the window, complete with live content, without the cooperation of the application.
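
      In fixed-function OpenGL terms, the "minimize by matrix" trick really is just a transform applied at composite time; a rough sketch, assuming the window's contents have already been bound as the texture win_tex:

      #include <GL/gl.h>

      /* Composite one window, already rendered into a texture, at icon
         size.  The application never runs; only the transform changes. */
      void draw_minimized(unsigned int win_tex, float icon_x, float icon_y)
      {
          glEnable(GL_TEXTURE_2D);
          glBindTexture(GL_TEXTURE_2D, win_tex);

          glPushMatrix();
          glTranslatef(icon_x, icon_y, 0.0f);
          glScalef(0.1f, 0.1f, 1.0f);       /* live window at 10% size */

          glBegin(GL_QUADS);
          glTexCoord2f(0, 0); glVertex2f(0.0f,   0.0f);
          glTexCoord2f(1, 0); glVertex2f(640.0f, 0.0f);
          glTexCoord2f(1, 1); glVertex2f(640.0f, 480.0f);
          glTexCoord2f(0, 1); glVertex2f(0.0f,   480.0f);
          glEnd();

          glPopMatrix();
      }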

  • Legacy (Score:3, Interesting)

    by ooze ( 307871 ) on Saturday August 21, 2004 @04:02AM (#10031174)
    From the Article:

    Once you accept that fact and admit that it's actually the right way to go, the design falls out, simply by stripping away legacy stuff that isn't needed any more.

    This is actually the hardest thing to do. Today's computer systems are still mostly based on concepts from 30 or more years ago. So many things that got hacked into Unix and/or Windows over the last decades could be unified in the way they are accessed. Plan 9 is actually a nice step in this direction.
  • His new design duplicates the big mistake of X... putting policy in the application... and makes it worse. Why is it that people use Mac OS or Microsoft Windows? Because they have a consistent GUI, however it's implemented, that isn't subject to the whims of each application's programmer.

    And it's unnecessary... most applications need a fairly limited set of graphic primitives, and where composition of those primitives is needed, scripts in the window system can virtually always do the job: the limiting factors in a GUI are rendering, which would still be handled in native code, and the human. Yes, some applications need tightly coupled, high-performance control over their display, but this is still, and for the foreseeable future will remain, an exception. Even art software really doesn't need the kind of GPU-intensive performance he's shooting for. The applications that need to do their own direct rendering of complex scenes, rather than just a fast way to pump bitmaps to the display, are pretty rare and can be dealt with as they are now, with a shortcut through the window system. With OpenGL you can even have multiple applications of that kind running concurrently without interfering with each other.

    So the special case he's optimising for is already well handled, we don't need to build the window system around it. And in the general case it wastes the performance of the graphics card by keeping the application way off in the processor intimately involved with the mechanics of moving images around. As GPUs get more power and memory it will be more and more practical to move more of the window system into the GPU, and it will be more and more desirable to handle rendering in a common layer that's close to the display (in the GPU, where possible) the way Mac OS X already handles compositing.

    Quartz Extreme is pretty crude. It shouldn't be necessary to do rendering in the processor and compositing in the GPU (the normal case, because it doesn't copy rendered windows back from the GPU to the CPU and maintains the master of each Quartz window in main memory at all times), with all the extra memory traffic that creates... but it shows the way forward. A truly 3d GUI where windows and more complex application objects are managed in 3d space the way a window system handles them in 2d space should be possible and efficient.

    But consider what happens when you move a window into the 3D background... the GUI moves it away from you and tilts it at an angle so you can keep it in view "off to one side". You can't keep going back to the application over and over again to re-render its part of the screen as your viewpoint changes. Instead you let the GPU map it onto a surface, and navigation of the environment is smooth and more or less invisible to the application. Perhaps one might send the app a signal that says "suspend updating" when it's too far away or out of your viewpoint, but that's an optimization.

    No, this is exactly the wrong time to go back to the X model of a dumb server and smart applications.
