Gosling: If I Designed a Window System Today...
An anonymous reader writes "In his blog entry for the 10th of August, James Gosling (finally) publishes a short paper he wrote in 2002 entitled 'Window System Design: If I had it to do over again in 2002'. His design is to make the window system do the absolute minimum and move all the work into the client."
If I were to design a window system today (Score:5, Funny)
Re:If I were to design a window system today (Score:3, Informative)
Will keep it nice and pitch black. Trick I learned from a friend who lived in Arizona.
Re:If I were to design a window system today (Score:3, Funny)
It is dangerous though at certain times of the day if you live on busy streets with cars and your windows point towards them.
Not to mention everyone will think you're growing an indoor hydroponic crop or running a crack house.
Re:If I were to design a window system today (Score:5, Funny)
For a minute there, I thought you were talking about a different [toastytech.com] Evil Yellow Face.
Re:If I were to design a window system today (Score:3, Informative)
Re:If I were to design a window system today (Score:4, Informative)
Good idea (Score:5, Insightful)
Re:Good idea (Score:4, Insightful)
Re:Good idea (Score:5, Funny)
Sounds like it's back to the future.
Re:Good idea (Score:4, Insightful)
He said, "It's an old piece of crap." (He works on a green dumb terminal)
I asked him if it did the job well enough...
Re:Good idea (Score:3, Insightful)
Re:Good idea (Score:5, Insightful)
There's a reason nobody runs client-server. Desktop systems with fast processors are just too cheap.
Re:Good idea (Score:5, Insightful)
The thin clients, once in place, are good indefinitely. If I need more speed or capacity, I just upgrade the server - not a whole lab of 30 workstations. The savings continue from there. With no internal moving parts, the energy consumption for the lab goes down, and the lab also stays cooler - requiring less energy again from the HVAC system. Small savings, but with 30 labs - it adds up. On top of this, I don't ever have to touch the clients. They PXE-boot from a central Tao-tc Linux server [taolinux.org] which loads a small kernel and rdesktop on the client and then severs the connection. The client connects to a Dell rack-mount Windows 2003 Terminal server or one of our Fedora LTSP Terminal Servers, depending on our needs.
This means that, for any given lab, I have, at most, one machine to manage, install apps on, patch, secure and otherwise babysit. This saves big bucks on time, OS upgrade licenses, Patchlink licenses, Antivirus licenses, etc. that I would have needed for every computer in the lab (assuming they were Windows desktops). I also have much greater reliability: if one of the servers goes down I just change a setting on the Tao-tc box, have the lab reboot their clients, and presto, they're pointing to one of the other servers in another building and sharing its power while I re-ghost the dead server.
We also allow our users to disconnect from their sessions instead of logging out. This means they can come back later to any of the thin clients in the building, log in and be exactly where they left off before. This is a godsend during power outages - the servers are on UPSes, so when the power comes back on, the users reconnect to their existing sessions and no work is lost, no data is corrupted.
Granted, the thin-client scenario doesn't work for every situation - we use high-end workstations for CAD/CAM and Video Production Labs. We also use dedicated workstations for those staff who need to sync Palms or use local USB devices, etc., but for "normal" staff, classroom and lab use - it rocks!
One dual-processor 3.2GHz server with 4 GB of RAM can serve over 100 clients running Office at blazing speeds. Word and Excel load "instantly". You should see the look on people's faces when I show them an empty IBM 300PL (P2 133 MHz) system net-booted to Windows, and I click on Word. It invariably blows their workstations away. And because people using the Terminal Server can't install every shiny, blinky piece of software that shows up, it STAYS fast. And saves me more money and headaches in the process.
The best part is that our Mac OS X users can use RDP to connect to the terminal servers too - allowing them to use the Windows-only software with ease - instead of forcing them to give up their Macs. In fact we just did a week-long class on some proprietary Windows-only app in our iMac Lab. With the 3-button scroll-mice plugged in, they never even knew the difference; it worked like a charm.
So, yeah, you aren't going to use thin-clients for gaming and surely not at home, but in a controlled corporate or school environment, you can't beat it for ease of management, performance and cost savings.
Re:Good idea (Score:5, Insightful)
Your point? When I double-click on the Word icon, it takes two seconds for the window to come up. Why should I care if the app is pre-loaded or not? If it's pre-loaded on everyone's system, why should we time it as if it weren't?
And you still conveniently neglected to address the fact that I mentioned other apps, and that even in today's high-speed world, a few seconds of waiting for your app to load really isn't a big deal. Like I said, I dedicate more time to excretory bodily functions every day than I do to waiting for my software to load.
Re:Good idea (Score:5, Interesting)
The problem is that there isn't enough RAM to preload all the applications. My PC during the day will run (and this is a typical work day) Word, Excel, Outlook, Visio, Project, Firefox, Internet Explorer, and an assortment of programs that don't concern me like virus scanners.
If all of these applications tried to preload themselves on startup then your swap would grind itself into dust and boottime would be in excess of 30 minutes.
It's false reasoning to say that Word takes only 2 seconds. It takes 2 seconds plus whatever time it added to the boot sequence. And if the first application you run isn't Word then there is a good chance that the preloaded Word will be swapped to disk anyway, making the next instance of Word take significantly longer than 2 seconds.
Take note that Mozilla also uses the preload trick. My work machine has consumed all 256MB of RAM and 450MB of swap after a fresh reboot and a login. That's 450MB of intensive swap activity that slowed down my boot sequence. If I just want to check my appointments in Outlook then why am I forced to wait for Word and Mozilla to fight over the swap? It's ludicrous.
Re:Good idea (Score:3, Insightful)
Re:Good idea (Score:5, Insightful)
That works fine for word processors. But there are several situations where client-server GUI is the preferred solution. For example, VPN clients are often implemented as Citrix over IPSec. In scientific and academic circles it's common for the applications to run on headless mainframes and/or supercomputers. In high security environments it's sometimes impossible to run a client locally; you must run it remotely and display it locally.
And when you start to consider other issues - how much does it cost to patch and maintain 3000 Windows desktops? - it quickly becomes obvious that per-user desktops aren't the be-all and end-all.
Load times, sure, I'll agree with you that's not a big deal.
Office doesn't add anything to the boot sequence (Score:3, Informative)
Re:Good idea (Score:5, Insightful)
It isn't, and therefore your point is irrelevant. Just because it happens to work for you that way doesn't mean it does, or needs to, work for everyone that way.
I'd prefer not to have apps load on boot unless I tell them to load on boot, thank you very much. I don't need either my RAM or swap being soaked by an app I haven't given explicit permission to load.
But then, that may be why I don't live in a Windows world.
Max
Re:Good idea (Score:5, Informative)
No dude. They start fast because Microsoft really, really know how to optimize their software to start fast, and because that's always been a corporate priority for them. Research has shown that given two roughly equivalent apps, most people will decide on the basis of which one starts faster.
That doesn't mean they're using dirty tricks. Look into working set optimizers some time.
Re:Good idea (Score:4, Informative)
However, they aren't alone in this at all. Apple QuickTime, Mozilla, Real, and dozens of other packages all try to do the same thing. Fortunately, the trend has been away from trying to hide this from the user.
Re:Good idea (Score:5, Informative)
I can't speak about the way the preload problem is handled today, but when I worked at Microsoft (10 years ago) we spent an insane amount of effort to get the apps to load faster, or more accurately, to give the apps the appearance of loading more quickly. Often at startup we would load as little of the app as we could to render the main frame and then load the actual functional code in the background.
This was prioritized over code maintainability, over some features, and even over fixing some bugs.
I really can't see this being a huge priority in open source projects, since code maintainability (modularity) and the associated flexibility is such a high priority in most of them. Just look at Linux bootup. You could probably speed things up significantly by not running all those sh scripts in /etc/init.d/ (or by running them after the console login has appeared, giving the appearance of a faster boot), but what developer would give up that flexibility for a little speed?
Re:Good idea (Score:3, Informative)
Got some more information on that? I searched for it, and I found:
Microsoft Office Startup - Microsoft Office Startup preloads some
Apart from the fact that I've never seen this - but then, I'm using Office 2000 - preloading some DLL files is still far from
Re:No (Score:3, Informative)
Re:Good idea (Score:4, Insightful)
Re:Good idea (Score:3, Informative)
In the common case, there is no client or server, just an app running on a PC. So don't build the assumption of networking into windowing.
Look at X: it's built on a standardized network protocol. If you want you could implement a different Xlib, even one with a different API, so long as it used the X network protocol. But that extra degree of design freedom has been a complete waste of effort, code complexity, and CPU cycles.
Re:Good idea (Score:3, Insightful)
a) History shows that it's rarely the bottleneck (eg: fast GUIs like QNX and BeOS are client/server);
b) There is no other good place to put it --- kernel space is too dangerous.
So once you've defined the binary protocol between apps, it's a tiny step to make that network transparent while you're at it.
Re:Good idea (Score:5, Insightful)
I disagree. A properly separated model like the DRI/DRM has high speed userspace drivers and doesn't cause problems when there is a bug with the driver. The model requires a tiny kernel module called the DRM. It manages the hardware resources (eg, DMA) and queues the clients so they don't stomp on each other. The majority of the driver is written in userspace and links directly into the client application (via libGL).
Putting an entire video driver in the kernel isn't sensible. There is too much complexity and there is no actual benefit. It's actually faster with modern cards to link the driver directly to the client. The reason being that the client can fill the command buffer without context switches. If the entire video driver was in the kernel you would need two context switches per queue flush.
The only cases when the network transparency causes a measurable impact is when a lot of data is being pushed from the client to the hardware. For those situations we have direct rendering in the client. For all other situations, the costs of network transparency are lost in the noise. I wouldn't be too concerned about it.
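(To make that concrete, here's roughly what that userspace/kernel split looks like from a client's point of view, using libdrm. Just a sketch: the "radeon" driver name is only an example, you need permission to open the DRM device, and error handling is minimal. Link with -ldrm.)

/* Sketch: query the in-kernel DRM module from userspace via libdrm. */
#include <stdio.h>
#include <xf86drm.h>

int main(void)
{
    /* drmOpen() finds and opens the device node for the named kernel DRM
       driver. "radeon" is only an example; substitute your card's driver. */
    int fd = drmOpen("radeon", NULL);
    if (fd < 0) {
        fprintf(stderr, "could not open DRM device\n");
        return 1;
    }

    /* The tiny kernel module answers this; everything else (command
       submission, GL state, ...) lives in the userspace DRI/libGL drivers. */
    drmVersionPtr v = drmGetVersion(fd);
    if (v) {
        printf("kernel DRM driver: %s %d.%d.%d (%s)\n",
               v->name, v->version_major, v->version_minor,
               v->version_patchlevel, v->desc);
        drmFreeVersion(v);
    }

    drmClose(fd);
    return 0;
}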
Re:Good idea (Score:5, Insightful)
OK. So let's run with that idea. We still have multiple clients and one set of hardware, so we need to arbitrate the access. We also need to have a common place where the clients can share information like window clip lists. Then there are issues like drag and drop, cut and paste, etc which also require inter-client communication. And how do you solve the issue of two clients seeing the mouse button being pressed, and both assuming that the click was for them?
At one stage you realise you need to have a program, somewhere, that coordinates all of the clients. Assuming this won't be the kernel, it must be another userspace program. We call this program "the X server". And because we have all these clients in userspace, and the X server is also in userspace, they need to use some form of inter-process communication. XFree86 and X.org already use UNIX sockets; one of the fastest IPC methods available. The only thing faster would be shared memory but that's been tried before and it's more hassle than it's worth.
Now admittedly there are some situations where the clients simply need to talk directly to the hardware. For example the client needs to upload a 3D texture or render an MPEG-2 frame. For those situations it makes no sense to send that data to the X server first. So for those situations we do have solutions that bypass the X server and go directly to the hardware. These include the DRI extension, the MIT-SHM extension and the DGA extension.
Re:Question (Score:5, Informative)
DGA is Direct Graphics Access. It allows a client to directly access the framebuffer. The client needs to handle all the pixel packing models (eg, RGB555, RGB888, RGBA8888) and work out the line strides and so on.
MIT-SHM is MIT Shared Memory. Through the magic of shared memory, the client and server share a piece of memory containing an XImage or a Pixmap. The client can change the contents and then tell the server to render the image/pixmap to screen.
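(A rough, untested sketch of that path, minus error handling and the shmctl cleanup; it assumes the server supports the extension and runs on the same machine. Link with -lX11 -lXext.)

/* Sketch: push pixels to the X server through shared memory (MIT-SHM). */
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XShm.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     256, 256, 0, 0, BlackPixel(dpy, scr));
    XMapWindow(dpy, win);
    XFlush(dpy);
    sleep(1);                         /* crude: let the window get mapped */

    /* The image lives in a SysV shared memory segment visible to both the
       client and the X server, so no pixel data crosses the socket. */
    XShmSegmentInfo shm;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shm, 256, 256);
    shm.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                       IPC_CREAT | 0600);
    shm.shmaddr = img->data = shmat(shm.shmid, NULL, 0);
    shm.readOnly = False;
    XShmAttach(dpy, &shm);

    memset(img->data, 0x80, img->bytes_per_line * img->height);  /* grey */

    /* Ask the server to copy from the shared segment to the window. */
    XShmPutImage(dpy, win, DefaultGC(dpy, scr), img,
                 0, 0, 0, 0, 256, 256, False);
    XFlush(dpy);
    sleep(3);
    return 0;
}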
DRI is the most complicated of the bunch. It stands for Direct Rendering Infrastructure. The basic explanation is that it allows a client to send commands directly to the video card. At the moment the only DRI implementation is OpenGL. So for example, quake3 links to libGL.so which is a DRI aware library. The library finds out which video card you have and loads the appropriate video card driver. This driver knows how to turn OpenGL commands into the hardware commands for your video card. These commands are shoved into a buffer which is provided by DRM (the DRI kernel module) and then blasted off to the video card. X only gets involved to setup cliplists and create windows; the actual 3D rendering is all done from the client directly to the hardware.
Those 3 extensions take care of the biggest bottlenecks in X: framebuffer access, image transfers, and 3D streams. There are some other issues with the X pipe - things like latency moreso than throughput - but I'm not sure that removing the X pipe would solve those problems. The biggest issues with X on Linux right now are things like latency, single-threading, libraries that block, lack of double buffering, lack of synchronisation between window managers and widgets and clients, etc.
Re:Good idea (Score:5, Interesting)
20 years ago it might have made sense to make this very modular since nobody knew how things would end up looking. Today, let's face it, windowing is "done." All the various libraries over X look and work very similarly, just different enough to clash. Windowing is mature, I say it's time for more integration.
Modularity should be at the level of source code, not runtime components.
Re:Good idea (Score:5, Insightful)
So you don't want a windowing system that is flexible, because people might want to take advantage of that flexibility?
I think your reasoning is a misguided attempt to solve by technical means what is really a political/sociological problem. The proper solution is to have a strong set of UI guidelines and standard libraries that make it trivially easy to follow those standards, not to limit the capability of the system just because you don't trust people not to abuse it.
Re:Good idea (Score:5, Interesting)
Here's what happened with X11 as I see it. Fundamentally, it was a network protocol spec and client/server model. Then they built Xlib to implement the network protocol. Then, they ginned up the Athena widget set, sort of a quickie prototype on how one might actually start to build a UI on X. Having done that, they called it a day, leaving it for others to implement the look and feel, and basic functionality like cut & paste. As a result, for years most developers just used the (crappy) Athena widgets as-is, while some others started off in several directions making something worth using (e.g. Motif). Finally a decade or two later we have some decent Windowing toolkits built on X, and a look-and-feel morass.
X was overly focused on the juicy technical aspects of the day (like networking) and stopped short of providing an application-ready windowing system.
Instead, focus on delivering 1) a rock-solid, high quality API and 2) a great-looking, high performance implementation for the common case - an app running locally on a PC.
In other words, pick good API (e.g. GTK) and implement it over a small, relatively primitive rendering library to access hardware (e.g. OpenGL).
If people want to come along later and re-implement the API to insert a network transport layer, fine. They can write a shared object to do that, and slip it in place of the local version. Its backend might be VNC, X, whatever.
If they want to re-implement it to look different, or have different functionality, fine. But there probably won't be a lot of motivation to do this (except maybe to default to a different skin, or make this year's buttons round instead of square, so people feel better about paying for an OS upgrade). And if you replace the default shared GUI library with something else, *all* apps will link against it and hence look the same. (Unless you want to get fancy for some reason and run them with different link paths or something).
Re:Good idea (Score:4, Insightful)
Instead, focus on delivering 1) a rock-solid, high quality API and 2) a great-looking, high performance implementation for the common case - an app running locally on a PC.
Common case for X? Local PC? WTF are you talking about. X was designed for UNIX servers during the days when "Local PC" didn't even exist. I'm *very* glad that X is such a flexible and bullshit-free protocol. That's why you can have different desktop environments be it KDE, Gnome or even stuff like blackbox.
I have yet to crash X by passing some null value or whatever to the server. The Windows API, on the other hand, "solid" as you imply, craps out when you start passing NULLs to it. Heck, you can still crash the entire box by passing some weird numbers to the right functions!
Sorry, I'll take the simplicity and flexibility of the protocol over any copy&paste or drag&drop "standard".
Re:Good idea (Score:5, Insightful)
I'm afraid you have it ass backwards. An integrated system allows you the *flexibility* to do whatever you want, including a uniform interface.
You can still do whatever you want with the interface ultimately but you would be encouraged to do it the consistent way. The encouragement would come from the fact that you wouldn't have to build standard features from scratch every time.
For example, Windows never stopped Photoshop from implementing their proprietary windowing subsystem for their palettes and such. But I, for one, am glad that they still use standard drop down menus, minimize/maximize buttons, etc.
Re:Good idea (Score:3, Interesting)
Re:Good idea (Score:5, Interesting)
All drawing work is done on the client side, and the window server has nothing to do with fonts, cut/paste support or much other higher level work. The window server simply assembles the drawing buffers to the displays (via hardware or software) and routes events, using hints of the foreground application and the visible window area to manage the task.
A consistent look and feel is derived by providing a consistent set of high level toolkits, residing on a set of lower level drawing frameworks.
Shared libraries make sure the needed code is readily available and resident in memory. Fonts are cached and vended as shared memory resources using Mach's virtual memory semantics. Drawing buffers also leverage Mach VM semantics.
Re:Good idea (Score:4, Interesting)
Re:Good idea (Score:4, Informative)
of mine.
And for a one-post description of Quartz and links to Usenet posts from "mpaque", you can see this post [slashdot.org].
Mike's posts have always impressed me, hence the apparent fanboyism of those posts. And the more experience I gain in this industry, the more I respect this kind of professionalism in non-official communications.
Re:Good idea (Score:4, Interesting)
This is like saying that once cars could go faster than it was safe to, no more innovation was needed.
What would happen if such a windowing system appeared would be this: the GTK+ folks, the QT folks, and some Xlib folks would port their libraries to the new system, add in a few missing things, and we'd have the same thing we have now, but faster, and easier to maintain.
It would also move important bits out of the server, like the paste buffers and so on, into plain user space, where they could more easily be standardized. Free of the legacy swamp of X, clean designs could spring forth, and innovation could happen.
For instance, I'd love for there to be an easy-to-use clipboard stack that could hold as many clips as there is disk space, and an interface to help maintain it. Click the clip you want, second-button it into place. This would make things like document editing easier, and make using the clipboard less of an annoyance.
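(Nothing window-system specific is needed for that. A toy sketch of the data structure - every name and limit here is made up:)

/* Toy clipboard stack: every copy pushes a clip, and picking an older
   clip promotes it to the top so the next paste uses it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CLIPS 1024                 /* arbitrary cap for the sketch */

static char *clips[MAX_CLIPS];
static int nclips;

void clip_push(const char *text)       /* called on every "copy" */
{
    if (nclips == MAX_CLIPS)
        free(clips[--nclips]);         /* drop the oldest clip */
    memmove(clips + 1, clips, nclips * sizeof clips[0]);
    clips[0] = strdup(text);
    nclips++;
}

const char *clip_pick(int i)           /* "click the clip you want" */
{
    if (i < 0 || i >= nclips)
        return NULL;
    char *chosen = clips[i];
    memmove(clips + 1, clips, i * sizeof clips[0]);
    clips[0] = chosen;                 /* promote to top of the stack */
    return chosen;
}

int main(void)
{
    clip_push("first copy");
    clip_push("second copy");
    clip_pick(1);                      /* user clicks the older clip */
    printf("next paste would insert: %s\n", clips[0]);
    return 0;
}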
Re:Good idea (Score:3, Interesting)
Wait... (Score:4, Funny)
Wait, so you mean you wouldn't require this?
http://it.slashdot.org/article.pl?sid=04/05/04/22
New windowing system from scratch? (Score:3, Interesting)
So. Who's with me to create this sourceforge project? Dead serious folks, not a troll. But who has the gumption to get it started and make it run VERY fast, then after a while see how the X.org people would think of merging or using it? Eh eh?
let me know, use my gpg key to encrypt messages (it's the wave of the future!).
--zoloto
Re:New windowing system from scratch? (Score:5, Insightful)
However, I'd suggest talking to various people in the industry first - people tend to get lots of misinformation that sounds correct but actually isn't by reading random stuff on the web (and slashdot). See the remarks about Office preloading above - doesn't happen.
So it turns out the design of X isn't actually a serious bottleneck on performance. If you do profiling runs and such, you find that having everything co-ordinated by the X server isn't a serious speed problem, and that much larger issues are things like having to read from the framebuffer to do XRENDER blending (or it was, last time I checked).
Basically, before going "wow yeah, right on!" I suggest you do a lot of research into the design of past and present windowing systems - what sounds intuitively right often isn't.
Re:New windowing system from scratch? (Score:3, Interesting)
I followed that thread with a lot of interest, and I believe the poster who said that MS is just really good at optimizing apps. I think the preloading "myth" may have to do with the shortcut to Office that appears in C:\Documents and Settings\Start Menu\Startup after installing Office. If this isn't a preloader for Office, what is it?
Re:New windowing system from scratch? (Score:3, Informative)
Basically Office starts really fast because it makes heavy use of lazy loading (only loads code just-in-time), and because Microsoft do things like reordering code and functions in the source to ensure that frequently used code resides in the same pages in memory.
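(The same trick is easy to play on any platform. A rough sketch of the lazy-loading idea using dlopen - the library name and symbol are purely hypothetical, this is not how Office does it. Link with -ldl.)

/* Lazy loading sketch: the heavyweight code is only mapped in the first
   time a feature is actually used, so startup stays cheap.
   "libspellcheck.so" and "spell_check" are made-up names. */
#include <stdio.h>
#include <dlfcn.h>

typedef int (*spell_fn)(const char *word);

static spell_fn get_spell_checker(void)
{
    static spell_fn fn;                /* cached after the first call */
    if (!fn) {
        void *h = dlopen("libspellcheck.so", RTLD_LAZY);
        if (h)
            fn = (spell_fn)dlsym(h, "spell_check");
    }
    return fn;
}

int main(void)
{
    /* Startup does nothing expensive; the library loads only here. */
    spell_fn check = get_spell_checker();
    if (check)
        printf("'teh' is %s\n", check("teh") ? "ok" : "misspelled");
    else
        printf("spell checker not available\n");
    return 0;
}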
OK, I can see from the replies to my first po
Wow comment on X (Score:3, Interesting)
I can't count how many times I hear on /. someone saying that X is too bulky, etc, etc. And here's Gosling saying (2 years ago) that X is headed in the direction of slim and lightweight.
Am I misreading what he's saying?
trust your eyes, not negative comments. (Score:5, Insightful)
People who complain about X being "bulky", "bloated" and all that are trolls. It was designed on slim hardware and designed flexibly.
The real test is to simply use it. Try Feather Linux or any of the other tiny distros out on some crufty old hardware and see for yourself. I've got a 90 MHz laptop that runs X just fine with 24MB of RAM thanks to Woody, fluxbox and other light applications. Gnome 1.4 also is snappy enough, though KDE is a little slow. X is not the problem if there is one! Feather runs even faster running testing and unstable Debian code and I suspect that two further years of going down Gosling's path is responsible. Of course newer hardware runs better and I don't have problems with things like xawtv, Xine or quake running with KDE or Window Maker on top of X.
From where I stand, I have no idea what people are talking about when they complain about X. They never say anything specific.
Gosling, Hopkins, JCR, and Unix-Haters Perspective (Score:3, Interesting)
Re:Wow comment on X (Score:5, Informative)
No. You've read him correctly. What Gosling is saying is a simplified version of the X.org roadmap.
For example, X11 contains a font renderer. The design is really ancient. No anti-aliasing. Poor kerning. Clients couldn't access the glyphs very easily, which made it impossible to do arbitrary things like strokepaths or proper printing. It kind of sucked. A number of font extensions were considered for XFree86. Any one of them would have addressed all of the existing issues but they were heavyweight solutions.
So in the end Keith Packard wrote a better solution. He implemented the XRender extension. This extension simply knows how to draw rows of glyphs. It also knows about alpha masks (Porter Duff compositing). The client now turns the font (typically TrueType) into alpha-masked glyphs and sends the glyphs to the X server. If you're using a GNOME or KDE desktop with antialiased fonts then you're using Keith's XRender extension and client-side font rendering instead of the X11 font renderer. This is only practical because the client-side libraries (eg, libxft2) are shared.
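(For the curious, this is roughly what client-side text looks like through libXft. An untested sketch with no error handling; build with pkg-config --cflags --libs xft x11.)

/* Sketch: client-side font rendering with Xft. The client rasterises the
   TrueType glyphs (via fontconfig/freetype) and ships alpha-masked glyphs
   to the server through XRender. */
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/Xft/Xft.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     300, 80, 0, 0, WhitePixel(dpy, scr));
    XMapWindow(dpy, win);
    XFlush(dpy);
    sleep(1);                          /* crude: let the window get mapped */

    XftDraw *draw = XftDrawCreate(dpy, win, DefaultVisual(dpy, scr),
                                  DefaultColormap(dpy, scr));
    XftFont *font = XftFontOpenName(dpy, scr, "Sans-16");  /* fontconfig name */
    XftColor color;
    XftColorAllocName(dpy, DefaultVisual(dpy, scr), DefaultColormap(dpy, scr),
                      "black", &color);

    /* No server-side font renderer involved: this becomes XRender glyphs. */
    XftDrawStringUtf8(draw, &color, font, 20, 45,
                      (const FcChar8 *)"antialiased, client side", 24);
    XFlush(dpy);
    sleep(3);
    return 0;
}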
Another interesting example of "slimming down" the X server is the Composite extension. Rather than implement a heavy compositing engine in the X server, Keith designed this extension so it simply renders the window into offscreen memory. Another extension, XDamage, tells a special client called the "compositor" when any region of the window changes. The compositor then uses the XRender extension to render the damaged region with appropriate drop shadows and/or alpha masks. Notice how the rendering is still done by the X server so it can be hardware accelerated.
For the future of X.org there is more of this "slimming down" being planned. Jim Gettys and Keith Packard gave a presentation [keithp.com] in July 2004 where they suggest the future of X is as an OpenGL client. They are both keen on a new design where the X server stops being the arbitrator of video hardware. Instead it becomes an OpenGL client with direct access to the video hardware through the DRM, just like every other DRI client. There is a simpler version of that paper in the short slideshow Life in X Land [keithp.com].
Re:Wow comment on X (Score:4, Interesting)
Right now the Open Source nv and ati drivers in X.org are more than adequate for normal 2D display, but they suck for OpenGL.
I'm not idly ranting about ideology, I'm talking practical problems. When I bought my new computer I put a GeForce in it because everyone said NVidia drivers were the best for FreeBSD. But NVidia never bothered to update their driver for -CURRENT for six months. Six freaking months! I should be the one deciding what branch, OS and kernel to use, and *not* NVidia.
I fully understand that NVidia and ATI have proprietary intellectual property tied up in their drivers, and can't open them. But that's their problem, not mine. I'm not going to cry for them, because I don't have this problem with my ethernet card, hard drives or CPU.
Re:Wow comment on X (Score:5, Informative)
Network bandwidth? (Score:3, Insightful)
If networking bandwidth is a problem now with the X format (which is basically just sending clicks and so forth), why does he think the response is going to be any better when sending *a huge ton of pixel data*?
Even if you assume that you only have to transmit differences, there are still cases where the difference will be several megs. (For example, a fullscreen clear in 1600x1200x32).
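(To put a number on that: at 1600x1200 with 4 bytes per pixel, one uncompressed full-screen update is 1600 x 1200 x 4 = 7,680,000 bytes, roughly 7.3 MB.)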
Re:Network bandwidth? (Score:5, Insightful)
RTFA? (Score:3, Informative)
pixel pushing for remote connections? (Score:3, Insightful)
I would think that you would want to stream, when possible, rendering api calls, so that you can send pixel data as pixels, vector data as vectors, and 3d surface and texture data as such.
Maybe have a method for negotiating what rendering api's are supported, stream those, and then render the rest as pixels and push those.
My intuition tells me that doing so would make remote connection streaming a lot more efficient. Maybe someone with more knowledge than me can explain why this would/wouldn't be a good idea.
X is moving in this direction (Score:5, Interesting)
OpenGL -> userspace command buffer -> graphics memory (DMA via Direct Rendering Manager).
Text layout, fonts, etc, are all done client-side, and the only thing the "server" sees are pixmaps and GL commands.
What about Drag n Drop and the Clipboard??? (Score:4, Interesting)
Not in the windowing system (Score:3, Informative)
If I had to design the app over again... (Score:3, Interesting)
Seriously, all we are talking about is modularizing the windowing system. If the WS is as simple as possible, people are going to rely on libraries and windowing toolkits to get their work done. I guess that's already happened with GTK, etc.
He's talking about the Amiga (Score:5, Interesting)
For my fellow Amigaites out there:
*sniff* That brings back memories. Sadly, my Amiga RKMs now support my monitor, but oh... this is so familiar. :-)
For the rest: the Amiga had a graphics library layer that talked directly to the hardware. On top of that was built the "Layers" library which does what Gosling is talking about. It just handled clipping lists and "stacking" without any other details. On top of this layer was built the GUI.
Also, the Amiga used a single message port to communicate with the application. You could have more message ports, but you rarely needed them. You waited politely for a message, fetched it, then acted upon it as you wished. All your GUI events queued up nicely in the message port.
On top of that (Score:5, Interesting)
- An immediate-mode API is something like GL or Cairo. The app sends drawing commands, and the engine executes them immediately. If something moves and needs to be redrawn, the app must do all the work of redrawing the scene.
- A retained-mode API is something like EVAS. Instead of submitting drawing commands, the app specifies what the scene looks like in a scene graph. The canvas library does all the dirty work of redrawing scenes efficiently when things change.
The plight of X (which has very fast drawing, but often has brain-dead application redraw behavior) shows that no matter how fast your graphics API is, many application programmers (who usually aren't graphics programmers) will still make it look slow by writing apps that redraw the whole scene on even the smallest change. A good canvas API like EVAS fits very well with how most apps work. Canvas APIs are slower when scenes change quickly, but for most apps, most UI elements stay static. Where canvas APIs excel is in allowing simply-coded apps to demonstrate good redraw behavior, because all drawing optimization can be done in the canvas.
Of course, for scenes which are animated and quickly changing, apps should be able to access the underlying immediate-mode API, but this should be the exception rather than the rule.
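(To make the distinction concrete, here's an immediate-mode sketch using Cairo, which the post mentions; with a retained-mode canvas like EVAS you'd instead create the rectangle object once and let the canvas redraw it whenever needed. Untested; build with pkg-config --cflags --libs cairo.)

/* Immediate mode: the app issues drawing commands and they execute now.
   If anything changes, the app has to issue them all again itself. */
#include <cairo.h>

int main(void)
{
    cairo_surface_t *surf =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 200);
    cairo_t *cr = cairo_create(surf);

    cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);   /* white background */
    cairo_paint(cr);
    cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
    cairo_rectangle(cr, 40, 40, 120, 80);      /* drawn right now */
    cairo_fill(cr);

    cairo_surface_write_to_png(surf, "scene.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    return 0;
}
/* Retained mode (EVAS-style, pseudocode): the app would instead do
   something like rect = evas_object_rectangle_add(canvas); set its
   geometry and colour once; and the canvas handles all future redraws. */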
Hey JC, sir, after you do the outer space thing (Score:3, Funny)
Yes, but... (Score:3, Funny)
client side libraries (Score:5, Insightful)
The one problem is there though: by using lots of client-side libraries with their own per-client state, some efficiency is lost and startup time increases greatly.
We are already seeing this with today's GTK and KDE programs, which have disastrous startup times.
[mark@silver mark]$ time xterm -e exit
real 0m0.111s
user 0m0.066s
sys 0m0.007s
[mark@silver mark]$ time gnome-terminal -e exit
Bonobo accessibility support initialized
GTK Accessibility Module initialized
Atk Accessibilty bridge initialized
real 0m0.311s
user 0m0.203s
sys 0m0.032s
[mark@silver rxvt-unicode-3.3]$ time src/rxvt -e exit
real 0m0.052s
user 0m0.004s
sys 0m0.003s
The machine is Athlon XP 2500+ 1G RAM, no swap, Fedora Core 2.
Re:client side libraries (Score:3, Insightful)
By running all this behavior (accessibility) independently for all clients you have lots of overhead.
The above time is a second/cached run (w/ prelink). For the first one, it takes 4+ seconds while rxvt and xterm are still under a second.
This encourages big/monolithic applications which is not the unix way.
note: the rxvt I listed also has unicode support, and eye candy can be enabled too. There's no need for a configuration GUI to run on startup.
If I had the chance... (Score:3, Funny)
Short-sighted design (Score:3, Insightful)
The last thing we need is a new design that allows arbitrary user programs to have read/write access to the entire screen (read-only access is bad enough). Sooner or later, we are going to start running arbitrary programs on our computers in a secure sandbox environment that is enforced by the OS (and ultimately, the CPU). What happens when some cute little game your spouse downloaded yesterday decides to make itself look like your electronic banking program? Under this architecture, how do we avoid that? Hack every display driver in existence? Trust the shared library to prevent this?
Re:Short-sighted design (Score:5, Interesting)
Subtle point here. The hardware the apps have access to may not be the screen, but an off-screen surface which the graphics acceleration subsystem (such as OpenGL) can draw into. The window system takes care of getting the bits drawn in the off-screen surface onto the displays.
These surfaces can live in VRAM, or DMA addressable main memory. Lots of tricks can be done here by having the app draw at what is essentially the front end of the display processing pipeline.
Consider for example the GL buffer-as-texture path. Apps draw into a buffer, which when flushed is treated by the window system as a texture to be applied to the app window. The whole GL pipeline can be applied, scaling or warping the texture, altering the geometry the surface is to be applied to, mixing the texture with other texture sources, and so on.
If I Designed a Window System Today... (Score:3, Interesting)
Also, it wouldn't require each and every event (mouse move, click,
All this is simultaneously going to do away with the many competing and incompatible GUI toolkits for X and the non-themeability of Windows and Aqua, and make network transparency work without huge bandwidth requirements and sluggish responsiveness.
It's worth pointing out that this window system exists in the form of PicoGUI [picogui.org]. Sadly, the site is currently down.
By the way, what is it about OpenGL that makes it so suitable for acceleration, yet it's horribly slow when implemented in software?
Re:If I Designed a Window System Today... (Score:3, Interesting)
I must say I still prefer the idea of a "heavy" windowing system/manager, mainly for the benefits it gives to network transparency. For example, imagine several clients connecting from several different machines and/or user accounts. Under X11 with GTK+/QT/whatever, the different widget sets appear differently, and can appear differently depending on user settings. I like the sound of Fresco [fresco.org] - all widgets are rendered by the server. Under this sort of system the differences between GTK+, QT, etc would simp
windows on my world (Score:3)
Re:windows on my world (Score:3, Interesting)
Bad idea (Score:5, Insightful)
Letting the app take care of its own window borders is a bad idea as well. This is one of the worst parts in M$ Windows - once an app hangs, there is no way of closing or minimizing a window or simply of getting it out of the way. It's way better to have this handled by a separate process.
Re:Bad idea (Score:3, Informative)
It doesn't kill the concept of thin clients. You render to a server's back-buffer and transmit it over the network to the client. Then you proxy the mouse and keyboard events sent to the window on the client back to the server. It is non-trivial, but definitely possible. From the article:
Re:Bad idea (Score:3, Insightful)
All the time. Let's say I do mass-scale operation in Finale that's going to take a lot of time - extracting all 25 parts from a score and grinding them all out to disk. It's going to take a few minutes, where the application window goes dead.
Sure, I could kill the process, but that wouldn't give me the desired results, would it?
It would be sure nice to be able to minimize
Re:Bad idea (Score:3, Insightful)
Bitmap scraping. Been there, done that, got the bad rendering, latency, crummy feedback, etcetera. By making heroic efforts and badly compromising the user experience you can actually make it more network efficient than X, but you completely blow the feedback you need for end-user efficiency.
The usual scenario is that when you click on an object the application's idea of what the UI looks like is completely different from
Re:Bad idea (Score:3, Insightful)
Huh? If an app hangs in MS windows, I find clicking on the window's close button results in an "Application not responding -- do you want to kill the process?" dialog box popping up. Whereas X tends to cope really badly with hung clients, generally requiring you to use an entirely different command (e.g. "kill window" rather than "close window", altho
Re:Bad idea (Score:4, Informative)
Annoying, isn't it? The trick here is not to let the apps draw to the visible frame buffer, which requires all this visible region locking, but instead to have the app draw to a buffer (in off-screen VRAM or main memory, addressable by the window system). The window system is then responsible for placing the content on-screen.
So, how does that help? The app always has a place to draw, and the separate window system process always has control over moving the bits onto the display. This means that a window manager can always order the window out, or move the window aside, without the cooperation of the application. In one implementation, the draggable areas used to move the window are registered with the window manager, so the app need not even be involved in moving the window.
One of the more interesting possibilities here comes into play when the window system is implemented atop a powerful engine such as OpenGL. In this case, the window buffers can be treated as texture sources and applied using the various texture combiner paths, along with scaling, filtering, and various transforms, all applied after the application has rendered its content.
This allows the window system to be extended in a variety of ways without changing one line of the application's code. The windows can be minimized quite literally by adjusting the transformation matrix, or by playing with transparency, without the cooperation of the application. One could transform the window contents down to icon size, and composite the content with an iconic badge, producing a minimized icon representing the window, complete with live content, without the cooperation of the application.
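(A rough sketch of the idea using plain OpenGL and GLUT - not how Quartz actually does it. Here the "window contents" are just a CPU buffer uploaded as a texture, and the "minimize" is nothing but a glScalef; press 'm' to toggle it. Build with -lglut -lGL.)

/* Sketch: treat an application's rendered buffer as a texture and let the
   window system scale/transform it without the app's involvement. */
#include <GL/glut.h>
#include <stdlib.h>

static GLuint tex;
static float scale = 1.0f;            /* shrink toward 0.1 to "minimize" */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glScalef(scale, scale, 1.0f);      /* the "minimize" is just a transform */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);                 /* draw the window buffer as a quad */
    glTexCoord2f(0, 0); glVertex2f(-0.8f, -0.8f);
    glTexCoord2f(1, 0); glVertex2f( 0.8f, -0.8f);
    glTexCoord2f(1, 1); glVertex2f( 0.8f,  0.8f);
    glTexCoord2f(0, 1); glVertex2f(-0.8f,  0.8f);
    glEnd();
    glutSwapBuffers();
}

static void key(unsigned char k, int x, int y)
{
    if (k == 'm') { scale = (scale > 0.5f) ? 0.1f : 1.0f; glutPostRedisplay(); }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("window-as-texture sketch");

    /* Stand-in for the app's off-screen rendering: a checkerboard buffer. */
    unsigned char *pix = malloc(64 * 64 * 3);
    for (int i = 0; i < 64 * 64; i++) {
        unsigned char c = (((i / 64) / 8 + (i % 64) / 8) & 1) ? 220 : 60;
        pix[i * 3] = pix[i * 3 + 1] = pix[i * 3 + 2] = c;
    }
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pix);

    glutDisplayFunc(display);
    glutKeyboardFunc(key);
    glutMainLoop();
    return 0;
}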
Legacy (Score:3, Interesting)
"Once you accept that fact and admit that it's actually the right way to go, the design falls out, simply by stripping away legacy stuff that isn't needed any more."
This is actually the hardest thing to do. Today's computer systems are still mostly based on the concepts of 30 and more years ago. So many things that got hacked into Unix and/or Windows in the last decades could be unified in the way they are accessed. Plan 9 is actually a nice step in this direction.
All the mistakes of X, made worse (Score:3, Interesting)
And it's unnecessary... most applications need a fairly limited set of graphic primitives, and where composition of those primitives is needed, scripts in the window system can virtually always do the job: the limiting factors in a GUI are rendering, which would still be handled in native code, and the human. Yes, some applications need tightly coupled high performance control over their display, but this is still, and for the foreseeable future, an exception. Even art software really doesn't need the kind of GPU-intensive performance he's shooting for. The applications that need to do their own direct rendering of complex scenes, rather than just a fast way to pump bitmaps to the display, are pretty rare and can be dealt with as they are now with a shortcut through the window system. With OpenGL you can even have multiple applications of that kind running concurrently without interfering with each other.
So the special case he's optimising for is already well handled, we don't need to build the window system around it. And in the general case it wastes the performance of the graphics card by keeping the application way off in the processor intimately involved with the mechanics of moving images around. As GPUs get more power and memory it will be more and more practical to move more of the window system into the GPU, and it will be more and more desirable to handle rendering in a common layer that's close to the display (in the GPU, where possible) the way Mac OS X already handles compositing.
Quartz Extreme is pretty crude. It shouldn't be necessary to do rendering in the processor and compositing in the GPU (the normal case, because it doesn't copy rendered windows back from the GPU to the CPU and maintains the master of each Quartz window in main memory at all times), with all the extra memory traffic that creates... but it shows the way forward. A truly 3d GUI where windows and more complex application objects are managed in 3d space the way a window system handles them in 2d space should be possible and efficient.
But consider what happens when you move a window into the 3d background... the GUI moves it away from you and tilts it at an angle so you can keep it in view "off to one side". You can't keep going back to the application over and over again to re-render its part of the screen as your viewpoint changes. Instead you let the GPU map it onto a surface, and navigation of the environment is smooth and more or less invisible to the application. Perhaps one might send the app a signal that says "suspend updating" when it's too far away or out of your viewpoint, but that's an optimization.
No, this is exactly the wrong time to go back to the X model of a dumb server and smart applications.
NON-PDF text... (Score:3, Informative)
Window System Design:
If I had it to do over again in 2002.
James Gosling
December 9, 2002
In the deep dark past I have been involved in building window
systems. I did the original design and implementation of both the
Andrew and NeWS window systems. Both of which predated
X11. They shared with X11 the architectural feature of being
networked: clients sent messages to the server over TCP
connections. I occasionally get asked "if you had to do it over
again, what would you do? Would you do the same thing"
Re:Can we get a non-pdf'd link please? (no text) (Score:3, Informative)
Jackass...
Re:WHAT'S WRONG WITH PDFs? (Score:4, Insightful)
That said, PDF is EVIL INCARNATE: a simple 15-page document suddenly becomes a 4-meg monstrosity trying to be a 'book' in a medium where it's inappropriate. It is a pain to navigate, and you can't cut and paste sections from it most of the time so you can have just the part you need in a small usable text file.
Needless to say I equate putting docs in a pdf file on par with most of the other stuff PHB's do with tech they don't understand.
Sorry for the rant, but I just spent an hour downloading docs in 4 PDFs averaging 3 megs+ each that would have easily fit (images included) in less than a meg in any other format, and been more useful as well. The smallest was 164k, 3 pages, no images.
Mycroft
Re:Don't remember who it was... (Score:3)
This is probably the worst application of XML I have ever heard. And believe me, people are using it for everything and nothing already.
So your proposal is to use a protocol that takes 10x the size of the data it needs to transfer. XML (used that way) is just a file format. Why take the bulkiest one?
Talk about a fast and lightweight system. I need to draw a pixel. Size of the XML packet: 165 bytes. Wow.
Re:Don't remember who it was... (Score:5, Funny)
<objection tone="disgusted">
<body>xml is too sodding verbose for any use ever anywhere. Satan himself recoils before its horror.
</body>
</objection>
Re:Don't remember who it was... (Score:3, Funny)
<?xml version="1.0" encoding="ISO-8859-1"?>
<argument:objection xmlns:argument="http://www.mynamespaceserver.com/namespaces/argument" tone="disgusted">
<argument:body>xml is too sodding verbose for any use ever anywhere. Satan himself recoils before its horror.
</argument:body>
</argument:objection>
Re:Don't remember who it was... (Score:3, Insightful)
Re:Don't remember who it was... (Score:3, Funny)
What, you mean like IMAP does?
)
Re:Really dumb, missing the point entirely (Score:5, Insightful)
Quite a lot, and they are all pretty necessary.
I think you're underestimating all the things that modern applications are required to do.
The windowing system should offer basic 2D and 3D functions, widgets (file selection boxes, drop-downs, radio buttons, checkboxes, crap like that),
What about ListViews? TreeViews? IconViews? What about internationalized text? Text-layout libraries? Image-loading libraries? Component libraries? HTML renderers? Interprocess-communications libraries? Event-notification libraries? Audio libraries? You can't "un-invent" all of these features. Few people want to go back to the bad-old days of poorly formatted text, apps that can only read BMP files, each app needing to reinvent stuff like PDF display and HTML display widgets, apps that can't talk to each other, apps that can't handle multimedia, apps that don't notice when things in the system change, etc, etc. Doing things "quick, fast, and shitty" is a lot easier than doing things "right," but you'd be stupid to want "shitty" over "right."
They simply used Toolbox calls, and any functions that were needed beyond that were not so hard to include right in the binary itself.
What the fuck do you think the toolbox was? It might not have been a shared library (it was a widget toolkit in ROM), but it *was* a library nonetheless. It was no different than Qt is, only it can't handle HTML, internationalized text, etc, etc.
Re:Really dumb, missing the point entirely (Score:4, Insightful)
People demand a lot from their desktop these days, so their desktop does a lot of things. It can take a lot of code to do it all. Sure, you may get smaller binary sizes and no library dependencies writing everything in assembly, but a) it's infeasible and b) the desktop would lack most of the features people want.
But you're missing the point. Ever done an ldd on X or an Xserver? That's what Gosling is talking about.
Using this new windowing scheme will have little/no effect on existing clients because they will still use some toolkit like GTK to do their windowing and widgets. It's not like client developers would have to write their own widget set for each client; they will still use GTK or Qt or whatever, just like they do today.
What will need to change is the toolkits themselves.
If you had actually RTFP you would see that he was advocating a windowing system that was even simpler than what you suggest.
Re:Really dumb, missing the point entirely (Score:3, Insightful)
The question is: could the average consumer realistically replace their current machine with a Mac Plus? I believe the answer is no. Why? Because there are a lot of things that weren't around in those days that we take for granted now.
Imagine trying to tell people that they can no longer use 24-bit color, watch videos, play MP3s,
Re:Really dumb, missing the point entirely (Score:3, Insightful)
You're talking about windowing systems, not application frameworks. There's a difference. Using ldd on this currently running Konqueror process, I see the following "windowing system" dependencies: KDE, Qt and X11. That's it. And most of KDE and Qt are NOT part of the "windowing system".
Most of the libraries you see in the ldd are part of X11
Re:if I made a windowing system... (Score:3, Insightful)
This is impractical, and doesn't happen anywhere.
Look at MS Windows - if you go into add/remove programs you will see a few things listed. Do a find for *.exe, *.com and other executable extensions. You find a few hundred more applications that are not listed there, many standalone like ntbackup, cmd and tens or hundreds of others. Now consider a *nix system, which has the philosophy of lots of little programs that do one tiny job
Re:if I made a windowing system... (Score:3, Insightful)
Re:Quite frankly I wouldn't let him design a windo (Score:3, Informative)
I will grant you that Swing can get complex and it can take serious effort to eliminate bottlenecks. It's intended to be a general framework for MVC based
Re:Quite frankly I wouldn't let him design a windo (Score:3, Insightful)
Pan
Re:Quite frankly I wouldn't let him design a windo (Score:3, Interesting)
Re:Astonishing that Gosling is getting things wron (Score:5, Informative)
The clips are needed to handle event routing, as you mention, and to take care of some subtle internal housekeeping, even when Quartz Extreme is in use. Since not all systems or graphics cards can run Quartz Extreme (there are certain specific graphics card capabilities needed) the clipping information is needed for software compositing cases.
Re:Double Buffering (Score:3, Informative)