Wayland, a New X Server For Linux
An anonymous reader writes "Phoronix has a new article out on Wayland: A New X Server For Linux. One of Red Hat's engineers has started writing a new X11 server around today's needs, aiming to eliminate the cruft that has accumulated in this critical piece of free software for more than a decade. This new server is called Wayland, and it is designed around newer hardware features like kernel mode-setting and a kernel memory manager for graphics. Wayland is also dramatically simpler to target in development. A compositing manager is embedded into the Wayland server and ensures 'every frame is perfect,' according to the project's leader."
Does this... (Score:5, Interesting)
...spell the death-knell of X-based graphics drivers? Does this mean that such drivers will finally be folded into pure kernel modules with no fancy wrappers required? Does that also mean that we can eliminate X as a dependency for playing video games, and using Linux in multimedia or kiosk environments?
It's good to see Red Hat developers doing this (Score:5, Interesting)
While I'm a firm believer in "If it ain't broke, don't fix it", I think it is good to see Red Hat developers (or any developers) looking to future needs and being allowed to devote development time towards those needs.
Xorg isn't broken for most users right now, but planning and creating alternatives is a good idea.
There is already a brainstorm... (Score:5, Interesting)
For including Wayland in Ubuntu:
http://brainstorm.ubuntu.com/idea/15205/ [ubuntu.com]
HELL yes. (Score:5, Interesting)
eliminate the cruft
ABOUT F'ING TIME.
X has been a case study in How Not to Write Software for twenty years now. Once upon a time, it was a pretty cool experimental software project. But for twenty years now, there have been exactly two kinds of X development:
A) Throw a layer on top of it to make it useable for normal people
B) Throw another driver underneath it to make it just barely work on your particular hardware.
Project A is fine until someone has to get beyond your little layer, in which case it's .xinitrc hell. Project B is just treading water, postponing the day that we all realize this indispensable software tool is a gigantic house of cards headed for collapse.
Probably some XFree86 dudes are reading this. Let me just tell you I appreciate your diligence in the nightmare of a job you've set yourself to, but the time has come. Take off and nuke the site from orbit. It's the only way to be sure.
Re:Does this... (Score:5, Interesting)
Re:Does this... (Score:5, Interesting)
But if you're going to "get rid of the cruft", doesn't that suggest that you'd want to move to an architecture that depends on the kernel's graphics subsystem rather than maintaining a zoo of obsolete usermode drivers?
Hardware is the purview of the kernel. Or at least the Hardware Abstraction Layer. (Depending upon your OS's architecture.) Today's X servers still support all kinds of usermode drivers, just so that 95% of configurations can thunk it all to the kernel. Thus there doesn't seem to be much point in providing the graphics drivers in the X server. Better to let the kernel do its job while the X server does its job of drawing the GUI through interpreting a series of abstract commands.
As a bonus, the graphical system becomes available to a variety of programs that desire low-level access to the graphics card rather than running an X server.
Perhaps I'm being naive, but why wouldn't a clean separation between the graphics system and the kernel drivers be an advantageous goal?
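The division of labor described here, with the display server interpreting abstract drawing commands while a backend owns the actual pixels, can be sketched as a toy model in Python (every name below is made up for illustration, not taken from any real X or kernel API):

```python
# Toy model of "kernel owns the hardware, display server interprets
# abstract commands". The MemoryBackend stands in for a kernel-managed
# framebuffer; the DisplayServer never touches "hardware" directly.

class MemoryBackend:
    """Stand-in for a kernel-managed framebuffer: a plain 2D pixel grid."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[0] * width for _ in range(height)]

    def set_pixel(self, x, y, color):
        if 0 <= x < self.width and 0 <= y < self.height:
            self.pixels[y][x] = color

class DisplayServer:
    """Interprets abstract drawing commands against whatever backend it's given."""
    def __init__(self, backend):
        self.backend = backend

    def handle(self, command):
        op = command[0]
        if op == "fill_rect":
            _, x, y, w, h, color = command
            for yy in range(y, y + h):
                for xx in range(x, x + w):
                    self.backend.set_pixel(xx, yy, color)
        else:
            raise ValueError("unknown command: %r" % (op,))

server = DisplayServer(MemoryBackend(8, 8))
server.handle(("fill_rect", 2, 2, 3, 3, 0xFFFFFF))
```

Swapping in a different backend (a real device, a remote socket, a software compositor) would not change the server's command interpreter at all, which is the point being argued.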
What's wrong with X... (Score:5, Interesting)
Xorg isn't broken for most users right now, but planning and creating alternatives is a good idea.
In a sense I think it really is... Admittedly, not necessarily in a way that everybody would notice, as you said - but still...
What X is good at, basically, is putting simple UIs over a network. For instance, I can run XEmacs remotely over the internet, and performance is decent.
Presently, this feature of X is being under-utilized. We're using a network-transparent protocol for the display server, but most people aren't running apps from remote hosts, and applications aren't being written with this in mind.
Basically, for all the overhead associated with something like X to be worthwhile, one of a few possible conditions must be satisfied: either applications must be designed to work efficiently over the network given the present limitations of the display protocol, or the display protocol must be enhanced or altered so that today's applications can run reasonably well over a network link.
Running X apps over an internet link versus a LAN is an extreme case, admittedly - but nevertheless, an old Athena app can do it, while the simplest of GTK or QT apps can have a real problem with it...
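The network transparency being discussed boils down to the DISPLAY variable: the client, not the server, decides where to connect. A Python sketch of how a DISPLAY string maps to a TCP endpoint, using the well-known convention that display N listens on port 6000 + N (this ignores unix sockets and transport prefixes, which a real Xlib also handles):

```python
# Parse DISPLAY strings like "remotehost:0", ":0.1", or "localhost:10.0"
# (the last is typical of ssh X forwarding). Display number N maps to
# TCP port 6000 + N; an empty host conventionally means a local socket.

def parse_display(display):
    host, _, rest = display.partition(":")
    number = rest.split(".", 1)[0]   # drop the screen suffix, if any
    return host or None, 6000 + int(number or 0)

print(parse_display("remotehost:0"))   # ('remotehost', 6000)
print(parse_display(":10.0"))          # (None, 6010)
```

An old Athena app and a modern toolkit app both connect this same way; the difference in usability over a slow link comes from how many round trips they make afterwards.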
Re:Does this... (Score:3, Interesting)
Didn't Microsoft do this in NT4, and wasn't it a very bad move for security and stability?
Article misunderstands concept? (Score:5, Interesting)
The article describes this as a "new X server". However it quotes the author of said program pretty much implying this is some kind of a new, non-X video interface. He talks about "porting" GTK+ from X, and about writing native applications for it and a "new, rootless X server" in order to be able to run X apps. All things that would not be necessary if this were an X server.
In other words, this is not an X server.
Re:Does this... (Score:5, Interesting)
Microsoft moved the ENTIRE graphical subsystem to the kernel. Which made things faster, but did make them less stable and less secure. (Sun also had an option to take this route in Solaris.) This would be like taking the entire X server and cramming it down into the kernel.
I'm not suggesting anything quite so extreme. Rather, I'm talking about leaving device control in the hands of the device manager (i.e. the kernel or the HAL) and having the X server access the device through a standard driver interface. Much like audio, mouse, keyboard, networking, and storage are all handled by the kernel.
FWIW, Microsoft left the graphics in the kernel. They did add some extra checks to stabilize it, but we're all living with those kernel graphics today.
Re:Does this... (Score:3, Interesting)
Doesn't Vista finally move the graphics into userland?
And, the funny thing is, Microsoft's been screwing this up for years now, starting with OS/2 1.1. It was IBM's turn to work with OS/2 1.3, and they quickly moved it out of the kernel. But, Windows NT was a fork of OS/2 1.2, not 1.3.
Canonical (Score:5, Interesting)
Shuttleworth said he is going to pay devs to work on major upstream projects. He should focus on this. For one, it would affect both KDE and Gnome users, and it would solve a major problem with Linux. If he really wants Linux to compete with OS X in terms of interface, he should focus on the X Server first.
That being said, I hope Novell chips in some dev support, and that the KDE, Gnome, QT and GTK+ devs all chime in on what they'd like to see changed.
Re:What's wrong with X... (Score:3, Interesting)
Hell, I run MythTV over the Internet. By that I mean I run mythfrontend on the server via X11 to my non-Linux work computer.
Works pretty well.
Re:Thank you! (Score:1, Interesting)
X is an application. *And* a server. On OS X as well. And under *nix, and even under Windows (when you add an X server to it.)
Woosh!
X is an application on OS X (and yes, it's a server, too), but it's not the graphics server. It uses Apple's graphics server to do its magic, like any other application.
That's what the original poster was referring to -- it's time to move beyond X11 as the graphics server, and just let it be like any other application that can push bits to the screen.
Re:Does this... (Score:4, Interesting)
Not that I'm aware of. Even the server 2008 kernel (which allows you to boot into a console) has the graphics in the kernel.
Re:HELL yes. (Score:5, Interesting)
Fun fact: Every single bit of development put into X.org since the big fork has been undoing the mistakes committed during the XFree86 years. Making X modular, reworking font handling, introducing EXA, crafting AIGLX, even kernel mode-setting, all of these are undoing bad things from the past.
KRH, who's been writing Wayland, is also responsible for parts of GEM, RGBA OpenGL visuals, and other GLX improvements. Neither he nor any of us is planning to just abandon code that's still viable. Tender love and care goes a long way with bit-rotted code.
Re:Thank you! (Score:4, Interesting)
It was [y-windows.org] done long ago.
Catch up to OS X circa 2001. (Score:3, Interesting)
Re:What's wrong with X... (Score:3, Interesting)
this has stumped me for years (Score:5, Interesting)
We went through the same thing when switching to X.org from XFree86. When will nVidia support it? When will ATi support it? When will my driver be ported?
Why is X dealing directly with the drivers anyway? Why isn't there a thin graphics layer in Linux, like a framebuffer that supports acceleration? Write X to that. Then you can switch your X or use whatever GUI you want and your hardware still works. Freedom to choose, right? The mantra of Open Source?
I remember a bunch of very promising GUIs coming up in the early 2000s that really struggled without enough drivers. "The source is open, just port the thousands of drivers!" yeah sure.
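The "thin graphics layer" idea amounts to a dumb linear framebuffer that any windowing system could draw into. A minimal sketch of what that means in practice; a real client would mmap() /dev/fb0 and query the line stride via an ioctl, but here a bytearray stands in for the mapped memory and the geometry values are assumed rather than queried:

```python
# Sketch of writing to a dumb linear framebuffer: a pixel's byte offset
# is just y * stride + x * bytes-per-pixel. Geometry is assumed here;
# a real client would get it from the device.

WIDTH, HEIGHT, BPP = 640, 480, 4   # 32-bit XRGB8888 pixels (assumption)
STRIDE = WIDTH * BPP               # bytes per scanline on this layout
fb = bytearray(STRIDE * HEIGHT)    # stand-in for the mmap'd framebuffer

def put_pixel(x, y, xrgb):
    """Write one pixel at its linear offset, little-endian."""
    off = y * STRIDE + x * BPP
    fb[off:off + BPP] = xrgb.to_bytes(BPP, "little")

put_pixel(10, 20, 0x00FF0000)      # one red pixel
```

Any GUI written against an interface this simple would keep working when you swapped the layer above it, which is exactly the portability argument being made.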
Re:Thank you! (Score:4, Interesting)
IIRC, Y was not based on a composited architecture, so by today's standards it sucks. It might have made a nice successor to X back in the BeOS days, though.
Re:Thank you! (Score:3, Interesting)
No, what should have been done a long time ago is to scrap X and use Sun's NeWS. They demo'd it in the 80s and it fixes a lot of problems in any X(/Y).
Oh well.
Great point! (Score:3, Interesting)
That's actually a great point.
It's particularly annoying if you have some intermittent problems with, say, the mouse disappearing and the only way to recover is to restart X. Being able to restart X without killing all the clients would change such a problem from "completely ruins my entire user experience" to "mildly annoying".
NeWS was good and bad (Score:5, Interesting)
When it was good, it was very very good, but when it was bad, well, it was a windowing system written in PostScript that let you pass pieces of PostScript code back and forth between client and server to get things done, which could be appallingly insecure and buggy. (The fix for this was that Gosling later wrote Java with things he'd learned from NeWS.) (PostScript is essentially FORTH souped up with font knowledge, but it's good enough to handle objects in.)
PostScript means true WYSIWYG, rendered however you'd like. The terminal emulator, for instance, used PostScript rendered at screen resolution, and if you needed to print it, it rendered at printer resolution; if you iconized a terminal window, that just set the font size to 1 point / 1 pixel, and you could still see any interactions happening in the icon. My boss was around 60, and constantly switching pairs of glasses if he needed to talk to somebody and also read his computer screen. We set his psterm default to 24-point font, and everything was Just Bigger, and he could read it without messing around. Mouse tracking worked well, because you could make the tracking happen down in the server without the extra round-trip to the client, so even on a slow network connection it was OK - you were passing data across the link, not pictures of the mouse, etc.
get bent (Score:5, Interesting)
a single distro gaining popularity will be instrumental for standardizing what is expected of Linux for introduction into a larger market
The flaws in your biased & self-interested statement are manifest. Manifest and hilarious.
First off, I don't see what advantage Linux gains from a larger market. Will these corporate interests invest time and code into Linux? Will they provide free support to end users? Will the people joining your standardized Linux gain anything from the homogenized OS they've switched to?
Second, how will standardization improve Linux's marketability? To what extent do we enforce homogenization? Do we enforce a single WM on all users? Do we enforce a single office suite? A single programming language?
Third, how do you plan to tell everyone they must work on the same thing? Do you think everyone will willingly conform to the standard patterns you wish to impose and stop working on the things they think are cool?
Linux's only strength is that it grants developers an open environment to develop novel things. All I see in your desire is a self-interested bid to crush the free-spirited developer culture and to replace it with something tooled to replace commercial operating systems, for our own good. Honestly, I don't think you or your desires contribute anything useful to the Linux community; in fact, I think the desire to make Linux conform to the expectations of the "typical" desktop has been the worst mistake of the Linux movement.
Re:Does this... (Score:3, Interesting)
There already is one API to program against if you're a hardware developer who wishes to support Linux. It's called Linux, and it's used by all Linux distributions.
If you are a software developer, there are a number of APIs you can choose from that will work across all reasonable Linux distributions.
Really, doing cross-distro work isn't that hard.
What is hard is making your software work on any Linux distro at all if you're going about it the wrong way. Linux isn't about ABI stability, and, with the plethora of different library versions out there that change during upgrades, neither is the userland. The only way to make your software work across different versions and/or different distros is to allow people to compile the code themselves, at least the part of the code that interacts with the operating system. But once you've done that, all the work of compiling and integrating with the plethora of different distros out there can be done by others, and will be if they value your software enough. You don't have to worry about that.
Re:Does this... (Score:4, Interesting)
Re:Network Transparency? (Score:3, Interesting)
Boy are you wrong, NoMachine _is_ X! They just use very clever compression schemes to make it usable over slow connections. http://en.wikipedia.org/wiki/NX_technology [wikipedia.org]
VNC, Remote Desktop, Citrix etc are just kludgey ways to get X-like remoteness for systems that were never meant for it and it's noticeable because they come with severe limitations.
And as for Gnome and KDE being "monstrosities," I don't know where you got that idea; their respective developers seem pretty pleased with them. And Qt, which is one of KDE's pillars, is widely seen as one of the most powerful and easiest-to-use GUI toolkits. Ah, yes, it's cross-platform as well, which seems to defeat your "broken foundation" statement.
Re:Notes for the Uninformed (Score:3, Interesting)
I think he means the administrative overhead of X11 protocol management is not the bottleneck, so it may as well be written in Python. The mathematically intensive and low-level parts, which are only part of the overall code base, can remain C. I don't necessarily agree, as this would increase latency for protocol handling, which really adds up.
This is a common pattern in modern software development - very simple, mechanical C code wrapped by high level, elegant Python (or your scripting language of choice). You get 99% of the performance of pure C with 1/100th the development time. I've done the same in high-performance scientific computing projects with great success.
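That split can be illustrated with Python's ctypes, using libc's strlen() as a stand-in for the mechanical C core (this assumes a standard C library is available to load; the wrapper names are made up):

```python
# "Thin C core, friendly Python wrapper" pattern via ctypes: libc's
# strlen() plays the role of the low-level C code, and a small Python
# function hides the pointer and encoding details from callers.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def c_strlen(text):
    """Pythonic wrapper: takes a str, handles the encoding, returns an int."""
    return int(libc.strlen(text.encode("utf-8")))

print(c_strlen("hello"))  # 5
```

Callers never see ctypes at all; they get a plain Python function, while the hot path stays in C.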
Latency (Score:3, Interesting)
Open an explorer window in Windows. Resize it. Notice the flicker and rendering artifacts. Open a Nautilus window in GNOME. Resize it. Notice the horrible flicker and rendering artifacts. This is without compositing. With it, you get other artifacts.
It doesn't matter what program or what machine you are using. You can compare the same thing using Firefox on Windows and Linux. A much slower Windows machine produces redraws with far fewer artifacts than a high-end Linux box. Since Windows does it better, there must be something wrong with X.
Re:X11 - The X Windowing System (Score:3, Interesting)
Ah, insult the people. A sure sign of a strong technical argument.
Correlation is not causation. How's that go?
xlsatoms | wc -c gives 14746. That's a whole 14 kilobytes permanently taken up by atoms.
It's an information leak.
Er, it's a graphics system.
Used to create a user interface. User interfaces should have audio, and support video capture, etc.
It has the overhead of packing/unpacking data into structs. Huh? Doesn't any API have this problem?
No, not the ones that aren't network protocols. Most APIs pass their data as parameters, in registers, or as pointers.
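For reference, every X11 request really does start with a packed binary header: one byte of major opcode, one request-specific byte, and a two-byte length counted in 4-byte units (the header itself counts as one unit). A sketch with Python's struct module; byte order is actually negotiated per connection, so the little-endian format here is an assumption, and opcode 98 is arbitrary:

```python
# Sketch of the packing overhead being discussed: both sides of an
# X11 connection pack and unpack a 4-byte header on every request.
import struct

def pack_request(opcode, detail, body):
    assert len(body) % 4 == 0                # X11 pads bodies to 4 bytes
    length = 1 + len(body) // 4              # header itself counts as 1 unit
    return struct.pack("<BBH", opcode, detail, length) + body

def unpack_header(data):
    return struct.unpack("<BBH", data[:4])   # (opcode, detail, length)

req = pack_request(98, 0, b"\x00" * 8)       # hypothetical request
print(unpack_header(req))                    # (98, 0, 3)
```

An in-process API call passes the same information in registers or as pointers, with no marshalling step on either side; that is the overhead being contrasted here.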
ICCCM is complex and not that great. On the other hand, with years of hindsight, large sections (some quite interesting) have been dropped completely.
ICCCM only exists because X11 has no thought to actual applications interacting with each other.
Lots of unused (even at the time) primitives like jaggy lines and circles designed for 1-bit displays.
I don't consider myself old, but I've used X11 running on 1-bit displays. They were cheap, and so some universities had them in significant quantities. I'm pretty sure it's not possible to draw a smooth line or circle on a 1-bit display, but if you know how, feel free to revolutionise display technology.
The problem is they are still jaggy on N-bit displays. That's why nobody uses them.
Your list is, so far, rather uninformed. Have you ever programmed with Xlib or examined the X protocol? Or are you just regurgitating one of the more peculiar Slashdot memes?
Says you. I've programmed in Xlib directly, including talking the protocol in binary for one project (where Xlib was too large to use). I've used Xt, Motif, and then also the newer toolkits of course. I've also hacked on the server source a little bit (not open source).
Bad name (Score:3, Interesting)
198?-1983: V
1983-1984: W Window System
1984-1986: X1 - X10
1987-2008: X11
2008: Wayland?
Sorry, but I won't use it unless it is called X12 or Y.