
Gnome 3.12 Delayed To Sync With Wayland Release

Posted by Unknown Lamer
from the just-in-time-for-the-x11-joke dept.
sfcrazy writes "Gnome developers are planning to delay the release of Gnome 3.12 by approximately a week. It's a deliberate delay to sync the release with the availability of Wayland 1.5. Matthias Clasen (Fedora and Gnome developer) explains that 'the GNOME release team is pondering moving the date for 3.12.0 out by approximately a week, to align the schedule with the Wayland release plans (a 1.4.91 release including all the xdg-shell API we need is planned for April 1). The latter 3.11.x milestones would be shifted as well, to avoid lengthening the freeze period unnecessarily.'"


  • by Dave Whiteside (2055370) on Wednesday February 05, 2014 @11:09AM (#46162911)

    X11's low level is such a huge mess of everything from text to pixels to anything higher.
    Wayland is a much better step up to modern display tech.
    [basically]

  • by buchner.johannes (1139593) on Wednesday February 05, 2014 @11:13AM (#46162951) Homepage Journal

    This talk is insightful: https://www.youtube.com/watch?... [youtube.com]

  • by Dcnjoe60 (682885) on Wednesday February 05, 2014 @11:20AM (#46163023)

    Say, forever? MATE with Xorg is much more suitable than either Gnome or Wayland.

    Ummm, even MATE is planning on switching to Wayland, so evidently the developers of MATE would disagree with you.

  • by Anonymous Coward on Wednesday February 05, 2014 @11:33AM (#46163143)

    It's busy work for a team that believes X11 is antiquated and needs replacement. Wayland fans will argue that X11 is a huge mess that tries to do too much, while in reality it has been doing its job for many decades.

    In other words, we are throwing the baby out with the bathwater because some people suffer from NIH syndrome and gladly trade away stability for something shiny, in hopes of encouraging more gaming on Linux despite the questionable performance improvements.

  • How X/Wayland work (Score:5, Informative)

    by Anonymous Coward on Wednesday February 05, 2014 @12:01PM (#46163403)

    X is an application that runs on a computer with a graphics card. A graphical application can then use the X libraries to send drawing commands over the network to an X server, eg "draw a line", "draw a box", "display this bitmap", "display this string in font zzz". Note that the concept of "client" and "server" are somewhat reversed from the normal meaning - the X "server" runs on your desktop, the client can run somewhere in a datacenter. Think about apps processing major datasets and then generating some output...makes sense then for the "client" to be on the larger computer.
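The client/server inversion described above can be sketched as a toy. This is NOT the real X wire protocol, and every name in it is hypothetical; the point is only that the "server" owns the display and executes commands, while the "client" (the app, maybe in a datacenter) just emits them:

```python
# Toy sketch only -- NOT the real X wire protocol. The "server" owns
# the display and rasterizes; the "client" only sends drawing commands.
import socket
import threading

def display_server(conn, canvas):
    # Runs next to the user's screen, like an X server: it receives
    # textual commands and records them into the canvas it owns.
    for line in conn.makefile("r"):
        cmd, *args = line.split()
        if cmd == "DRAW_LINE":        # e.g. "DRAW_LINE 0 0 9 9"
            canvas.append(("line", tuple(map(int, args))))
        elif cmd == "DRAW_STRING":
            canvas.append(("text", " ".join(args)))

server_end, client_end = socket.socketpair()
canvas = []
t = threading.Thread(target=display_server, args=(server_end, canvas))
t.start()

# The "client" application never touches the framebuffer; it only
# emits drawing commands, exactly as an X client would.
client_end.sendall(b"DRAW_LINE 0 0 9 9\nDRAW_STRING hello\n")
client_end.close()
t.join()
print(canvas)
```

In real X the same stream simply travels over a TCP or unix socket to wherever the display actually is.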

    The X "server" also controls keyboard/mouse/etc, sending events to the relevant client apps.

    The problem with X is that the whole design no longer matches what client apps want to do - eg interact with 3d-capable GPUs, use exactly the fonts they want (rather than asking the X server to use the font with a specific name, and hoping the server has that font available). And the network layer inbetween adds latency. And the set of commands that X supports is now so large that the server is huge - making it buggy, full of security holes, and difficult to maintain.

    Wayland is basically the lowest-level parts of X (handling the graphics card), plus a very simple API for clients - it accepts bitmaps only, no "draw a line" stuff. And no network support - clients are local only. Client apps can then code directly against the Wayland APIs (ie pass it bitmaps, often generated by interacting directly with a GPU to render 3d graphics into a buffer). Fast, simple. Or clients can code against the original X API, in which case the drawing commands are sent across the network as they always were, and then are handled by a slimmed-down X-server which executes the commands and passes the resulting buffer to the local wayland server.
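The bitmap-only model can be sketched in a few lines. This is a toy, with Python's shared memory standing in for a real wl_shm buffer; no Wayland API is being shown here:

```python
# Toy sketch of the Wayland model: the client renders a finished
# bitmap into shared memory and only tells the compositor where it
# is; no drawing commands ever cross the boundary.
from multiprocessing import shared_memory

WIDTH, HEIGHT = 4, 4  # a tiny 4x4 greyscale "window"

# Client side: all rendering happens in the client's own code.
buf = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT)
for i in range(WIDTH * HEIGHT):
    buf.buf[i] = i * 16           # "render" a simple gradient

# Compositor side: attach to the same buffer by name and read the
# finished pixels -- it never executes drawing commands.
view = shared_memory.SharedMemory(name=buf.name)
frame = bytes(view.buf[:WIDTH * HEIGHT])

view.close()
buf.close()
buf.unlink()
print(frame)
```

The compositor's whole job reduces to "take this finished buffer and put it on screen", which is why the API can stay so small.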

    In practice of course, most apps will code to the GTK or Qt APIs, and it is GTK/Qt which is responsible for interacting with Wayland or X.

    There is also code in development to create a "wayland network protocol" where clients can generate images (on whatever computer they are running on - which might have a GPU), and then send the (compressed) image over the network to another wayland server where the user actually sits and sees the graphics. This is a kind of "RDP remote desktop" mode - and according to many people will actually out-perform the old X way of doing things, as well as being vastly simpler to implement/maintain.
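A back-of-the-envelope sketch of why compressed-pixel remoting can work well: typical desktop content is highly compressible, so the bytes that actually cross the wire are a tiny fraction of the raw framebuffer. Here zlib stands in for the real codecs an RDP-like protocol would use:

```python
# Toy bandwidth estimate: a mostly-blank 8-bit "screen" with one
# small drawn region, compressed before being sent over the network.
import zlib

WIDTH, HEIGHT = 640, 480
frame = bytearray(WIDTH * HEIGHT)      # mostly-blank framebuffer
frame[100:150] = b"\xff" * 50          # one small drawn region

wire = zlib.compress(bytes(frame))     # what actually crosses the network
print(len(frame), len(wire))           # raw size vs bytes on the wire
```

On content like this the compressed payload is well under 1% of the raw frame, which is the intuition behind the "will out-perform old X remoting" claim.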

  • by Anonymous Coward on Wednesday February 05, 2014 @12:32PM (#46163733)

    I just want to point out that X11, as in Xorg, is now stable and can mostly run without hand-written configuration thanks to the people behind Wayland. People forget the past easily...

  • by Billly Gates (198444) on Wednesday February 05, 2014 @12:54PM (#46163941) Journal

    Let me change a few words around for entertainment purposes :-)

    PHB: "I'm in complete agreement with you. What they're doing is throwing away everything that used to work with activeX just to have something they can say they developed in a lot of cases. They're also making a lot of things W3C only, and throwing out compatibility with IE 5 quirks mode and IE 6 browsers."

    Sounds ludicrous, but my point is that X is an equally bad technology, dated and a thorn in the Unix ecosystem. People fear change sometimes, and I can tell you the same Unix nerds screamed when Sun got rid of inetd for their event-driven system, which is more modern and appropriate for laptops and modern systems where conditions change.

    Did you use Linux 13 years ago? I did, and MAN, X SUCKED back then, and it showed more easily. You do not realize it now because you have very fast CPUs with gobs of RAM. But I remember X taking up 75% of the RAM before I could run any apps.

    X is a dumb-terminal technology made for the green screens of the Carter administration, where you had a VAX the size of a refrigerator and everyone had dumb terminals (or smart ones) with long serial cables to the computer room.

    It was not designed for multimedia, OpenGL, low latency, touch screens, low power phones or tablets, or even running a desktop program.

    That's right: your code has to run as a server and another copy of itself as a client. Why?? Gnome hides some of this; the OpenGL workarounds go to the Linux kernel directly with DRM (where does that leave Solaris and FreeBSD users?) to get around that horrible hack of X.

    The Unix-Haters Handbook has an entertaining section on X. The protocol, technology, and API are beyond horrible.

    I think Linux lost on the desktop because of X! We would not have spent 15 awful years recreating GUIs if X had worked.

  • by Anonymous Coward on Wednesday February 05, 2014 @04:35PM (#46166381)

    Previous AC poster here..

    When you say "X has never been used this way", I presume you mean that nowadays most desktop users only run apps locally on the desktop, ie the client/server are on the same machine. This is true - now. I'm old enough to remember the "thin client" wave, where the latest coolest thing for businesses was to have a low-powered desktop system that was just screen/keyboard/operating-system/X11, and all the apps were run on servers. The networking ability of X made this possible. And even now, sysadmins often appreciate the ability to run some admin-type apps remotely.

    And one of the common complaints about Wayland is that it "lacks network transparency" - ie people are claiming that they still want/need the ability to run client and server on separate hosts. Just see comments elsewhere on this article..

    But modern apps want to do things that X wasn't originally designed to do, so X has lots of "extensions", eg the heavily-used DRI which allows apps to do their own rendering (eg 3d rendering, or rendering text themselves) and then pass the data as a bitmap to X - almost exactly like Wayland does. Because of the historical structure of X, the way such data is transferred between client and server is inferior to Wayland in many ways (esp security). And things like syncing rendering with the screen refresh (to avoid tearing) are difficult/impossible. And an X server still carries the code for a large number of APIs that modern apps don't use (but attackers can call).

    When running X client and server on the same host, some things are optimised, eg using "unix" sockets rather than real network sockets, and passing "handles" to memory in some cases. But there is still significant overhead imposed by this original client->network->server separation that *many* people never need. Wayland turns this around - it assumes client/server are on the same host, and remoting can be done by having some "proxy client" handle network traffic and then act as a normal local client to the wayland server.
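As a concrete illustration of that local-vs-remote split, here is a simplified sketch of how an X client chooses its transport from $DISPLAY (real Xlib handles more cases, such as protocol prefixes): a bare ":0" selects the local unix-domain socket /tmp/.X11-unix/X0, while "host:n" means TCP port 6000+n on that host:

```python
# Simplified $DISPLAY parsing: local displays use a unix socket,
# remote ones use TCP port 6000 + display number.
def x_transport(display):
    host, _, rest = display.partition(":")
    display_num = int(rest.split(".")[0])   # drop the screen number
    if host in ("", "unix"):
        return ("unix", f"/tmp/.X11-unix/X{display_num}")
    return ("tcp", host, 6000 + display_num)

print(x_transport(":0"))          # local: unix socket, no TCP at all
print(x_transport("remote:1.0"))  # remote: TCP on port 6001
```

Even in the local unix-socket case, every request still has to be serialised, copied through the socket, and parsed by the server - the overhead the comment is referring to.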

    And interestingly, apps that use DRI then lose "network transparency" (the ability to run client and server on separate machines). So AFAICT, the people complaining about "wayland not having network transparency" are being very unfair - X often doesn't either; only "simple" apps still work remotely. On the other hand, as my original comment noted, wayland has *two* ways of supporting remoting : by layering X on top, or by building an RDP-like system on top.

  • by raxx7 (205260) on Wednesday February 05, 2014 @07:16PM (#46168381) Homepage

    You're 90% right, but the devil is in the details.
    The X protocol allows applications to send drawing commands like "draw a line here, circle there, text with this font over there". You can also store pixmaps to the server and then reference them.
    But these drawing commands can't draw anti-aliased shapes, so in the late 1990s X applications were either pushing lots of pixmaps or pushing so many tiny drawing commands that it was worse than pushing pixmaps.

    Then came XRender. XRender is based on pixmaps/glyphs, but also provides masking/blending operations on them.
    This allows for better re-use of server stored pixmaps, which allowed for anti-aliased applications with less network traffic.
    All in all, it's pretty slick.
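The masking/blending operations described above are essentially Porter-Duff compositing. A toy per-channel version of the OVER operator, using premultiplied 0..255 integer channels (a simplification of what XRender actually computes):

```python
# Toy per-channel Porter-Duff OVER, the operator XRender's blending
# is built around, with premultiplied 0..255 integer channels.
def over(src, dst, src_alpha):
    # result = src + dst * (1 - alpha), all premultiplied
    return src + dst * (255 - src_alpha) // 255

# A 50%-opaque red glyph pixel composited over a white background:
print(over(128, 255, 128))   # red channel saturates: 128 + 127 = 255
print(over(0, 255, 128))     # green/blue channels: 0 + 127 = 127
```

Because the server does this blending on stored pixmaps/glyphs, the client only has to upload each glyph once and then reference it, which is where the network savings come from.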

    But history is repeating itself, and application developers are again going back to pushing lots of pixmaps. Qt developers concluded that, for local clients, their client-side renderer was *much* faster than the XRender-based one, and at some point made it the default for Qt4. For Qt5, they didn't bother with an XRender-based one.

    To top it off, whether it's XRender or brute-force pixmaps, modern X applications send so many commands that they need a lot of bandwidth. Also, most X applications were never written to tolerate high-latency connections, even though the protocol is asynchronous.
    So, remote X tends to work poorly over the Internet, leading a lot of us to use tools like VNC, NX or Xpra.

    The Xpra server runs as a specialized X server and X compositor on the remote system, where the X application is to be run. It then takes the contents of the X application's window, scans for changed parts, compresses them, and sends them over to the Xpra client, which then draws the application window on the local system.
    Since the X application is talking to a local X server, there's no latency there. And the diffing/compressing ends up requiring less bandwidth than sending the raw X commands.
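The scan-and-compress step can be sketched as a toy damage tracker. This is a simplification (real Xpra works on 2D regions and uses picture/video codecs, not fixed byte runs and zlib):

```python
# Toy sketch of the Xpra idea: keep the previous frame, find the
# fixed-size runs ("tiles") that changed, and compress only those
# instead of resending the whole frame.
import zlib

TILE = 8

def changed_tiles(prev, cur):
    """Yield (offset, tile_bytes) for each TILE-byte run that differs."""
    for off in range(0, len(cur), TILE):
        if cur[off:off + TILE] != prev[off:off + TILE]:
            yield off, cur[off:off + TILE]

prev = bytes(64)                     # last frame: all black
cur = bytearray(prev)
cur[10:14] = b"\xff" * 4             # a cursor blinked in one spot
updates = list(changed_tiles(prev, bytes(cur)))
payload = zlib.compress(b"".join(t for _, t in updates))
print(updates)                       # one dirty tile, not the whole frame
```

Only `payload` plus the tile offsets would cross the network, which is why this beats replaying every X drawing command remotely.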

    So, since history has twice shown that supporting drawing commands is a fool's errand, Wayland only supports pushing pixmaps. And only through shared memory: a Wayland compositor and a Wayland application must always be on the same machine.

    But there isn't anything stopping anyone from implementing a Wayland compositor that does what the Xpra server does. So, that's pretty much plan "A" for running Wayland applications remotely.

