
Intel Rejects Supporting Ubuntu's XMir

An anonymous reader writes "Just days after Intel added XMir support to their Linux graphics driver so it would work with the in-development X11 compatibility layer for the Mir display server premiering with Ubuntu 13.10, Intel management has rejected the move and had the XMir patch reverted. Mir has been controversial: it competes with Wayland, the display server is still rather immature, and its performance comes up short, yet it will still debut in Ubuntu 13.10. Intel management had this to say: "We do not condone or support Canonical in the course of action they have chosen, and will not carry XMir patches upstream." As a result, Canonical will need to ship its own packaged versions of the Intel (and AMD and Nouveau) drivers with out-of-tree patches."
  • Re:Layering? (Score:2, Informative)

    by Billly Gates ( 198444 ) on Sunday September 08, 2013 @07:36AM (#44788881) Journal

    I thought they were switching to Wayland anyway.

    X was really hated here on Slashdot in the early days, 12 years ago! I guess modern hardware hides its issues with bloat and the client/server relationship. It was made for dumb terminals and it shows. Low latency for things like GLX/OpenGL has had issues, and it took many hacks just to get it working at a mediocre level.

  • by jbolden ( 176878 ) on Sunday September 08, 2013 @07:43AM (#44788909) Homepage

    When will Linux finally use standard ABIs and APIs for drivers just like every other OS on the planet?

    Never. The moves to support binary compatibility on Linux have been rejected time and time again by the Linux community. And that is far from the case for every other OS on the planet. Many OSes don't support arbitrary drivers at all.

    I guess RMS thinks that is oppressive and wants open-source hardware even though patent holders from the likes of the H.264 consortium forbid it!

    RMS has little to do with this policy. Even Linus mostly supports it. The people who don't support it are mostly Windows users.

    Why can't you just use one driver written a few years ago and use it universally across all distros due to this?

    You can. You can use drivers from almost two decades ago whose source was merged into the kernel. You can't generally with binary drivers, because Linux doesn't offer binary compatibility.

  • by Anonymous Coward on Sunday September 08, 2013 @08:47AM (#44789155)

    You can. You can use drivers from almost two decades ago whose source was merged into the kernel. You can't generally with binary drivers, because Linux doesn't offer binary compatibility.

    You really, really can't. (At least not in general.) Structures keep changing the names of members, and removing members. For example: recently, user IDs changed from being plain old integers to being potentially a struct that you have to go through accessor functions to use. Every time a new kernel comes out, our drivers invariably break and need additional code added to check for and cope with the new kernel. (No, we can't just stop supporting old versions of the kernel. Big companies are out there demanding support for Red Hat 5 and some even earlier. The 2.6 kernel tree is still very much alive. And of course, yet others leap onto the new kernel as soon as it's downloadable.)

    So yeah - maintaining a kernel module is currently a pain in the ass and backwards compatibility with older drivers would be a big win. Binary compatibility would be preferable; but source compatibility would be a good start.
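
    To make that concrete: the user-ID change described above is the kuid_t conversion that landed around Linux 3.5, where plain uid_t values became an opaque wrapper type read through accessors. Below is a minimal sketch of the kind of version guard that forces on an out-of-tree module. The kernel symbols (current_uid(), from_kuid(), init_user_ns, KERNEL_VERSION) are real; the module itself is a made-up, trivial example, not any actual driver.

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/version.h>
    #include <linux/cred.h>
    #include <linux/user_namespace.h>

    static int __init uid_demo_init(void)
    {
        uid_t uid;
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 5, 0)
        /* Newer kernels: current_uid() returns the opaque kuid_t wrapper,
         * so convert it back to a plain integer before using it. */
        uid = from_kuid(&init_user_ns, current_uid());
    #else
        /* Older kernels: current_uid() is already a plain uid_t. */
        uid = current_uid();
    #endif
        printk(KERN_INFO "uid_demo: loaded by uid %u\n", uid);
        return 0;
    }

    static void __exit uid_demo_exit(void)
    {
        printk(KERN_INFO "uid_demo: unloaded\n");
    }

    module_init(uid_demo_init);
    module_exit(uid_demo_exit);
    MODULE_LICENSE("GPL");

    Multiply that pattern across every structure a real driver touches and every kernel a customer still runs, and you get the maintenance burden complained about above.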

  • by Balinares ( 316703 ) on Sunday September 08, 2013 @09:01AM (#44789207)

    I think Mir is a case study in how to correctly identify problems and then go about solving them all wrong.

    See, the good thing about Wayland is, it does the right thing in having a limited scope. It aims to do one thing and do it well: provide an API for GUI clients to share buffers with a compositor.

    And the problem with Wayland is, of course, that... it has a limited scope. Screen management? Input handling? Buffer allocation? "A modern desktop needs all that!" say the Ubuntu devs, and yeah, that's absolutely correct. "That's a client concern," say the Wayland devs, and guess what? From their point of view, that's correct too. (Although Wayland has since started working on an input-handling API.)

    Now, the important thing to realize is, when the Wayland guys say that something is a client concern, as I understand, they don't necessarily mean the GUI applications, no. They mean the compositor.

    Meaning that a whole lot of the stuff desktop shells rely on is, in fact, not provided by Wayland itself.

    That's where Weston comes in: it's supposed to be an example (a "reference implementation", to use the designated words) of how to write a compositor. But... not necessarily in a way that meets the higher level needs of desktop shells. Unsurprisingly, both KDE and GNOME will be using their own compositors.

    So basically, a whole lot of the desktop integration on top of Wayland will be, as it were, left as an exercise to the reader.

    With all that in mind, I think the best achievable end game is somewhat clear: frame-perfect rendering, through the Wayland API, of Mir-composited KDE/GNOME/Unity clients.

    Or in other words, Mir should probably be a set of APIs to handle all the admittedly important desktop integration -- clipboard, multi-screen layout, input and gestures, systray/notification requests... -- with an optional and replaceable compositor thrown in.

    All the points of contention that I know of, mainly that Canonical requires server-side buffer allocation (presumably for mobile ARM platforms) where Wayland does it client-side, could have been resolved with some diplomacy and a mutual willingness to reach a satisfactory compromise.

    But instead, it looks like the report card is just going to say, "Doesn't play well with others." As usual. What a sad mess and wasted opportunity.
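
    The "limited scope" and the buffer-allocation point of contention above can be made concrete with a few lines of client code: under Wayland the client allocates its own pixel storage and merely hands the compositor a handle to it. The sketch below assumes libwayland-client and POSIX shared memory, skips error handling, and stops before the surface/commit steps a real client would need; it is an illustration, not a complete program.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    static struct wl_shm *shm;

    /* The compositor advertises its interfaces; the client binds what it needs. */
    static void on_global(void *data, struct wl_registry *reg,
                          uint32_t name, const char *iface, uint32_t version)
    {
        if (strcmp(iface, "wl_shm") == 0)
            shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
    }

    static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name)
    {
    }

    static const struct wl_registry_listener reg_listener = {
        .global = on_global,
        .global_remove = on_global_remove,
    };

    int main(void)
    {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) {
            fprintf(stderr, "no Wayland display\n");
            return 1;
        }

        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &reg_listener, NULL);
        wl_display_roundtrip(display);      /* wait until the globals arrive */
        if (!shm) {
            fprintf(stderr, "compositor does not offer wl_shm\n");
            return 1;
        }

        /* Client-side allocation: the client owns the pixel storage... */
        const int width = 256, height = 256, stride = width * 4;
        const int size = stride * height;
        int fd = shm_open("/wl-demo", O_RDWR | O_CREAT | O_EXCL, 0600);
        shm_unlink("/wl-demo");             /* keep the fd, drop the name */
        ftruncate(fd, size);
        void *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        memset(pixels, 0xff, size);         /* fill with white */

        /* ...and the compositor only gets a handle to it as a wl_buffer. */
        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buffer = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);

        printf("client-allocated %dx%d buffer ready: %p\n", width, height, (void *)buffer);
        wl_display_disconnect(display);
        return 0;
    }

    Mir, by contrast, wants the server to hand out the buffers, which is exactly the kind of difference that ends up requiring driver-side changes.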

  • Re:Layering? (Score:4, Informative)

    by Lemming Mark ( 849014 ) on Sunday September 08, 2013 @02:36PM (#44791375) Homepage

    I can speculate a bit with things that sound plausible to me given my knowledge of the system - but I might still be a bit off target... Still, maybe it helps a little.

    Mir and Wayland both expect their clients to just render into a buffer, which clients might do with direct rendering, in which case the graphics hardware isn't really hidden from the client anyhow. AFAIK it's pretty normal practice that there's effectively in-application code (in the form of libraries that are linked in) that understands how to talk directly to the specific hardware (I think this already happens under Xorg). The protocol you use to talk to Wayland (and Mir, AFAIK) isn't really an abstraction over the hardware, just a way of providing buffers to be rendered (which might have just been filled by the hardware using direct rendering).

    In this case Xorg is a client of Mir, so it's a provider of buffers which it must render. The X11 client application might use direct rendering to draw its window, anyhow. But the Xserver might also want to access hardware operations directly to accelerate something it's drawing (I suppose)... So the X server needs some hardware-specific DDX, since Mir alone doesn't provide a mechanism to do all the things it wants.

    As for why the Intel driver then needs to be modified... I also understand that Mir has all graphics buffers allocated by the graphics server (i.e. Mir) itself. Presumably Xorg would normally do this allocation(?), in which case the Intel DDX would need modifying to do the right thing under Mir. The only other reason for modifying the DDX that springs to mind is that perhaps the responsibilities of a "Mir client" divide between Xorg and *its* client, so this could be necessary to incorporate support for the "Mir protocol" properly. That's just hand-waving on my part, though...

    Bonus feature - whilst trying to find out stuff, I found a scary diagram of the Linux graphics stack but my brain is not up to parsing it at this time of day:
    http://en.wikipedia.org/wiki/File:Linux_Graphics_Stack_2013.svg
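
    Since the above is speculation, here is an equally speculative sketch of the shape of the change: why a DDX written to allocate its own front buffer grows a second code path once the display server owns allocation. Every name below is hypothetical and made up for illustration; none of it is the real xf86-video-intel or Mir API.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical types and entry points, named for illustration only. */
    struct gpu_buffer { int width, height; };

    static struct gpu_buffer classic_buf, imported_buf;

    /* Bare-metal X: the driver allocates the scanout buffer itself. */
    static struct gpu_buffer *ddx_alloc_front_buffer(int w, int h)
    {
        classic_buf.width = w;
        classic_buf.height = h;
        return &classic_buf;
    }

    /* Under XMir the X server is itself a Mir client, so the buffer comes
     * from Mir and is merely imported (stubbed out here). */
    static struct gpu_buffer *mir_import_server_buffer(int w, int h)
    {
        imported_buf.width = w;
        imported_buf.height = h;
        return &imported_buf;
    }

    static struct gpu_buffer *acquire_front_buffer(bool running_under_mir, int w, int h)
    {
        /* Conceptually, a branch like this is what an XMir-aware driver adds. */
        return running_under_mir ? mir_import_server_buffer(w, h)
                                 : ddx_alloc_front_buffer(w, h);
    }

    int main(void)
    {
        struct gpu_buffer *buf = acquire_front_buffer(true, 1920, 1080);
        printf("front buffer: %dx%d (server-allocated path)\n", buf->width, buf->height);
        return 0;
    }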

  • by Desler ( 1608317 ) on Sunday September 08, 2013 @03:47PM (#44791823)

    Good luck finding contributors. Most FOSS contributors don't get C++ at all.

    Absolute bullshit. KDE, for example, is written in C++ and has had no hard time finding thousands of contributors. There are also tons of FOSS apps written in C++ with Qt. You sound like someone who has been in a cave from the mid '90s until now.

    The language isn't ready for such low level components yet.

    In what specific way exactly?
