
Intel Rejects Supporting Ubuntu's XMir

An anonymous reader writes "Just days after Intel added XMir support to their Linux graphics driver so it would work with the in-development X11 compatibility layer for the Mir display server premiering in Ubuntu 13.10, Intel management has rejected the action and had the XMir patch reverted. There's been controversy surrounding Mir: it competes with Wayland, the display server is still rather immature, and its performance comes up short, yet it will still debut in Ubuntu 13.10. Intel management had this to say: "We do not condone or support Canonical in the course of action they have chosen, and will not carry XMir patches upstream." As a result, Canonical will need to ship its own packaged versions of the Intel (as well as AMD and Nouveau) drivers with out-of-tree patches."
This discussion has been archived. No new comments can be posted.


  • Surprised? (Score:5, Insightful)

    by Teun ( 17872 ) on Sunday September 08, 2013 @06:22AM (#44788847)
    I can't say I'm terribly surprised.

    Though Intel may be open to an alternative to X11, they are in no way obliged to carry an immature release just because Canonical wants to push theirs.

  • by multi io ( 640409 ) <olaf.klischat@googlemail.com> on Sunday September 08, 2013 @06:23AM (#44788855)
    Why does the Intel Xorg graphics driver have to know anything about XMir, which, as far as I understand it, is just an Xorg driver for running Xorg as a Mir client?
    • Quoting the first link:

      The other big change is the merging of XMir handling in the xf86-video-intel driver. When using XMir for running X11/X.Org applications atop a Mir display server, modified DDX drivers are still required. These modifications are now present in the xf86-video-intel driver by default rather than Canonical carrying the work as out-of-tree patches.

      • Quoting the first link:

        When using XMir for running X11/X.Org applications atop a Mir display server, modified DDX drivers are still required.

        Well, that just restates/confirms the layering problem I mentioned, without explaining it.

        • Re:Layering? (Score:4, Interesting)

          by Lemming Mark ( 849014 ) on Sunday September 08, 2013 @07:06AM (#44789007) Homepage

          I'm honestly not super clear myself! But the DDX is, as I understand it, the in-Xorg portion of the graphics driver. So I guess it's not unreasonable that that component needs to know it's not got complete control of the hardware, as opposed to the Xorg-only case where it would have. Presumably it needs to proxy some operations through Mir (or Wayland, for XWayland) that it'd normally just set directly.

          A *bit* like running X under X using Xnest or Xephyr, though I'd imagine it's less extreme than that (since those, I'd guess, have to issue X-level drawing commands to their host X server, whereas to get graphics under Wayland/Mir they'd just render to a memory buffer like any Wayland/Mir client).

          All slightly speculative since I'm not familiar with the in-depth technical details!

          • I'm honestly not super clear myself! But the DDX is, as I understand it, the in-Xorg portion of the graphics driver. So I guess it's not unreasonable that that component needs to know it's not got complete control of the hardware, as opposed to the Xorg-only case where it would have. Presumably it needs to proxy some operations through Mir (or Wayland, for XWayland) that it'd normally just set directly.

            Well..why would the Intel driver even be used when Xorg runs "hosted" as a Mir client? In that configuration, XMir should be the "driver", and any Intel driver code in Xorg should lie dormant. Or did this patch actually touch something other than Intel's Xorg driver?

            • Re:Layering? (Score:4, Informative)

              by Lemming Mark ( 849014 ) on Sunday September 08, 2013 @01:36PM (#44791375) Homepage

              I can speculate a bit with things that sound plausible to me given my knowledge of the system - but I might still be a bit off target... Still, maybe it helps a little.

              Mir and Wayland both expect their clients to just render into a buffer, which clients might do with direct rendering, in which case the graphics hardware isn't really hidden from the client anyhow. AFAIK it's pretty normal practice that there's effectively in-application code (in the form of libraries that are linked to) that understands how to talk directly to the specific hardware (I think this already happens under Xorg). The protocol you use to talk to Wayland (and Mir, AFAIK) isn't really an abstraction over the hardware, just a way of providing buffers to be rendered (which might have just been filled by the hardware using direct rendering). A rough sketch of that buffer-handoff model is at the end of this comment.

              In this case Xorg is a client of Mir, so it's a provider of buffers which it must render. The X11 client application might use direct rendering to draw its window, anyhow. But the Xserver might also want to access hardware operations directly to accelerate something it's drawing (I suppose)... So the X server needs some hardware-specific DDX, since Mir alone doesn't provide a mechanism to do all the things it wants.

              As for why the Intel driver then needs to be modified... I also understand that Mir has all graphics buffers be allocated by the graphics server (i.e. Mir) itself. Presumably Xorg would normally do this allocation (?) In which case, the Intel DDX would need modifying to do the right thing under Mir. The only other reason for modifying the DDX that springs to mind is that perhaps the responsibilities of a "Mir Client" divide between Xorg and *its* client, so this could be necessary to incorporate support for the "Mir protocol" properly. That's just hand-waving on my part, though...

              Bonus feature - whilst trying to find out stuff, I found a scary diagram of the Linux graphics stack but my brain is not up to parsing it at this time of day:
              http://en.wikipedia.org/wiki/File:Linux_Graphics_Stack_2013.svg [wikipedia.org]
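
              To make the "client just provides buffers" handoff above concrete, here is a rough, hedged sketch of a plain Wayland shared-memory client in C. The file name is made up, error handling is stripped, and the shell/xdg-surface role needed to actually map a visible window is omitted; it only illustrates the client-side allocate/fill/attach/commit cycle, not Mir's variant of it.

              /* wl_buffer_demo.c -- hypothetical sketch, not from any real project.
               * Build (roughly): gcc wl_buffer_demo.c -o wl_buffer_demo -lwayland-client -lrt */
              #include <fcntl.h>
              #include <stdint.h>
              #include <stdio.h>
              #include <string.h>
              #include <sys/mman.h>
              #include <unistd.h>
              #include <wayland-client.h>

              static struct wl_compositor *compositor;
              static struct wl_shm *shm;

              /* The registry advertises the server's global objects; we bind the two we need. */
              static void on_global(void *data, struct wl_registry *reg, uint32_t name,
                                    const char *iface, uint32_t version)
              {
                  if (strcmp(iface, "wl_compositor") == 0)
                      compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 1);
                  else if (strcmp(iface, "wl_shm") == 0)
                      shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
              }

              static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name) { }

              static const struct wl_registry_listener reg_listener = {
                  .global = on_global,
                  .global_remove = on_global_remove,
              };

              int main(void)
              {
                  const int width = 256, height = 256, stride = width * 4;
                  const size_t size = (size_t)stride * height;

                  struct wl_display *display = wl_display_connect(NULL);
                  if (!display) {
                      fprintf(stderr, "no Wayland display\n");
                      return 1;
                  }

                  struct wl_registry *registry = wl_display_get_registry(display);
                  wl_registry_add_listener(registry, &reg_listener, NULL);
                  wl_display_roundtrip(display);            /* wait until the globals arrive */

                  /* Client-side allocation: the client creates the backing memory itself
                   * (a POSIX shm file here) and merely shares the fd with the server. */
                  int fd = shm_open("/wl_buffer_demo", O_RDWR | O_CREAT | O_EXCL, 0600);
                  shm_unlink("/wl_buffer_demo");
                  ftruncate(fd, (off_t)size);
                  uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                  for (size_t i = 0; i < size / 4; i++)
                      pixels[i] = 0xff336699;               /* fill with a solid colour */

                  struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, (int32_t)size);
                  struct wl_buffer *buffer = wl_shm_pool_create_buffer(
                      pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);

                  /* Hand the filled buffer to the compositor: attach it to a surface and commit. */
                  struct wl_surface *surface = wl_compositor_create_surface(compositor);
                  wl_surface_attach(surface, buffer, 0, 0);
                  wl_surface_commit(surface);
                  wl_display_flush(display);

                  sleep(1);                                 /* keep the connection around briefly */
                  wl_display_disconnect(display);
                  return 0;
              }

              As discussed elsewhere in this thread, the Mir model differs mainly in who allocates those buffers (the server rather than the client), which appears to be part of why Xorg's hardware-specific DDX needs patching to run on top of it.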

    • Re: (Score:2, Informative)

      I thought they were switching to Wayland anyway.

      X was really hated here on Slashdot in the early days, 12 years ago! I guess modern hardware hides the issues of its bloat and its client/server design. It was made for dumb terminals and it shows. Low-latency paths for things like GLX/OpenGL have had issues, and many hacks were needed just to get them working at a mediocre level.

      • Everyone except Canonical is switching to Wayland.
    • That's the point: Mir isn't even close to ready, so to shove it down users' throats, much like Unity, they just threw Xorg on top of it.
  • by Billly Gates ( 198444 ) on Sunday September 08, 2013 @06:32AM (#44788869) Journal

    When will Linux finally use standard ABIs and APIs for drivers just like every other OS on the planet?

    Why can't you just use one driver written a few years ago and use it universally across all distros due to this? The free BSDs have this, and you can install the extra compat libraries to accomplish it. I guess RMS thinks that is oppressive and wants open-source hardware, even though patent holders like the H.264 consortium forbid it!

    Before I get flamed, remember the article mentioned ATI and NVidia drivers as well, so Intel is not the asshole here. Rather, it's the different kernels and distros being redone, requiring new QA and recompiling with every release.

    There is a reason many old-time Linux users like myself only run CentOS in a VM now: Red Hat provides ABIs and APIs that do not change for 5 years. Unfortunately, it also means an out-of-date distro, which is not fair to non-server users (and even to the few server users who need a newer app or framework).

    • by jbolden ( 176878 ) on Sunday September 08, 2013 @06:43AM (#44788909) Homepage

      When will Linux finally use standard ABIs and APIs for drivers just like every other OS on the planet?

      Never. The moves to support binary compatibility on Linux have been rejected time and time again by the Linux community. And that is far from the case for every other OS on the planet. Many OSes don't support arbitrary drivers at all.

      I guess RMS thinks that is oppressive and wants open-source hardware, even though patent holders like the H.264 consortium forbid it!

      RMS has little to do with this policy. Even Linus mostly supports it. The people who don't support it are mostly Windows users.

      Why can't you just use one driver written a few years ago and use it universally across all distros due to this?

      You can. You can use drivers from almost two decades ago that were merged into the kernel sources. You can't generally with binary drivers because Linux doesn't offer binary compatibility.

      • Re: (Score:3, Insightful)

        Sorry, but the patent trolls who sue everybody will make you sign an NDA making your work closed source if you make hardware. So the days of having it in the kernel are over.

        Microkernels and exokernels are what academics say are superior and the wave of the future.

        Regardless, what OS doesn't use a stable ABI and API for driver development? I can't think of any modern OS. How about Mac users wanting a driver that works across versions? With the exception of the split between PowerPC and x86, that holds on that platform.

        • by dmbasso ( 1052166 ) on Sunday September 08, 2013 @08:07AM (#44789235)

          Sorry, but the patent trolls who sue everybody will make you sign an NDA making your work closed source if you make hardware. So the days of having it in the kernel are over.

          You realize you're commenting on a story about Intel, right? You know, the company that has Linux kernel developers writing open source drivers for their chipsets.

        • by jbolden ( 176878 )

          Sorry, but the patent trolls who sue everybody will make you sign an NDA making your work closed source if you make hardware.

          If you lose a patent suit and use someone else's patented work, yes.

          Regardless, what OS doesn't use a stable ABI and API for driver development? I can't think of any modern OS.

          zSeries OS (MVS), iSeries OS (OS/400), Cisco IOS, most embedded.... In general, most OSes that don't care about quick and easy hardware support.

          I, hairyfeet, and others who want things to just work and have given up p

        • by caseih ( 160668 )

          And microkernels continue to remain in the realm of academics and theory, not the real world. Even Windows went down the microkernel route for a while with early versions of Windows NT, but for performance reasons hacked and thunked things to the point that we're essentially back to a monolithic kernel now, with everything important running in-kernel, in ring-0. Graphics moved back to ring-0, network drivers, etc.

          Darwin, though based on a microkernel core, is a hybrid kernel with a large BSD

          • NT was never microkernel - the drivers always resided in the kernel, not userspace. Windows 8 is more microkernel than NT ever was.

            Monolithic runs better on the x86 platform, while microkernel would run better on RISC, VLIW or SMP platforms. The reason monolithic seems to have won is that x86 has won. Microkernels have a lot better shot in CPUs based on ARM, MIPS, POWER, et al

        • by Kjella ( 173770 ) on Sunday September 08, 2013 @12:28PM (#44790923) Homepage

          Microkernels and exokernels are what academics say are superior and the wave of the future.

          Academics have been saying that since it was MINIX vs Linux and reality won. This is also orthogonal to API/ABI, you can have userspace drivers without a stable API/ABI and you can have a stable API/ABI with in-kernel drivers.

        • by SEE ( 7681 )

          Microkernels . . . are what academics say are superior and the wave of the future.

          Yep. That's what they were saying 25 years ago, too. And if you want one, GNU HURD is ready and waiting.

      • You can. You can use drivers from almost two decades ago that were merged into the kernel sources. You can't generally with binary drivers because Linux doesn't offer binary compatibility.

        But most manufacturers don't WANT to provide sources to their drivers - they'd be quite happy to provide a binary interface, but that's difficult to do in Linux.

        You might argue, fuck them then, sources or bust. Well, Linux use on the desktop is so low anyway, what incentive would they have to comply when they can just stick with Win

        • by Rich0 ( 548339 ) on Sunday September 08, 2013 @07:53AM (#44789175) Homepage

          Since this policy is never likely to change, I can't see why anyone is surprised Linux has still never made it on the desktop.

          Who exactly is surprised by this? Certainly not those who created the policy. The purpose of the policy was not to make Linux popular on the desktop, or anywhere else for that matter. The creators of the policy do not profit from Linux, so its popularity isn't really a big concern.

        • by maird ( 699535 )

          But most manufacturers don't WANT to provide sources to their drivers

          As someone who works on Linux bug fixing for, among others, the hardware partners of a Linux distro vendor, I sense that changing day by day. Some will never publish, but as a result, those they compete with will generally have a lower per-developer cost of development, leading to a higher rate of bug fixes for the vendors who do publish. Not publishing made sense when the PC was the only platform that mattered, but I'm impressed by the number of x86/x86-64 build bugs I see for things being called point of

        • by jbolden ( 176878 )

          But most manufacturers don't WANT to provide sources to their drivers - they'd be quite happy to provide a binary interface, but that's difficult to do in Linux.

          Agreed. The server manufacturers didn't want to either, until large numbers of customers made Linux compatibility a reason to buy hardware.

          The kernel developers can stick to their policy as much as they like. But put up barriers between businesses who make the stuff people want to use, and a small price to pay being keeping their source (

          • by tepples ( 727027 )

            Have you noticed Android tablet sales? Unless by desktop you mean x86, Linux is finally doing quite well.

            By "desktop" I mean an environment where I can have more than one application displaying on the screen at once. Use cases include splitting the screen down the middle between the document I'm writing and the document I'm referring to, or having a calculator that appears on top of the application I'm running.

            • by jbolden ( 176878 )

              That's not really a market segment, it's a use case. If you want to say "high-power desktops," then Linux does much better there than on the low end, possibly around 4%. OS X is the big player there, and in general it has far worse hardware support than Linux.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        You can. You can use drivers from almost two decades ago that were merged into the kernel sources. You can't generally with binary drivers because Linux doesn't offer binary compatibility.

        You really, really can't. (At least not in general.) Structures keep changing the names of members, and removing members. For example, user IDs recently changed from being plain old integers to potentially being a struct that you have to go through accessor functions to use (a hedged sketch of that pattern is below). Every time a new kernel comes out, our drivers invariably break and need additional code added to check for and cope with the new kernel. (No, we can't just stop supporting old versions of the kernel. Big companies are out there demandin
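
        To illustrate the kind of version-dependent shim being described, here is a minimal, hedged sketch of an out-of-tree module coping with the uid_t to kuid_t transition (merged around kernel 3.5). The file and function names are made up for illustration; it is a sketch of the common pattern, not code from any actual driver.

        /* uid_compat_demo.c -- hypothetical example of a per-kernel-version shim. */
        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/version.h>
        #include <linux/cred.h>
        #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 5, 0)
        #include <linux/uidgid.h>   /* kuid_t and the from_kuid() accessor */
        #endif

        /* Return the uid of the current task as a plain number, whichever
         * kernel the module happens to be built against. */
        static uid_t demo_current_uid(void)
        {
        #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 5, 0)
                /* Newer kernels wrap uids in kuid_t; unwrap it via the
                 * accessor, relative to the initial user namespace. */
                return from_kuid(&init_user_ns, current_uid());
        #else
                /* Older kernels: current_uid() is already a plain integer. */
                return current_uid();
        #endif
        }

        static int __init demo_init(void)
        {
                pr_info("uid_compat_demo: loaded by uid %u\n",
                        (unsigned int)demo_current_uid());
                return 0;
        }

        static void __exit demo_exit(void)
        {
                pr_info("uid_compat_demo: unloaded\n");
        }

        module_init(demo_init);
        module_exit(demo_exit);
        MODULE_LICENSE("GPL");
        MODULE_DESCRIPTION("Sketch of a uid_t/kuid_t compatibility shim");

        Rebuilding such a module against each installed kernel (the usual "make -C /lib/modules/$(uname -r)/build M=$PWD modules" dance) is exactly the per-release recompile being complained about.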

        • by jbolden ( 176878 )

          GP was talking about drivers not working between versions. You are talking about the complexity of maintaining a kernel module. That's a different issue. And yes stuff will break between kernel versions.

    • But hasn't X.org been the standard for well over a decade now, with Wayland only being non-universal in the future thanks to Canonical's self-segregating behavior? Or am I misunderstanding what you're saying?

      I don't think that this is the reason that it hasn't really thrived on the desktop; outside the hardcore devs, most of the people (particularly non-geeks) that might/do use Linux tend to not know or care about the details as long as it works with minimal/no intervention. IMHO, the reason is that it ha

  • by dbIII ( 701233 ) on Sunday September 08, 2013 @06:44AM (#44788915)
    Dumb framebuffer wars begun have they?
  • Confused (Score:4, Insightful)

    by msobkow ( 48369 ) on Sunday September 08, 2013 @07:27AM (#44789069) Homepage Journal

    So when Ubuntu 13.10 ships, it will force you to use XMir?

    If so, thanks for the warning. The last thing I want to do is deal with an unstable graphics driver. It's taken years for X11 with the NVidia drivers to get stable, and I don't want to touch XMir with someone else's 10-foot pole until it's been in use for at least 2-3 years.

    • Well, that is kinda the point of the non-LTS release: testing.
      • by msobkow ( 48369 )

        For me the point of installing 13.04 was getting upgrades to certain packages I wanted, not testing.

        Oh well, hopefully by the time I'm forced to upgrade from 13.04 the steaming pile will have stabilized.

        • It does not bother me all that much. What will happen is people will shift away from Ubuntu; hell, most already have, to non-stock variants, just to avoid Unity. As all the support moves over to Wayland, Ubuntu will be forced to use Wayland, since nobody makes apps for or supports Mir. They're just trying the same tactics they pulled with Unity, despite how badly that has failed.
  • by Balinares ( 316703 ) on Sunday September 08, 2013 @08:01AM (#44789207)

    I think Mir is a case study in correctly identifying problems and then going about solving them all wrong.

    See, the good thing about Wayland is, it does the right thing in having a limited scope. It aims to do one thing and do it well: provide an API for GUI clients to share buffers with a compositor.

    And the problem with Wayland is, of course, that... it has a limited scope. Screen management? Input handling? Buffer allocation? "A modern desktop needs all that!" say the Ubuntu devs, and yeah, that's absolutely correct. "That's a client concern," say the Wayland devs, and guess what? From their point of view, that's correct too. (Although Wayland has since started working on an input handling API.)

    Now, the important thing to realize is, when the Wayland guys say that something is a client concern, as I understand, they don't necessarily mean the GUI applications, no. They mean the compositor.

    Meaning that a whole lot of the stuff desktop shells rely on is, in fact, not provided by Wayland itself.

    That's where Weston comes in: it's supposed to be an example (a "reference implementation", to use the designated words) of how to write a compositor. But... not necessarily in a way that meets the higher level needs of desktop shells. Unsurprisingly, both KDE and GNOME will be using their own compositors.

    So basically, a whole lot of the desktop integration on top of Wayland will be, as it were, left as an exercise to the reader.

    With all that in mind, I think the best-case end game is somewhat clear: frame-perfect rendering through the Wayland API of Mir-composited KDE/GNOME/Unity clients.

    Or in other words, Mir should probably be a set of APIs to handle all the admittedly important desktop integration -- clipboard, multi-screen layout, input and gestures, systray/notification requests... -- with an optional and replaceable compositor thrown in.

    All the points of contention that I know of, mainly that Canonical requires server-side buffer allocation (presumably for mobile ARM platforms) where Wayland does it client-side, could have been resolved with some diplomacy and a mutual willingness to reach a satisfactory compromise.

    But instead, it looks like the report card is just going to say, "Doesn't play well with others." As usual. What a sad mess and wasted opportunity.

    • Yeah. Canonical people found some 'shortcomings' in Wayland and decided to start their own project with even more shortcomings. Like choosing C++ as the implementation language. Good luck finding contributors. Most FOSS contributors don't get C++ at all. The language isn't ready for such low-level components yet. Maybe in the future, but not now.
      • by msobkow ( 48369 )

        What a load of tripe. With very little syntactic sugar you can compile C code with a C++ compiler.

        You lose all the benefits of C++ by doing so, but it's perfectly feasible. So, yes, C++ is quite ready for doing low-level programming. (A trivial sketch is below.)
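
        As a hedged illustration of the point above (a made-up snippet, nothing to do with Mir's actual code), the following plain C builds with both a C and a C++ compiler; the only "sugar" needed for C++ is the explicit casts on malloc/calloc, since C++ refuses the implicit void* conversion that C allows.

        /* tiny_buf.c -- hypothetical example; compiles with both
         *   gcc tiny_buf.c -o tiny_buf
         *   g++ -x c++ tiny_buf.c -o tiny_buf
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct pixel_buf {
            int width, height;
            unsigned char *data;        /* XRGB8888, 4 bytes per pixel */
        };

        static struct pixel_buf *buf_create(int w, int h)
        {
            /* The casts are the "syntactic sugar": plain C accepts bare
             * malloc()/calloc(), C++ insists on the explicit conversion. */
            struct pixel_buf *b = (struct pixel_buf *)malloc(sizeof(*b));
            if (!b)
                return NULL;
            b->width = w;
            b->height = h;
            b->data = (unsigned char *)calloc((size_t)w * h, 4);
            if (!b->data) {
                free(b);
                return NULL;
            }
            return b;
        }

        static void buf_destroy(struct pixel_buf *b)
        {
            if (b) {
                free(b->data);
                free(b);
            }
        }

        int main(void)
        {
            struct pixel_buf *b = buf_create(640, 480);
            if (!b)
                return 1;
            memset(b->data, 0xff, (size_t)b->width * b->height * 4);
            printf("allocated %dx%d buffer\n", b->width, b->height);
            buf_destroy(b);
            return 0;
        }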

        • You're joking, right? There's no point in using it if you lose all the benefits, as you said. You can still use some features like namespaces, but it's not worth it, I think. The whole point of C++ is classes and templates, but not many people know how to use those effectively. Better to stick to C in important low-level projects like a display server for now.
          • by Desler ( 1608317 )

            The whole point of C++ is to use it how you want, hence why it's a multi-paradigm language. Bjarne specifically rejects the pigeonholing you attempt to ascribe to C++.

             • Nonetheless, C++ is not a collection of disparate features, but a complete language serving its own purpose. You cannot decide to use only a subset of features without seeing the whole picture. That requires a deep understanding of the language, which many potential contributors to low-level Linux subsystems lack and don't even care to try to acquire.
      • Re: (Score:3, Informative)

        by Desler ( 1608317 )

        Good luck finding contributors. Most FOSS contributors don't get C++ at all.

        Absolute bullshit. KDE, for example, is written in C++ and has had no hard time finding thousands of contributors. There are also tons of FOSS apps written in C++ with Qt. You sound like someone who has been in a cave from the mid '90s until now.

        The language isn't ready for such low level components yet.

        In what specific way exactly?

  • Intel heavily supports Wayland, including employing the primary developer. Isn't this move on their part simply saying, we're dogfooding Wayland, and Canonical needs to handle XMir itself? Snark aside, doesn't that seem like a reasonable move on their part?
  • This needless display system might put the fledgling Linux gaming industry on the back foot. Games quite often need good drivers. Steam only runs on Ubuntu (officially), and this silly bullying may cause them much more harm than whatever benefits they may get (and what are those, after all?).

    • We will probably just see people move away from Ubuntu; it's already happening with the likes of Mint etc., yes, Ubuntu-based, but the main thing is they're not Unity, lol. I won't be surprised if we see no-Mir remixes.
