Intel Rejects Supporting Ubuntu's XMir
An anonymous reader writes "Just days after Intel added XMir support to their Linux graphics driver, so it would work with the in-development X11 compatibility layer for the Mir display server premiering with Ubuntu 13.10, Intel management has rejected the action and had the XMir patch reverted. There has been controversy surrounding Mir: it competes with Wayland, the display server is still rather immature, and its performance comes up short, yet it will still debut in Ubuntu 13.10. Intel management had this to say: "We do not condone or support Canonical in the course of action they have chosen, and will not carry XMir patches upstream." As a result, Canonical will need to ship their own packaged versions of the Intel (as well as AMD and Nouveau) drivers with out-of-tree patches."
Surprised? (Score:5, Insightful)
Though Intel may well be open to an alternative to X11, they are in no way obliged to carry an immature release just because Canonical wants to push theirs.
Layering? (Score:3)
Re: (Score:2)
The other big change is the merging of XMir handling in the xf86-video-intel driver. When using XMir for running X11/X.Org applications atop a Mir display server, modified DDX drivers are still required. These modifications are now present in the xf86-video-intel driver by default rather than Canonical carrying the work as out-of-tree patches.
Re: (Score:2)
Quoting the first link:
When using XMir for running X11/X.Org applications atop a Mir display server, modified DDX drivers are still required.
Well, that just restates/confirms the layering problem I mentioned, without explaining it.
Re:Layering? (Score:4, Interesting)
I'm honestly not super clear myself! But the DDX is, as I understand it, the in-Xorg portion of the graphics driver. So I guess it's not unreasonable that that component needs to know it's not got complete control of the hardware, as opposed to the Xorg-only case where it would have. Presumably it needs to proxy some operations through Mir (or Wayland, for XWayland) that it'd normally just set directly.
A *bit* like running X under X using Xnest or Xephyr, though I'd imagine it's less extreme than that (since those, I'd guess, have to issue X-level drawing commands to their host X server, whereas to get graphics under Wayland/Mir they'd just render to a memory buffer like any Wayland/Mir client).
All slightly speculative since I'm not familiar with the in-depth technical details!
Re: (Score:3)
I'm honestly not super clear myself! But the DDX is, as I understand it, the in-Xorg portion of the graphics driver. So I guess it's not unreasonable that that component needs to know it's not got complete control of the hardware, as opposed to the Xorg-only case where it would have. Presumably it needs to proxy some operations through Mir (or Wayland, for XWayland) that it'd normally just set directly.
Well... why would the Intel driver even be used when Xorg runs "hosted" as a Mir client? In that configuration, XMir should be the "driver", and any Intel driver code in Xorg should lie dormant. Or did this patch actually touch something other than Intel's Xorg driver?
Re:Layering? (Score:4, Informative)
I can speculate a bit with things that sound plausible to me given my knowledge of the system - but I might still be a bit off target... Still, maybe it helps a little.
Mir and Wayland both expect their clients to just render into a buffer, which clients might do with direct rendering, in which case the graphics hardware isn't really hidden from the client anyhow. AFAIK it's pretty normal practice that there's effectively in-application code (in the form of libraries that are linked to) that understands how to talk directly to the specific hardware (I think this already happens under Xorg). The protocol you use to talk to Wayland (and Mir, AFAIK) isn't really an abstraction over the hardware, just a way of providing buffers to be rendered (which might have just been filled by the hardware using direct rendering).
In this case Xorg is a client of Mir, so it's a provider of buffers which it must render. The X11 client application might use direct rendering to draw its window, anyhow. But the Xserver might also want to access hardware operations directly to accelerate something it's drawing (I suppose)... So the X server needs some hardware-specific DDX, since Mir alone doesn't provide a mechanism to do all the things it wants.
As for why the Intel driver then needs to be modified... I also understand that Mir has all graphics buffers allocated by the display server (i.e. Mir) itself. Presumably Xorg would normally do this allocation (?) In which case, the Intel DDX would need modifying to do the right thing under Mir. The only other reason for modifying the DDX that springs to mind is that perhaps the responsibilities of a "Mir client" divide between Xorg and *its* client, so this could be necessary to incorporate support for the "Mir protocol" properly. That's just hand-waving on my part, though...
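If it helps, here's a rough sketch (my own, untested) of what the "clients just hand the server buffers" model looks like on the Wayland side: the client allocates a shared-memory buffer itself, fills in the pixels, and merely tells the compositor about it. Mir's twist, as mentioned above, is that the server would hand out the buffer instead. Showing the buffer on screen would additionally need a shell protocol, which I've left out.

    /*
     * Rough sketch of a Wayland client sharing one pixel buffer with the
     * compositor.  Build with:
     *   cc -std=gnu99 wl_buffer_demo.c -lwayland-client -lrt -o wl_buffer_demo
     * (The surface is never mapped on screen; that would also need a shell
     * protocol such as xdg_shell, omitted here for brevity.)
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    static struct wl_compositor *compositor;
    static struct wl_shm *shm;

    /* The registry tells us which globals the compositor offers; we grab
     * wl_compositor (for surfaces) and wl_shm (for shared-memory buffers). */
    static void registry_global(void *data, struct wl_registry *reg,
                                uint32_t name, const char *iface, uint32_t ver)
    {
        if (strcmp(iface, "wl_compositor") == 0)
            compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 1);
        else if (strcmp(iface, "wl_shm") == 0)
            shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
    }

    static void registry_global_remove(void *data, struct wl_registry *reg,
                                       uint32_t name) { }

    static const struct wl_registry_listener registry_listener = {
        .global = registry_global,
        .global_remove = registry_global_remove,
    };

    int main(void)
    {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) { fprintf(stderr, "no Wayland display\n"); return 1; }

        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &registry_listener, NULL);
        wl_display_roundtrip(display);              /* wait for the globals */
        if (!compositor || !shm) { fprintf(stderr, "missing globals\n"); return 1; }

        /* Client-side allocation: the client owns the pixels and just tells
         * the compositor where they live (an anonymous shared-memory file). */
        const int width = 64, height = 64, stride = width * 4;
        const int size = stride * height;
        int fd = shm_open("/wl-demo-buf", O_CREAT | O_EXCL | O_RDWR, 0600);
        shm_unlink("/wl-demo-buf");                 /* keep only the fd */
        ftruncate(fd, size);
        uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        for (int i = 0; i < width * height; i++)
            pixels[i] = 0xff3366cc;                 /* opaque blue-ish ARGB */

        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buffer = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_ARGB8888);

        /* Hand the buffer to the compositor: attach + commit on a surface. */
        struct wl_surface *surface = wl_compositor_create_surface(compositor);
        wl_surface_attach(surface, buffer, 0, 0);
        wl_surface_commit(surface);
        wl_display_roundtrip(display);

        wl_display_disconnect(display);
        return 0;
    }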
Bonus feature - whilst trying to find out stuff, I found a scary diagram of the Linux graphics stack but my brain is not up to parsing it at this time of day:
http://en.wikipedia.org/wiki/File:Linux_Graphics_Stack_2013.svg [wikipedia.org]
Re: (Score:2, Informative)
I thought they were switching to Wayland anyway.
X was really hated here on Slashdot in the early days, 12 years ago! I guess modern hardware hides its issues with bloat and its client/server relationship. It was made for dumb terminals and it shows. Low-latency things like GLX/OpenGL have had issues and needed many hacks just to work at a mediocre level.
Re: (Score:3)
Re:Layering? (Score:5, Insightful)
Then Ubuntu comes along and goes "NOOOO", and decides to do it differently.
yep, this is the way of Linux. You throw many different things at a wall and see which one sticks best. Then you standardise on that thing.
Too many people are just about bitching that one thing is better than another thing without any comprehension that this is the way FOSS systems evolve. I imagine (or would hope) that Wayland and XMir will stand on their own and one will become a dominant player over the other. Politics aside this is the way it should be. Unfortunately, once the politics and the 'my thing is better than yours' attitude gets involved, it makes dropping the poor version for the better one difficult - people try to maintain the poorer one regardless.
You see this in OpenOffice vs. LibreOffice. Surely by now one of these would have had its best bits of code migrated to the other so development and evangelism could focus efforts on just one product, but instead we still have the bitching about which one is better. (Though maybe it's just too soon for this example.)
Re: (Score:2)
yep, this is the way of Linux. You throw many different things at a wall and see which one sticks best. Then you standardise on that thing.
In the FOSS world the "standardize on one thing" often doesn't happen, you end up with all kinds of incompatible competitors, no single one with a great deal of popularity.
Re: (Score:3, Insightful)
Re: (Score:2)
You mean ALSA or something layered on top of ALSA?
The rest is nonsense perpetrated by a lazy corporate developer who couldn't keep up with what the community can do on its own?
Some whine. Some just take care of business.
Re: (Score:2)
Per-application volume control is THE missing feature in ALSA. If that ever comes to life, and is easy enough for non-tech users, PulseAudio will die for sure!
Until then, ALSA has stuff missing.
Re: (Score:2)
ok, the way of FOSS - because we do have BSD if Linux isn't to your taste.
Re: (Score:2)
Many games ported to use Linux and OpenGL natively show up FASTER than their Windows relation, either OpenGL (being faster on Windows than DirectX) or DirectX on Windows.
My experience is the opposite. In most cases Windows runs rings around Linux in terms of desktop effects and game performance. For example, take the low-end Radeon 6320 Brazos APU (paired with the AMD E-450 CPU). This setup can run Source-based games fluidly under Windows, but under Linux they drop to 10-15 fps, which is about half the frame rate. Also, even simple desktop effects are jerky, while on Windows they are silky smooth. The Linux experiment was made using the closed-source 'fglrx' driver, the open source
Re: (Score:2)
Re: (Score:2, Insightful)
Want to fix it? Use a better brand of video card.
There's no great mystery about any of this stuff. If you are still suffering then you are suffering because you choose to sabotage yourself.
Re: (Score:2)
Re: (Score:3)
Sadly Nvidia's slowest graphics cards have a 29W TDP, and the chip he's talking about is a combined CPU+GPU at 18W TDP. Meaning no, there's no Nvidia graphics in a netbook or tiny PC. And blaming the victim is lame; you can't change the graphics on a laptop with a GPU in the CPU.
If you're arguing that no one should buy affordable laptops, why not. I like using desktops (and CRT monitors).
We'll see if 22nm Atom is better (with something like Ubuntu 14.04 - not necessarily the main edition - if you want driver su
Re: (Score:2)
I have an Atom 270, and the graphics support sucks, but I don't complain about it, it's just the reality of using shitty vendors, in this case PowerVR, who refuse to let Intel divulge documentation for their shitty GPU to anyone without an NDA*.
So the Atom N270 uses the Intel GMA 950 graphics core integrated into the 945GM northbridge. PowerVR is newer stuff.
Re: (Score:2)
> OpenGL and glx run many windows DirectX games under Wine FASTER than Windows running DirectX.
>
> Many games ported to use Linux and OpenGL natively show up FASTER than their Windows relation, either OpenGL
> (being faster on Windows than DirectX) or DirectX on Windows.
I can testify to this as well, having run Need for Speed Hot Pursuit as well as Roadrash under WINE.
Re: (Score:2)
That is why Linux won't win the desktop (Score:4, Insightful)
When will Linux finally use standard ABIs and APIs for drivers just like every other OS on the planet?
Why can't you just use one driver written a few years ago and use it universally across all distros? The free BSDs have this, and you can install the extra compat libraries to accomplish it. I guess RMS thinks that is oppressive and wants open-source hardware, even though patent holders like the H.264 consortium forbid it!
Before I get flamed, remember the article mentioned ATI and NVidia drivers as well, so Intel is not the asshole here. Rather, it's the different kernels and distros being redone, requiring new QA and recompiling with every release.
There is a reason many old-time Linux users like myself only run CentOS in a VM now. It is because Red Hat provides ABIs and APIs that do not change for 5 years. Unfortunately, it also means an out-of-date distro, which is not fair to non-server users (and even to a few server users who need a newer app or framework).
Re:That is why Linux won't win the desktop (Score:5, Informative)
Never. The moves to support binary compatibility on Linux have been rejected time and time again by the Linux community. And that is far from the case for every other OS on the planet. Many OSes don't support arbitrary drivers at all.
RMS has little to do with this policy. Even Linus mostly supports it. The people who don't support it are mostly Windows users.
You can. You can use drivers from almost two decades ago whose source was merged into the kernel. You generally can't with binary drivers, because Linux doesn't offer binary compatibility.
Re: (Score:3, Insightful)
Sorry, but the patent trolls who sue everybody will make you sign an NDA making your work closed-source if you make hardware. So the days of having it in the kernel are over.
Microkernels and exokernels are what academics say are superior and the wave of the future.
Regardless, what OS doesn't use a stable ABI and API for driver development? I can't think of any modern OS. How about Mac users wanting a driver that works across versions? With the exception of the split between PowerPC and x86, it is true on that platform.
Re:That is why Linux won't win the desktop (Score:5, Insightful)
Sorry, but the patent trolls who sue everybody will make you sign an NDA making your work closed-source if you make hardware. So the days of having it in the kernel are over.
You realize you're commenting on a story about Intel, right? You know, the company that has Linux kernel developers writing open source drivers for their chipsets.
Re:That is why Linux won't win the desktop (Score:4, Insightful)
Re: (Score:3)
If you lose a patent suit and use someone else's patented work, yes.
zSeries OS (MVS), iSeries OS (OS/400), Cisco IOS, most embedded OSes... In general, most OSes that don't care about quick and easy hardware support.
Re: (Score:3)
And microkernels continue to remain in the realm of academics and theory, and not in the real world. Even Windows went down the microkernel route for a while with early versions of Windows NT, but for performance reasons hacked and thunked things to the point that we're essentially back to a monolithic kernel now, with everything important running in-kernel, and in ring-0. Graphics moved back to ring-0, network drivers, etc.
Darwin, though based on a microkernel core, is a hybrid kernel with a large BSD
Re: (Score:2)
NT was never microkernel - the drivers always resided in the kernel, not userspace. Windows 8 is more microkernel than NT ever was.
Monolithic runs better on the x86 platform, while a microkernel would run better on RISC, VLIW or SMP platforms. The reason monolithic seems to have won is that x86 has won. Microkernels have a much better shot on CPUs based on ARM, MIPS, POWER, et al.
Re:That is why Linux won't win the desktop (Score:4, Insightful)
Microkernels and exokernels are what academics say are superior and the wave of the future.
Academics have been saying that since it was MINIX vs Linux, and reality won. This is also orthogonal to API/ABI: you can have userspace drivers without a stable API/ABI, and you can have a stable API/ABI with in-kernel drivers.
Re: (Score:3)
Microkernels . . . are what academics say are superior and the wave of the future.
Yep. That's what they were saying 25 years ago, too. And if you want one, GNU HURD is ready and waiting.
Re: That is why Linux won't win the desktop (Score:2, Interesting)
Glad someone made this comment, because I was about to. I used to work at MS in the Windows division. I saw this first hand and close up: every release, lots of drivers broke. As far as I am concerned the folks complaining about Linux's lack of stable driver interfaces are completely clueless whiners. They have no clue what goes into writing drivers on any platform. The only reason you can generally get drivers that work on all recent versions of Windows is because the hardware vendors are forced to port an
Re: (Score:2)
But most manufacturers don't WANT to provide sources to their drivers - they'd be quite happy to provide a binary interface, but that's difficult to do in Linux.
You might argue, fuck them then, sources or bust. Well, Linux use on the desktop is so low anyway, what incentive would they have to comply when they can just stick with Win
Re:That is why Linux won't win the desktop (Score:5, Insightful)
Since this policy is never likely to change, I can't see why anyone is surprised Linux has still never made it on the desktop.
Who exactly is surprised by this? Certainly not those who created the policy. The purpose of the policy was not to make Linux popular on the desktop, or anywhere else for that matter. The creators of the policy do not profit from Linux, so its popularity isn't really a big concern.
Re: (Score:2)
There's of course the real risk that Google forks Linux over this.
Just like how Red Hat has forked Linux over this?
I doubt they'll really fork it, per se. Red Hat ships old kernel versions with patches backported far beyond what the kernel team is willing to support. They still upstream their patches, and they have every intent to migrate to a newer kernel in line with their own processes.
But sure, they could always stick an ABI translation layer on the kernel that they maintain, much as Nvidia already does (albeit only for their own driver).
Re: (Score:3)
But most manufacturers don't WANT to provide sources to their drivers
As someone who works on Linux bug fixing for, among others, the hardware partners of a Linux distro vendor, I sense that changing day by day. Some will never publish, but as a result those they compete with will generally have a lower per-developer cost of development, leading to a higher rate of bug fixes for the vendors who do publish. Not publishing made sense when the PC was the only platform that mattered, but I'm impressed by the number of x86/x86-64 build bugs I see for things being called point of
Re: (Score:2)
Agreed. The server manufacturers didn't want to either, that is, until a large number of customers made Linux compatibility a reason to buy hardware.
Re: (Score:2)
Have you noticed Android tablet sales? Unless by desktop you mean x86, Linux is finally doing quite well.
By "desktop" I mean an environment where I can have more than one application displaying on the screen at once. Use cases include splitting the screen down the middle between the document I'm writing and the document I'm referring to, or having a calculator that appears on top of the application I'm running.
Re: (Score:2)
That's not really a market segment; it is a use case. If you want to say "high-power desktops", then Linux does much better there than on the low end, possibly around 4%. OS X is the big player, and in general it has far worse hardware support than Linux.
Market segment of those who share a use case (Score:2)
[Multi-window multitasking is] not really a market segment; it is a use case.
Every use case, such as multi-window multitasking, has a corresponding market segment of people who regularly use it. Page 4 of an Ars Technica article about OS features useful to the market segment of creative professionals [arstechnica.com] (discussion [slashdot.org]) mentions multi-window multitasking features, and page 5 decries Microsoft's focus on retooling its OS for "consumption" (passive viewing of works created by others) of one thing at a time.
Re: (Score:2)
Creative professionals are part of the high-power desktop category. And as I mentioned:
a) OS X is a big player
b) Linux is a bigger player (around 4%)
c) Windows has been steadily losing ground for years
So for the GGP, Linux is a viable (though not preferred) offering in that space.
Re: (Score:2)
Where are you getting your numbers?
I think a citation is in order.
Re: (Score:2, Informative)
You can. You can use drivers from almost two decades ago whose source was merged into the kernel. You generally can't with binary drivers, because Linux doesn't offer binary compatibility.
You really, really can't. (At least not in general.) Structures keep changing the names of members, and removing members. For example: recently, user IDs changed from being plain old integers to being potentially a struct that you have to use accessor methods to access. Every time a new kernel comes out, our drivers invariably break and need additional code added to check for and cope with the new kernel. (No, we can't just stop supporting old versions of the kernel. Big companies are out there demandin
Re: (Score:2)
GP was talking about drivers not working between versions. You are talking about the complexity of maintaining a kernel module. That's a different issue. And yes stuff will break between kernel versions.
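To make the grandparent's uid example concrete, here's a rough sketch (the helper name is invented for illustration) of the kind of version shim an out-of-tree module ends up carrying. Around the Linux 3.5 era, inode->i_uid went from a plain uid_t to an opaque kuid_t that has to be read through helpers like from_kuid(), so the same source has to cope with both:

    /* Sketch of a compatibility wrapper for the uid change mentioned above.
     * The version threshold is illustrative; the exact kernel release where
     * a given struct changed has to be checked against the real history. */
    #include <linux/version.h>
    #include <linux/fs.h>
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 5, 0)
    #include <linux/uidgid.h>        /* kuid_t, from_kuid(), init_user_ns */
    #endif

    static inline uid_t mydrv_inode_uid(const struct inode *inode)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 5, 0)
            /* Newer kernels: i_uid is an opaque kuid_t; convert it back to a
             * plain uid_t relative to the initial user namespace. */
            return from_kuid(&init_user_ns, inode->i_uid);
    #else
            /* Older kernels: i_uid is already a plain integer. */
            return inode->i_uid;
    #endif
    }

Multiply that by every struct member that gets renamed or retyped, and you get an idea of the maintenance burden the GP is describing.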
Re: (Score:2)
Re: (Score:2)
I could be wrong, but I'm pretty sure the Windows driver interface hasn't changed since Windows 2000 was released.
There are two exceptions to this: Sound Drivers and Display Drivers.
The former changed when Windows "enhanced" sound drivers in Windows Vista. And by "enhanced" I mean such useful things as killing hardware acceleration in order to have separate volume sliders for each app and adding effects like making things sound like they were in a Bathroom or Auditorium.
The latter changed when Windows adde
Re: (Score:2)
But hasn't X.org been the standard for well over a decade now, with Wayland only being non-universal in the future thanks to Canonical's self-segregating behavior? Or am I misunderstanding what you're saying?
I don't think that this is the reason that it hasn't really thrived on the desktop; outside the hardcore devs, most of the people (particularly non-geeks) that might/do use Linux tend to not know or care about the details as long as it works with minimal/no intervention. IMHO, the reason is that it ha
Re: (Score:2)
Printing has nothing to do with kernel drivers. The printing systems on Linux are rather standard. There aren't really printer drivers in the Windows sense at all.
Re: (Score:2)
And so can Windows 7 and 8! Oh wait ....
Just accept the fact that you are woefully misinformed. First of all, you don't know the difference between Linux and a distribution that uses the Linux kernel. Your confusion would be understandable, since it is common to refer to an entire distribution as a Linux distribution even though X, Wayland, and the whole of user space have nothing to
Re: (Score:3)
Re: (Score:2)
Drivers for 7 have worked fine for me on 8.
The catch is that they have to be signed, OR you need to boot 8 into unsigned-driver mode to install the driver.
Re: (Score:2)
Good thing you used "request a refund" in your statement, otherwise I would find it really hard to believe. I like Linux, I use Linux, I evangelized Linux in the workplace, and I had people come to my office and request that Linux be removed and replaced with Windows 7.
Re: (Score:2)
Well, maybe you didn't set it up completely, just as you didn't quote me completely. I said "request a refund for their misery" Obviously, if you didn't install it and configure it well, or didn't help them make the transition smoothly, then they will want to go back to the devil they know. For example, if you didn't get Java, Flash, etc set up properly they will complain. There are other things to consider as well. Is the person heavily into M$
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This is your personal opinion.
So I am ignorant simply because I do not agree with
Re: (Score:2)
You really should learn the difference between an opinion and a fact. It is a fact that Porsche makes a superior car with respect to Ford. You might prefer a Ford just the same; that is a matter of opinion.
Now you are entering into the area of personal preference. This brings to the forefront your lack of experience and knowle
Re: (Score:2)
Give me a specific example on why you believe that once you switch to Linux that you'll never return to Windows. Please I'm waiting for something actually substantial to come from you. Well not really... I don't expect much.
You spend a lot of energy dancing around the question, but you never actually say anything remotely intelligible about the actual topic at hand.
Re: (Score:2)
For the record, I used DOS when it first came out; Apple DOS before that, and had a good look at the source for Apple BASIC, written by Gates and Allen. I used Windows/286 [wikipedia.org] prior to becoming a VAX/VMS system manager in a network e
Re: (Score:2)
I think you meant anecdotal evidence but please go on...
Re: (Score:2)
You just aren't interested in getting a clue. Choice means there is no such thing as an out of the box experience that is universally good for everyone and loved by all. Just admit that you made a phenomenally idiotic statement - that Linux isn't "ready for the desktop" - and move on wit
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
So have I. Then, once they found out what they had asked for, they wanted Linux back. They were hoping to get Office 2000 on Ubuntu, but did not know how to describe it.
Re: (Score:2)
The last time I used Windows as a primary desktop was at a startup. We were working on a dedicated Linux server app. As new employees we were given a choice to use Linux or Win XP as our host OS, with the other as a VM. Initially I chose to use XP as the host OS, thinking that I'd have more control over the target OS (Linux) if it was the VM'd OS. That lasted about two weeks. I had so many problems with XP tripping over its own two feet that I reversed the situation. After that I never looked back
Re: (Score:2)
Time for a Yoda yodel (Score:3)
Confused (Score:4, Insightful)
So when Ubuntu 13.10 ships, it will force you to use XMir?
If so, thanks for the warning. The last thing I want to do is deal with an unstable graphics driver. It's taken years for X11 with NVidia drivers to get stable, and I don't want to touch XMir with someone else's 10-foot pole until it's been in use for at least 2-3 years.
Re: (Score:2)
Re: (Score:2)
For me the point of installing 13.04 was getting upgrades to certain packages I wanted, not testing.
Oh well, hopefully by the time I'm forced to upgrade from 13.04 the steaming pile will have stabilized.
Re: (Score:2)
Re: (Score:2)
Mir is fascinating... but not in a good way. (Score:5, Informative)
I think Mir is a case study in how to correctly identify problems and then go about solving them all wrong.
See, the good thing about Wayland is, it does the right thing in having a limited scope. It aims to do one thing and do it well: provide an API for GUI clients to share buffers with a compositor.
And the problem with Wayland is, of course, that... it has a limited scope. Screen management? Input handling? Buffer allocation? "A modern desktop needs all that!" say the Ubuntu devs, and yeah, that's absolutely correct. "That's a client concern," say the Wayland devs, and guess what? From their point of view, that's correct too. (Although Wayland has since started working on an input handling API.)
Now, the important thing to realize is, when the Wayland guys say that something is a client concern, as I understand, they don't necessarily mean the GUI applications, no. They mean the compositor.
Meaning that a whole lot of the stuff desktop shells rely on is, in fact, not provided by Wayland itself.
That's where Weston comes in: it's supposed to be an example (a "reference implementation", to use the designated words) of how to write a compositor. But... not necessarily in a way that meets the higher level needs of desktop shells. Unsurprisingly, both KDE and GNOME will be using their own compositors.
So basically, a whole lot of the desktop integration on top of Wayland will be, as it were, left as an exercise to the reader.
With all that in mind, I think the best-case end game is somewhat clear: frame-perfect rendering, through the Wayland API, of KDE/GNOME/Unity clients composited by Mir.
Or in other words, Mir should probably be a set of APIs to handle all the admittedly important desktop integration -- clipboard, multi-screen layout, input and gestures, systray/notification requests... -- with an optional and replaceable compositor thrown in.
All the points of contention that I know of, mainly that Canonical requires server-side buffer allocation (presumably for mobile ARM platforms) where Wayland does it client-side, could have been resolved with some diplomacy and a mutual willingness to reach a satisfactory compromise.
But instead, it looks like the report card is just going to say, "Doesn't play well with others." As usual. What a sad mess and wasted opportunity.
Re: (Score:2)
Re: (Score:2)
What a load of tripe. With very little syntactic sugar you can compile C code with a C++ compiler.
You lose all the benefits of C++ by doing so, but it's perfectly feasible. So, yes, C++ is quite ready for doing low-level programming.
Re: (Score:2)
Re: (Score:2)
The whole point of C++ is to use it how you want, which is why it's a multi-paradigm language. Bjarne specifically rejects the pigeonholing you attempt to ascribe to C++.
Re: (Score:2)
Re: (Score:3, Informative)
Good luck finding contributors. Most FOSS contributors don't get C++ at all.
Absolute bullshit. KDE, for example, is written in C++ and has had no trouble finding thousands of contributors. There are also tons of FOSS apps written in C++ with Qt. You sound like someone who has been in a cave from the mid-'90s until now.
The language isn't ready for such low level components yet.
In what specific way exactly?
Re: (Score:2)
C++ is a superset of C. It includes all the functionality of C, along with an implementation of OOP. The low-level stuff is there. The problem is most FOSS contributors are apparently dinosaurs who never learned how to program in object-oriented fashion.
They did. They also know that C++ is a bad language for that, so much so that programming object-oriented constructs in C is better.
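For anyone wondering what "object-oriented constructs in C" look like in practice: it's basically the struct-of-function-pointers pattern the kernel itself uses for things like struct file_operations. A toy sketch of the idea (all names made up):

    /* "OOP in C": a struct of function pointers acts as the vtable, and each
     * "subclass" supplies its own implementations. */
    #include <stdio.h>

    struct shape_ops {                 /* the "vtable" */
        double (*area)(const void *self);
        const char *name;
    };

    struct circle {
        const struct shape_ops *ops;   /* every object carries its vtable */
        double radius;
    };

    static double circle_area(const void *self)
    {
        const struct circle *c = self;
        return 3.14159265358979 * c->radius * c->radius;
    }

    static const struct shape_ops circle_ops = { circle_area, "circle" };

    int main(void)
    {
        struct circle c = { &circle_ops, 2.0 };
        /* "virtual" dispatch: call through the function pointer */
        printf("%s area = %f\n", c.ops->name, c.ops->area(&c));
        return 0;
    }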
Re: (Score:2)
Re: (Score:2)
Thus spoke the unknowing. C++ compilers can and will optimize OOP code in ways a C compiler never can with fake OOP added to the language.
Please, sir, provide us an example of this wondrous claim.
Isn't this more pro-Wayland than anti-XMir? (Score:3)
I wonder what that means for Steam (Score:2)
This needless display system might put the fledgling Linux gaming industry on the back foot. Games quite often need good drivers. Steam only runs on Ubuntu (officially), and this silly bullying may cause them much more harm than whatever benefits they may get (and what are those, after all?).
Re: (Score:2)
Re:Monopolist acts anticompetetively, film at 11 (Score:5, Interesting)
Re:Monopolist acts anticompetetively, film at 11 (Score:5, Insightful)
More like a disinterested third party sees which way the winds are blowing and decides to pull resources away from supporting what is going to be an also-ran. Intel has been a very good citizen when it comes to providing chipset and video driver support to the Linux community. They are still making drivers for X.org (you know, the display server people actually use) and will likely develop drivers for Wayland.
Why you or anyone else (who does not run Ubuntu) would want them dividing their efforts a third way, writing software that will only be useful to a tiny segment, escapes me. Normally I am not anti-choice, but the best outcome here is for Mir to go down in flames.
The one factor that has made desktop UNIX/Linux a reality is the near universality of X11. Despite all the toolkit and desktop environment / window manager fights, X11 was something software devs could depend on being there. As far as end users go, some integration issues aside, they could run multiple toolkits and other high-level stuff when needed. It would be really hard, though, for users to efficiently run multiple display servers. The display server is pretty much a core platform component now. I honestly think Mir is a Canonical attempt to create a walled garden for their platform. It isn't about better software for them, but control.
I am glad Intel is abandoning the platform; hopefully Canonical's garden will simply become a ghetto.
Re: (Score:2)
Re:Black People (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This is X11, not the kernel, and other developers aren't going to take over Intel's whole subsystem. If Intel doesn't want XMir in there, it won't be in there.
Re:Dumb Management (Score:5, Insightful)
Canonical decided to write their own Mir display server instead of adopting the existing Wayland. They stated their reasons for doing so, but I'm not convinced they really had to start their own project instead of modifying Wayland.
It seems only fair to me that if Canonical wants to do their own thing, they'll have to put in the effort to maintain it. Because that is what this is about: Intel management decided that they're not going to pay their engineers to maintain code that benefits only Canonical.
Re: (Score:2)
Canonical decided to write their own Mir display server instead of adopting the existing Wayland. They stated their reasons for doing so, but I'm not convinced they really had to start their own project instead of modifying Wayland.
The nice thing about Wayland is that, because all the real work is being done by things like evdev, KMS and the widget toolkits, the actual display server is *much* simpler than Xorg. Weston is only a reference implementation of a Wayland compositor, and it's expected that desktop environments will implement their own that work the way they want them to (for example, work is underway to let KWin function as a Wayland compositor).
So it's not even a question of having to do some hackish modification of upstream to
Re: (Score:3)
There is a cost to keeping the code in there, even if it's not supported. If interfaces change, the unsupported code can break the build. Finding things in the code, by reading or grep, becomes harder since there is more of it. Static code analysis might flag issues in the unsupported code. Bugs will probably be filed that they'll then have to close as WONTFIX.
Also the question is what purpose would be served by keeping unsupported code in the main repository. If it's not regularly updated and tested, it wi
Re: (Score:2)