The Challenge In Delivering Open Source GPU Drivers 182
yuhong writes "After the recent Intel Sandy Bridge launch left Linux users having to build the latest source from Git repositories in order to have full support for the integrated graphics, Phoronix looked at the problems involved in delivering new graphics drivers for Linux."
It's not easy (Score:5, Insightful)
Unlike the proprietary drivers from ATI/AMD and NVIDIA or any of the drivers on the Microsoft Windows side, it's not easy to provide updated drivers post-release in distributions like Ubuntu due to the inter-dependence on these different components and all of these components being critical to the Linux desktop's well being for all users.
That's a funny way of saying Linux doesn't have a stable ABI because its architects are crazy.
I honestly hope in five years you can all go back and laugh at articles like these, but more than likely you'll have slightly bigger version numbers and different silly names.
TANSTAAFL (Score:4, Insightful)
Re:Damn linux users! (Score:5, Insightful)
It's not so much about the eating of and having of cake. It's more about demanding that Intel ship you cake in time for there to be cake there when you are hungry (that you can both eat and have).
It's a bitchy whiney ridiculous complaint - and yet it is a good thing as it puts pressure on Intel and AMD to treat Linux support as something necessary for a launch. Hopefully it won't result in Intel pointing out that there is no cake...
Re:It's not easy (Score:5, Insightful)
Re:It's not easy (Score:5, Insightful)
Re:It's not easy (Score:4, Insightful)
That is just silly.
Paraphrasing, they say that they can't have a stable ABI because of small differences in how C compilers compile things (alignment of structures, etc.). Has that problem *really* not been solved? Microsoft manages to do it!
They then say they can't have a stable API (ABI?) because it would mean they have to maintain old code (true, but surely not too much work), and people might accidentally use the old version. Seriously? I guess they haven't heard of documentation.
And finally they say the solution is to get your driver into the main kernel tree. Not only would this be a hell of a lot more work than just shoving it on a website (subscribe to mailing lists, learn to use git properly, submit code for review, revise code, etc. etc.) but I seriously doubt they will just accept anything. What if I make a device that only I have? Will they accept a driver that is only useful for me?
Re:It's not easy (Score:3, Insightful)
That would be zero. There may well be gaping issues with MS software, but maintaining a stable API is not even PART of the problem. API stability (and even ABI stability) is just standard, well-established practice. And yes, Linux suffers a LOT for not having it.
Re:Damn linux users! (Score:4, Insightful)
Ah. That's sarcasm isn't it? [/sheldon]
I also got out of the article that it casts the current order of things as the ideal order of things -- in this case, that Linux users are second-class or lower users, and Windows is the only OS deserving of support by hardware makers. But that is simply not what current and forward-looking hardware developers should be thinking.
As others have predicted, I tend to agree that desktop computing is simply not the future of computing. In fact, it's barely the current state of computing even now. Of course business systems still run on Windows XP and pretty much the same stuff we had 5, 10, even 15 years ago, with only incremental improvements. But on the consumer end, we are seeing a rapid surge in internet-enabled devices serving a variety of purposes, including content delivery and more. It is this area that is paving the way for the change from general-purpose computing to application-specific computing devices. (AKA embedded)
And what are these embedded devices running? Some are running Windows, some are running BSD variants and derivatives; most are running Linux. Windows is barely suitable for its originally intended purposes and most definitely not suitable for the additional uses and purposes it is being crammed into today. BSD variants and derivatives are successful but require a heavier investment by implementers to customize the OS and surrounding code to make it work for them. Linux enjoys greater momentum of use and support, with a great deal more active enthusiasm in its communities.
As embedded systems increase in number, the selection of components that go into these devices is being made now. If these components are limited in which OSes they support, I believe we will see those components omitted from embedded devices in large numbers. This is a large reason why we see less nVidia hardware in embedded applications and more Intel, in my opinion.
Of course at present generic desktop computing is king. This is changing. Soon only hackers/developers will have generic desktop computing devices and the world will be using embedded systems.
It's not "linux users" that need support. It's hardware component makers that need to wake up and see what is going on. Evidently, they don't see it or they would be responding to changes in the market. Has Microsoft kept them so blinded and enticed? Where embedded systems are concerned, the majority is Linux, not Microsoft.
Re:Intel and Open Source (Score:5, Insightful)
Wait, am I getting this right? Intel wrote an _open source_ driver working with the latest and greatest in Linux GPU-support-land, it was available on release day, and people are WHINING about this?! Back in the day you'd get a binary driver needing legacy components months after the hardware was released, if you got an official driver at all.
I guess Linux on the desktop has come a long way when people start bitching about new hardware not being supported out of the box in Ubuntu. Not long ago you'd follow guide after guide trying to get all the hardware in your five-year-old computer to work...
Re:It's not easy (Score:3, Insightful)
It's an entirely practical proposition: with the source code in hand, we can fix bugs and improve the code without having to wait for the 'ABI driver owner' to do something.
It's a tradeoff between long term independence and short term availability.
And if you look at how Linux stormed the supercomputer and smartphone spaces, you'll have to admit its architectural flexibility works splendidly. Yes, the other side of the coin is that the established, Windows-dominated, secrecy-obsessed PC hardware space will find it easier to exclude Linux from certain hardware components by keeping specifications secret from OSS devs.
Tough luck for them, and as Samsung has shown, it's not impossible to build hardware from scratch and use the best kernel to dominate a new space.
Exactly So, the real problem is (Score:5, Insightful)
_hardware_ manufacturers who think they want to be in the _software_ maintenance market.
The difference between calling an API to render color fast, and knowing that cramming a 0x721 into a register at 0x3392 renders color fast, isn't particularly a hemorrhaging of 'intellectual property'.
Granted, it does let us know where the API is "cheating".
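To put the point another way: the entire "secret" being protected is roughly this much code. A sketch in C, reusing the made-up register address and command value from above (they are not real GPU registers):

```c
#include <stdint.h>

/* Hypothetical values carried over from the comment above --
 * purely illustrative, not any real hardware's programming model. */
#define CTRL_REG_OFFSET 0x3392u  /* made-up control register offset */
#define FAST_COLOR_CMD  0x721u   /* made-up command value           */

/* Plain 32-bit store into a memory-mapped register window. */
static inline void reg_write(volatile uint32_t *mmio,
                             uint32_t off, uint32_t val)
{
    mmio[off / sizeof(uint32_t)] = val;
}

/* What a "render color fast" API call boils down to inside a driver:
 * poke a documented value into a documented register. Publishing this
 * mapping is what open documentation amounts to. */
void render_color_fast(volatile uint32_t *mmio)
{
    reg_write(mmio, CTRL_REG_OFFSET, FAST_COLOR_CMD);
}
```

Real drivers also manage command ring buffers, DMA, and power states, but the flavor is the same: values into registers, per the datasheet.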
So while the example of one byte in one register is reductio ad absurdum, and the process is more about laying out memory buffers and such, who cares? Sure, the manufacturers may be worried about knock-off hardware, but that hardware would almost certainly be knock-off quality. Think of all the SoundBlaster knock-offs that have ever been made, and compare that to Creative's bottom line. Those third-party cards, which are _still_ on the market, made SoundBlaster a universal name. Creative has been resting on those laurels for years now.
It is horrifically stupid on the part of the hardware manufacturers to be playing it so close to the vest. They should _want_ everybody scrambling to be compatible with _their_ hardware interface, making them the leader that the market has to chase.
First big name out of the gate with a fully open graphics hardware platform would own the segment anew for years.
But "companies" have no smarts and that "isn't the way (that) business is done" so here we languish on in a half-realized market.
(As for the "getting drivers" thing I have spent hundreds of hours of my professional and personal career "getting drivers" for windows machines. Only the "you'll damn well eat what we serve you" hardware platforms like Apple can remove the quest for drivers. And woe betide you if you want to use old gear from those guys. So the whole plaintive "waah, I had to look for drivers" complaint rings a little false.)
Re:It's not easy (Score:5, Insightful)
ABI stability helps no one but those that develop and release closed source binaries. Holding the rest of the kernel back for the sake of a handful of modules made by people who won't play nice is stupid in the extreme.
Re:Damn linux users! (Score:4, Insightful)
Meanwhile, Intel is requiring at least FIVE different base operating system components to be changed for their drivers to be updated?
I understood the case made by TFA differently. (Or was TFA simply dumping the raw facts? Never mind.)
Supplying a binary driver works at the moment, and that's what nVidia and AMD/ATI do. But that's bad because it's not OSS-friendly.
Yet, if a company (Intel in this case) decides to go full open source, and properly and timely submits all the changes to the corresponding OSS projects as our God Linus intended, delivery of the technology to the end user becomes a nightmare because of (1) all the inter-dependencies which exist between the projects AND (2) the lack of central coordination.
IMO the story here is not per se that Intel f****ed it up, but rather that this particular area of the OSS ecosystem is f****ed up (and easily alienates both vendors and users).
Re:It's not easy (Score:5, Insightful)
Ever read Raymond Chen's book? It's pretty terrific. There's an entire section dedicated to showing how Win32's stable API and ABI in kernel and user space have been a horrific nightmare and a large waste of developer manpower.
Also, the *only* people affected by the lack of stable ABI are people that ship out-of-tree kernel drivers, all of whom have no excuse for not immediately pursuing upstream merges of one sort or another.
Also, some exported kernel APIs, like the syscall list and ioctl list, are sacred and are never altered. To take a topical example, all KMS graphics drivers respect and give sensible return values for legacy userspace X components calling pre-KMS settings.
And finally, to answer your strawman, *yes*, you can get a driver accepted if it has no users besides yourself. IBM's notorious for this; one of their upstream drivers has something like 2 users in the entire world. The drivers that tend to be controversial are things like reiser4 (layering issues, maintainer conflicts), aufs (layering issues, code quality issues), OSS4 (licensing issues, maintainers want to keep it out-of-tree!), etc., where there are clear and obvious reasons why the upstream merge hasn't happened.
Hell, for DRM, this was a problem too, since the DRM/libdrm tree was buildable for BSD as well. We made the decision a bit ago to merge into the Linux tree and make the out-of-tree repo for libdrm only, and all of a sudden, life gets *easier* because we no longer have to switch back and forth between Linux and BSD compat.