The Challenge In Delivering Open Source GPU Drivers

yuhong writes "After the recent Intel Sandy Bridge launch left Linux users having to build the latest source from Git repositories in order to have full support for the integrated graphics, Phoronix looked at the problems involved in delivering new graphics drivers for Linux."
  • by intellitech ( 1912116 ) * on Tuesday January 04, 2011 @06:14AM (#34752234)

    You've just gotta have your own cake and get to eat it too!

    • by smallfries ( 601545 ) on Tuesday January 04, 2011 @06:55AM (#34752340) Homepage

      It's not so much about the eating of and having of cake. It's more about demanding that Intel ship you cake in time for there to be cake there when you are hungry (that you can both eat and have).

      It's a bitchy, whiny, ridiculous complaint - and yet it is a good thing, as it puts pressure on Intel and AMD to treat Linux support as something necessary for a launch. Hopefully it won't result in Intel pointing out that there is no cake...

      • by Sun ( 104778 )

        Oh, there is cake, alright. What there isn't is a spoon.

        Shachar

      • Re:Damn linux users! (Score:5, Informative)

        by LingNoi ( 1066278 ) on Tuesday January 04, 2011 @08:00AM (#34752532)

        After RTFA it seems more that there are a ton of features missing rather than delayed. Here's an excerpt from the article:

        They include:

        • Video Processing Accelerators - never coming to Linux
        • Color Processing Accelerators - never coming to Linux
        • Skin Tone Enhancements - never coming to Linux
        • Adaptive Contrast Enhancement - never coming to Linux
        • Total Color Control - never coming to Linux
        • Video Decode in hardware - Q1
        • Video Encode in hardware - Q1
        • 3D acceleration - Q1, sooner rather than later
        • a host of software to use it - never coming to Linux

        • Kind of like my AMD R690M/Athlon L110-based system only works right under Vista... display trashing on Linux, graphics driver breaks suspend under Windows 7 and Windows XP, etc etc. The only vendor you can still assume will produce working, useful Linux support is nVidia, because any old supported nVidia GPU does all that stuff plus, on anything even vaguely modern, CUDA. (Well, CUDA-assisted video encoding is still in its infancy, but at least the hardware support is there and usable.)


          • The people that care this much about GPU drivers on Linux are likely to build their machines from individual components rather than get a brand-name PC.
            • by Teun ( 17872 )
              As if you have a choice when buying laptops.
            • Ah yes, I was so happy when they announced standard form factors for netbook motherboards and graphics cards so that I could build my own.

              The machine I'm talking about is a netbook, which you would have known if you knew fucking anything about what we're talking about. Use google next time.

        • by IBitOBear ( 410965 ) on Tuesday January 04, 2011 @09:41AM (#34753088) Homepage Journal

          _hardware_ manufacturers who think they want to be in the _software_ maintenance market.

          The difference between calling an API to render color fast and knowing that you cram a 0x721 into the register at 0x3392 to render color fast isn't particularly a hemorrhage of 'intellectual property'.

          Granted, it does let us know where the API is "cheating".
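          (As a minimal sketch of what that register poke looks like in C - the offset and value are the made-up numbers above, not from any real datasheet, and mmio_base would come from mapping the device's register BAR:)

          #include <stdint.h>

          /* Illustration only: 0x3392 and 0x721 are the invented numbers
           * from the text, not real hardware values. */
          static inline void enable_fast_color(volatile uint8_t *mmio_base)
          {
              /* poke the magic register to enable fast color rendering */
              *(volatile uint32_t *)(mmio_base + 0x3392) = 0x721;
          }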

          So while the example of one byte in one register is reductio ad absurdum, and the process is more about laying out memory buffers and such, who cares? Sure, the manufacturers may be worried about knock-off hardware, but that hardware would almost certainly be knock-off quality. Think of all the SoundBlaster knock-offs that have ever been made. Compare that to Creative's bottom line. Those third-party cards, which are _still_ on the market, made SoundBlaster a universal name. Creative has been reclining on those laurels for years now.

          It is horrifically stupid on the part of the hardware manufacturers to be playing so close to the vest. They should _want_ everybody scrambling to be compatible with _their_ hardware interface, making them the leader that the market has to chase.

          First big name out of the gate with a fully open graphics hardware platform would own the segment anew for years.

          But "companies" have no smarts and that "isn't the way (that) business is done" so here we languish on in a half-realized market.

          (As for the "getting drivers" thing: I have spent hundreds of hours of my professional and personal career "getting drivers" for Windows machines. Only the "you'll damn well eat what we serve you" hardware platforms like Apple can remove the quest for drivers. And woe betide you if you want to use old gear from those guys. So the whole plaintive "waah, I had to look for drivers" complaint rings a little false.)

        • by jedidiah ( 1196 )

          Sounds like a lot of those features are already in the Nvidia drivers, which don't seem to be included on the list of problem children.

          Why can they manage while no one else can?

      • by Andy Dodd ( 701 )

        Actually, the way I read one of the articles - Intel has completely and totally fucked up their driver architecture.

        NVidia and ATI can drop a single driver "module" into an existing system and have it work great. No new kernel (just a tiny bit of kernel glue), no new Mesa, no new X.org in nearly all cases.

        Meanwhile, Intel is requiring at least FIVE different base operating system components to be changed for their drivers to be updated?
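        For comparison, the "tiny bit of kernel glue" such binary drivers ship is roughly this shape - a toy sketch, not any vendor's actual shim, with blob_init()/blob_exit() invented as stand-ins for the entry points into the proprietary object file:

        #include <linux/module.h>
        #include <linux/init.h>

        extern int blob_init(void);   /* hypothetical entry point in the binary blob */
        extern void blob_exit(void);

        static int __init glue_init(void)
        {
                return blob_init();   /* hand control to the proprietary core */
        }

        static void __exit glue_exit(void)
        {
                blob_exit();
        }

        module_init(glue_init);
        module_exit(glue_exit);
        MODULE_LICENSE("Proprietary"); /* loading this taints the kernel */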

        It's just another example of "Intel graphics in Linux sucks" because of

        • by ThePhilips ( 752041 ) on Tuesday January 04, 2011 @01:00PM (#34755248) Homepage Journal

          Meanwhile, Intel is requiring at least FIVE different base operating system components to be changed for their drivers to be updated?

          I understood the case made by TFA differently. (Or was TFA simply dumping the raw facts? Never mind.)

          Supplying a binary driver works at the moment, and that's what nVidia and AMD/ATI do. But that's bad because it's not OSS-friendly.

          Yet, if a company (Intel in this case) decides to go full open source, and properly and timely submits all the changes to the corresponding OSS projects as our God Linus intended, delivery of the technology to the end user becomes a nightmare because of (1) all the inter-dependencies which exist between the projects AND (2) the lack of central coordination.

          IMO the story here is not per se that Intel f****ed it up - but that this particular area of the OSS ecosystem is f****ed up (and easily alienates both vendors and users).

    • by erroneus ( 253617 ) on Tuesday January 04, 2011 @08:16AM (#34752610) Homepage

      Ah. That's sarcasm isn't it? [/sheldon]

      I also got out of the article that it casts the current order of things as the ideal order of things -- in this case, that Linux users are second-class or lower users and Windows is the only OS deserving of support by hardware makers. But that is simply not what current and forward-looking hardware developers should be thinking.

      As others have predicted, I tend to agree that desktop computing is simply not the future of computing. In fact, it's barely the current state of computing even now. Of course business systems still run on Windows XP and pretty much the same stuff we had 5, 10, even 15 years ago with only incremental improvements. But on the consumer end, we are seeing a rapid surge in internet-enabled devices serving a variety of purposes, including content delivery and more. It is this area that is paving the way for the shift from general-purpose computing to application-specific computing devices. (AKA embedded)

      And what are these embedded devices running? Some are running Windows, some are running BSD variants and derivatives; most are running Linux. Windows is barely suitable for its originally intended purposes and most definitely not suitable for the additional uses and purposes it is being crammed into today. BSD variants and derivatives are successful but require a heavier investment by implementers to customize the OS and surrounding code to make it work for them. Linux enjoys greater momentum of use and support, with a great deal more active enthusiasm in its communities.

      As embedded systems increase, the selection of components that go into these devices is being made now. If these components are limited in which OSes they support, I believe we will see a great deal of omission of those components from embedded devices. This is a large reason why, in my opinion, we see less nVidia hardware in embedded applications and more Intel.

      Of course at present generic desktop computing is king. This is changing. Soon only hackers/developers will have generic desktop computing devices and the world will be using embedded systems.

      It's not "linux users" that need support. It's hardware component makers that need to wake up and see what is going on. Evidently, they don't see it or they would be responding to changes in the market. Has Microsoft kept them so blinded and enticed? Where embedded systems are concerned, the majority is Linux, not Microsoft.

      • I think your statement, in a broader form, is something that I've been hearing for what, 5 years now?

        I'm not saying it ain't true, but it kind of got old and honestly I have grown tired of hearing the same promise time and again. IMO the truth is somewhere in between. As in yes, we will see an increase in embedded devices running Linux or whatever under their hoods (and average users never cared what's under the hood as long as it satisfies their needs), but no, we will not see a decrease in generic desk
      • by gtall ( 79522 )

        In my opinion, the new devices usually have a company behind them pushing them as a software-hardware gizmo that is what it is. If they need a Linux driver, they'll produce it for themselves by themselves, and it becomes part of the special sauce they use to distinguish their gizmo from all the others.

        I just do not see where there is a market for Intel or others to produce FOSS drivers or help others produce them.

        • That sounds like you have never developed a hardware product before. I have been involved in the production of a series of automated teller machines and I have to say that it was pieced together from a lot of parts and tied together through an application running on an operating system.

          Unless the device has its own proprietary OS (which is RARELY the case) builders of such systems typically select hardware based on the software they are creating. In some cases, they base the software creation on the cost

    • by mcgrew ( 92797 ) *

      As opposed to Windows users?

    • I bought an EDiMAX iLink USB WiFi modem today, to replace the piece-of-crap Broadcom one in my Toshiba laptop. It mentioned Vista, Mac and Linux on the front of the box (I suppose Windows 7 users are out of luck). I plugged it in and it worked immediately with zero effort. Only $27.50. Linux support has come a long way. W00t!
  • by Technician ( 215283 ) on Tuesday January 04, 2011 @06:25AM (#34752258)

    I would have expected Intel to have released drivers. They are involved heavily in Open Source. They have the Open Source Technology Center. Has anyone asked Intel about it?

    http://www3.intel.com/cd/corporate/icsc/apac/eng/teams/331393.htm [intel.com]

    • GPU drivers have always lagged behind in stability in the Linux world; I had to compile Intel drivers two years ago when my GPU had just been released.

      GPU manufacturers mainly used to support Windows and Mac, probably due to contracts between them to attract cross-over clients; there was no such incentive for *nix back then.

      It's nice that Intel picked up this ball, but this outdated-driver issue just shows us how the process is still not streamlined and in step with their [Intel's] hardware releases.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        GPU manufacturers mainly used to support Windows and Mac, probably due to contracts between them to attract cross-over clients; there was no such incentive for *nix back then.

        Well, it was less that and more a matter of having a dis-incentive (if there is such a word). For a long time, most of the '90s, it was not easy to get drivers from the hardware vendors. It's only been in the past 10 years that a website with regular driver updates has become standard - you used to have to hunt for a support number, usually not a toll-free one, and wait for an hour on hold only to be told it would cost you $20 for handling and take 3 to 6 weeks to ship you a disc. This meant having the

        • You're very right about the FUD there.

          The name I forgot to mention, which describes my point exactly, is 'Wintel':

          Wintel is a portmanteau of Windows and Intel. It usually refers to a computer system or the related ecosystem based on an Intel x86 compatible processor and running the Microsoft Windows operating system. It is sometimes used derisively to describe the monopolistic actions undertaken by both companies when attempting to dominate the market.

          (see the source link [wikipedia.org] for reference article links)

    • by vidnet ( 580068 ) on Tuesday January 04, 2011 @07:08AM (#34752366) Homepage

      They did release drivers for the latest kernel, and they work great. However, you do need bleeding-edge versions of the entire graphics stack to use them. This is a problem when combined with the non-free ATI and Nvidia drivers, which always lag behind, with no way for maintainers to get them up to speed.

      In other words, a distro can include "old" kernels/drivers/X-servers with non-free ATI/Nvidia support XOR newer and less tested ones with the latest Intel support.

      Either way, it's a reduced user experience and that's what TFA is on about.

      • by Anonymous Coward on Tuesday January 04, 2011 @09:18AM (#34752900)

        Wait, am I getting this right? Intel wrote an _open source_ driver working with the latest and greatest in Linux GPU-support-land, it was available on release day, and people are WHINING about this?! Back in the day you'd get a binary driver needing legacy components months after the hardware was released, if you got an official driver at all.

        I guess Linux on the desktop has come a long way when people start bitching about new hardware not being supported out of the box in Ubuntu. Not long ago you'd follow guide after guide trying to get all the hardware in your 5-year-old computer to work...

        • by stoborrobots ( 577882 ) on Tuesday January 04, 2011 @10:03AM (#34753274)

          Wait, am I getting this right? Intel wrote an _open source_ driver working with the latest and greatest in Linux GPU-support-land, it was available on release day, and people are WHINING about this?!

          You're getting it 90% right - the whining hasn't started yet, but these guys are explaining why it's about to start...

          • It's not a single driver - Intel contributed patches to all the relevant projects to support the new features, but so far they've only been merged into the repositories; they're expected to appear in the upcoming releases over the next few weeks, and some features are not yet complete, or not even planned to be supported...
          • The components involved which would need recompiling to make this work include the kernel, the lowest-level support libraries like libdrm and Mesa, and X - the holy trinity of "if this fucks up I can't use my computer"...
          • Since the patches haven't been backported, they likely won't make it into packages which can be installed on currently-available releases, or even the next releases of the big distros, where the freeze window starts some 6 months ahead of release...
          • From the article:

            Over the years the expectations of Linux users have gone from simply wanting Linux drivers for their hardware to wanting open-source Linux drivers (read: no binary blobs) to now wanting open-source drivers in the distribution of their choice at the time the hardware first ships...

          So, yeah - there's code out there which should be usable to make the open-source drivers go, but most of the reviewers on the net won't be able to make the bits go, some of the bits won't be ready for a while, and in general, anyone who tries to make them go in order to review this will have something or other to complain about...

          But you're spot on with this statement, which echoes some of the sentiments from the article:

          I guess Linux on the desktop has come a long way when people start bitching about new hardware not being supported out of the box in Ubuntu.

          • The components involved which would need recompiling to make this work include the kernel, the lowest-level support libraries like libdrm and Mesa, and X - the holy trinity of "if this fucks up I can't use my computer"...

            You are aware that you can compile stuff on a different machine than the one you're going to install it on. You are aware that you can dual-boot a machine. You are aware that you can ssh in or use serial ports to access a machine without using X or graphics drivers. You are aware that you can make back
    • As Intel increasingly kowtows to the DRM desires of the Content Lords, I'm not sure how much longer their open-sourcing will continue.
    • Intel has already messed up the driver situation for one of its product families - the GMA500, aka Poulsbo. They've released 3 driver families for it - PSB, IEGD and EMGD - each doing a progressively worse job. PSB is still working in Xorg 1.9 thanks to some users who have been patching and hacking the source parts, without any Intel support. It has some unfinished parts that would take an Xorg developer a couple of days to fix, but Intel has refused to even listen to the users who have been patching it, never mind help

    • by lkcl ( 517947 )

      the problem is that intel actually haven't been able to write a decent 3D driver, period. it's simply not an area where they have sufficient programming expertise.

      so ironically, it falls to the free software community to come up with innovative solutions such as LLVMpipe and Gallium3D to provide the answers.

      Gallium3D is a low-level "pipe" API which can be implemented on top of any GPU engine. OpenGL Reference Implementations such as MesaGL can then be put through the c-to-LLVM compiler and you automatical

  • There aren't really any compelling ($$$) reasons to support sweet graphics drivers in Linux. Talk to Adobe, Autodesk, et al... give users a reason to demand driver support.
    • I want to play warsow and xonotic, and watch 1080p movies without having to pay 7500 rupees for an OS I'll never use.
    • As I understand it, one of the missing features of this card is hardware encoding. Since hosting video on Linux servers is pretty popular, you'd have expected them to at least support that out of the box.

      Server farms are another area where Linux and graphics are often used together.

      I'm sure there are a lot more reasons, but your mistake is assuming that the cards are going to be used purely in desktops.

      • by jonwil ( 467024 )

        Very surprised that Intel (who are normally VERY good with their in-house-developed GPUs on Linux) are not supporting a feature as cool and as nifty as hardware video encoding on Linux.

  • It's not easy (Score:5, Insightful)

    by ToasterMonkey ( 467067 ) on Tuesday January 04, 2011 @06:31AM (#34752284) Homepage

    Unlike the proprietary drivers from ATI/AMD and NVIDIA or any of the drivers on the Microsoft Windows side, it's not easy to provide updated drivers post-release in distributions like Ubuntu due to the inter-dependence on these different components and all of these components being critical to the Linux desktop's well being for all users.

    That's a funny way of saying Linux doesn't have a stable ABI because its architects are crazy.

    I honestly hope in five years you can all go back and laugh at articles like these, but more than likely you'll have slightly bigger version numbers and different silly names.

    hurl [phoronix.com]
    blech [phoronix.com]

    • Re:It's not easy (Score:5, Interesting)

      by hitmark ( 640295 ) on Tuesday January 04, 2011 @06:41AM (#34752312) Journal

      How many gaping issues are left unresolved because microsoft is maintaining a stable ABI?

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        That would be zero. There may well be gaping issues with MS software, but maintaining a stable API is not even PART of the problem. API stability (and even ABI stability) is just standard, well-established practice. And yes, Linux suffers a LOT for not having it.

        • Re: (Score:3, Interesting)

          Comment removed based on user account deletion
        • Re:It's not easy (Score:5, Insightful)

          by Microlith ( 54737 ) on Tuesday January 04, 2011 @11:44AM (#34754292)

          ABI stability helps no one but those that develop and release closed source binaries. Holding the rest of the kernel back for the sake of a handful of modules made by people who won't play nice is stupid in the extreme.

          • No, it also helps those who develop and release open source kernel modules but don't have the time to maintain them for the foreseeable future just to make sure they keep compiling on every new minor kernel release.

            And the "handful of modules" are, for the majority of Linux users, the only way to get stable 3D acceleration for their systems, which, I dare say, is a very important "handful". I'm not going to say who is stupid in the extreme here, but do keep your response in mind next time we're going to dis

      • How many gaping issues are left unresolved because microsoft is maintaining a stable ABI?

        Bringing up gaping unresolved issues in a Linux debate is a lot like invading Asia. Please, tell us about these gaping issues caused by the modest amount of discipline required to maintain a stable ABI. Looking at the problems Linux has, how are the cons of an ABI not worth it? Can you even give one tangible pro for the status quo that an end-user would appreciate? "More frequent kernel releases" is so not an answer...

    • Re:It's not easy (Score:5, Insightful)

      by should_be_linear ( 779431 ) on Tuesday January 04, 2011 @07:03AM (#34752352)
      A stable ABI requires more resources for development (people, time, testing). Simple as that. Linux HQ decided that these resources are better spent elsewhere, like fixing security issues and overall improvement. Bleeding-edge graphics cards _are_ a problem for several months after introduction, but that sounds like an acceptable trade-off to me. Resources are always limited, and the trade-off can only be moved elsewhere, not eliminated.
      • Re:It's not easy (Score:5, Insightful)

        by pseudonomous ( 1389971 ) on Tuesday January 04, 2011 @07:16AM (#34752384)
        I'll admit I don't know too much about this, but FreeBSD has managed to provide a stable ABI - I think back to the 4.x releases - via compatibility layers (which are not installed by default but are available). I've heard that Solaris's ABI is stable back to the first official release. Linux devs could provide a stable ABI... but they don't care. They build their kernels from git anyway.
          • Re:It's not easy (Score:4, Insightful)

            by Timmmm ( 636430 ) on Tuesday January 04, 2011 @07:41AM (#34752470)

            That is just silly.

            Paraphrasing, they say that they can't have a stable ABI because of small differences in how C compilers compile things (alignment of structures, etc.). Has that problem *really* not been solved? Microsoft manage to do it!

            They then say they can't have a stable API (DPI?) because it would mean they have to maintain old code (true, but surely not too much work), and people might accidentally use the old version. Seriously? I guess they haven't heard of documentation.

            And finally they say the solution is to get your driver into the main kernel tree. Not only would this be a hell of a lot more work than just shoving it on a website (subscribe to mailing lists, learn to use git properly, submit code for review, revise code, etc. etc.) but I seriously doubt they will just accept anything. What if I make a device that only I have? Will they accept a driver that is only useful for me?
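            To make the structure-layout point concrete, here is a minimal sketch (a hypothetical struct, not a real kernel one; offsets assume 64-bit longs) of how one added field silently breaks a module compiled against the old headers:

            /* Old layout the out-of-tree module was compiled against. */
            struct device_stats {
                unsigned long rx_packets;   /* offset 0 */
                unsigned long tx_packets;   /* offset 8 */
            };

            /* Layout after one "harmless" addition in a new kernel. */
            struct device_stats_new {
                unsigned long rx_packets;   /* offset 0 */
                unsigned long rx_errors;    /* new field, offset 8 */
                unsigned long tx_packets;   /* now offset 16 - but the old
                                               module still reads offset 8
                                               and silently gets rx_errors */
            };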

            • Not only would this be a hell of a lot more work than just shoving it on a website (subscribe to mailing lists, learn to use git properly, submit code for review, revise code, etc. etc.)

              Yes, you actually have to work to make sure the driver integrates properly instead of doing a code dump. How shocking!

              What if I make a device that only I have? Will they accept a driver that is only useful for me?

              Regardless of whether they accept it or not, that's not really a reason to choose a stable API versus inclusion in the tree.
              And if you're the only user, do you really need to keep an updated kernel? Can't you simply write it for the current kernel and not upgrade?

            • by iserlohn ( 49556 )

              The elephant is in the room but nobody acknowledges it. Intel can backport their OSS drivers to (relatively recent, but still) older kernels, but they chose not to. That is the root of the problem, not the lack of a stable ABI. The lack of a stable ABI keeps Linux source-based rather than binary-based. Linux is all about having driver source available!

              • The lack of a stable ABI keeps Linux source-based rather than binary-based. Linux is all about having driver source available!

                This is the main problem I have with Linux. Choices made for philosophical reasons rather than practical considerations. You have this in other places too (e.g. business concerns trump practicality) but Linux takes the cake.

                • Re: (Score:3, Insightful)

                  by Anonymous Coward

                  It's an entirely practical proposition: having the source code means we can fix bugs and improve the code without having to wait for the 'ABI driver owner' to do something.

                  It's a tradeoff between long term independence and short term availability.

                  And if you look at how Linux stormed the supercomputer and smartphone space you'll have to admit its architectural flexibility works in a splendid way. Yes, the other side of the coin is that the established, Windows dominated, secrecy-obsessed PC hardware space

                  • Exactly, this file from the kernel docs explains the practical reasoning of not having a stable ABI in detail - http://lxr.linux.no/#linux+v2.6.36/Documentation/stable_api_nonsense.txt [linux.no]

                    Stability, and being able to integrate the drivers better within the kernel, are a key part of having the source for the drivers. If some poor person gets stuck using a piece of once-'in' hardware that the manufacturer has long since abandoned supporting, the issues can still be fixed. Or will have been fixed befor
                • Comment removed based on user account deletion
              • by grumbel ( 592662 )

                That is the root of the problem, not the lack of a stable ABI.

                If the driver is OSS and yet still fails with older kernels, you can't really blame Intel - they have done their work in actually providing the source. It's the shitty underlying OSS infrastructure that fails to actually do something with that code.

                If having the code is so superior to a binary-only driver, it should work better than a closed one. Yet I have never heard a Windows person complain about any of these issues; their stuff "just works" (most of the time, anyway).

                • by jimrthy ( 893116 )
                  Then you haven't been paying much attention. I have friends who have to replace some piece of hardware--be it a printer, a network card, or an under-powered video card--with every new Windows release.
              • The lack of a stable ABI keeps Linux source-based rather than binary-based.

                So what? All it takes is one compile and you've got a binary. If you've got the source and the binary, then it can be added to the repositories. The "elephant" may look tall and wide, but from this angle, it's thinner than a sheet of toilet paper.
            • by JonJ ( 907502 )

              What if I make a device that only I have? Will they accept a driver that is only useful for me?

              Yes, there are drivers in the kernel which have only one user.

            • by JonJ ( 907502 )

              Paraphrasing, they say that they can't have a stable ABI because of small differences in how C compilers compile things (alignment of structures, etc.).

              First of all, they're not saying they can't, they're saying they don't want to. There's a key difference here. Also, Linux runs on far more architectures than Windows does, so that might also be taken into consideration.

            • by jimrthy ( 893116 )

              Paraphrasing, they say that they can't have a stable ABI because of small differences in how C compilers compile things (alignment of structures, etc.). Has that problem *really* not been solved?

              No, and it never really will, at this level. You can get by with bytecode running on a VM for the kinds of software you write with Java. But, sooner or later, that VM has to interface with actual hardware. Which is where this problem comes up.

              Microsoft manage to do it!

              Umm...no, they don't. It's been a while since I opened up Visual Studio. But, last time I looked, they had options for building for both ARM and Intel. Several years back, it seems like they had quite a few more options (32-bit and 16-bit Intel, ARM, and SPARC, maybe?)

            • Re:It's not easy (Score:5, Insightful)

              by MostAwesomeDude ( 980382 ) on Tuesday January 04, 2011 @01:15PM (#34755434) Homepage

              Ever read Raymond Chen's book? It's pretty terrific. There's an entire section dedicated to showing how Win32's stable API and ABI in kernel and user space has been a horrific nightmare and is a large waste of developer manpower.

              Also, the *only* people affected by the lack of stable ABI are people that ship out-of-tree kernel drivers, all of whom have no excuse for not immediately pursuing upstream merges of one sort or another.

              Also, some exported kernel APIs, like the syscall list and ioctl list, are sacred and are never altered. To take a topical example, all KMS graphics drivers respect and give sensible return values for legacy userspace X components calling pre-KMS settings.

              And finally, to answer your strawman: *yes*, you can get a driver accepted if it has no users besides yourself. IBM's notorious for this; one of their upstream drivers has something like 2 users in the entire world. The drivers that tend to be controversial are things like reiser4 (layering issues, maintainer conflicts), aufs (layering issues, code quality issues), OSS4 (licensing issues, maintainers want to keep it out-of-tree!), etc., where there are clear and obvious reasons why the upstream merge hasn't happened.

              Hell, for DRM, this was a problem too, since the DRM/libdrm tree was buildable for BSD as well. We made the decision a bit ago to merge into the Linux tree and make the out-of-tree repo for libdrm only, and all of a sudden, life gets *easier* because we no longer have to switch back and forth between Linux and BSD compat.
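              For what it's worth, here is a small sketch of that stable userspace-facing surface in action, using libdrm's wrapper around DRM_IOCTL_VERSION (assumes the libdrm headers are installed and a /dev/dri/card0 node exists; link with -ldrm):

              #include <stdio.h>
              #include <fcntl.h>
              #include <unistd.h>
              #include <xf86drm.h>

              int main(void)
              {
                  int fd = open("/dev/dri/card0", O_RDWR);
                  if (fd < 0) {
                      perror("open /dev/dri/card0");
                      return 1;
                  }
                  /* drmGetVersion() wraps DRM_IOCTL_VERSION, one of the
                   * ioctls that stays stable across kernel releases. */
                  drmVersionPtr v = drmGetVersion(fd);
                  if (v) {
                      printf("driver: %s %d.%d.%d\n", v->name,
                             v->version_major, v->version_minor,
                             v->version_patchlevel);
                      drmFreeVersion(v);
                  }
                  close(fd);
                  return 0;
              }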

        • That's exactly my original point: FreeBSD and Solaris (and Windows, OS X) all have a stable ABI, but I am still using Linux with its unstable ABI. Obviously the Linux devs did some things more useful to me than maintaining a stable ABI.
    • by qbast ( 1265706 )
      Oh great, this again. For example, OpenSolaris has a stable ABI and yet it has much worse hardware support than Linux.
    • Re:It's not easy (Score:5, Informative)

      by Mad Merlin ( 837387 ) on Tuesday January 04, 2011 @07:27AM (#34752420) Homepage

      It's obvious that you don't understand the issue; the kernel ABI is completely irrelevant here. Not only is the overwhelming majority of the software that requires updating here in userspace (Mesa, the Xorg libraries and the Intel DDX driver), but you can already switch out the kernel version in use freely, without a stable ABI!

      No, what the article is trying to say is that because not every driver completely reinvents the wheel like they do on Windows, there needs to be more coordination between the driver and the other libraries that it depends upon, instead of just being able to dump the latest development code as a new release and call it a day.

    • by Kjella ( 173770 )

      That's a funny was of saying Linux doesn't have a stable ABI because its architects are crazy.

      This is about a bit more than the kernel ABI flamewar. The binary blobs don't just interact with the kernel but are pretty much all over the graphics stack. If you change the X server they stop working, while the open source drivers depend on a more recent X server. If you want to change this, you need to create a Linux equivalent of WDDM, the new graphics driver model that came with Vista and caused tons of grief even though both nVidia and ATI had tons of people working on it. It would take a huge effort

    • I don't think it's an issue of ABI/API stability at all. The true culprit, in my opinion, is the massive interdependence of independently developed and rapidly changing libraries.

      The one thing that has consistently stopped me from being more involved with open source is that every time I attempt to play with some open source utility, I spend hours trying to find the correct dependencies - often discovering, after numerous failed compiles, some that were never documented as necessary - and finally give up.

      The thing that w

      • by jimrthy ( 893116 )

        My guess is that you didn't take the time to figure out the package managers. The last time I heard this complaint, it turned out that he hadn't noticed the "Quick Search" box in Synaptic.

        Admittedly, that's a weakness in Synaptic's UI, because it isn't obvious. Still, I find that finding and installing the software that I use (which is an unusual sample) is much easier and likely to succeed on Linux than Windows.

    • by vadim_t ( 324782 )

      It's not about the ABI.

      What's happening is that the way graphics work in Linux is being completely overhauled. This isn't a "now the do_stuff() function takes an extra argument" kind of change, it's a complete redesign. A stable ABI would prevent the former, but redesigns like this one would still happen. You can't use the Windows 3.1 drivers on Win7 for instance.

      This is more of an issue of bad timing, with hardware arriving before the software is ready. A bit like XP having a lot of trouble to install on s

  • by ChunderDownunder ( 709234 ) on Tuesday January 04, 2011 @06:43AM (#34752316)
    This thread [phoronix.com] discusses the availability of FOSS drivers for those snazzy ARM Cortex chips found commonly in touch-screen devices.
    Even if you can 'root' your Android phone, getting a 3D accelerated x.org experience is unlikely. Even Nokia's forthcoming Meego device will be a binary blob affair, I suspect.
    • by lkcl ( 517947 )

      It's not just ARM Cortex CPUs: it's the Telechips ARM11 (which is causing headaches for the sheer number of GPL violations in Chinese products), but there are also MIPS SoC processors coming out as well - *all* of them use either proprietary NVIDIA, proprietary Vivante, proprietary MALI or proprietary PowerVR.

      now, i've spoken to Richard Stallman about this and it may surprise you that he pointed out that these proprietary libraries are actually classed as "System Libraries" under the GPL. so, the proprietary

  • TANSTAAFL (Score:4, Insightful)

    by bbbaldie ( 935205 ) on Tuesday January 04, 2011 @06:43AM (#34752318) Homepage
    I'm just glad drivers get written at all. In the last ten years, Linux has gone from daunting to a snap to install and maintain. If you can contribute, and you aren't doing so, you have no reason to bitch about the tardiness of drivers. Heck, you don't have a right to bitch anyway about something that's free.
    • Comment removed based on user account deletion
    • Yeah, I'm thankful for those drivers as well - but they still have some massive problems. I recently tried to use a dual monitor setup on an older laptop with an ATI chip.

      Absolute catastrophe. Either one or the other monitor wouldn't be set to the correct resolution, or would show only half of the picture - and when I finally managed to get it right through some obscure config magic, the settings would be reset upon rebooting the laptop.
      And 50% of the time, trying to change the config resulted in a hardlock
    • If you're not willing to listen to anyone who's not a kernel dev, then you're missing out on a lot of useful feedback. As well as being an arrogant twat.

  • Wow! Intel is really keeping up with what's hot and new: intel open source [intel.com]. Now it sounds like Intel really wants to please OSS users, what with stunts like the one they pulled on Arrandale. TFA:

    Intel decided not to send out any Sandy Bridge CPU samples to us, so we are unable to deliver test results, but all I got were frustrated journalists asking me how to get the Sandy Bridge graphics working under Linux.

    Arrandale is also a complete mess [freedesktop.org] on some platforms, like Fedora for example [fedoraproject.org]. Currently running Gentoo with xorg 1.9 and kernel 2.6.37.7, and feeling lucky that most things are now working on an Arrandale platform.

  • Intel are lucky to have managed to write a driver that works for kernel "X" and window manager "Y". How will a developer be able to make a driver that works in all the infinite combinations of software that constitute a Linux distro? How do you make it work in a graphics system that is actually a complete mess?
    And how many times has the work been completely lost because some idiot had the "brilliant idea" to change something in a vital library, completely breaking compatibility? It is a difficult jo
    • Comment removed based on user account deletion
    • by jedidiah ( 1196 )

      Nvidia can. Why can't Intel?

      Although since it is all source, Canonical could fix this if they really wanted to. So could Suse, if it were its old self.

        • Nvidia uses a different approach; it bypasses most of the mess. But it still has problems from time to time (recently I had to update the video driver because the kernel changed and broke compatibility).

        Even with the source, the problem is you need to have developers to fix it. Not everyone has the time, and especially the knowledge, necessary to make this work.
      • by Desler ( 1608317 )

        Because Nvidia says "fuck it" to most of the X.org crap and just bypasses it.

  • It would be nice to have some stable HPI (hardware programming interface), too. Everything that the old hardware can do, the old driver for the old hardware should be able to do with the new hardware. So with no change at all to the driver, everything that worked with the old hardware shall work with the new hardware, faster where applicable.

    New features are then the issue. If the hardware interface is designed in a flexible way, then the low level drivers should not need to care at all about what is going on with the device. They shou

  • All the kernel needs to do is run a text-mode console, so why does it contain graphics drivers? Why aren't they just shipped with X Windows as they were in the past? (This is a genuine question - I've not kept up with how Linux handles graphics for years now.)

    • by gringer ( 252588 )

      All the kernel needs to do is run text console mode so why does it contain graphics drivers?

      Frame buffer, a "text" console that can display graphics, kernel mode setting, X-independent video drivers, flicker-free startup, suspend, power management (among other things, I presume).
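        As a small illustration of the framebuffer piece - a sketch assuming an fbdev driver is loaded and exposes /dev/fb0 - userspace can query the console's video mode with a plain ioctl, no X involved:

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fb.h>

        int main(void)
        {
            int fd = open("/dev/fb0", O_RDONLY);
            if (fd < 0) {
                perror("open /dev/fb0");
                return 1;
            }
            struct fb_var_screeninfo vinfo;
            /* FBIOGET_VSCREENINFO reports the current mode of the
             * kernel framebuffer console. */
            if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) == 0)
                printf("%ux%u, %u bpp\n",
                       vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);
            close(fd);
            return 0;
        }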

      • by Viol8 ( 599362 )

        So is a framebuffer a requirement now for running X, or can it still run standalone with its own drivers? I can't help thinking that putting graphics drivers in the kernel is a bad idea, given how flaky they can be.
