Glamor, X11's OpenGL-Based 2D Acceleration Driver, Is Becoming Useful

The Glamor driver for X11 has sought for years to replace all of the GPU-specific 2D rendering acceleration code with portable, high-performance OpenGL. Unfortunately, that goal was hampered by the project starting in the awkward period when folks thought fixed-function hardware was still worth supporting. But, according to Keith Packard, the last few months have seen the code modernized and finally maturing into a credible replacement for many of the hardware-specific 2D acceleration backends. From his weblog: "Fast forward to the last six months. Eric has spent a bunch of time cleaning up Glamor internals, and in fact he's had it merged into the core X server for version 1.16 which will be coming up this July. Within the Glamor code base, he's been cleaning some internal structures up and making life more tolerable for Glamor developers. ... A big part of the cleanup was transitioning all of the extension function calls to use his other new project, libepoxy, which provides a sane, consistent and performant API to OpenGL extensions for Linux, Mac OS and Windows." Keith Packard then dove in and replaced the Glamor acceleration for core text and points (points in X11 are particularly difficult to accelerate quickly) in just a few days. Text performance is now significantly faster than the software version (not that anyone is using core text any more, but "they're often one of the hardest things to do efficiently with a heavy weight GPU interface, and OpenGL can be amazingly heavy weight if you let it."). For points, he moved vertex transformation to the GPU, getting it up to the same speed as the software implementation. Looking forward, he wrote: "Having managed to accelerate 17 of the 392 operations in x11perf, it's pretty clear that I could spend a bunch of time just stepping through each of the remaining ones and working on them. Before doing that, we want to try and work out some general principles about how to handle core X fill styles. 
Moving all of the stipple and tile computation to the GPU will help reduce the amount of code necessary to fill rectangles and spans, along with improving performance, assuming the above exercise generalizes to other primitives." Code is available in anholt's and keithp's xserver branches.
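The point work described in the summary amounts to doing a simple per-vertex coordinate transform on the GPU instead of the CPU. As a rough illustration only (not Glamor's actual code; the function name and numbers here are hypothetical), this is the mapping a vertex shader performs from X11 pixel coordinates to OpenGL normalized device coordinates, sketched in Python:

```python
def pixel_to_ndc(x, y, width, height):
    """Map an X11 pixel coordinate (origin top-left, y pointing down)
    to an OpenGL normalized device coordinate (origin center, y up).

    This is the kind of per-vertex transform that Glamor-style code
    can push into a vertex shader instead of computing on the CPU."""
    ndc_x = 2.0 * x / width - 1.0
    ndc_y = 1.0 - 2.0 * y / height
    return ndc_x, ndc_y

# The top-left pixel maps to (-1, 1); the center maps to (0, 0).
print(pixel_to_ndc(0, 0, 640, 480))      # -> (-1.0, 1.0)
print(pixel_to_ndc(320, 240, 640, 480))  # -> (0.0, 0.0)
```

Trivial per point, but when the CPU is doing it for tens of thousands of points per batch, moving it into the shader is exactly the kind of win the blog post describes.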
  • Reminder (Score:2, Funny)

    by Anonymous Coward

    X11 is the Iran-Contra of graphical user interfaces.

    • I dunno, I always get a big belly laugh whenever I log into something and see that horrible 1980s B&W X11 desktop, complete with ugly 'X' cursor.

      • Me too. :)
        • I think the last time I saw it was when I connected to a Raspberry Pi last year with some free X11 client or other.

          I cracked up laughing when it appeared.

      • Re:Reminder (Score:5, Funny)

        by red_dragon ( 1761 ) on Friday March 07, 2014 @12:37PM (#46428491) Homepage
        So do I. Before I start the X server I put my Quaker hat on and then say "HA! HA! I AM USING X11!" and everyone around looks at me saying "WTF is this weirdo talking about?" so I have to put the hat away and shut up. Nobody appreciates good software and hats these days.
      • by tlhIngan ( 30335 )

        I dunno, I always get a big belly laugh whenever I log into something and see that horrible 1980s B&W X11 desktop, complete with ugly 'X' cursor.

        Don't forget the stipple pattern background!

        • Hey man don't knock the stipple. For a long time I used a custom .xconfig to make tools with Athena style scrollbars like classic xterm have a nice gray on white style that would blend together into a solid color on a high res CRT. Add in some scroll wheel support and colored highlighting and that was one bad little terminal emulator.

      • I'll use X until it is pried from my cold dead hands. Or until Wayland has network transparency at least on par with X. Whichever happens first.

        Recently I was re-installing my desktop (Gentoo) from scratch and decided to have a go at not installing any big heavy desktop environment. I already use Ratpoison when connected over VNC and have memorized most of the key combos, so I thought I would try Ratpoison on the local desktop. Completely banishing KDE, I switched from KDM to XDM.

        I still have a stock

        • by fisted ( 2295862 )
          You must be a Linux user. "It's old.. it works well.. -- Let's change it!"

          IOW, don't fkn touch it.
      • I dunno, I always get a big belly laugh whenever I log into something and see that horrible 1980s B&W X11 desktop, complete with ugly 'X' cursor.

        Try flying on a Virgin America plane with the LCD screen inflight entertainment systems in the seat-backs. They'll often mass-reboot the things before or after a flight, briefly revealing that retro-fantastic, monochrome stippled background with 'X' cursor...

      • by fisted ( 2295862 )
        All your laughter does is give away how little understanding there is on your side.
        There's a strict separation between mechanism and policy.
        X11 provides the mechanism; it creates a display to draw on and provides basic operations to do the actual drawing of primitives.
        Window managers and the like provide the policy -- how the GUI behaves, how windows are decorated, which pixmap is used for the cursor, etc.

        This is a very useful and proven separation of concepts; there's nothing "horrible" to it. In fact,
  • "Before doing that, we want to try and work out some general principles about how to handle core X fill styles."

    Use textures! And stencil bitplanes!

  • Vs compositing? (Score:3, Interesting)

    by Anonymous Coward on Friday March 07, 2014 @12:12PM (#46428291)

    I wonder, how does it relate to the compositing engine? Ain't surfaces already drawn using GPU-accelerated functions when using GL-based compositing?

    • by fnj ( 64210 )

      I wonder, how does it relate to the compositing engine? Ain't surfaces already drawn using GPU-accelerated functions when using GL-based compositing?

      I would like to know this too. And not just with megabuck megawatt GPUs, but with something reasonable like Intel HD2000, 3000, and 4000.

    • Re:Vs compositing? (Score:4, Interesting)

      by Gaygirlie ( 1657131 ) <[moc.liamtoh] [ta] [eilrigyag]> on Friday March 07, 2014 @03:16PM (#46429797) Homepage

      I wonder, how does it relate to the compositing engine? Ain't surfaces already drawn using GPU-accelerated functions when using GL-based compositing?

      The windows themselves should be drawn via the GPU on a modern compositing engine, sure, but the window contents have nothing to do with compositing managers; an app, depending on which UI toolkit it uses, may draw its buttons, text entries, scrollbars and whatnot via software, via somewhat outdated hardware-accelerated 2D paths, or via the 3D engine. Many drivers these days don't even bother trying to support the whole range of 2D acceleration methods, and some don't support them at all, so toolkits that still use those methods basically fall back to software rendering.

  • Yay! (Score:4, Insightful)

    by buchner.johannes ( 1139593 ) on Friday March 07, 2014 @12:25PM (#46428395) Homepage Journal

    Cheers to the heroes working on improving X. It's probably the most important piece of software on GNU/Linux. Real hackers working on the most complex issues.

    • by fnj ( 64210 )

      Well, it's utterly irrelevant for server use, so it can't be THAT important, but it is definitely right up there for desktop Linux, which COULD potentially have a whole hell of a lot more penetration than it does now.

  • From reading the blog site, it appears that there is some benefit to creating a 2D acceleration wrapper around OpenGL? It's obvious I'm not getting it.

    Why add a layer of complexity to OpenGL? Why not just explain OpenGL for 2D operations more clearly?
    • Re:Why? (Score:5, Insightful)

      by squiggleslash ( 241428 ) on Friday March 07, 2014 @01:38PM (#46428967) Homepage Journal

      The concept, as I understand it, is that at the moment, in order to write a device driver for X11, you have to separately maintain code that implements 2D and 3D graphics primitives. Given that 2D operations are themselves a subset of 3D operations (even if the API doesn't reflect that), it makes sense to simply have device drivers implement the 3D parts. Then common wrapper code can implement the 2D, relieving the driver developer of the burden of building and testing an entirely new block of code.

      It should make things more reliable, as the same 2D code will be used for all drivers and should end up being pretty solid. In the meantime, driver developers have more time to polish their 3D driver implementations. Win-win. Maybe a slight performance hit, but probably not a significant one.
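      The "2D is a subset of 3D" point can be made concrete: a filled 2D rectangle is just two triangles at a fixed depth, which is exactly the kind of translation a generic wrapper can do before handing work to the 3D driver. A hypothetical Python sketch (the function name and layout are illustrative, not from any real driver):

```python
def rect_to_triangles(x, y, w, h, z=0.0):
    """Express a 2D axis-aligned rectangle as two 3D triangles.

    Returns six (x, y, z) vertices -- one common way for generic 2D
    wrapper code to reuse a driver's triangle-rasterization path
    instead of requiring a separate 2D fill implementation."""
    x0, y0, x1, y1 = x, y, x + w, y + h
    return [
        (x0, y0, z), (x1, y0, z), (x1, y1, z),  # upper-right triangle
        (x0, y0, z), (x1, y1, z), (x0, y1, z),  # lower-left triangle
    ]

# A 100x50 rectangle at (10, 20) becomes six vertices at depth 0.
tris = rect_to_triangles(10, 20, 100, 50)
print(len(tris))  # -> 6
```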

    • by Anonymous Coward

      To understand you first need to believe that GPUs are becoming ubiquitous through integration; even portable devices provide GPUs today. This is a new phenomenon; only a few years ago you could purchase a laptop with integrated graphics that did not provide a 3D GPU. It's rare to encounter a new desktop/laptop that doesn't have a built-in GPU today.

      So, with the assumption that OpenGL-implementing GPUs will eventually be a given, there are two benefits. First, every 2D application is a subset of 3D; a 2D

      • What you really need to believe is that GPUs with decent X support are becoming ubiquitous. Devices where the GPU becomes a paperweight the moment you try to run X, because the maker doesn't want to supply any info, might as well not have GPUs.

  • by pavon ( 30274 ) on Friday March 07, 2014 @12:51PM (#46428601)

    The cairo-ickle blog has maintained very interesting benchmarks of the different cairo rendering backends. The short story is that every hardware-accelerated backend except for Sandy Bridge SNA has performed worse than the software implementation. And in some cases the hardware acceleration is significantly less stable. I'm curious to see if this finally pushes Glamor over the hump and makes it faster than the software path.

    • by Chemisor ( 97276 ) on Friday March 07, 2014 @02:04PM (#46429203)

      I wonder if the differences are due to extracting the result from the GPU. There is no doubt whatsoever that doing 2D with OpenGL on the GPU will be faster than a software rasterizer - what kills the performance in these tests is having to copy the result back to the CPU so it can be displayed in an X window. Once X windows are fully composited and output graphics never leave the GPU memory, the hardware acceleration will no doubt prove to be the fastest.
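      The cost of that copy-back is easy to ballpark. A back-of-the-envelope Python sketch; the bandwidth figure is purely an assumption for illustration, since real glReadPixels throughput varies widely with driver, bus and pixel format:

```python
def readback_ms(width, height, bytes_per_pixel=4, gbytes_per_sec=1.0):
    """Rough wall time to copy a framebuffer from GPU to CPU memory.

    The default 1 GB/s effective readback rate is a made-up
    illustrative number, not a measured figure for any real GPU."""
    nbytes = width * height * bytes_per_pixel
    return nbytes / (gbytes_per_sec * 1e9) * 1e3  # milliseconds

# At the assumed 1 GB/s, reading back one 1920x1080 RGBA frame costs
# roughly 8.3 ms -- half of a 60 Hz frame budget, before any drawing.
print(round(readback_ms(1920, 1080), 1))  # -> 8.3
```

Which is why keeping the pixels on the GPU end-to-end, as the parent suggests, matters more than how fast any individual primitive renders.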

    • by Anonymous Coward

      Making it faster than the software path isn't the hump. User responsiveness is. If Cairo through my GPU is 20% slower, but the CPU now gets 30% of its time back, my application could run up to 30% faster. But that's only good if I need that 30% of CPU time for the rest of my application and I'm not blocking on draw calls. (For games that's certainly true; for most applications that use Cairo I couldn't say.)
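      That trade-off can be put in rough numbers. A toy Python model with made-up timings (the 10/12 ms figures are purely illustrative):

```python
def frame_time_ms(draw_ms, other_ms, draw_blocks_cpu):
    """Wall time for one frame of a hypothetical application.

    If drawing blocks the CPU (software rendering), drawing and the
    rest of the app's work serialize; if drawing runs on the GPU and
    the app doesn't block on it, the two can overlap."""
    if draw_blocks_cpu:
        return draw_ms + other_ms
    return max(draw_ms, other_ms)

# Software path: 10 ms of drawing + 10 ms of app work = 20 ms/frame.
# GPU path: drawing is 20% slower (12 ms) but overlaps the app work.
sw = frame_time_ms(10.0, 10.0, draw_blocks_cpu=True)    # -> 20.0
gpu = frame_time_ms(12.0, 10.0, draw_blocks_cpu=False)  # -> 12.0
print(sw, gpu)
```

So even a "slower" GPU path can win on responsiveness -- but only, as the parent says, if the application actually has other work to overlap and isn't blocking on draw calls.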

    • And the great thing about this project is that if a graphics hardware vendor needs to choose what to spend 1000 hours working on, it can now be the 3D driver instead of a 2D driver, because someone else created a layer that uses that 3D driver for all 2D operations.

      Previously the money/time was spent on 2D to serve the largest target audience for the hardware, and unfortunately that 2D effort could not be utilized for the Linux 3D use case, so no wonder improvements to it were hard to come by.

      So expect those

  • Accleration

    Could have been prevented...
