GLyphy: High Quality Glyph Rendering Using OpenGL ES2 Shaders

Recently presented at linux.conf.au was GLyphy, a text renderer implemented using OpenGL ES2 shaders. Current OpenGL applications rasterize text on the CPU using FreeType or a similar library and upload the glyphs to the GPU as textures. This inherently limits quality and flexibility: rotation, perspective transforms, and the like make font hinting incorrect, and subpixel antialiasing becomes impossible. GLyphy, on the other hand, uploads the typeface's vectors to the GPU and renders text in real time, performing perspective-correct antialiasing. The presentation can be watched or downloaded on Vimeo. The slide sources are in Python, and I generated a PDF of the slides (warning: 15M due to embedded images). Source code is at Google Code (including a demo application), under the Apache License.
  • Now that is something I would like to have. And moving to the rendering described here probably removes that option.
    • by Anonymous Coward

      and you wrote this on your open hardware CPU and Mobo looking at your open hardware monitor?

    • by Jmc23 ( 2353706 )
      Why? Do you just randomly jump to conclusions so you can feel bad about your life?
    • by tepples ( 727027 )
      So long as a video card supports pixel shaders that modify the alpha channel, it can support distance field text rendering.
  • by Janek Kozicki ( 722688 ) on Wednesday January 15, 2014 @02:29PM (#45968451) Journal
    This is the weirdest presentation that I've ever seen on Slashdot.
    • And with some of the worst use of fonts....

    • by behdad ( 320171 )

      I take it as a compliment ;).

  • I would love to know if this can be made to work with WebGL. There are so many possibilities in web applications for really nice font management.
    • > can be made to work with WebGL?

      WebGL is OpenGL ES 2.0 :-)

    • There are so many possibilities in web applications for really nice font management.

      Which are all wasted if the end user's browser lacks WebGL support entirely, as is the case with all web browsers for iPhone or iPad, or if the end user's browser detects insufficiency in the underlying OpenGL implementation, as my browser does (Firefox 26.0 on Xubuntu 12.04 LTS on Atom N450). All I get is "Hmm. While your browser seems to support WebGL, it is disabled or unavailable. If possible, please ensure that you are running the latest drivers for your video card", even after doing sudo sh -c "apt-g

      • This.

        I tried running some new fangled webgl demo on an old chevy pickup truck I have in the backyard. It didn't work there either, it just kind of sputtered and emitted white smoke.

      • Well, I often end up doing corporate in-house sites where the capabilities of every machine that will access the site are known. In these kinds of cases the fallback case becomes much less important. I agree with your point, however, for public sites.
    • by behdad ( 320171 )

      As mentioned in the Q&A section, at some point I had GLyphy compiled through Emscripten to JavaScript+WebGL. It worked rather well. I should try that again.

  • I've often thought the great potential of Microsoft's DirectWrite was wasted on Direct3D. Having an Open replacement provides so many more opportunities.
  • by Anonymous Coward

    Whoever came up with blurry-color subpixel font rendering should be shot. I understand the theory, but it's an optical illusion that is incompatible with my eyeballs. Worse, subpixel rendering is the default in all kinds of places. My eyes hurt just thinking about it. Please oh please do not let this (otherwise very cool idea) make the problem even worse.

    • by Anonymous Coward

      Your monitor has unusual pixel ordering or you're insane. It's not an optical illusion at all, it's exactly what its name implies: it uses subpixels to smooth out the edges of a font. That's less of an optical illusion than the color being displayed on your monitor is.

        • Or his monitor has very sharp pixels. I had to disable subpixel rendering recently because my new monitor's pixels were too sharp. It turned out I could adjust the "sharpness" of the screen, and if I turned it down subpixel rendering worked again; at average sharpness, subpixel rendering just caused colored fringes on letters, which is, after all, exactly how it is rendered.
        • by Anonymous Coward

          Fair enough. That's a monitor image-processing artifact, though, not an issue with subpixel rendering itself. With sharpness at 50% over an HDMI/DVI/DisplayPort connection, any normal monitor will apply no sharpening or blurring at all.

        • by AC-x ( 735297 ) on Wednesday January 15, 2014 @04:45PM (#45969801)

          You know what really annoys me? How almost all 1080p displays these days seem to, by default, take the HDMI video input, slightly upscale it (to overscan) and sharpen the hell out of it.

          What the fuck?? It's a digital signal; they're taking the literally pixel-perfect input and ruining it by smearing individual input pixels over several output pixels and putting sharpening artefacts everywhere. Why? When is that ever a good idea?? Why would you ever need to overscan HDMI?

          • Why would you ever need to overscan HDMI?

            Because television video is authored with early-adopter CRT HDTVs (and thus with overscan) in mind.

            • by AC-x ( 735297 )

              But it's not like there's a black border; why would you not want to view the edges?

              • by tepples ( 727027 )

                But it's not like there's a black border; why would you not want to view the edges?

                I guess it must throw the composition out of balance, especially for things like news tickers at the bottom and sports scores at the top. And older film and video might still have things like a boom mic just out of the action safe area (but protruding slightly into the overscan).

          • If you're on a monitor then it should not be messing with the signal at all.

            If you're on a TV, then it's expecting consumer-grade TV signals and will futz with it. On some better TVs there is a way to tell it that it's a computer signal and then it will skip the mangling and just show it as-is.

    • Re: (Score:2, Insightful)

      by Goaway ( 82658 )

      Maybe your OS is just using the wrong subpixel rendering for your display type.

    • Re: (Score:3, Funny)

      by Russ1642 ( 1087959 )

      Might I suggest using an oxygen-free, mono-directional, ultra-gold-plated HDMI cable to connect your monitor. It should fix the anti-aliasing flaw that you can somehow detect with your superhuman eyeballs.

  • Current OpenGL applications rasterize text on the CPU using FreeType or a similar library and upload the glyphs to the GPU as textures. This inherently limits quality and flexibility: rotation, perspective transforms, and the like make font hinting incorrect, and subpixel antialiasing becomes impossible.

    Wow, I never realized rendering text was such a royal pain in the ass.

    • Re:Surprising (Score:5, Interesting)

      by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Wednesday January 15, 2014 @02:57PM (#45968757) Homepage

      Although rendering text correctly is maddeningly complex, the reasons described here aren't actually among them.

      The things described here are more a result of the good, established libraries being written only for the CPU: not because the GPU is harder, but simply because nobody had taken the time to do it.

      • If you rendered the glyphs on the GPU, you would still cache the rendered glyphs, because rendering that many small details on screen can be quite demanding on the GPU. Rendering lots of textures, on the other hand, is what GPUs do all the time, and it is very well optimized.
        • Likely very true for simple static text. A number of games do more complex things, though, such as 3D HUDs that shift with movement. This should be able to render them in real time without sacrificing quality, which is pretty cool.
    • Most of the time you just display text with no transforms, and when you do want it transformed you don't need it pixel-perfect (for example, during a rotation transition effect the user will hardly notice pixel imperfections while the text is rotating).

      • by behdad ( 320171 )

        Font size *is* a transform; a scale, to be exact. One of the benefits of GLyphy is that you don't need to rasterize the font at every scale. Imagine pinch-zoom, for example.

  • Downloading that sucker could have taken down the entire Internet. ;) :D

  • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Wednesday January 15, 2014 @03:13PM (#45968885) Homepage Journal

    Subpixel text rendering is just antialiasing with the red channel offset by a third of a pixel in one direction and the blue channel by a third of a pixel in the other direction. I'd compare it to anaglyph rendering, which offsets the camera position in the red channel by one interpupillary distance from the green and blue channels so that 3D glasses can reconstruct it. If the rest of your system performs correct antialiasing of edges (FSAA, MSAA, etc.), the video card will do the subpixel AA for you.
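
    A minimal sketch of that channel-offset idea as a GLSL ES 2.0 fragment shader, assuming a grayscale coverage atlas mapped 1:1 to screen pixels on an RGB-striped panel (the uniform and varying names here are made up):

        precision mediump float;

        uniform sampler2D u_coverage;   // grayscale glyph coverage atlas (assumed)
        uniform float     u_texelWidth; // 1.0 / atlas width in texels (assumed)
        varying vec2      v_texCoord;

        void main() {
            // One coverage sample per subpixel: red a third of a pixel to
            // one side, blue a third of a pixel to the other.
            float third = u_texelWidth / 3.0;
            float r = texture2D( u_coverage, v_texCoord - vec2( third, 0.0 ) ).a;
            float g = texture2D( u_coverage, v_texCoord ).a;
            float b = texture2D( u_coverage, v_texCoord + vec2( third, 0.0 ) ).a;
            // Per-channel coverage needs per-channel blending; a plain
            // glBlendFunc() alpha is not enough, so implementations blend
            // against a known background or use dual-source blending.
            gl_FragColor = vec4( r, g, b, g );
        }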

    The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts. This makes a slight change to the rasterization and blitting steps to store more edge information in each texel. First the alpha channel is blurred along the edges of glyphs so that it becomes a ramp instead of a sharp transition, and the glyphs are uploaded as a texture. The alpha forms a height map where 128 is the contour, less than 128 is outside the glyph by that distance, and more than 128 is inside the glyph by that distance. This makes alpha into a plane at any point on the contour. The video card's linear interpolation unit interpolates along the blurred alpha, which is ideal because interpolation of a plane is exact. Finally, a pixel shader uses the smoothstep function to saturate the alpha so that the transition becomes one pixel wide. This allows high-quality scaling of bitmap fonts even with textures stored at 32px or smaller. It also allows programmatically making bold or light faces by moving the transition band closer to 96 or 160 or whatever. But it comes at the expense of slightly distorting the corners of stems, so it's probably best for sans-serif fonts.
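
    The saturation step is tiny in shader terms. A sketch, assuming an atlas whose alpha channel holds the distance ramp described above (the names and the hard-coded transition width are made up; with the OES_standard_derivatives extension, fwidth() could supply the width instead):

        precision mediump float;

        uniform sampler2D u_atlas; // SDF atlas: alpha holds the distance ramp (assumed)
        uniform vec4      u_color; // text color
        varying vec2      v_texCoord;

        void main() {
            float dist = texture2D( u_atlas, v_texCoord ).a;
            // Half-width of the transition band; unextended ES2 lacks
            // fwidth(), so this would need tuning per rendered size.
            float w = 0.06;
            // Saturate the ramp so the edge is about one pixel wide;
            // recentering on ~96/255 or ~160/255 fakes bold/light faces.
            float alpha = smoothstep( 0.5 - w, 0.5 + w, dist );
            gl_FragColor = vec4( u_color.rgb, u_color.a * alpha );
        }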

    The PDF also mentions approximating the outline as piecewise arcs of a circle, parabola, etc. and drawing each arc with an arc texture. This would be especially handy for TrueType glyph outlines, which are made of "quadratic Bezier splines", a fancy term for parabolic arcs.
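
    For reference, a quadratic Bezier segment, the primitive TrueType outlines are built from, really is just a parabola; a sketch (names are illustrative):

        // B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2, for t in [0, 1].
        // Each coordinate is quadratic in t, so the arc is parabolic.
        vec2 quadBezier( vec2 p0, vec2 p1, vec2 p2, float t ) {
            float u = 1.0 - t;
            return u * u * p0 + 2.0 * u * t * p1 + t * t * p2;
        }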

    • > The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts.

      Correct; Valve published this technique in 2007.

      http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf [valvesoftware.com]

    • by spitzak ( 4019 )

      Yes, this is an improvement on signed distance fields. If I understand it right, it is not the distance to the nearest point but a definition of the nearest circular arc that is stored in each texture pixel. This seems to preserve corners and thin stems. It does sound complex, though: he in fact has to store more than one arc per pixel (as the closest one varies depending on the position), and it looks like it has to define actual arcs, not circles, which I would imagine complicates the shader greatly.

    • by spitzak ( 4019 )

      Subpixel text rendering also needs a filtering step so that the color does not shift (imagine if the shape were such that more of it was in the red area than in the blue area). What happens is that the red is made somewhat less than it should be, and the difference is added to the four nearest green and blue subpixels, so the overall light is white, just concentrated at the red subpixel. A sketch follows at the end of this comment.

      However your description is basically correct. The video said he needed to add a "direction" to make subpixel filtering work, which I don't u
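
      In any case, a sketch of that energy-redistribution filter in GLSL ES 2.0 terms. The 1-2-3-2-1 kernel is one classic choice (FreeType's LCD filter uses a similar 5-tap kernel); the names, and the 1:1 atlas-to-pixel assumption, are made up:

          precision mediump float;

          uniform sampler2D u_coverage;   // grayscale coverage atlas (assumed)
          uniform float     u_texelWidth; // 1.0 / atlas width, assumed 1:1 with pixels
          varying vec2      v_texCoord;

          // Coverage sampled n subpixels (thirds of a pixel) to the right.
          float cov( float n ) {
              return texture2D( u_coverage,
                  v_texCoord + vec2( n * u_texelWidth / 3.0, 0.0 ) ).a;
          }

          void main() {
              // Seven samples, one per subpixel, centered on this pixel.
              float s0 = cov( -3.0 ), s1 = cov( -2.0 ), s2 = cov( -1.0 );
              float s3 = cov( 0.0 ), s4 = cov( 1.0 ), s5 = cov( 2.0 ), s6 = cov( 3.0 );
              // 1-2-3-2-1 kernel centered on each channel's own subpixel:
              // energy one channel gives up reappears in its neighbours,
              // so the total emitted light stays white on average.
              float r = ( s0 + 2.0*s1 + 3.0*s2 + 2.0*s3 + s4 ) / 9.0;
              float g = ( s1 + 2.0*s2 + 3.0*s3 + 2.0*s4 + s5 ) / 9.0;
              float b = ( s2 + 2.0*s3 + 3.0*s4 + 2.0*s5 + s6 ) / 9.0;
              gl_FragColor = vec4( r, g, b, g ); // still needs per-channel blending
          }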

  • Utter dribble (Score:4, Interesting)

    by Anonymous Coward on Wednesday January 15, 2014 @04:49PM (#45969813)

    There is NOTHING that the GPU can do that software rendering on the CPU cannot do. There MAY be a speed penalty, of course (and were you using the CPU rather than your GPU to render a 3D game, it would be on the order of thousands to tens of thousands of times slower).

    The reverse is NOT true. There are rendering methods available on the CPU that the GPU cannot implement, because of hardware limitations. Take COVERAGE-BASED anti-aliasing, for instance.

    On the CPU, it is trivial to write a triangle-fill algorithm that PERFECTLY anti-aliases the edges by calculating the exact percentage of a pixel the triangle edges cover. Amazingly, this option is NOT implemented in GPU hardware. GPU hardware uses the crude approach of pixel super-sampling, which can be thought of as an increase in the resolution of edge pixels. So, for instance, all 4x GPU anti-aliasing methods effectively increase the resolution of edge pixels by 2 in each direction (so a pixel becomes, in some sense, 2x2 pixels).

    Edge coverage calculations, while trivial to implement in hardware, were never considered 'useful' in common GPU solutions.

    A 'GLYPH' tends to have a highly curved contour, which subdivides into a nasty mess of GPU-unfriendly, irregular tiny triangles. GPUs are most efficient when they render a stream of similar triangles of at least 16 visible screen pixels each. Glyphs can be nasty, highly 'concave' entities with MANY failure conditions for fill algorithms. They are exactly the kind of geometric entities modern GPU hardware hates the most.

    It gets WORSE, much much worse. Modern GPU drivers from AMD and Nvidia purposely cripple common 2D acceleration functions in DirectX and OpenGL so they can sell so-called 'professional' hardware (with the exact same chips) to CAD users and the like. The situation got so bad with AMD solutions that tech sites could not believe how slowly modern AMD cards rendered the new accelerated Windows 2D interface, forcing AMD to release new drivers that backed off on the choking just a little.

    Admittedly, so long as accelerated glyph rendering uses the 3D pipeline and ordinary shader solutions, the crippling will not happen; but the crippling WILL be back if non-gaming forms of anti-aliasing are activated in hardware on Nvidia or AMD cards. Nvidia, for instance, boasts that drawing anti-aliased lines with its $2000-plus professional card is hundreds of times faster than doing the same with the gaming version of the card that uses the EXACT same hardware, and actually has faster memory and GPU clocks.

    It gets WORSE. Rendering text on a modern CPU and displaying it using the 2D pipeline of a modern GPU is very power efficient. However, activate the 3D gaming mode of your GPU by accelerating glyphs through the 3D pipeline, and power usage goes through the roof.

    • by Anonymous Coward

      The speaker (who is admittedly hard to understand because he apparently has marbles in his mouth) explains that he can't do things that have trivial implementations in OpenGL 3.x because he's intentionally limiting himself to OpenGL ES2.

      tl;dr: this guy is doing incremental research on font-rendering with signed distance fields(*) while intentionally holding one hand behind his back.

      * = See UnknownSoldier's link to the 2007 paper.

    • by Anonymous Coward

      Clean the froth from your mouth, then I can tell you the very simple answer. Subpixel rendering is not implemented on current 3D hardware because anything can be rendered at any depth in any order, so if you render a partial color to a pixel, what is it supposed to blend with? 2D scenes are easily sorted, while sorting a 3D scene can be almost impossible. However, it MAY be possible to sort a 3D scene in many situations and it would be nice to put the hardware in a mode where it can assume it can just blend a part

    • On the CPU, it is trivial to write a triangle-fill algorithm that PERFECTLY anti-aliases the edges by calculating the exact percentage of a pixel the triangle edges cover. Amazingly, this option is NOT implemented in GPU hardware. GPU hardware uses the crude approach of pixel super-sampling, which can be thought of as an increase in the resolution of edge pixels. So, for instance, all 4x GPU anti-aliasing methods effectively increase the resolution of edge pixels by 2 in each direction (so a pixel becomes, in some sense, 2x2 pixels).

      Edge coverage calculations, while trivial to implement in hardware, were never considered 'useful' in common GPU solutions.

      You can do that with a fragment shader. It's most likely not going to be efficient, but it's certainly possible (a sketch follows at the end of this comment).

      But that's one of the biggest benefits of the GPU. It's not just the huge FLOPS number; it's that it gets people to think about implementing solutions in an efficient way.

      GPUs are best at what's known as 'embarrassingly parallel' problems, which means that it's so easy to implement the problem in parallel that it's embarrassing. These days these problems are also known as 'perfectly parallel'.
      So the GPU forces pe
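
      A minimal sketch of that fragment-shader coverage idea, assuming the vertex stage feeds each fragment a signed distance to the nearest edge in pixel units (an assumption; the linear ramp only approximates the exact covered area, and truly exact analytic coverage would need the edge equations in the shader):

          precision mediump float;

          varying float v_edgeDist; // signed pixel distance to the edge (assumed)
          uniform vec4  u_color;

          void main() {
              // d >= +0.5: pixel fully inside; d <= -0.5: fully outside.
              float coverage = clamp( v_edgeDist + 0.5, 0.0, 1.0 );
              gl_FragColor = vec4( u_color.rgb, u_color.a * coverage );
          }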

  • by UnknownSoldier ( 67820 ) on Wednesday January 15, 2014 @04:58PM (#45969959)

    I ran into font edges with fringes and halos two years back when trying to render an 8-bit luminance font with an arbitrary, user-specified color. (Blue was the worst offender for fringes.)

    I wasn't aware of Valve's clever SDF solution at the time, so I used a different, three-fold solution:

    * Generate the texture font atlas offline using custom code + FreeType2.
      Each font is "natively" exported at various sizes from 8 px up to 72 px.

    * Use pre-multiplied alpha blending for rendering instead of the standard alpha blending:

        // Source RGB already carries its alpha (multiplied in the shader),
        // so blend as: dst = src + dst * (1 - src.a)
        gl.enable( gl.BLEND );
        gl.blendFunc( gl.ONE, gl.ONE_MINUS_SRC_ALPHA );

    * Fix the fragment shader to use pre-multiplied alpha:

    uniform lowp vec4 uvColor;       // text color supplied by the app
    uniform sampler2D utDiffuse;     // font atlas; glyph coverage in alpha
    varying mediump vec3 vvTexCoord; // xy = atlas coords, z = per-vertex fade

    void main() {
            mediump vec2 st = vec2( vvTexCoord.x, vvTexCoord.y );
            lowp vec4 texel = texture2D( utDiffuse, st );
            lowp float font = texel.a;          // glyph coverage
            lowp float fade = vvTexCoord.z;     // "fade out to nothing" alpha
            lowp float premultiply = uvColor.a; // premultiply RGB by alpha
            gl_FragColor = uvColor * font * fade * premultiply;
    }

    We also pass in a vertex alpha to allow each rendered font to "fade out to nothing", hence the non-obvious "fade = vvTexCoord.z".

    Since the designers aren't doing arbitrary rotations or scaling, our solution looks great.

    My boss sent me a link to this article just after I saw it, so it looks like I'm off to research how easy or hard it would be to fit SDF into our WebGL font rendering system. :-)
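
    For what it's worth, if the atlas were baked as a distance field instead of plain coverage, the shader above would mostly need its coverage line changed; a sketch (the hard-coded 0.45/0.55 band is an assumption and would need tuning):

        lowp float dist = texel.a;  // alpha now holds a distance ramp, 0.5 = contour
        lowp float font = smoothstep( 0.45, 0.55, dist );  // was: font = texel.a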
