Glyphy: High Quality Glyph Rendering Using OpenGL ES2 Shaders
Recently presented at Linuxconf.au was Glyphy, a text renderer implemented using OpenGL ES2 shaders. Current OpenGL applications rasterize text on the CPU using FreeType or a similar library, uploading glyphs to the GPU as textures. This inherently limits quality and flexibility (e.g. rotation, perspective transforms, etc. cause the font hinting to become incorrect, and you cannot perform subpixel antialiasing). Glyphy, on the other hand, uploads typeface vectors to the GPU and renders text in real time, performing perspective-correct antialiasing. The presentation can be watched or downloaded on Vimeo. The slide sources are in Python, and I generated a PDF of the slides (warning: 15 MB due to embedded images). Source code is at Google Code (including a demo application), under the Apache License.
Open hardware video card (Score:1)
Re: (Score:1)
And you wrote this on your open-hardware CPU and mobo, looking at your open-hardware monitor?
Re: (Score:2)
Re: (Score:3)
OK, that PDF... (Score:4, Funny)
Re: (Score:2)
And with some of the worst use of fonts....
Video of SDF rendering (Score:5, Informative)
Re: (Score:3)
That video [youtube.com] shows the first type of signed-distance implementation, while GLyphy uses a new type of storage.
He quickly shows a texture for that older version at the start of his video. In that version, the distance from the center of the pixel to the nearest edge is stored in each pixel (one number per pixel). This has been done for years, by the way, and is not new.
In GLyphy the actual definition of the nearest circular arc is stored in each pixel (either 3 or 5 numbers per pixel depending on whether a circle or
Re: (Score:1)
What do you mean? The presentation is pretty much *all* screenshots!
Re: (Score:1)
I take it as a compliment ;).
WebGL (Score:2)
Re: (Score:2)
> can be made to work with WebGL?
WebGL is OpenGL ES :-)
And what fallback? (Score:2)
There are so many possibilities in web applications for really nice font management.
Which are all wasted if the end user's browser lacks WebGL support entirely, as is the case with all web browsers for iPhone or iPad, or if the end user's browser detects insufficiency in the underlying OpenGL implementation, as my browser does (Firefox 26.0 on Xubuntu 12.04 LTS on Atom N450). All I get is "Hmm. While your browser seems to support WebGL, it is disabled or unavailable. If possible, please ensure that you are running the latest drivers for your video card", even after doing sudo sh -c "apt-g
Re: (Score:2)
This.
I tried running some new fangled webgl demo on an old chevy pickup truck I have in the backyard. It didn't work there either, it just kind of sputtered and emitted white smoke.
Re: (Score:2)
Re: (Score:1)
As mentioned in the Q&A section, at some point I had GLyphy compiled through Emscripten to JavaScript+WebGL. It was working rather well. I should try that again.
Re: (Score:2)
This is great! (Score:2)
damn subpixel antialiasing (Score:1)
Whoever came up with blurry-color subpixel font rendering should be shot. I understand the theory, but it's an optical illusion that is incompatible with my eyeballs. Worse, subpixel rendering is the default in all kinds of places. My eyes hurt just thinking about it. Please oh please do not let this (otherwise very cool idea) make the problem even worse.
Re: (Score:1)
Your monitor has unusual pixel ordering or you're insane. It's not an optical illusion at all, it's exactly what its name implies: it uses subpixels to smooth out the edges of a font. That's less of an optical illusion than the color being displayed on your monitor is.
Re: (Score:2)
Re: (Score:1)
Fair enough. That's a monitor image-processing artifact though, not an issue with subpixel rendering itself. With sharpness at 50% on an HDMI/DVI/DisplayPort connection, any normal monitor should not apply any sharpening or blurring.
Re:damn subpixel antialiasing (Score:5, Insightful)
You know what really annoys me? How almost all 1080p displays these days seem to, by default, take the HDMI video input, slightly upscale it (to overscan), and sharpen the hell out of it.
What the fuck?? It's a digital signal; they're taking the literally pixel-perfect input and ruining it by smearing individual input pixels over several output pixels and putting sharpening artefacts everywhere. Why? When is that ever a good idea?? Why would you ever need to overscan HDMI?
Early-adopter CRT HDTVs (Score:2)
Why would you ever need to overscan HDMI?
Because television video is authored with early-adopter CRT HDTVs (and thus with overscan) in mind.
Re: (Score:2)
But it's not like there's a black border; why would you not want to view the edges?
Re: (Score:2)
But it's not like there's a black border; why would you not want to view the edges?
I guess it must throw the composition out of balance, especially for things like news tickers at the bottom and sports scores at the top. And older film and video might still have things like a boom mic just out of the action safe area (but protruding slightly into the overscan).
monitor or TV? (Score:2)
If you're on a monitor then it should not be messing with the signal at all.
If you're on a TV, then it's expecting consumer-grade TV signals and will futz with it. On some better TVs there is a way to tell it that it's a computer signal and then it will skip the mangling and just show it as-is.
Re: (Score:2, Insightful)
Maybe your OS is just using the wrong subpixel rendering for your display type.
Re: (Score:3, Funny)
Might I suggest using an oxygen-free, mono-directional, ultra gold-plated HDMI cable to connect your monitor? It should fix the anti-aliasing flaw that you can somehow detect with your superhuman eyeballs.
Re: (Score:2)
Don't forget to look for double shielded cables to protect you from viruses too [zdnet.com].
Surprising (Score:2)
Current OpenGL applications rasterize text on the CPU using FreeType or a similar library, uploading glyphs to the GPU as textures. This inherently limits quality and flexibility (e.g. rotation, perspective transforms, etc. cause the font hinting to become incorrect, and you cannot perform subpixel antialiasing).
Wow, I never realized rendering text was such a royal pain in the ass.
Re:Surprising (Score:5, Interesting)
Although rendering text correctly is maddeningly complex, the issues described here aren't actually among the reasons why.
The things described here are more a result of the good, established libraries having been written only for the CPU. Not because the GPU is more complex, but simply because nobody had taken the time to do it.
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
Most of the time you just display text with no transforms, and when you do want it transformed you don't need it pixel-perfect (for example, during a rotation transition effect, the user will hardly notice pixel imperfections while the text is rotating).
Re: (Score:1)
Font size *is* a transform. A scale, to be exact. One of the benefits of GLyphy is that you don't need to rasterize the font at every scale. Imagine pinch-zoom, for example.
Thanks for the warning about the 15M PDF file ... (Score:1)
downloading that sucker could have taken down the entire Internet. ;) :D
Re: (Score:2)
Wouldn't it be damn easy to generate intermediate bitmaps from the vectors and then use those when dragging and moving windows? You don't need to redo every single calculation every time you re-render something. You only need to do a few more to make sure nothing has changed. Even a modest amount of performance optimization would make 100% vector rendering as fast as the crap we have now.
Subpixel and anaglyphs; distance fields (Score:5, Informative)
Subpixel text rendering is just antialiasing with the red channel offset by a third of a pixel in one direction and the blue channel by a third of a pixel in the other direction. I'd compare it to anaglyph rendering, which offsets the camera position in the red channel by one interpupillary distance from the green and blue channels so that 3D glasses can reconstruct it. If the rest of your system performs correct antialiasing of edges (FSAA, MSAA, etc.), the video card will do the subpixel AA for you.
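To make the per-channel offset concrete, here is a minimal GLSL ES fragment-shader sketch of the idea (not GLyphy's code; the coverage atlas, the uniform names, and the black-on-white, RGB-panel assumptions are mine):
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D u_coverage;   // hypothetical glyph coverage atlas, coverage stored in the red channel
uniform vec2 u_texel;           // size of one atlas texel in texture coordinates

void main() {
    float shift = u_texel.x / 3.0;                                      // a third of a pixel, assuming 1:1 texel-to-pixel mapping
    float r = texture2D(u_coverage, v_texcoord - vec2(shift, 0.0)).r;   // red subpixel sits a third of a pixel to the left
    float g = texture2D(u_coverage, v_texcoord).r;
    float b = texture2D(u_coverage, v_texcoord + vec2(shift, 0.0)).r;   // blue subpixel sits a third of a pixel to the right
    gl_FragColor = vec4(1.0 - r, 1.0 - g, 1.0 - b, 1.0);                // black text on a white background
}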
The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts. This makes a slight change to the rasterization and blitting steps to store more edge information in each texel. First the alpha channel is blurred along the edges of glyphs so that it becomes a ramp instead of a sharp transition, and the glyphs are uploaded as a texture. The alpha forms a height map where 128 is the center, less than 128 is outside the glyph by that distance, and more than 128 is inside the glyph by that distance. This makes alpha into a plane at any point on the contour. The video card's linear interpolation unit interpolates along the blurred alpha, which is ideal because interpolation of a plane is exact. Finally, a pixel shader uses the smoothstep function to saturate the alpha such that the transition becomes one pixel wide. This allows high-quality scaling of bitmap fonts even with textures stored at 32 px or smaller. It also allows programmatically making bold or light faces by setting the transition band closer to 96 or 160 or whatever. But it comes at the expense of slightly distorting the corners of stems, so it's probably best for sans-serif fonts.
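The core of that pixel shader is tiny; a minimal GLSL ES sketch of the Valve-style technique (the uniform names and the smoothing-width uniform are my own placeholders, not from any particular engine):
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D u_sdf;      // distance map in the alpha channel; 0.5 (128/255) lies on the contour
uniform float u_smoothing;    // roughly half a pixel's worth of distance at the current scale

void main() {
    float dist = texture2D(u_sdf, v_texcoord).a;
    // Remap a narrow band around the contour into a 0..1 opacity ramp about one pixel wide.
    float alpha = smoothstep(0.5 - u_smoothing, 0.5 + u_smoothing, dist);
    // Centering the band near 96/255 or 160/255 instead gives a bolder or lighter face.
    gl_FragColor = vec4(0.0, 0.0, 0.0, alpha);   // black text, ordinary alpha blending
}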
The PDF also mentions approximating the outline as piecewise arcs of circles, parabolas, etc., and drawing each arc with an arc texture. This would be especially handy for TrueType glyph outlines, which are made of "quadratic Bezier splines", a fancy term for parabolic arcs.
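For anyone who hasn't met the term: a quadratic Bezier segment is just the parametric curve below, and it always traces a parabolic arc (this GLSL helper is purely illustrative, not part of any of the code discussed here):
// Evaluate a quadratic Bezier segment with control points p0, p1, p2 at t in [0,1];
// every such segment of a TrueType outline is a parabolic arc.
vec2 quadBezier(vec2 p0, vec2 p1, vec2 p2, float t) {
    float s = 1.0 - t;
    return s * s * p0 + 2.0 * s * t * p1 + t * t * p2;
}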
Re: (Score:3)
> The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts.
Correct; Valve published this technique in 2007.
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf [valvesoftware.com]
Re: (Score:2)
Yes, this is an improvement on signed distance fields. If I understand it right, it is not the distance to the nearest point, but a definition of the nearest circular arc that is stored in each texture pixel. This seems to preserve corners and thin stems. Though it already sounds complex, he in fact has to store more than one arc per pixel (as the closest one varies depending on the position), and it looks like he has to define actual arcs, not full circles, which I would imagine complicates the shader greatly.
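As a rough illustration of why arcs are harder than whole circles: the signed distance to a full circle is a one-liner, while an arc segment also needs endpoint tests (this snippet is my own illustration, not GLyphy's shader):
// Signed distance from point p to a circle with center c and radius r (negative inside).
// Restricting this to just an arc of that circle additionally requires checking whether p
// falls within the arc's angular range and, if not, measuring to the nearer endpoint --
// which is the part that complicates the real shader.
float circleDistance(vec2 p, vec2 c, float r) {
    return length(p - c) - r;
}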
Re: (Score:2)
Subpixel text rendering also needs a filtering step so that the color does not shift (imagine if the shape was such that more of it was in the red area than in the blue area). What happens is that the red is made somewhat less than it should be, and the difference is added to the four nearest green and blue subpixels, so the overall light is still white, just concentrated at the red subpixel.
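A sketch of that energy-redistribution step, written as a simple 5-tap filter along the subpixel axis (the weights 1/9, 2/9, 3/9, 2/9, 1/9 are a common illustrative choice, not necessarily what any particular library ships):
// Filter per-subpixel coverage so total intensity is preserved while each subpixel's
// contribution is spread over its neighbours, avoiding colour fringes.
// subTexel is the width of one subpixel step in texture coordinates.
float lcdFiltered(sampler2D coverage, vec2 uv, vec2 subTexel) {
    return (1.0 * texture2D(coverage, uv - 2.0 * subTexel).r
          + 2.0 * texture2D(coverage, uv - subTexel).r
          + 3.0 * texture2D(coverage, uv).r
          + 2.0 * texture2D(coverage, uv + subTexel).r
          + 1.0 * texture2D(coverage, uv + 2.0 * subTexel).r) / 9.0;
}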
However your description is basically correct. The video said he needed to add a "direction" to make subpixel filtering work, which I don't u
Re: (Score:2)
Utter dribble (Score:4, Interesting)
There is NOTHING that the GPU can do that software rendering on the CPU cannot do. There MAY be a speed penalty, of course (and were you using the CPU to render a 3D game, rather than your GPU, the speed penalty would be in the order of thousands to tens of thousands of times).
The reverse is NOT true. There are rendering methods available on the CPU that the GPU cannot implement, because of hardware limitations. Take COVERAGE-BASED anti-aliasing, for instance.
On the CPU, it is trivial to write a triangle-fill algorithm that PERFECTLY anti-aliases the edges by calculating the exact percentage of a pixel the triangle edges cover. Amazingly, this option is NOT implemented in GPU hardware. GPU hardware uses the crude approach of pixel super-sampling- which can be thought of as an increase in the resolution of edge pixels. So, for instance, all 4x GPU anti-aliasing methods effectively increase the resolution of edge pixels by 2 (so a pixel becomes in some sense 2x2 pixels).
Edge coverage calculations, while trivial to implement in hardware, were never considered 'useful' in common GPU solutions.
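For contrast with the exact coverage computation described above, the cheap trick that shader-based AA usually falls back on is a distance-to-edge approximation, something like this (illustrative only; it is exact only for axis-aligned edges):
// Approximate the fraction of a pixel covered by a half-plane, given the signed
// distance d (in pixels) from the pixel centre to the edge (positive outside the shape).
// The exact answer would clip the pixel square against the edge, as described above for the CPU.
float edgeCoverage(float d) {
    return clamp(0.5 - d, 0.0, 1.0);
}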
A 'GLYPH' tends to have a highly curved contour, which sub-divides into a nasty mess of GPU unfriendly irregular tiny triangles. GPUs are most efficient when they render a stream of similar triangles of at least 16 visible screen pixels. Glyphs can be nasty, highly 'concave' entities with MANY failure conditions for fill algorithms. They are exactly the kind of geometric entities modern GPU hardware hates the most.
It gets WORSE, much much worse. Modern GPU hardware drivers from AMD and Nvidia purposely cripple common 2D acceleration functions in DirectX and OpenGL, so they can sell so-called 'professional' hardware (with the exact same chips) to CAD users and the like. The situation got so bad with AMD solutions that tech sites could not believe how slowly modern AMD GPU cards rendered the new accelerated Windows 2D interface, forcing AMD to release new drivers that backed off on the choking just a little.
Admittedly, so long as accelerated glyph rendering uses the 3D pipeline and ordinary shader solutions, the crippling will not happen, but the crippling WILL be back if non-gaming forms of anti-aliasing are activated in hardware on Nvidia or AMD cards. Nvidia, for instance, boasts that drawing anti-aliased lines with its $2000-plus professional card is hundreds of times faster than doing the same with the gaming version of the card that uses the EXACT same hardware, and actually has faster memory and GPU clocks.
It gets WORSE. Rendering text on a modern CPU, and displaying the text using the 2D pipeline of a modern GPU is very power efficient. However, activate the 3D gaming mode of your GPU, by accelerating glyphs through the 3D pipeline, and power usage goes through the roof.
Re: (Score:1)
The speaker (who is admittedly hard to understand because he apparently has marbles in his mouth) explains that he can't do things that have trivial implementations in OpenGL 3.x because he's intentionally limiting himself to OpenGL ES2.
tl;dr: this guy is doing incremental research on font-rendering with signed distance fields(*) while intentionally holding one hand behind his back.
* = See UnknownSoldier's link to the 2007 paper.
Re: (Score:1)
Clean the froth from your mouth, then I can tell you the very simple answer. Subpixel rendering is not implemented on current 3D hardware because anything can be rendered at any depth in any order, so if you render a partial color to a pixel, what is it supposed to blend with? 2D scenes are easily sorted, while sorting a 3D scene can be almost impossible. However, it MAY be possible to sort a 3D scene in many situations, and it would be nice to put the hardware in a mode where it can assume it can just blend a part
Re: (Score:1)
On the CPU, it is trivial to write a triangle-fill algorithm that PERFECTLY anti-aliases the edges by calculating the exact percentage of a pixel the triangle edges cover. Amazingly, this option is NOT implemented in GPU hardware. GPU hardware uses the crude approach of pixel super-sampling- which can be thought of as an increase in the resolution of edge pixels. So, for instance, all 4x GPU anti-aliasing methods effectively increase the resolution of edge pixels by 2 (so a pixel becomes in some sense 2x2 pixels).
Edge coverage calculations, while trivial to implement in hardware, were never considered 'useful' in common GPU solutions.
You can do that with a fragment shader. It's most likely not going to be efficient. But certainly possible.
But that's one of the biggest benefits of the GPU. It's not just the huge FLOPS number.
It's to get people to think of implementing solutions in an efficient way.
GPUs are best at what's known as 'embarrassingly parallel' problems, which means that it's so easy to implement a problem in parallel that it's already embarrassing. These days these problems are also known as 'perfectly parallel'.
So the GPU forces pe
Previous WebGL work-around ... (Score:3)
I ran into font edges with fringes and halos 2 years back when trying to render an 8-bit luminance font with an arbitrary user-specified color. (Blue was the worst offender for fringes.)
I wasn't aware of Valve's clever SDF solution at the time, so I used a different, 3-fold solution:
* Generate the texture font atlas offline using custom code + FreeType2
Each font is "natively" exported at various sizes from 8 px up to 72 px.
* Use pre-multiplied alpha blending for rendering instead of the standard alpha blending
gl.enable( gl.BLEND );
gl.blendFunc( gl.ONE, gl.ONE_MINUS_SRC_ALPHA );
* Fix the fragment shader to use pre-multiplied alpha (see the sketch below):
We also pass in a vertex alpha to allow each rendered font to "fade out to nothing", hence the non-obvious "fade = vvTexCoord.z".
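A sketch of what such a pre-multiplied-alpha fragment shader might look like (the placeholder names u_fontAtlas and u_textColor, and the LUMINANCE-atlas assumption, are mine; only vvTexCoord comes from the description above):
precision mediump float;
varying vec3 vvTexCoord;           // xy = atlas coordinates, z = per-vertex fade
uniform sampler2D u_fontAtlas;     // 8-bit luminance coverage atlas
uniform vec4 u_textColor;          // user-specified text colour, straight (non-premultiplied) alpha

void main() {
    float coverage = texture2D(u_fontAtlas, vvTexCoord.xy).r;  // LUMINANCE replicates into rgb
    float fade     = vvTexCoord.z;
    float a        = coverage * u_textColor.a * fade;
    gl_FragColor   = vec4(u_textColor.rgb * a, a);             // pre-multiplied output for ONE / ONE_MINUS_SRC_ALPHA
}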
Since the designers aren't doing arbitrary rotations or scaling, our solution looks great.
My boss sent me a link to this article just after I saw it, so it looks like I'm off to research how easy or hard it would be to fit SDF into our WebGL font rendering system. :-)
Re: (Score:1)
Thanks for the constructive feedback.
Re: (Score:1)
Re: (Score:1)