Graphics Software

Multi-Sampling Anti-Aliasing Explained 125

Alan writes: "FiringSquad.com just posted a new article explaining how next-generation multi-sampling anti-aliasing works. They claim that it will bring us one step closer to anti-aliasing without a performance hit, and that the technology is going to be implemented in one of the new chips coming out (NV20, anyone?). It's pretty technical, so only hard-core techies need apply. The link is over here."

  • by Anonymous Coward
    The time it takes for the hardware to examine a pixel isn't significantly different from the time it takes the hardware to do a bunch of basic processing operations on the pixel. So if you're going to take the time to look at every pixel neighborhood, you might as well do the processing while you're at it.
  • by Anonymous Coward

    -As I understand it, FSAA actually antialiases every single pixel. Surely this is incredibly inefficient, since antialiasing the already bilinearly-interpolated texture of the interior of a polygon is somewhat pointless.-

    Not really. When a texture is far away, and the onscreen size of the poly being textured is smaller than the size of the texture itself, a moiré effect is seen on the texture, presumably as the renderer has to decide which texel (of several) to use to represent a single pixel. Antialiasing, at least the way the Voodoo5 did it, takes several very slightly different samples of the image, each of which chose a different texel to put in that pixel, and averages them. The result is that when the image is moving, say zooming in and out, you don't have a given pixel popping between texel A and texel B, which produces a very distracting visual effect; rather, you just get a smooth scaling from a tiny far-away texture to a "regular size" texture.

    For certain types of games, mainly driving games, it really does make a huge difference - far away objects look a little hazy instead of extremely messy.

    I suppose it's only really redundant with bilinear filtering when a texture is being mapped onto a much larger polygon - that type of operation doesn't experience this sort of artifacting.
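
A minimal sketch of the averaging idea described in the comment above: take several slightly jittered samples of a minified texture and average them, so a pixel no longer pops between neighbouring texels as it moves. The function names, the simple nearest-texel lookup, and the jitter kernel are illustrative assumptions, not how the Voodoo5 actually implemented it.

```python
import random

def sample_texture(texture, u, v):
    """Nearest-texel lookup; `texture` is a 2D list of values, u/v wrap into [0, 1)."""
    h, w = len(texture), len(texture[0])
    row = min(int((v % 1.0) * h), h - 1)
    col = min(int((u % 1.0) * w), w - 1)
    return texture[row][col]

def supersampled_pixel(texture, u, v, footprint, samples=4):
    """Average several jittered lookups spread over the pixel's footprint in
    texture space.  With a single sample the pixel pops between texels as u/v
    drift from frame to frame; averaging the jittered samples smooths that out."""
    total = 0.0
    for _ in range(samples):
        du = (random.random() - 0.5) * footprint
        dv = (random.random() - 0.5) * footprint
        total += sample_texture(texture, u + du, v + dv)
    return total / samples
```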

  • by Anonymous Coward
    Surely this is incredibly inefficient, since antialiasing the already bilinearly-interpolated texture of the interior of a polygon is somewhat pointless.

    Bingo! I thought I was the only one who thought of that. It would be even better to just render the scene in wireframe, make the lines one or two pixels thicker than the poly edges, and just use that as a mask to determine what to supersample.

  • by Anonymous Coward
    Question authority.

    Why?

  • by Anonymous Coward
    I did that with a picture of goatse man, and now all I see is Jon Katz!?!!? help.
  • by dair ( 210 )
    Sounds good to me.

    Sounds even better to me.
  • by Kuroyi ( 211 )
    hehe, that's a great idea. We should impose a caste system based on user id. People over 50000 can slave in the mud pits making bricks as people under 20000 mercilessly whip them.

    And if you're good, you can be one of the guys who feed wine and grapes to those under 1000 and keep the fan moving.


    Sounds good to me.
  • Right, the smaller the object, the worse the sampling error. One way to reduce sampling errors in rendering is to use a lower-resolution, averaged color model for an object when it is far away. Essentially you do a pre-process geometry space filtering of the object to get an image space filtering. And, as a further plus, your rendering speed should be better.
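
A rough sketch of the pre-filtering idea in the comment above, assuming the "lower-resolution, averaged color model" can be stood in for by a pyramid of box-averaged images built ahead of time; the function names and the distance heuristic are made up for illustration.

```python
def build_prefiltered_levels(image, levels=4):
    """Build successively half-resolution, box-averaged copies of `image` ahead
    of time: the pre-process filtering described above.  `image` is a 2D list
    of grayscale values with even dimensions."""
    pyramid = [image]
    for _ in range(levels - 1):
        src = pyramid[-1]
        h, w = len(src), len(src[0])
        if h < 2 or w < 2:
            break
        dst = [[(src[y][x] + src[y][x + 1] + src[y + 1][x] + src[y + 1][x + 1]) / 4.0
                for x in range(0, w - 1, 2)]
               for y in range(0, h - 1, 2)]
        pyramid.append(dst)
    return pyramid

def level_for_distance(pyramid, distance, base_distance=10.0):
    """Pick a coarser, pre-averaged level as the object recedes, so far-away
    objects are drawn from data that has already been filtered."""
    level = 0
    while level + 1 < len(pyramid) and distance > base_distance * (2 ** (level + 1)):
        level += 1
    return pyramid[level]
```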
  • BTW, "anti-aliasing" is a bit of a misnomer. Ordered sampling does not remove aliasing.

    Yes, but we're not computing to infinite precision, we're generally going down to 8 bits per color per pixel. Once stochastic sampling has reduced the error to less than, say, 0.5 bits per color per pixel, it effectively is eliminated.
  • Honestly, how often do you stop in the midst of a game of Counter-Strike and say to yourself:

    In games like Combat Flight Simulator, I often say to myself, "what direction is that plane heading?" and "how far above the ground am I really?" In rather less frantic games than your average FPS deathmatch, higher image quality is a definite plus.

    Lens flare is generally silly. We aren't looking through cameras in these games...
  • Instead of having useless features like FSAA, why not work on including better 2D image quality? Remember the problem that NVIDIA is having with their digital->analog chips? Cripes, why not get some better quality parts!
    YES PLEASE! I got a state-of-the-art (at the time) NVIDIA GeForce something, but returned it after a week; I couldn't bear using this card in 2D. Got myself a Matrox G400 instead: slower 3D, rock-solid 2D.
  • Honestly, how often do you think to yourself in the middle of a cutscene:

    "So the virus and the cure were both made by the same corporation. But what does Vice President Clark have to do with -- aaaaaggggghhhhh! Look at the dot crawl on the edge of that catwalk!"

    Not everything moves at the lightning speed of a deathmatch.

    We're not scare-mongering/This is really happening - Radiohead
  • Nice post, thanks for explaining this so clearly.
  • His whole point was that no matter what anti-aliasing you do, you incur a 4x bandwidth penalty for the simple reason that you're dealing with 4x the data. This method is nice because it avoids the 4x fill rate penalty, and so helps anything that would otherwise be fill-rate limited.

    Peter.
  • It's not the NVIDIA chip, it's the RAMDAC chosen by the companies that put the cards together. I guess they are trying to fix this.
  • That's true. The speckles you see from laser light are a result of interference of the coherent light. I'm not sure how the resolution of your eye would come into play, as it would seem to me that the dominant effect of moving your head would be to change the interference pattern.
  • FSAA basically applies a very pretty blur to the entire picture. What he's saying is that instead of affecting the entire image, you only need to smooth out the polygon edges, or only those edges that occur on a significant color shift (a building against the sky).

    The thing is, can you do this detection and selective blur faster than you can process the entire image (which basically means rendering at 2x the resolution and resizing downward)?
    FunOne
  • I have always assumed that this was the result of a bunch of coherent light being messed up (when it bounces off the wall) and interfering with itself. Why else would this require a laser rather than any other light source?
    > When a texture is far away, and the onscreen size of the poly being textured is smaller than the size of the texture itself, a moiré effect is seen on the texture, presumably as the renderer has to decide which texel (of several) to use to represent a single pixel.

    That's what trilinear filtering was invented for: a bilinear filter blended between two mipmap levels.
  • Changing the aspect ratio does NOT remove aliasing.

    If an edge of a polygon goes (in screen space) from, let's say, (3,0) to (0,1), the slope is 1/3. A repeating decimal cannot be represented exactly on a quantized grid.
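
To make the stair-step concrete, here is a tiny, purely illustrative sketch that rasterizes that edge with a naive DDA; the covered rows change in discrete jumps, which is exactly the jagged pattern anti-aliasing tries to hide.

```python
def rasterize_edge(x0, y0, x1, y1):
    """Step one pixel at a time in x and round y to the nearest row (a simple DDA).
    For the (0,1)-(3,0) edge the slope has magnitude 1/3, so the hit rows form a
    stair step rather than anything resembling a smooth line."""
    slope = (y1 - y0) / float(x1 - x0)
    return [(x, round(y0 + slope * (x - x0))) for x in range(x0, x1 + 1)]

print(rasterize_edge(0, 1, 3, 0))   # [(0, 1), (1, 1), (2, 0), (3, 0)]
```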
  • Why don't the monitor manufacturers implement some form of analog anti-aliasing on the monitor side? They could sell "gaming" monitors with a FSAA button on them. :)
  • I hate to say it but you should mod this post up and mod the parent down. The parent post is NOT informative, but downright inaccurate as was pointed out by another poster as well (Laser Speckle Interferometry). I am not an expert on this subject, but I find it logically obvious that optical resolution is an angular measurement and depends entirely on the angle subtended by the object in question.
  • rubbing vaseline all over the screen. Jaggies BEGONE!
  • Never mind that!
    Just don't *use* your glasses (or lenses). =-)

    Or, you could simply get a card with TV-out and play on your TV.
    Lower the resolution to 1024x768 or maybe even 800x600, won't make a difference, and you'll get that antialiasing at a performance *increase*! ;-)
  • Regarding "The human eye can see aliasing artifacts at resolution up to and even beyond 4000x4000"

    Of course it can! It just depends on the size of the screen! :-9
    It would be interesting to know how many dpi the average eye can discern at about 0.5 meters distance. (That's about the usual distance to the screen while gaming, right?)

    Any eye experts out there who can come up with some kind of facts/guesses?
  • Well, it sort of depends. I still remember the first time I got Unreal Tournament. I finally made it out of the starting spaceship and to the outside. I swear I just sort of looked around and was amazed by how gorgeous the scenery looked at the time. Also, I am one of those crazy bas*ards who likes to play a sniper in Linux UT in Assault mode. I gotta tell you, stunning visuals sure make it easier for me to get a really accurate head shot! And lastly, not everyone plays things like an FPS. I like RPGs, like, say, Asheron's Call and the upcoming Neverwinter Nights. In a slower-paced game like this, eye candy makes the experience much more immersive. I won't argue on 2D quality. They need to put better RAMDACs in many of these cards! Denjin
  • Anti-aliasing has been STANDARD on the Acorn computers since oh, 1987. It makes word-processing or DTP so much easier when you can actually read text at 6pt or whatever, instead of PageMaker's 'greeking'. Acorns didn't need greeking: it was almost always legible.

    Where's anti-aliasing as standard for Linux? I won't even mention StarOffice's dismal font display!
  • Refresher: Ordered-grid antialiasing renders everything at 4x the resolution and then averages groups of 4 pixels, representing them as 1 pixel just before sending it to the monitor.

    You could save a ton of work by representing the color of each of the four sub-pixels in 8-bit color. When combining the 4 sub-pixels into the single pixel that is displayed, you will have 8x4=32-bit antialiased color with no performance hit.

    Of course, I could be wrong. Moderators always think so.
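
A minimal sketch of the ordered-grid refresher above (just the downsampling step, not the 8-bit trick proposed afterwards): render at twice the resolution in each axis, then box-average each 2x2 block down to one displayed pixel. The `render` callback and the RGB-tuple framebuffer layout are assumptions for illustration.

```python
def ordered_grid_aa(render, width, height):
    """Ordered-grid supersampling: render the scene at 2x resolution in each
    axis (4x the pixels), then average each 2x2 block into one output pixel.
    `render(w, h)` is a hypothetical callback returning a 2D list of (r, g, b)
    tuples at the requested resolution."""
    big = render(width * 2, height * 2)
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            block = [big[2 * y][2 * x], big[2 * y][2 * x + 1],
                     big[2 * y + 1][2 * x], big[2 * y + 1][2 * x + 1]]
            row.append(tuple(sum(c[i] for c in block) / 4.0 for i in range(3)))
        out.append(row)
    return out
```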
  • You mean, aside from the fact that he was noting that Pixar has a patent that might apply here, and he actually worked for Pixar? Geez - maybe there is a smidgen of authority here.
  • Be a man. Don't reply as an Anonymous Coward. Karma whore - posting as anonymous just so you can't be moderated down.

    Bruce was merely pointing out that there is a patent that could relate to the antialiasing discussed in the article. If I was working on this software I think his comment would be highly important, and I would try to make sure that anything I did didn't conflict with the patent.

    You really see so much evil in this that you needed to tear him down?

  • Civilised reply, thanks. Bad AA obscures detail, displays are bad. Some flight simulators do their very best to be accurate and faithful; people sneer at 1000 polygon systems from 1980, but when it comes to training pilots, civilian as well as military, they emphasise what matters.

    I'm very reassured to know that commercial pilots must train on simulators at regular intervals. If bad AA enforced bad flying habits, I'd be very worried, or dead. Ditto if bad AA caused military pilots to get trigger-happy or careless. QED.

  • Take a look at this patent [delphion.com]. I designed an 11 million gate system (in 1990:-) to implement these guys' ideas. I can't discuss anything not in the public domain, but if anyone has any comments I'm able to answer, I'd be only too pleased, because this subject is one of my long term geek interests.

    What matters is to solve three problems simultaneously, not only anti-aliasing but depth buffering and translucency.

  • If you can give me a good URL or book to read, it will be very nice.

    The work I did is history. I only hope that a company like nVIDIA will find a way to implement something like it now that millions of gates can be put on a consumer chip. Your reply suggests you are looking to the future - what I would like to see someday is real-time RenderMan, so I think you'd be interested in the Advanced RenderMan [mkp.com] book by Apodaca and Gritz.

  • Of course you have to use supersampling, duh. I'm dead serious; no need to be flippant with the comment about polarization and wavelength effects. I've got the boards on my wall at work and a couple of dead chips as souvenirs. Read the patent.

    Not everything on /. is a troll or done for karma.

  • The anti-aliasing method discussed here detects edges in the model

    Oh dear. If two polygons intersect each other at run-time, the line of intersection will look jaggy if all one does is anti-alias polygon edges in the model.

    You're so right, brute force is the way to go, but there's dumb brute force and smart brute force ...

  • If you want to win a computer game played for entertainment, disable the anti-aliasing. When a target appears in the distance, it will pop and sparkle and generally draw attention to itself so you can aim at it and kill it.

    If you are in the military and are using the computer as a way of practicing your skills without getting killed, killing others or wasting ammo, then enable anti-aliasing. The first hundred times you'll die (and insert a quarter to try again) but thereafter you'll know why it's so hard to spot a blip on the horizon approaching at mach two.

  • Wireframe doesn't describe the run-time intersections between polygons. Imagine a house on the ground. The ground isn't flat, and you don't want to waste a zillion polygons making the bottom of the house match the terrain contours; even if you tried, floating point errors would catch you out. Also, in the real world, bricks intersect the ground.

  • You're right, memory bandwidth is the problem. Do a web search on tile based rendering (eg the PowerVR used in Dreamcast or the patent I mention elsewhere in this discussion). There's no need to waste memory bandwidth on anti-aliasing.

  • Why does everyone mention the NV20 anytime discussion of new 3D technology comes up? Rumors have it the Radeon 2 will be equal to or better than the NV20. And considering ATI's open source friendliness vs. NVIDIA's closed, non-DRI drivers, you'd think we'd be hyping them instead on /. By the time the Radeon 2 is out, the ATI DRI drivers for the Radeon should be fairly complete, including TCL support. It should from that point on be simple to extend the drivers to support the Radeon 2. Now how long will it take for NVIDIA to put out XFree drivers? Who knows. I for one am very excited about things like evas running fully accelerated on some stable DRI drivers for my board. Just food for thought...
  • Honestly, how often do you stop in the midst of a game of Counter-Strike and say to yourself:

    "Boy, I'm sure glad I'm running 4xFSAA, why, that sniper over there aiming at my head sure does look alot aaaaaaagggghhhhh"

    Not often, I bet. Sure, when you first get a new card, you jack everything up to the max and turn on all the features just to see what this baby can do, but after that you set things back down to your normal resolution (I actually prefer 320x240 but I am slowly getting used to 640x480; 320x240 makes head shots REALLY friggin' easy, let me tell you :) and turn off the annoying features (shit, having everything shooting off lens flares just makes the enemy harder to see! Doh, and it can increase load times, a la Expendable) and play as normal, but at a faster FPS.

    Instead of having useless features like FSAA, why not work on including better 2D image quality? Remember the problem that NVIDIA is having with their digital->analog chips? Cripes, why not get some better quality parts!

    Even more so, just double the memory bus width please, because god knows it needs it! (Though granted a 256-bit DDR bus would be an immense pain to implement, it's the only thing that will help NVIDIA cure their bandwidth woes, aside from a new memory architecture design.)
  • Well yeah, there's eye candy, but if people have hard-edged polygon intersections to begin with... Not to mention texturing; a properly textured game has minimal problems with the roads and such, it's mainly things that stick up in the air (such as sidewalks and whatnot). Even then, I would rather they get more polygons on the screen than worry about anti-aliasing! Seriously, what would you rather have, realistic trees (as opposed to simple measly little pathetic sprites) or antialiased Dragons?

    Dragon, did someone say Dragon?

    OH SHIT, IT'S COMING RIGHT FOR US! Hurry up and pull out the +100 Sword Of Dragon Thwamping!

    Shling

    Thwamp

    Thwamp

    Dead Dragon, Thwamped Dragon, Good Dragon.

    ::Dragon lifts head and eats player::

    Damn, Zombie Dragon, bastards don't die easily!

  • by Anonymous Coward
    There is an easy way that you can observe the finest possible detail that your eyes can resolve.

    There is an easy way to measure the resolution of your eyesight. Stare at the moon and try to remember the details you can see. Go home, find a hi-res picture of the moon on the net, and shrink it until you get pretty much the same amount of detail.

    The moon looks to me like a 30x30 pixel jpeg. The moon diameter is about 0.5 degree, so I can see 60 pixels/degree (which is about the textbook value). My digital camera can see 23 pixels/degree.
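
The arithmetic behind that estimate, written out using only the comment's own figures (these are the poster's numbers, not measured values):

```python
# If the moon spans about 0.5 degrees and looks like an image N pixels across,
# the resolving power is roughly N / 0.5 pixels per degree.
def pixels_per_degree(apparent_pixels, angular_size_deg=0.5):
    return apparent_pixels / angular_size_deg

print(pixels_per_degree(30))   # 60.0 px/degree for the eye, about the textbook value
```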

  • A group of graphics researchers in Vancouver, Canada have posted a comprehensive analysis [vancouver.bc.ca] of various antialiasing techniques both in hardware and software, including regular supersampling, stochastic supersampling, and filtering. They give a lot of theoretical justification from a signal processing perspective of all of them.
  • Isn't that what Puff Daddy got arrested for?

    --
    * CmdrTaco is an idiot.

  • When I first got Unreal 1, I had a crappy Rage Pro video card that could barely run the game at all (on a Mac, too). As with most games, I had to crank the resolution all the way down to play it.

    The minimum resolution available was 640x480 Pixel-Doubled, which by itself is not exactly an innovation. But they were running it through the 3D card for the pixel-doubling step, so it was effectively blurring the image with bilinear interpolation as it scaled. It didn't look nearly as bad as it sounds, and it ran quite well.

    Is it feasible to do something like this to accelerate a high-resolution display? The individual pixels would be much less noticeable, and that 75% saved fill rate could be put towards something other than basic rasterization.

  • And if you're good, you can be one of the guys who feed wine and grapes to those under 1000 and keep the fan moving.

    Aw, crap.

  • Does this mean that you are questioning authority???
    By questioning him, are you acknowledging his authority?
  • I like the fact that hardware is catching up to some of the rendering algorithms and ideas that are used in software rendering.

    The better the hardware can simulate what a rendered image will look like, until it is good enough to produce full-motion 3D video on the fly, the fewer test renders will be needed. That makes my life sooo much easier. It almost bugs me how much of my CPU time is used for throwaway images...
    if only you could sell frames on ebay... ;)

  • Think early Playstation 1 games vs. Nintendo 64. N64 uses much more antialiasing, resulting in a smoother (blurred?) look.

    AFAIK that's not antialiasing: that's merely blurring -- properly antialiased images do not appear blurred (just look at fonts on Windows and Mac).
    --
  • Let's look at this, shall we?

    8-bit color translates into a maximum of 256 possible colors at any one time. That will give you a playable game, but it's not going to look very photorealistic. Doing twisted things to exceed that on a display makes the hardware or software convoluted and negates any possible advantage of working with one-byte pixels.

    Furthermore, 8-bit color doesn't mix like 32-bit color. You have to go through the motions of doing a color mix on each of the RGBA values of the palette entries and then re-map the resultant color to a (hopefully) matching color in the palette. It's actually SLOWER to do it that way. The only reason we did 8-bit color in the first place was that it was cheaper back then, not because it was superior in any way to anything else.
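
A sketch of why the palettized route costs more, assuming a hypothetical `palette` list of RGB tuples: the direct-color mix is one average per channel, while the palettized mix needs two lookups plus a nearest-entry search back into the palette.

```python
def mix_direct(c1, c2):
    """Mixing two direct (true-color) tuples is a per-channel average."""
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

def mix_palettized(i1, i2, palette):
    """Mixing two 8-bit palette indices: look both entries up, mix them, then
    search the whole palette for the nearest match, which is the extra work
    the comment above is pointing at."""
    mixed = mix_direct(palette[i1], palette[i2])

    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, mixed))

    return min(range(len(palette)), key=lambda i: dist2(palette[i]))
```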
  • Anthony,

    I cannot make myself an authority; I can be one only in other people's perceptions. For better or worse, there are some people who consider me one. I sometimes find this perception of authority to be in excess of the reality, and thus put that disclaimer in my .sig a while back. But then, someone (was it you?) found fault in my previous .sig, too.

    It would be silly for me to claim to be free of ego or self-promotion. I've tried to be an agent for constructive change, and being publicly known has been one of the tools I use. I'm sorry that offends you, but I like myself the way I am.

    Thanks

    Bruce

  • Now that you've changed your sig, does that mean that I have to change mine?

    How about...

    Warning to humans. Sometimes stuff I post here is wrong. Confuse your head. I'm currently being questioned by the authorities.

  • Let's place that dividing line where it belongs. Before and after Katz.
  • Only if it makes it invisible.
  • Let's see: this feature making it into the NV20 is highly unlikely. NVIDIA has probably had the overall chip design set in stone for at least the last 6 months. Maybe the NV30 or so...
  • The article talks about a radically different approach to FSAA. It would be like saying the NV20 will support the Voodoo 5's method of FSAA. At the hardware level, there are dramatic changes needed to implement this new FSAA method, something that won't happen for a long while, just like the integration of any 3dfx Rampage tech into the NVIDIA product line.
  • The article says that there's no rendering bottleneck, but there is still a bandwidth hit proportional to the amount of super-sampling you're doing. That's correct; however, it says that the cost of that bandwidth hit will be offset by advances in bandwidth conservation or enhancements (new memory, Z-compression, etc.).

    Is this really the case? If graphics and PC architecture have shown us anything it's that memory bandwidth is the thing that's slowing us down the most, and that problem is only getting worse. Graphics chipsets are already using what seems to be fairly extreme memory technology. Current nVidia chipsets are running on 128-bit-wide 230MHz DDR SDRAM (almost 460MHz effective), and they're already memory-bandwidth-limited without adding a 4x memory bandwidth increase. Memory technology is getting better, but not at anywhere near the speed that graphics technology is improving.

    The article does mention some techniques for offsetting the huge gap we've currently got between CPU and memory capabilities, some of which are being used on ATI's Radeon, but are these techniques effective enough to offset a 4x memory bandwidth penalty? Any ideas?
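
Rough arithmetic behind that concern, using only the figures quoted in the comment (128-bit bus, 230 MHz, double data rate); the 4x divisor stands for the extra color/Z traffic of a naive 4-sample approach. Back-of-the-envelope numbers, not measurements.

```python
# Peak memory bandwidth for the quoted part: 128-bit bus, 230 MHz, double data rate.
bus_bits = 128
clock_hz = 230e6
ddr_factor = 2

peak_bytes_per_sec = (bus_bits / 8) * clock_hz * ddr_factor
print(peak_bytes_per_sec / 1e9)       # ~7.4 GB/s peak

# A naive 4-sample supersample touches roughly 4x the color/Z data per frame,
# so the effective budget per frame of traffic shrinks to about a quarter of that.
print(peak_bytes_per_sec / 4 / 1e9)   # ~1.8 GB/s effective
```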
  • Why hasn't anyone else picked up on this? The thumbnail versions of the diagrams explaining anti-aliasing were created using subsampling, and as a result look absolutely horrible. This is really bad design under any circumstances, and absolutely inexcusable in this context!

    In case you missed it, the bad-looking thumbnail images are here [gamers.com] and here [gamers.com].

    I have put properly anti-aliased versions of the same images here [ofb.net] and here [ofb.net]. (Isn't that much, much better?) These were created with 'pnmscale', a free (speech, beer, and everything else) tool that has been around for a decade now.

  • Cool. I had wondered what created that effect. I've noticed that quite a few materials have that "static" look to them when you light them with the laser. I think the best are cheap plastic toy balls. The ones two or three inches in diameter made out of translucent plastic work really well.
  • his id is roughly yours/100, so who are you to speak for anyone?

    hehe, that's a great idea. We should impose a caste system based on user id. People over 50000 can slave in the mud pits making bricks as people under 20000 mercilessly whip them.

    And if you're good, you can be one of the guys who feed wine and grapes to those under 1000 and keep the fan moving.
  • Somebody moderate the parent down. It's one of those lamer IE haxor tricks.

    (wasting my automatic 2)

    Is it just me, or has the quality of lamers gone down?
  • My understanding is that this is not correct. If this were the case then you could use any old point light source but you can't. What is going on here is the laser light interfering with itself in your eye. See this link from the exploratorium: [exploratorium.edu]
    http://isaac.exploratorium.edu/~pauld/summeer_institute/summer_day1perception/laser_speckle.html

    It is quite an interesting effect, but it has nothing to do with your eyes' resolution.

    --Ben

  • Pixar has a patent on the stochastic dither multi-sample antialias. They've enforced it before.

    Uh-oh. POV-Ray [povray.org] uses it. Shh, don't tell Pixar. Wait, you used to work for Pixar... But, that was a zillion years ago. How long 'till the patent expires?
  • Bruce, you used to work for Pixar, and you know more about software patents, so you'd probably know more about this than us.

    I've read the text of the patent and can't work out exactly what they claim. It reads like they claim any application of Monte Carlo integration to image generation. Also, how come they've been able to file what looks like the same patent three times?

    I have the impression that Pixar are actually better than most about their patents, and I believe they've never tried to enforce their claimed API copyright. (Just as well for them. API copyrights are untested, and I don't think they want to be the first to test them. But then, it might just be because nobody tried to stand up to them.) Who did they enforce it against, and do you know what the circumstances were?

  • This new form of sampling shows that hardware manufacturers have finally woken up to the fact that, to use RenderMan terminology, the shading rate (the sampling rate at which textures, lighting, etc. are determined) and the pixel sampling rate should be decoupled. This simple anti-aliasing technique uses a pixel sampling rate of 4x the shading rate, with ordered sampling. Eventually we will see graphics cards where these two figures can be tuned separately, but it won't be for a while.

    BTW, "anti-aliasing" is a bit of a misnumer. Ordered sampling does not remove aliasing. Neither does stochastic sampling. Ordered sampling merely moves the filtering problem up a few octaves. Stochastic sampling hides the aliasing behind noise, because our eyes find that less objectionable. The only way you can truly remove aliasing is analytically. Don't expect that in your graphics hardware for a long time. :-)

  • There are two ways the home user can emulate this exciting technology today! That's right, in your very own home, right now! These methods are quick, easy and proven!

    a.) use a tub of vaseline smeared over the screen.
    b.) use a bottle of vodka.

    I am not drunk.
    Glen Murphy
  • In the world of Digital Signal Processing, aliasing is the result of periodic sampling introducing artifacts into the signal. For instance, a 1001 Hz tone sampled 1000 times a second will sound like a 1 Hz tone.

    In video, this comes out as edges (which are local areas of high frequency) coming out funny or jagged. Close-together diagonal lines will cause moiré patterns, et cetera.

    Antialiasing is any attempt to eliminate these artifacts. At really low resolutions, antialiased stuff will look blurry, while non-antialiased stuff will look blocky. Typically, the eye will be a lot more forgiving of a little blurriness than of regular moiré patterns and stair-step edges.

    Hope that helps

    -me
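
A quick numeric check of the 1001 Hz example above: sampled 1000 times per second, the 1001 Hz tone lands on exactly the same sample values as a 1 Hz tone, which is the aliasing being described.

```python
import math

fs = 1000.0          # samples per second
for k in range(10):  # compare a few sample instants
    t = k / fs
    s_fast = math.sin(2 * math.pi * 1001 * t)  # the real 1001 Hz signal
    s_slow = math.sin(2 * math.pi * 1 * t)     # the 1 Hz alias
    assert abs(s_fast - s_slow) < 1e-9         # identical at every sample instant
```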
  • From what I understand, reading the ATI Radeon spec several months ago, there is an issue with at least setting the Z-values to zero at the beginning of each frame. Though I don't know that compression affects this or related stages, there are still flushes that need to occur. And without specialized memory with block operations (SGRAM?) it takes time.

    -Michael
  • It's a perfectly anti-aliased blank screen though, isn't it? and with no performance hit! :)

    /Fross
  • As I understand it, FSAA actually antialiases every single pixel. Surely this is incredibly inefficient, since antialiasing the already bilinearly-interpolated texture of the interior of a polygon is somewhat pointless.


    True, textured 3D polygons are already antialiased by definition (you take a texture, resample and stretch it to fit on the polygon depending on what angle you're viewing it at, voila). The one part that isn't is the edge of these textures. That's why, when you're playing Half-Life or whatever and everything is rendered nice and softly, if you look up at the edge of a building against the sky, it's horrible and jagged.


    FSAA, the way I understand it, softens these edges before sending the image along. It doesn't affect the actual contents of polygons, just their edges.

    Fross

  • Even the original GeForce2 had some FSAA (or an approximation thereof), and that's over 6 months old. There's some (independent) information about that here [jsihardware.com].


    Additionally the Voodoo 5 has had full FSAA since its launch. So I think they've been working on it for a while, and it's likely the NV20 will have it - it's probably just been one of the less touted figures so far.

    Fross

  • ...those "screens" you'd clip over the front of your CRT to reduce glare (and radiation, as was touted back in the 80s; but then again, monitors back then could be pretty damn dangerous). They made everything soft and fuzzy. Blurry, to be honest. If anything they caused more eye strain as you tried to make out what anything was behind it :> like having a permanently really filthy screen.

    Fross
  • Amazing. Back in '92 I went over to Silicon Graphics for three days to port a simulator we were writing for Unisys to the MIPS chip. At the time SGI had one of the first Reality Engine machines in the demo room, and the engineers took us over to look at it and drool at the mouth over the incredible beauty of the real-time 3d displays it could produce.

    Back then, no matter how cheap PCs eventually became, I couldn't imagine that kind of liquid photo-realistic 3D ever being affordable.

    I'm quite happy to be proven wrong, and be one step closer to photorealistic 3d.

  • It's not as simple as blurring. Anti-Aliasing adds information (or rather it enhances it). Blurring removes it.

    Think of what happens when you photocopy a photograph (on an old machine). You get black and white splotches. Anti-aliasing is like a greyscale copier: it occurs at the same time the image is being sampled. Blurring occurs after the image is sampled and doesn't improve anything. If you took off your eyeglasses to look at the photocopy of a photograph you just made, it won't look like greyscale.
  • Yah, right, no performance hit. I've heard that one before. Nice try guys.
  • FORGET THE ANTI ALIASING in games, okay?
    Give me a deep story line with complex NPC interaction and conversations, and I won't care if the graphics are blocky, 320x200, and super aliased.
    Ultima, Star Control II, System Shock 1/2, and Deus Ex rock the universe.
    Why is everyone so obsessed with the latest game graphics, when the story lines suck eggs (or don't even exist)?
    ========================
    63,000 bugs in the code, 63,000 bugs,
    ya get 1 whacked with a service pack,
  • I already do make use of it. I have Pixar's (see comments on their patent elsewhere in this article) abandonware program for the Mac, "Typestry", which does this in software, at the cost of hideous render times.

    The reason no-one has done this in cheap realtime PC hardware before is simply that it takes a @£$%ing lot of flops to do.

  • They've also shown their willingness to license the patent (MentalRay has a license, I believe). I thought the patent was on jittered supersampling. I didn't know it had anything to do with the dithering pattern, and the article doesn't mention anything about jittering the samples. In any case, the distributed ray tracing paper appeared in the '84 SIGGRAPH conference proceedings. The patent should be about to die if it hasn't already.
  • When you're dealing with a scanline renderer, you want a regular grid of pixels, because the algorithms take advantage of a simple data structure (e.g., a Z buffer). Trying to do it adaptively in this case would complicate the algorithm and probably slow you down quite a bit more than you'd gain by having fewer Z values.
  • I will make a few points after reading a lot of the comments here.

    1. Brute Force is almost never the only way to go.

    2. Selective antialiasing is a technique that has been used in 3D packages for ages. I am thinking of LightWave specifically, but they all have it. The whole point is that rendering at a higher resolution and scaling the image down does not give you better antialiasing than a selective antialiasing technique that consumes the same resources. I can attest to this; I have tried it. It is kind of a temporary solution as I see it, until anti-aliasing techniques are implemented in hardware.
  • How can you say it's not worth the performance hit?

    I agree that I can live without AA when I'm playing a game at 1280x1024 at 30 FPS. One frame doesn't stay still long enough for me to notice.

    But it makes a huge difference when trying to read normal text at a small font size! Totally worth the speed difference there, which is negligible, for much prettier and more readable fonts.
  • You're so right, brute force is the way to go, but there's dumb brute force and smart brute force ...

    Hmm, I see your point. I looked it up and FSAA [pc-gamers.net] is probably the better method here. Still, as a "cheap" software replacement, multi-sampling might prove useful for modified versions with larger sample grids or things like that. As I mentioned before, for simple surfaces you can, e.g., work with gradient (1/z) encoded spans to search for the edges, to eliminate the 'brute' from brute force here. Just thinking aloud now, though.


  • You need to consider every pixel, because every pixel can be valid for an anti-aliasing operation. FSAA is a trick to increase overall picture quality. Applying the method to only parts would lead to ugly artifacts.

    The anti-aliasing method discussed here detects edges in the model, not so much in the textures, so it presumes already filtered or mipmapped textures. In that regard you could say that edges don't need to be detected for polygonal models that have smooth surfaces. But don't forget that any transformation can deform objects into hard-edged models, which then again do need every pixel on the surface traced for possible hard edges.

    So instead of having to worry about the nature of the surfaces (which could no doubt be determined by examining normals and smoothing groups), 3D cards generally resort to brute-force algorithms in the image synthesis stages of the pipeline. It could actually be easy to generate gradients, but that only works economically for flat surfaces, and only for multi-sampling, not for any other type of sampling method.

  • The one part that isn't is the edge of these textures.

    I believe you are incorrect. Textures get antialiased, true. This has been going on since the days of Voodoo1. The edges you see are not the edges of textures, rather, the edges of a model (polygons) in relationship to the rest of the scene. Hence the term "Full Scene Anti-Aliasing". Even a shaded mesh rotating on screen would result in "jaggies". At that point it has nothing to do with the textures.
  • I've been programming a 3D system as a hobby for some time... Sometimes with actors and sets, but mostly toying with "an aquarium of spaceships, embarking and departing station docks."

    With the older cards I've purchased, a distant ship appears to be a bright clump of disfigured pixels. Without the multisampling, the colors/shades chosen are either extremes from the edges of the model, or last-come-last-served colors. Multisampling gives a much better representation of that "region of space." By balancing samples from the model against the portion of samples that hit dead space, the pixels more accurately represent the core model fading at the edges.

  • (Regarding present monitor technology): This means that we are typically stuck at a maximum resolution of 1600x1200, and this leads to a problem. The human eye can see aliasing artifacts at resolution up to and even beyond 4000x4000, so obviously 1600x1200 is not sufficient. The obvious move we make is to implement anti-aliasing.

    Just as a point of interest, and education:

    There is an easy way that you can observe the finest possible detail that your eyes can resolve. This is merely for demonstration and educational purposes, and does not have other immediate applications. This small experiment will merely allow you to observe the individual "pixels", you could say, of your own eyes.

    Get a hold of a simple penlight laser pointer, and point it away from you at something that will make a nice splash of light, such as a bit of matte white plastic, a dirty glass, etc.

    While holding the point of laser light perfectly still (if possible), also hold your head still. You might want to have your pointer resting on something, as well as your head. (Safety first kids! Don't look directly at the laser)

    Observe the pattern of light/dark pixelation. Note that the pattern does not twinkle and does not shift so long as you hold your head still, and you hold the light still.

    Barely move your head slightly and slowly, and notice that the pattern of light/dark moves slowly and consistently with your head movement. The relationship of the light/dark spots does not shift at random, but shifts consistently with your head motion, while you keep everything else still.

    The light/dark spots are essentially you seeing the individual pixels (cones and rods, actually) of your own eyes.

  • Some folks have great knowledge of lasers, but are missing data on eye physiology. The bottom line is that each sensory cell in the eye, be it a cone or a rod, sends one point of brightness data to the brain.

    With colors, this corresponds to the colors mentioned below. Cells do not send multiple sets of brightness levels at the same time; the brain sorts out the variations of light and dark to construct the lines, shapes and forms we perceive in the world. A cell sensing for red sends data for that one point of red intensity, nothing else. Note that each cell can sense down to a single photon of energy.

    Really, this is simple sensory stuff here. Point sensors for light intensity.

    When you have a laser light, YES, there is interference. Of course there is.

    So one cell senses one level of light, and another cell senses another.

    The question then becomes whether the cells are larger or smaller than the wavelength of light. Since they can be observed in an optical microscope, they are larger. This means that the individual sensors send individual messages regarding light intensity back to the brain, based on the average light intensity on that individual cone or rod. Remember this: individual messages for light intensity from independent light sensors. Therefore the light and dark patterns of interference from the laser light are sensed on a cell-by-cell basis, and it must be on a cell-by-cell basis; that is why you can see the speckles: the eye receives the interference pattern cell by cell. The graininess is inherent in the size of the sensors, the cones of the eyes. Remember that these are individual sensors. Additional data can be found here, as well as on many medical web pages on eye physiology: Anatomy, Physiology & Pathology of the Human Eye [aol.com], quoted below:

    photoreceptors (cones and rods)

    (intro omitted) The brain actually can detect one photon of light (the smallest unit of energy) being absorbed by a photoreceptor.

    There are about 6.5 to 7 million cones in each eye, and they are sensitive to bright light and to color. The highest concentration of cones is in the macula. The fovea centralis, at the center of the macula, contains only cones and no rods. There are 3 types of cone pigments, each most sensitive to a certain wavelength of light: short (430-440 nm), medium (535-540 nm) and long (560-565 nm). The wavelength of light perceived as brightest to the human eye is 555 nm, a greenish-yellow. (A nanometer, nm, is one billionth of a meter, which is one millionth of a millimeter.) Once a cone pigment is bleached by light, it takes about 6 minutes to regenerate.

    There are about 120 to 130 million rods in each eye, and they are sensitive to dim light, to movement, and to shapes. The highest concentration of rods is in the peripheral retina, decreasing in density up to the macula. Rods do not detect color, which is the main reason it is difficult to tell the color of an object at night or in the dark. The rod pigment is most sensitive to the light wavelength of 500 nm. Once a rod pigment is bleached by light, it takes about 30 minutes to regenerate. Defective or damaged cones results in color deficiency; whereas, defective or damaged rods results in problems seeing in the dark and at night.

  • My understanding is that this is not correct. If this were the case then you could use any old point light source but you can't. What is going on here is the laser light interfering with itself in your eye. See this link from the exploratorium:

    So is the variation caused by the interference on a receptor-by-receptor basis or not?

    Each receptor reports only one point of light-intensity data back to the brain. Or do the receptors report more than one data point to the brain at the same time?

    There is also the point of performance differences between coherent and non-coherent light.

  • I still think high-end rendering is important, because today's high-end rendering is tomorrow's forgotten feature of the standard Intel chip.

    When we push the boundaries of what is computationally possible, there are two forces that act on the innovation. The first is optimization; the original technique is re-analyzed, to either make the computation more efficient (software optimization), or to make the computation faster (hardware optimization). The second is consumer demand: if the feature makes things look cooler and/or more realistic, the technique becomes popular, and becomes one of the standard tools.

    Are we at the final stage in graphics technology? I think not. Remember Doom 2? There were very cool scenes, where you were looking at 50+ enemies, some just 4 pixels, and they were ALL shooting at you. In Quake, a level might have 13 enemies total. We could use Doom's level of enemy depth, but with 3D models and multi-level worlds.

    We still haven't mastered some environmental effects, such as heat distortion. A burning barrel should have an effect on the scene behind it, and we could also use realistic fog effects. A straightforward implementation may also help better model bullet paths, so that you really can't snipe from 5 miles away.

    Of course, all these are just visual tricks that need to be put in service of a good plot. It seems a shame that just as technology gets to the point that Looking Glass Studios can have graphics comparable to the plot, they get pushed out of business. When you can really model a whole world, Richard Garriott decides the industry can't support his business plans. When you can make a real-time version as beautiful as the original Myst, Cyan does it, instead of making a new product.

    The tricks and effects are too tempting right now, so we simply make things beautiful without making them intelligent. This will change, and the masters will use every tool at their disposal.

  • It's to take those ugly jagged edges off your fonts and stuff. It takes edges that jump from black to white (or any one color to another) and makes gradients, to fool the eye into thinking there are many more pixels there than there actually are. Looking from far away it appears to be a very smooth curve, but as you know, pixels are on a square grid and don't really make nice circles. It's all about looks. Not really worth a performance hit.
  • Think early Playstation 1 games vs. Nintendo 64. N64 uses much more antialiasing, resulting in a smoother (blurred?) look.
  • Very interesting. What you're seeing would be interference pattern of the laser light, so the actual size of the dots would be right around the wavelength of the laser light. I'd be interested to hear from some with a knowledge of visual physiology, explaining how the eye resolves the spots.

    Of course, if the resolution of the eye is 4000 by 4000 total (not per inch), then your distance from the monitor is also a factor. Based on these numbers, if the display area occupies about .12 of the area of your field of vision (.4 horizontal by .3 vertical), then a 1600x1200 display does display at the theoretical physiological maximum. A quick experiment suggests that this is about two and a half to three feet from the nineteen inch monitor that I am currently using.

  • I was wondering if the original poster had bothered to digest the article.

    In a nutshell:

    Super-sampling either 1) averages multiple renderings of the same scene, or 2) draws the scene at 4x (or 8x, etc.) the resolution and then averages blocks of 4 (or 8, etc.). (Voodoo 5 does the former.)

    Multi-sampling doesn't do this, but it multiplies the x and y size of the z buffer by some amount and uses the extra z info to decide how transparent a pixel is. (Each pixel has 4, or maybe 8 z-values.)

    In super-sampling, you have a big performance and bandwidth hit. Same with multi-sampling, except that it saves you three (or 7, etc.) extra texture lookups per pixel. For what it saves you, you don't get anti-aliased textures, but that stuff is usually dealt with in texture filtering anyway, so it's acceptable.

    So yes, you're right - there's a performance hit and the original poster was smoking something funny, but multi-sampling is still way cool.
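
A sketch of the resolve step that summary describes, with a made-up data layout: one shaded color per pixel, several per-sample depth/coverage results, and the covered fraction used as a transparency value when blending the edge pixel.

```python
def resolve_multisample(shaded_color, background_color, coverage_mask):
    """One texture/shading lookup per pixel (shaded_color), but several sub-pixel
    coverage samples (coverage_mask, a list of booleans from the per-sample depth
    test).  The covered fraction acts like transparency when blending over what
    is already in the framebuffer."""
    covered = sum(1 for hit in coverage_mask if hit) / float(len(coverage_mask))
    return tuple(covered * s + (1.0 - covered) * b
                 for s, b in zip(shaded_color, background_color))

# Example: a pixel on a polygon edge where 3 of 4 depth samples pass.
print(resolve_multisample((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), [True, True, True, False]))
# (0.75, 0.0, 0.25): a partially blended edge pixel from a single shading sample
```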
  • Look, there are tools available on the web for these types of questions, like, for example, Google [google.com]:

    Dear Google, please explain antialiasing... [google.com]

    Thank you [widearea.co.uk] !

  • by ikekrull ( 59661 ) on Tuesday February 13, 2001 @01:07PM (#434452) Homepage
    Maybe it's a limitation of the extremely pipelined graphics architectures prevalent today, but why not use some kind of thresholding algorithm to determine when a pixel needs to be antialiased?

    i.e., for each scan line, check the (color or Z) value of the current pixel against its neighbor, and only perform the antialiasing step if the difference between them exceeds some value.

    As I understand it, FSAA actually antialiases every single pixel. Surely this is incredibly inefficient, since antialiasing the already bilinearly-interpolated texture of the interior of a polygon is somewhat pointless.

    If this approach is unwieldy, I'd be interested to know why.
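
A minimal sketch of the thresholding this comment proposes: walk a scanline, compare each pixel with its neighbour, and flag only the pixels where the color (or Z) difference exceeds some value as candidates for the expensive anti-aliasing step. The threshold and data layout are arbitrary illustrations, not a claim about how hardware would do it.

```python
def edge_mask(scanline, threshold=32):
    """Mark pixels whose value differs from the previous pixel by more than
    `threshold`; only these would get the expensive anti-aliasing treatment.
    `scanline` is a list of luminance (or Z) values for one row."""
    mask = [False] * len(scanline)
    for x in range(1, len(scanline)):
        if abs(scanline[x] - scanline[x - 1]) > threshold:
            mask[x] = True
            mask[x - 1] = True   # smooth both sides of the discontinuity
    return mask

print(edge_mask([10, 12, 11, 200, 201, 199, 15, 14]))
# [False, False, True, True, False, True, True, False]
```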

  • by edmz ( 118519 ) on Tuesday February 13, 2001 @12:51PM (#434453) Homepage
    ...just "forget" to clean your glasses for a couple of days. You wont believe what it does to those jagged lines.
  • by grammar nazi ( 197303 ) on Tuesday February 13, 2001 @03:34PM (#434454) Journal
    You are correct. The name of the phenomenon is Laser Speckle Interferometry (I took a PhD class in this exact subject). These laser 'speckles' do not correlate to the resolution of the human eye; rather, they correlate to the wavelength of the laser light. One way to verify this is to look at the laser spot and begin to squint your eyes (or look through a tiny aperture). As the aperture gets smaller, the speckles get larger. This is because there is a smaller area of light reflecting off of the surface through the aperture, thus allowing less interference and larger speckles. It works, try it!

    LSI is currently being used all over the place in non-destructive testing. The movement of these speckles is very sensitive to the movement of the surface. For example, one can cover an inflated airplane tire with laser light and take an image. Next, add an additional 1-5 PSI to the tire, cover it with laser light, and re-image it. Now when you subtract the two images, you will get nice moiré-like fringes. Any small gashes or imperfections will be surrounded by many fringes and will be easy to see.

    I can recommend an excellent and very readable book on the subject: Gary Cloud's 'Optical Methods of Engineering Analysis'. His text covers birefringent materials and laser speckle interferometry in great detail. It also covers many other areas such as holograms and I forgot what else.

  • by johndiii ( 229824 ) on Tuesday February 13, 2001 @12:54PM (#434455) Journal
    Consider several areas: Games, CG effects for movies/video, impact on system cost.

    In games: At some point, focus on technology detracts from the actual game. If it is assumed that the total cost (that amount that will be spent on development) is fixed, then money spent supporting this type of technology will not be spent on more levels, maps, characters, artwork, etc. The key remains suspension of disbelief.

    Movies/video: This is interesting, because newer graphics hardware allows PC rendering to look much closer to dedicated CG effects systems. Yet there is a performance gap. Will we be able to make movies on our PCs? Yes. Does anyone care about the difference in rendering quality? Probably not, unless you're trying to get a studio to release your movie.

    System Cost: The GeForce II MX that I put into the last system that I built cost about 60% of the price of the EGA card that I bought in 1988 (maybe 1987). Looks great; less filling (time).

    Short answer: not much difference. We've definitely reached the point of diminishing returns in the application of graphics technology.
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday February 13, 2001 @12:53PM (#434456) Homepage Journal
    Pixar has a patent on the stochastic dither multi-sample antialias. They've enforced it before.

    Bruce

  • by mc6809e ( 214243 ) on Tuesday February 13, 2001 @04:32PM (#434457)
    "Just as a point of interest, and education:"

    Before you try to educate someone else, start with yourself! The light/dark pattern that's seen with the experiment you describe is nothing more than an interference pattern created when the monochromatic and coherent light reflects off the surface of the object you're looking at and strikes your retina.

    For a more complete description, take a look at:

    http://www.repairfaq.org/sam/laserioi.htm#ioiscs0 [repairfaq.org]

    Now, as far as eye resolution goes:

    "For an eye with 20/20 vision, the angular resolution is 1 arcminute (1/60th of a degree)" [colorado.edu]

    With this information, we can make a good guess at what a monitor's resolution "ought" to be.

    Take the sine of 1/60° and multiply this by the approximate distance from the monitor.

    At 3 feet, you get about 0.0105 inches. So you need about 100 pixels per inch. That's 10,000 per square inch.

    A 17" monitor is about 13"x10", so a resolution of 1300x1000 should do the trick at 3 feet.

    Also, notice the qualifications I made

    A 17" monitor...

    ...at 3 feet.

    This shows that the statement "The human eye can see aliasing artifacts at resolution up to and even beyond 4000x4000, so obviously 1600x1200 is not sufficient"

    is meaningless without knowing the viewing distance.

    But even knowing the viewing distance still gets us nowhere. Notice the "ought" above in quotes. No matter what the resolution of the monitor, there exist textures that will cause aliasing unless other steps are taken. Outline of a proof:

    1) Cast one ray from the virtual eye (POV) through each pixel of the screen onto a surface parallel to the screen, but some distance away in the virtual world.

    2) Where the cast rays meet the surface, find the texture element of the surface at this intersection and color it white.

    3) Color the rest of the texture elements black.

    It should be obvious that the surface viewed at this distance with this texture and without anti-aliasing will appear totally white. It should also be obvious that for any resolution we can create a texture that will cause this effect.
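
The same arithmetic as in the comment above, written out as a small calculation (just the poster's numbers: 1 arcminute of angular resolution, a 3-foot viewing distance, and a roughly 13 x 10 inch visible area):

```python
import math

arcminute = math.radians(1.0 / 60.0)   # 1/60 of a degree, in radians
viewing_distance_in = 36.0             # about 3 feet

pixel_pitch_in = math.sin(arcminute) * viewing_distance_in
pixels_per_inch = 1.0 / pixel_pitch_in

print(pixel_pitch_in)    # ~0.0105 inches, as in the comment
print(pixels_per_inch)   # ~95 ppi, i.e. roughly 100 pixels per inch

# A ~13" x 10" visible area at that pitch:
print(13 * pixels_per_inch, 10 * pixels_per_inch)   # ~1240 x ~955, call it 1300x1000
```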
