Antialiasing
There are so many confusing, poorly written, or just plain incorrect articles about antialiasing (AA) that I feel compelled to add another one to the mix. Every article I've seen either gets caught up in the minutiae and technical details of each AA technique, which don't really matter, or is so generic as to be useless for getting a decent understanding of the current state of AA.
I stumbled onto this realm of vitriol and confusion by trying to find out a bit of basic information about the latest fad in AA, FXAA (and its derivatives). Many of the articles and, especially, comments about this topic were hilariously misinformed. Though amusing to watch, the confusion really speaks to the lack of a clear and simple explanation on the topic. I'm sure better articles than the one I'm about to write exist out there somewhere, but in my 97 seconds of googling I was unable to find one (though the AA article on PC Gaming Wiki is not bad). In short, someone is wrong on the internet, and I'm going to do my part to fix that!
I'm breaking this up into two parts. The first half discusses what aliasing is and why we care about using AA to get rid of it. There are a LOT of misconceptions out there about the fundamental nature of the problem.
The second half covers the current technologies out there to do antialiasing in the real-time 3D graphics world (which is to say, gaming). Its focus is on clearing up buzz-words and acronyms and classifying the different AA approaches in use today.
Aliasing and Jaggies
So what the hell is aliasing, anyway? In short, aliasing (in the context of computer graphics) is a graphical artifact that results from the fact that most displays have BIG PIXELS.
Unless you are reading this on a new iPad, the pixels on whatever display you are using are physically large enough that you can individually pick out a single pixel. This will be old news to anyone who has done graphics work or web design, but it may be surprising to computer users who don't spend their time dealing with this on a daily basis.
(As an aside, in case you came into this article not knowing what a pixel is: a pixel is the basic physical unit of display for any kind of TV or monitor. A monitor displays an image by controlling the color of each and every pixel on the display. A display cannot show an individual, uniform color element any smaller than a single pixel. The resolution of your display defines how many pixels it has. A resolution of 1920x1080 means that the display is made up of a grid of pixels that is 1920 pixels wide and 1080 pixels high. That's over two million pixels. This also happens to be the resolution of any modern HD TV or monitor labeled as "1080p").
To prove that your display has BIG PIXELS, I present the following image. Can you see the white dot in the middle of it? That white dot is a single pixel on your display. If you can see it, your pixels are large enough to be noticed!
So what does pixel size have to do with aliasing? Everything!
Because pixels are big and noticeable, it is impossible to draw a completely smooth, curvy line or a completely straight, diagonal line on a computer screen. When trying to do so, we end up with something that has a "stair-step" or "jagged" pattern. This pattern occurs because we can only draw parts of our line in one pixel or another. We can't draw it half-way between two pixels because a pixel must always display a single, uniform color across its entire physical size. Since pixels are large enough to see, most people will notice the aliasing, or "jaggies", in the appearance of the following curve:
Note that the use of the word "aliasing" to describe this graphical artifact comes from the signal processing world and predates computer graphics. In other words, we didn't just make this stuff up.
Antialiasing in Brief
You might be amazed at just how much effort goes into making curvy and diagonal things look smooth on your monitor on an everyday basis. All of your windows and fonts and web sites have had hours of effort poured into making sure that you never notice your pixels.
But the question is, HOW do we make sure you never notice your pixels? Well the simplest and easiest way is... blur!
Effectively we can use color and contrast to trick the eye into not noticing the jagged edges. We do this by using selective blurring of the jagged edges to more gradually change the color from one pixel to the next. This more gradual contrast from one pixel to the next makes the edge appear less well defined and less jagged. If we do it juuuuust right, it makes the line appear smooth and, counter-intuitively, more "crisp" than the jagged counterpart. Too much, though, and it just ends up looking blurry.
Here is an example of some AA smoothing being applied to our jagged curve:
To clarify, we haven't made the pixels any smaller. We just added a lot more pixels to the line in various shades of gray to blend the white line into the black background. Despite the fact that every one of these pixels is just as visible as before, your eye perceives this collection as a much smoother line, because vision is just neat that way. Here's a zoom of the previous images to show how this works:
Note that there are other tricks in addition to blur for doing AA. Font rendering in particular makes use of a nifty thing called sub-pixel rendering, but I'm not going to cover that here. It's an overly technical thing that's not all that relevant for my main focus of 3D gaming AA.
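If you're curious what the gray-blending trick looks like in practice, here's a minimal sketch in Python. This is not how any real renderer draws lines (those use much faster algorithms); it just demonstrates the core idea of shading each pixel by how close it sits to the ideal line, instead of making a hard on-or-off decision.

```python
import math

def aa_line(width, height, x0, y0, x1, y1, thickness=1.0):
    """Draw an antialiased line as a grayscale image (rows of 0.0-1.0 values).

    A sketch only: each pixel's brightness comes from its distance to the
    ideal line. A hard cutoff (1.0 if dist < thickness else 0.0) would
    reproduce the jaggies; the smooth ramp below is the antialiasing.
    """
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # Perpendicular distance from this pixel's center to the line.
            dist = abs((px - x0) * dy - (py - y0) * dx) / length
            # Ramp brightness down over one pixel's width near the edge.
            row.append(max(0.0, min(1.0, thickness - dist)))
        image.append(row)
    return image
```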
Perception and Pixel Size
There's an important thing to clarify about aliasing and AA that everyone seems to spend a lot of time fighting about: not everyone notices aliasing to the same extent! Whether or not you care about AA and the relative importance you place on it versus other visual factors usually boils down to whether or not you notice the aliasing in the first place. And whether or not you notice aliasing has a lot of contributing factors.
High-contrast aliasing (like the white line on black background example) is, for most people, a lot more noticeable than low-contrast aliasing or aliasing that is occurring in a "busy" setting with lots of color, variable contrast, and motion. This is simply a subjective thing about personal perception and it is one reason that many people who may clearly see the "jaggies" in the example image on this page still don't notice aliasing at all in an actual computer game or movie being displayed on their monitor. It doesn't mean the aliasing has gone away, it just means that their perception of the aliasing in these contexts may change dramatically to the point that it stops being noticeable.
This really shouldn't come as a surprise, especially considering that AA itself often just manipulates contrast and other factors to reduce our perception of the pixels. In other words, AA only works because of perception in the first place! And yet people will get surprisingly vehement about this topic on forums -- to sum up the typical exchange, if you don't notice jaggies you're a graphical Luddite and if you do notice jaggies you're an OCD basement dweller.
Subjective perception aside, it may surprise you to learn that there is no "standard" size for pixels. Some displays simply have smaller pixels than others. The size of the pixels in your display is easy to figure out; take the number of pixels it has in one dimension and divide by its physical size in the same dimension (works with either height or width since most displays have square pixels). This will give you the "pixels per inch" (PPI) of the display. For instance, my monitor is about 11.75" high and has a vertical resolution of 1080, so its PPI is about 92.
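In code form the calculation is a one-liner (a throwaway sketch, using my own monitor's numbers):

```python
def ppi(pixels, inches):
    """Pixels per inch along one dimension of a display."""
    return pixels / inches

# My monitor: 1080 vertical pixels spread over about 11.75 inches.
print(round(ppi(1080, 11.75)))  # -> 92
```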
By increasing the resolution of a display without changing its physical size, you increase the PPI. The higher the PPI of a display, the smaller its pixels (which just makes common sense... packing more pixels into the same space must mean each pixel gets smaller).
A standard desktop monitor is going to have a PPI in the range of 80 to 100. A lot of laptops, however, are in the 100 to 150 PPI range. And many mobile devices have a PPI of between 200 and 300. The new "retina" display from Apple has a PPI of about 320. Since aliasing is a direct artifact of big pixels, the smaller the pixels in your display, the less likely you are to notice aliasing. If you game on a laptop, for instance, aliasing might be much less noticeable for you than for a desktop gamer.
The holy grail of PPI is around 300 or so; anything higher than this and the pixels become so small that most people cannot pick them out individually (though the exact limit is a matter of personal perception and your own eyesight). Unfortunately there has been little impetus for display manufacturers to make high PPI monitors or TVs for regular consumer use. There are a ton of reasons for this, none of which I'm going to talk about here.
Viewing habits also subjectively change your perception of the pixel size. The further away you are from the display, the smaller the pixels seem to you. You might call this "relative PPI" or any number of other fancy things, but the basic idea is that the further you are from a display, the less likely you'll be able to pick out individual pixels and therefore the less likely you are to notice aliasing.
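If you want to put a number on this "relative PPI" idea, one way is to count pixels per degree of visual angle rather than pixels per inch. Here's a rough sketch of that calculation (the function and the scenario are mine, not any standard):

```python
import math

def pixels_per_degree(ppi, viewing_distance_inches):
    """Pixels packed into one degree of visual angle at a given distance.

    A rough stand-in for 'relative PPI': the same panel delivers more
    pixels per degree, and thus less visible aliasing, from farther away.
    """
    inches_per_degree = 2 * viewing_distance_inches * math.tan(math.radians(0.5))
    return ppi * inches_per_degree

# A 92 PPI desktop monitor from typical desk distance vs. across the room:
print(round(pixels_per_degree(92, 24)))  # -> ~39
print(round(pixels_per_degree(92, 96)))  # -> ~154
```

For reference, 20/20 vision resolves detail down to roughly one arcminute, or about 60 pixels per degree, which is roughly what a 300 PPI display delivers at a handheld viewing distance of about a foot.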
Why Aliasing is an Issue for 3D Gaming
This brings us, finally, to the last point of this first half. Why is aliasing an issue for 3D gaming?
Really, aliasing is an issue for ALL computer graphics, not just real-time 3D. Every time something gets drawn to the computer display, somebody somewhere has thought about aliasing and implemented a method or an algorithm to deal with it. The big difference is that static images (like the ones on a website) or simple 2D graphics (like your application windows and icons) have relatively well-understood, well-performing solutions. That's not to say it's always an easy problem; again, you might be surprised at just how much effort it takes to display a simple line of text clearly and without aliasing artifacts. But it is a problem that has been pretty much solved with satisfactory performance for most 2D applications.
For 3D graphics, the problem has to do with the dynamic nature of the final image. 3D graphics are generated by drawing polygons and then painting them with textures. The edges that occur between polygons present a big problem. These edges are routinely aliased and jagged. Likewise, even if we pre-process our textures to avoid aliasing, modern 3D graphics engines do a LOT of texture work. They scale them, rotate them, stretch them, distort them, skew them, and even stitch multiple textures together. All of this manipulation of textures can easily re-introduce aliasing back into the image as well. Finally, post-process effects like shadows and particles can add even more sharp, aliased edges to the rendered frame.
The question then becomes, how do we get rid of all that aliasing in a generic fashion without ruining the final image quality while still maintaining a decent framerate?
The State of 3D AA
There are three broad categories of AA techniques that exist today for modern real-time 3D graphics. I haven't seen these three categories identified as such, so I'm using my own labels for these. That means you aren't likely to find these exact category names described anywhere else, despite the fact that I think they are crucial to understanding the various options and techniques in modern AA. They are, in order of their introduction to mainstream 3D graphics:
- Scaled AA (scaling, SSAA, Adaptive SSAA)
- Pipeline AA (MSAA, TSAA/TMAA/AdAA/TrAA, CSAA/CFAA/EQAA, Quincunx/QSAA, TXAA)
- Shader AA (MLAA/FXAA, SMAA)
It's important to note that in my discussions on these techniques I will be greatly simplifying the technical details once we get past the broad categorical distinctions. Too many articles try to cover the nuanced technical differences between each individual form of AA and, frankly, it's completely irrelevant to anyone but a 3D programmer. If you happen to BE a 3D programmer, beware the cringe-worthy over-simplifications ahead.
Scaled AA
Before 3D graphics were a hot thing, 2D graphics processing had already been dealing with aliased images for a long time. It came up often when you wanted to scale an image; that is, resize an image to be larger or smaller. When scaling an image up, you had to somehow "fill in the blanks" with image data you didn't have. When scaling an image down, you had to decide which original pixels to use and which original pixels to throw out. If you didn't have intelligent algorithms to deal with this issue, the resulting scaled image could end up with lots of ugly aliasing artifacts just like the ones we want to get rid of.
Luckily a lot of smart people came up with a lot of clever algorithms for interpolating pixels in order to make the 2D image scaling look as nice as possible. These algorithms have been refined and tinkered with for so long that they are now extremely sexy and very fast. In fact many of these algorithms are even implemented directly in the hardware of modern monitors and TVs because they often have to deal with displaying a source image that is not the same resolution as the monitor.
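To give a flavor of what these algorithms do, here is one of the simplest, bilinear interpolation, sketched out in Python. Real implementations are heavily optimized (and, as noted, often baked into hardware), so treat this purely as an illustration:

```python
def bilinear_upscale(src, new_w, new_h):
    """Upscale a grayscale image (a list of rows) by bilinear interpolation.

    Each output pixel blends the four nearest source pixels by distance,
    'filling in the blanks' smoothly. Naive pixel duplication, by contrast,
    is what produces blocky, aliased upscales.
    """
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(new_h):
        # Map the output coordinate back into source space.
        fy = y * (src_h - 1) / max(new_h - 1, 1)
        y0 = int(fy)
        y1 = min(y0 + 1, src_h - 1)
        ty = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (src_w - 1) / max(new_w - 1, 1)
            x0 = int(fx)
            x1 = min(x0 + 1, src_w - 1)
            tx = fx - x0
            top = src[y0][x0] * (1 - tx) + src[y0][x1] * tx
            bottom = src[y1][x0] * (1 - tx) + src[y1][x1] * tx
            row.append(top * (1 - ty) + bottom * ty)
        out.append(row)
    return out
```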
The interesting result is that if you take a final, rendered 3D frame and simply resize it, some of your aliasing artifacts will disappear. You can abuse the hardware in a modern monitor or TV to do this for you by setting a game to render at a different resolution (usually slightly larger) than the actual display resolution, then having the monitor or TV scale it down for you. This can eliminate some of the most noticeable "jagged edge" aliasing, but it may also introduce other issues. For instance, it may blur text or the user interface at the same time.
This oft-overlooked "benefit" to scaling is one reason many console users never notice aliasing at all. All too frequently a console game is being "up-converted" by a TV to a different resolution than it was rendered in, and the aliased, jagged edges are smoothed away by the algorithm doing this. Of course, a lot of the other visual content is suddenly blurry and less crisp as well.
Scaled AA works best when you produce an image much larger than you need, with more detail than your monitor can display, and then scale it down. This leads to...
Supersampling (SSAA)
Supersampling is an extremely simple concept. If we render an image much larger than the final image needs to be, then scale it down to the necessary size at the end, we can get rid of all the aliasing artifacts while barely impacting image quality. The reason this works is fairly easy to understand.
As already covered, a 300 PPI monitor would have tiny, indistinguishable pixels and therefore no perceptible aliasing. So let's pretend that I have a monitor that is 19.2" wide and 10.8" high and 300 PPI. It would have a resolution of 5760x3240. If I rendered my image to this imaginary 300 PPI monitor I would be unable to distinguish individual pixels and wouldn't notice any aliasing at all. That'd be great!
Unfortunately, a desktop monitor of this size is usually going to be at a resolution of only 1920x1080 (100 PPI). However, we know that there are awesome and fast algorithms that can resize a large image down to a smaller size without adding any noticeable aliasing. Sure the image on the 100 PPI monitor might be a little less crisp and have a bit of blur in comparison to the image on the 300 PPI monitor, but this would be the absolute optimal solution whose only limitation is the PPI of the display device.
SSAA does exactly that. It renders a very large image (usually 2x or 4x larger than the desired final resolution) and scales it down. The result is a high quality image with no artifacts and very little blur or image distortion. SSAA is really the ideal AA solution and was the very first AA solution introduced to 3D gaming. To this day it still reigns as the most visually attractive form of AA as well as one of the simplest techniques to integrate into a 3D engine -- really an engine doesn't need to do anything at all except render at very large resolutions.
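The downscale step itself can be as simple as averaging blocks of pixels. Here's a minimal sketch assuming 2x SSAA and a plain box filter (real scalers use fancier filters, but the principle is the same):

```python
def ssaa_downsample_2x(big):
    """Collapse a grayscale image rendered at 2x resolution to target size.

    Each output pixel is the average of a 2x2 block of supersampled
    pixels, which is where the smoothing comes from: edge pixels end up
    as in-between shades instead of hard on/off values.
    """
    out_h, out_w = len(big) // 2, len(big[0]) // 2
    return [
        [
            (big[2 * y][2 * x] + big[2 * y][2 * x + 1] +
             big[2 * y + 1][2 * x] + big[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(out_w)
        ]
        for y in range(out_h)
    ]
```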
Alas, SSAA is extremely expensive to do. Doubling our resolution (200 PPI) doesn't mean we have to do twice the work; we actually have four times as many pixels to draw as before! (1920x1080 = 2073600 while 3840x2160 = 8294400). Likewise, tripling the resolution (our 300 PPI example) means we have nine times the number of pixels to draw! This means I'd need a graphics card 4x the power of my current one just to reach 200 PPI on the same size monitor and 9x as powerful to reach 300 PPI.
This is why a common misconception I see floating about (that higher PPI monitors will fix the problem) is simply not true. If I could rush out tomorrow and buy a 300 PPI monitor to stick on my desktop I would still be in the exact same boat I am now; I'd need a video card 9x more powerful to render the same content to that monitor. If I had a video card that powerful, I could just render the image much larger than needed and use SSAA to make it look really nice and clean without upgrading my monitor at all.
Adaptive Supersampling (Adaptive SSAA)
Adaptive SSAA is an attempt to reduce the performance impact of SSAA while still gleaning all of its benefits. In short, the engine tries to decide whether neighboring pixels in the final output really need all that AA work done. If they do, the engine can render just that small area at a much higher resolution, then scale it down to get the final output pixels.
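In toy form, the "does this area need AA?" decision might be a simple contrast check like the one below. This is purely illustrative; a real renderer makes this decision with far more information (geometry, depth, and so on) than a finished image provides.

```python
def needs_aa(img, x, y, threshold=0.25):
    """Flag pixels whose neighborhood has enough contrast to show jaggies.

    A toy heuristic: compare a grayscale pixel against its right and lower
    neighbors. Only flagged regions would then be re-rendered at higher
    resolution and scaled back down, SSAA-style.
    """
    return (abs(img[y][x] - img[y][x + 1]) > threshold or
            abs(img[y][x] - img[y + 1][x]) > threshold)
```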
In practice, adaptive SSAA did not typically provide a significant enough performance boost to be useful. Note that these days, "Adaptive Supersampling" almost always refers to "Transparency Adaptive Supersampling" (TSAA).
Pipeline AA
SSAA came along and people saw how beautiful 3D could be without aliasing, but unfortunately no one actually had a video card powerful enough to run SSAA for most games. This started a technology race to try and reproduce the graphical results of SSAA with less intensive algorithms that didn't melt down video cards.
A batch of techniques developed out of the realization that the 3D engine knows a lot about the 3D geometry of a scene, and that this information can be used by AA algorithms inside the rendering pipeline to make more intelligent decisions about where and when to spend processing power to apply AA to specific parts of the image.
One major drawback of these pipeline AA techniques is that, for the most part, they must be integrated into the graphics engine. This makes it very difficult to retroactively apply new pipeline techniques to old games or engines, or to take advantage of new GPU hardware improvements. A great example of this is the new nVidia TXAA technique that recently launched; while a very effective and good-looking AA technique, it only works in games with engines that have been explicitly modified to support this new process.
Multisample AA (MSAA)
The details of exactly how MSAA works aren't that relevant. What matters is that MSAA is very good at using the 3D geometry of a scene to apply AA only to the areas that need it most: in particular, the sharp edges that show up where one polygon is rendered over top of something else. Tight integration with the rendering pipeline and the scene geometry allows MSAA to apply AA directly to this special edge case (har har).
MSAA can eliminate aliasing caused by polygon edges with an acceptable performance impact. Unfortunately, MSAA doesn't deal well with aliasing from other sources. For instance, it doesn't handle aliasing within textures (due to texture scaling, rotation, procedural generation, etc.).
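To make the contrast with supersampling concrete, here's the core idea in toy form. The sample positions and function names are made up for illustration; real MSAA runs in dedicated hardware with vendor-tuned sample patterns.

```python
# Toy MSAA for one pixel: test polygon coverage at several sub-pixel
# positions, but run the expensive shading computation only ONCE per pixel.
# Supersampling would instead shade at every sub-sample, hence its cost.

SUBSAMPLES = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def msaa_pixel(px, py, inside_polygon, shade, background=0.0):
    """inside_polygon(x, y) -> bool is a cheap point-in-polygon test;
    shade(x, y) -> float is the expensive per-pixel shading."""
    hits = sum(inside_polygon(px + sx, py + sy) for sx, sy in SUBSAMPLES)
    coverage = hits / len(SUBSAMPLES)
    if coverage == 0.0:
        return background
    color = shade(px + 0.5, py + 0.5)  # shaded once, regardless of hit count
    return coverage * color + (1 - coverage) * background
```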
There are many different nuances in how exactly MSAA can sample the image to perform its AA, and some of these nuances are given vendor-specific monikers. For instance, nVidia had a habit of adding "Q" to any of its MSAA modes that it thought produced better quality AA (though it now seems to use Q in AA modes to differentiate between MSAA and CSAA without having to use either of those terms). Often these monikers relate to the geometric pattern used in the MSAA sampling method.
In practice, the exact technical details between MSAA modes don't matter and wouldn't help you pick one MSAA mode over another. What matters is trying out the different MSAA modes in game and picking the one that looks best to you with the least performance hit you are willing to tolerate.
Transparency Multisampling AA (TMAA), Transparency Supersampling AA (TSAA), Transparency AA (TrAA), Adaptive AA (AdAA)
For all its performance boosts, MSAA does not handle transparent textures. Transparent textures are a common way of "faking" intricate geometric detail without actually creating 3D geometry. For instance, a chain-link fence might be a very simple polygon with a complicated transparent texture that produces the chain-link wire pattern while allowing the background scene to show through where the texture is transparent (e.g. between the wires of the fence).
MSAA does not typically handle aliasing inside a texture at all, and transparent textures are textures just like any other. Their transparent bits, when used to mimic geometry, have a tendency to cause aliasing issues just like the edge of a polygon would. To address this deficiency, transparency AA re-introduces more expensive AA only for the areas of a scene that contain an object with a transparent texture. Typically this is done by applying good old adaptive supersampling to just that part of the scene, but other AA algorithms may also be applied. These are usually vendor-specific (ATI vs. nVidia).
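In toy form, the problem and the fix look something like this. The 0.5 alpha cutoff is the conventional alpha-test threshold; everything else is illustrative:

```python
ALPHA_CUTOFF = 0.5  # conventional alpha-test threshold

def alpha_test(sample_alpha, x, y):
    """The aliasing culprit: one hard in-or-out decision per pixel."""
    return sample_alpha(x, y) >= ALPHA_CUTOFF

def transparency_ssaa(sample_alpha, x, y, offsets):
    """Transparency supersampling: run the same hard test at several
    sub-pixel positions and keep the fractional result, turning the
    jagged cutoff into smooth edge coverage."""
    hits = sum(sample_alpha(x + ox, y + oy) >= ALPHA_CUTOFF
               for ox, oy in offsets)
    return hits / len(offsets)
```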
Note that, without context, most modern references to "Adaptive Supersampling" or just "Adaptive AA" almost always relate to the specific case of transparent textures only, not full-scene supersampling.
Coverage Sample AA (CSAA), Custom Filter AA (CFAA), Enhanced Quality AA (EQAA)
CSAA, CFAA, and EQAA are vendor-specific extensions of MSAA. CSAA comes from nVidia; CFAA and EQAA come from ATI/AMD. As with the rainbow of vendor-specific MSAA modes, the details of exactly what CSAA, CFAA, and EQAA do in addition to the basic MSAA functionality are not all that relevant. The most important bit is to note that all three of these attempt to replicate the quality of MSAA whilst reducing the performance impact.
Picking one mode over the other simply boils down to trying them out to see which one you prefer while incurring the least performance impact.
Quincunx Super AA (QSAA), Quincunx
This is yet another MSAA method that has been given its own name. It relates primarily to the pattern used to take AA samples, but again the technical details are mostly irrelevant. It does have a tendency to blur textures across the board, however, which is a good reminder that MSAA variants do not always affect only edge geometry.
TXAA
TXAA is yet another pipeline-based technique, recently introduced by nVidia. One of its biggest improvements is that the algorithm uses temporal information -- that is, not just data from the current frame but other frames as well -- in order to perform its AA processing. This can address a "shimmering" effect often noticed when moving through a scene with classic MSAA techniques.
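Reduced to its essence, the temporal part is just blending each new frame with an accumulated history of previous frames. The sketch below shows that bare idea; real temporal AA (TXAA included) is considerably more involved, not least because the history has to be reprojected as the camera moves or everything would ghost.

```python
def temporal_accumulate(history, current, blend=0.1):
    """Exponential moving average of successive frames (grayscale rows).

    Each output pixel is mostly accumulated history with a little of the
    current frame mixed in, averaging away the frame-to-frame 'shimmer'
    of thin, sub-pixel detail.
    """
    return [
        [h * (1 - blend) + c * blend for h, c in zip(hrow, crow)]
        for hrow, crow in zip(history, current)
    ]
```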
Shader AA
Because MSAA and other pipeline techniques are geometry based and exist within the rendering pipeline, they don't deal well with aliasing introduced by shaders or other post processing effects. This can include things like shadows, bloom and HDR effects, particle effects, and so on. In fact it can even interfere with some of these. When MSAA was introduced these techniques were less common, but as 3D engines have become more advanced these sources of aliasing are showing up more often.
As a result of all this aliasing showing up in scenes that pipeline AA could not deal with, there was renewed interest in applying AA to the entire rendered image all at once. Some people call this "Full Screen AA" (FSAA), but I dislike the term because it would also include supersampling, which is really its own separate thing, and because it takes focus away from what is most interesting about this form of AA: the fact that it happens in post-processing shaders.
Without getting into the technical details of the rendering pipeline or what a shader is, it's sufficient to note that shader AA can analyze the entire rendered image, including all of its post process effects, and try to apply AA to all of that. It's akin to being able to open up every rendered frame in Photoshop and apply a filter effect just before that frame goes out to the monitor.
It's not deeply embedded in the 3D pipeline like geometry-based MSAA, nor is it an example of massive supersampling or rendering high resolution versions that will be scaled down. Instead it is selectively applying blurring or sub-pixel effects to the final rendered image at the same resolution as the final output in order to try and eliminate aliasing.
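As a taste of how such a filter works, here is a drastically simplified sketch in the spirit of the shader AA family. The real algorithms are far cleverer about finding edge shapes, directions, and blend weights; this just shows the "detect contrast, then blend" skeleton:

```python
def shader_aa_pixel(img, x, y, contrast_threshold=0.1):
    """Post-process AA on one pixel of a finished grayscale frame.

    If the pixel sits on a high-contrast edge, blend it toward its
    neighbors; otherwise return it untouched so flat areas stay sharp.
    """
    center = img[y][x]
    neighbors = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
    local_contrast = max(neighbors) - min(neighbors)
    if local_contrast < contrast_threshold:
        return center  # not an edge: no blurring
    return 0.5 * center + 0.5 * (sum(neighbors) / 4.0)
```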
Shader AA is interesting for a few reasons. One is that it tends to be extremely fast and has very little impact on performance because of the nature of the algorithms involved.
Shader AA can also be applied to almost any game on most modern video cards, even retroactively, by making simple changes to the 3D library to include the post process AA shaders. This is in contrast to pipeline AA which is deeply integrated into a game's rendering engine and the video card hardware and drivers.
Another feature is that a game that embeds a form of shader AA into its engine can get decent AA for virtually no performance penalty, and it can further include this AA at the most logical place in rendering for maximum benefit. For instance, a game might apply shader AA before rendering the user interface so that UI elements are not accidentally blurred out.
Finally, shader AA can work in conjunction with other types of AA. Since it has low performance impact and happens at the end of the rendering process, shader AA can be added on top of existing AA techniques rather than replacing them entirely.
For all these benefits, shader AA does have some drawbacks. Most notable is the fact that it affects the entire image. This can lead to blurring, color distortion, or brightness changes across the entire image that negatively impact the final quality.
Morphological AA (MLAA), Fast Approximate AA (FXAA)
Both are vendor-specific implementations of shader AA that can be enabled directly in the respective drivers. MLAA is the ATI/AMD version and FXAA is the nVidia one.
FXAA is notable in that it has an "injector" form, a version that can be added into the rendering process of any DX9 or DX10 game regardless of driver support. This works by simply copying a few files into the game's executable location to intercept its DX9 or DX10 rendering calls. Since this includes the actual FXAA shader source code, it's also possible to tweak the settings to customize the exact AA output independent of whatever the current implementation may be in the nVidia driver.
Enhanced Subpixel Morphological AA (SMAA)
SMAA is interesting because it really illustrates all of the powerful benefits of shader AA. It's a third-party shader written independently from both major hardware vendors (ATI/AMD and nVidia). Because it's external shader code it can be updated by the authors at any time, regardless of video card driver updates. The algorithm is continually being improved to increase edge detection accuracy while avoiding unnecessary blurring and quality degradation.
As with other shader AA, it can be applied retroactively to just about any DX9 or DX10 title provided the video card has the necessary shader support.