Non-fullscreen Anti-Aliasing with OpenGL and SDL


Oh.. that's sad.
Btw, on my ATI Radeon laptop I do see multisample-capable visuals. I tried using the buffer sizes listed by glxinfo for the multisampled visuals,
but I still see no AA going on.
Also, SDL_GL_GetAttribute returns 4 and 1 for samples and buffers respectively.
What's happening?



Are you sure? Take two identical screenshots with and without AA and compare them. If SDL reports that multisample buffers are present, it should work (with glEnable(GL_MULTISAMPLE)). Try drawing some simple lines, or a triangle with a severely acute angle; it's easier to compare that way.
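Something along these lines should be enough to verify it (a minimal, untested sketch using SDL 1.2 style calls; it assumes your GL headers define GL_MULTISAMPLE, otherwise use GL_MULTISAMPLE_ARB from the ARB_multisample extension):

[code]
/* Request a multisampled visual, then verify what the driver actually
 * gave us - drivers may silently ignore the request. */
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include <stdio.h>

int main(void)
{
    int buffers = 0, samples = 0;

    SDL_Init(SDL_INIT_VIDEO);

    /* The attributes must be set *before* creating the window. */
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);
    SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);

    /* Query what we actually got. */
    SDL_GL_GetAttribute(SDL_GL_MULTISAMPLEBUFFERS, &buffers);
    SDL_GL_GetAttribute(SDL_GL_MULTISAMPLESAMPLES, &samples);
    printf("MS buffers: %d, samples: %d\n", buffers, samples);

    /* Multisampling still has to be enabled on the GL side. */
    glEnable(GL_MULTISAMPLE);

    /* ... draw a thin line or a very acute triangle here and compare
     * screenshots taken with and without glEnable(GL_MULTISAMPLE) ... */

    SDL_Quit();
    return 0;
}
[/code]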


[quote]
P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back on? A software implementation, perhaps? Something...
[/quote]

Well, older techniques include antialiasing with the accumulation buffer. The idea is to render the frame several times into the accumulation buffer, introducing a small jitter into the projection transform each time (so that each rendered frame is shifted by a sub-pixel amount in a different direction), and then averaging the results. Google for that, there's plenty of info out there. But keep in mind that this method comes with significant overhead, since you pay the full rendering cost once per jitter sample. It's for you to decide if the goal is worth the hassle :)
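A rough sketch of the idea (untested; it assumes the visual was created with an accumulation buffer via SDL_GL_ACCUM_RED_SIZE and friends, and draw_scene() stands in for your own rendering code):

[code]
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include <GL/glu.h>

extern void draw_scene(void);   /* hypothetical scene callback */

#define N_JITTER 4

static const float jitter[N_JITTER][2] = {
    { 0.25f, 0.25f }, { 0.75f, 0.25f },
    { 0.25f, 0.75f }, { 0.75f, 0.75f },
};

void render_antialiased(int width, int height)
{
    int i;
    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < N_JITTER; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Shift the projection by a fraction of a pixel (one pixel
         * is 2/width by 2/height in normalized device coordinates). */
        glTranslatef((jitter[i][0] - 0.5f) * 2.0f / (float)width,
                     (jitter[i][1] - 0.5f) * 2.0f / (float)height, 0.0f);
        gluPerspective(60.0, (double)width / (double)height, 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);

        draw_scene();
        glAccum(GL_ACCUM, 1.0f / N_JITTER);   /* add 1/N of this frame */
    }
    glAccum(GL_RETURN, 1.0f);                 /* write the average back */
    SDL_GL_SwapBuffers();
}
[/code]

The 2x2 jitter pattern here is arbitrary; rotated-grid patterns give slightly better edge quality for the same sample count.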
WBW, capricorn

[quote]
P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back on? A software implementation, perhaps? Something...
[/quote]

I'm afraid you won't get anywhere with an Intel graphics chipset:

Source: Intel
[quote]
Intel chipsets with integrated graphics do not support full scene anti-aliasing. Anti-aliased lines are supported in OpenGL* applications.
[/quote]

Now, alternatives do exist. The traditional approaches include supersampling and the accumulation-buffer (or FBO-based) jittering methods already mentioned. None of these will realistically work on Intel GPUs, as they require vast amounts of memory and rendering performance. In fact, MSAA (multisample antialiasing, the most common form of FSAA) was developed precisely to counter the huge resource requirements of those two algorithms.
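For completeness, this is roughly what FBO-based supersampling looks like (a hedged sketch, not something I'd expect to run well on an Intel chip; it assumes the GL_EXT_framebuffer_object entry points are resolved, and scene_fbo, scene_tex and draw_scene are illustrative names):

[code]
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

extern void draw_scene(void);   /* hypothetical scene callback */

static GLuint scene_fbo, scene_tex, depth_rb;

void create_supersample_target(int w, int h)
{
    /* Color texture at 2x the window resolution. */
    glGenTextures(1, &scene_tex);
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w * 2, h * 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Matching depth buffer, so depth testing works inside the FBO. */
    glGenRenderbuffersEXT(1, &depth_rb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                             w * 2, h * 2);

    glGenFramebuffersEXT(1, &scene_fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, scene_fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, scene_tex, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth_rb);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

void render_supersampled(int w, int h)
{
    /* 1. Render the scene at 2x resolution into the FBO. */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, scene_fbo);
    glViewport(0, 0, w * 2, h * 2);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene();

    /* 2. Downsample: draw the big texture as a fullscreen quad;
     *    GL_LINEAR filtering averages the extra samples. */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    SDL_GL_SwapBuffers();
}
[/code]

Note that this quadruples both the fill rate and the framebuffer memory, which is exactly why it is impractical on low-end hardware.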

Recently, a number of new shader-based post-processing FSAA algorithms have been developed. The basic idea is to detect edges in the rendered image and blur them in a post-process pass. Some AAA games use these techniques to some extent, including Crysis AFAIR. The intent is to reduce the memory consumption of typical MSAA and to circumvent MSAA's limitations with respect to deferred buffers. However, even though these algorithms require less memory than large-kernel MSAA, they are still very shader intensive. It is rather unlikely that they will perform acceptably on very low-end chips such as Intel GPUs.
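To give an idea of what such a post-process looks like, here is a toy edge-detect-and-blur fragment shader in the spirit of those techniques (a hedged sketch, not any shipped implementation; the uniform names "scene" and "texel" are illustrative):

[code]
/* Fragment shader for the post-process pass. Compile it with
 * glCreateShader(GL_FRAGMENT_SHADER) / glShaderSource /
 * glCompileShader, link it into a program, and draw the rendered
 * frame as a fullscreen textured quad with the program bound. */
static const char *edge_blur_frag =
    "uniform sampler2D scene;  /* the rendered frame */\n"
    "uniform vec2 texel;       /* 1.0 / resolution   */\n"
    "float luma(vec3 c) { return dot(c, vec3(0.299, 0.587, 0.114)); }\n"
    "void main() {\n"
    "    vec2 uv = gl_TexCoord[0].xy;\n"
    "    vec3 c = texture2D(scene, uv).rgb;\n"
    "    vec3 n = texture2D(scene, uv + vec2(0.0,  texel.y)).rgb;\n"
    "    vec3 s = texture2D(scene, uv - vec2(0.0,  texel.y)).rgb;\n"
    "    vec3 e = texture2D(scene, uv + vec2(texel.x, 0.0)).rgb;\n"
    "    vec3 w = texture2D(scene, uv - vec2(texel.x, 0.0)).rgb;\n"
    "    /* edge strength from the luminance gradient */\n"
    "    float edge = abs(luma(n) - luma(s)) + abs(luma(e) - luma(w));\n"
    "    /* on strong edges, blend towards the 4-neighbour average */\n"
    "    vec3 blur = 0.25 * (n + s + e + w);\n"
    "    gl_FragColor = vec4(mix(c, blur, clamp(edge * 4.0, 0.0, 1.0)), 1.0);\n"
    "}\n";
[/code]

Even this trivial filter does five texture fetches per pixel; real algorithms like MLAA do far more work per pixel, which is why they are problematic on weak shader hardware.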
