dsr12

OpenGL proper way to do antialiasing?


I read this post recently: http://www.gamedev.net/community/forums/topic.asp?topic_id=107637 and the problem the first poster had is exactly the problem I am having now. I made a large object composed of hundreds of quads, and when I tried that antialiasing technique I saw thin lines along the joints of the textured object. Unfortunately, the only response in the thread is not helpful. I do not want to force antialiasing through some control panel outside my application (although that does work). What other approach can I try? I am aware of the jitter technique with the accumulation buffer in the Red Book, but from what I have read elsewhere its performance sounds rather poor. How is OpenGL antialiasing done in real-world applications? Every approach I read about online seems to have some fundamental problem or to be an unofficial way of doing it.

Well, the only response to that other thread does mention the solution, even though it's poorly worded: you need to enable multisampling. ARB_multisample used to be the only way of doing this. It does essentially the same thing as changing the settings through the driver control panel, but under your application's control.
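In JOGL, for example, requesting such a multisampled pixel format looks roughly like the sketch below. This assumes the JOGL 1.x API (javax.media.opengl); the sample count of 4 is just an example, and the driver may round it down.

```java
import javax.media.opengl.GLCanvas;
import javax.media.opengl.GLCapabilities;

public class MultisampleSetup {
    public static GLCanvas createCanvas() {
        GLCapabilities caps = new GLCapabilities();
        caps.setSampleBuffers(true); // ask for an ARB_multisample-capable format
        caps.setNumSamples(4);       // request 4x MSAA
        return new GLCanvas(caps);
    }
}
```

Once the context is current, glEnable(GL.GL_MULTISAMPLE) turns the feature on, although many drivers already enable it when the pixel format has sample buffers.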

Nowadays, however, there is a much more flexible, powerful and simple way to achieve fullscreen antialiasing: EXT_framebuffer_multisample. The idea is to create a special multisampled FBO, and render your scene to it. You can very easily enable multisampling on FBOs, and control the quality (and performance) of the antialiasing. This approach also allows you to switch between different multisample qualities on the fly. For example, render to a lower quality multisample buffer while the player is moving fast (he is less likely to notice the artifacts in quick game action), and render to a high quality buffer when he stops or moves very slowly.
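Here is a rough sketch of creating such an FBO, assuming the JOGL 1.x bindings expose the EXT_framebuffer_multisample entry points. The sample count, size, and formats are placeholder choices, and real code should check glCheckFramebufferStatusEXT afterwards.

```java
import javax.media.opengl.GL;

public class MultisampledFbo {
    public int fbo, colorRb, depthRb;

    public void create(GL gl, int samples, int width, int height) {
        int[] id = new int[1];

        // Multisampled color renderbuffer.
        gl.glGenRenderbuffersEXT(1, id, 0);
        colorRb = id[0];
        gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, colorRb);
        gl.glRenderbufferStorageMultisampleEXT(GL.GL_RENDERBUFFER_EXT,
                samples, GL.GL_RGBA8, width, height);

        // Multisampled depth renderbuffer (same sample count is required).
        gl.glGenRenderbuffersEXT(1, id, 0);
        depthRb = id[0];
        gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, depthRb);
        gl.glRenderbufferStorageMultisampleEXT(GL.GL_RENDERBUFFER_EXT,
                samples, GL.GL_DEPTH_COMPONENT24, width, height);

        // Attach both to the FBO and render the scene into it as usual.
        gl.glGenFramebuffersEXT(1, id, 0);
        fbo = id[0];
        gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, fbo);
        gl.glFramebufferRenderbufferEXT(GL.GL_FRAMEBUFFER_EXT,
                GL.GL_COLOR_ATTACHMENT0_EXT, GL.GL_RENDERBUFFER_EXT, colorRb);
        gl.glFramebufferRenderbufferEXT(GL.GL_FRAMEBUFFER_EXT,
                GL.GL_DEPTH_ATTACHMENT_EXT, GL.GL_RENDERBUFFER_EXT, depthRb);
    }
}
```

Switching quality on the fly then amounts to keeping two such FBOs around with different sample counts and picking one per frame.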

Once your scene has been rendered to such an FBO, you need to resolve the multisample buffer to the screen (or to another FBO). EXT_framebuffer_blit will do this.
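Continuing the sketch above, the resolve step via EXT_framebuffer_blit might look like this (again JOGL 1.x names; framebuffer 0 is the window, and a multisample resolve requires equal rectangles and GL_NEAREST):

```java
import javax.media.opengl.GL;

public class MultisampleResolve {
    // Blitting from a multisampled read framebuffer to a single-sampled
    // draw framebuffer performs the downsample ("resolve") in one call.
    public static void resolveToScreen(GL gl, int fbo, int width, int height) {
        gl.glBindFramebufferEXT(GL.GL_READ_FRAMEBUFFER_EXT, fbo);
        gl.glBindFramebufferEXT(GL.GL_DRAW_FRAMEBUFFER_EXT, 0);
        gl.glBlitFramebufferEXT(0, 0, width, height,  // source rectangle
                                0, 0, width, height,  // destination rectangle
                                GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST);
        gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
    }
}
```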

As far as I know, multisampling is the best option (example: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=46), but I recommend doing all antialiasing on your own, using shaders in a postprocess phase. Why? When you're using a framebuffer object (FBO), you can only do antialiasing on NVidia GPUs, not ATI GPUs (and I've got ATI). Jitter antialiasing: well, you can try it, but I recommend supersampling (very high quality antialiasing; it processes not just geometry but transparent objects too) or multisampling (medium quality antialiasing, because it processes just geometry; used as an extension). Anyway, there is something about jitter antialiasing here: http://www.cse.msu.edu/~cse872/tutorial5.html
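To illustrate the supersampling idea (a sketch only, assuming EXT_framebuffer_blit is available and using JOGL 1.x names, rather than the shader-based approach recommended above): render the scene into a single-sampled FBO at twice the window size, then downsample with a filtered blit. With an exact 2:1 ratio, GL_LINEAR averages each 2x2 block of rendered pixels.

```java
import javax.media.opengl.GL;

public class Supersample {
    // Assumes bigFbo holds a frame rendered at (winW * 2) x (winH * 2).
    public static void downsampleToScreen(GL gl, int bigFbo, int winW, int winH) {
        gl.glBindFramebufferEXT(GL.GL_READ_FRAMEBUFFER_EXT, bigFbo);
        gl.glBindFramebufferEXT(GL.GL_DRAW_FRAMEBUFFER_EXT, 0);
        gl.glBlitFramebufferEXT(0, 0, winW * 2, winH * 2,  // whole 2x buffer
                                0, 0, winW, winH,          // shrink to window
                                GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR);
        gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
    }
}
```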

Quote:
Original post by Vilem Otte
Why? When you're using a framebuffer object (FBO), you can only do antialiasing on NVidia GPUs, not ATI GPUs (and I've got ATI).


According to marketing:
http://ati.amd.com/products/radeonhd2900/specs.html

"All anti-aliasing features compatible with HDR rendering"

that ATI GPU can do it, and it can do it with fp16 or maybe fp32 RTT. This can only mean one thing: their GL drivers don't expose the feature, but the feature works with D3D9.

Quote:
Original post by Vilem Otte
but I recommend doing all antialiasing on your own, using shaders in a postprocess phase. Why? When you're using a framebuffer object (FBO), you can only do antialiasing on NVidia GPUs, not ATI GPUs

This is not a valid reason. Why would you discard a perfectly fine, pretty much industry-standard method of doing FSAA just because one manufacturer is too incompetent to add it to their drivers?

So ATI users won't get good AA. That's life. Let them complain to ATI; maybe they will react one day by releasing a driver that actually works. Or better, let them buy a good graphics card instead. Sounds harsh? Maybe. But I'm just sick of having to resort to bad workarounds just because of ATI.

Quote:
Original post by Vilem Otte
(and I've got ATI). Jitter antialiasing: well, you can try it, but I recommend supersampling (very high quality antialiasing; it processes not just geometry,

Modern large-tap multisampling is generally much higher quality than any type of practical supersampling. Realistically, you can only do 2x2 supersampling at today's resolutions, maybe 3x3 (at an extremely high performance cost). That's not enough for good quality. True, SS will also antialias shaders and such, but other methods exist for this, used in conjunction with MS, that generally offer much higher quality and better performance.

Quote:
Original post by Vilem Otte
but transparent objects too)

Transparency works fine with multisampling (alpha to coverage).
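For illustration, it is a single piece of render state around the alpha-tested geometry. A sketch in JOGL terms, where the Runnable stands in for whatever draw calls render the transparent objects:

```java
import javax.media.opengl.GL;

public class AlphaToCoverage {
    // With MSAA active, alpha-to-coverage turns fragment alpha into a
    // per-sample coverage mask, so cutout edges get antialiased.
    public static void draw(GL gl, Runnable alphaTestedGeometry) {
        gl.glEnable(GL.GL_SAMPLE_ALPHA_TO_COVERAGE);
        alphaTestedGeometry.run();
        gl.glDisable(GL.GL_SAMPLE_ALPHA_TO_COVERAGE);
    }
}
```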

Quote:
Original post by Yann L
So ATI users won't get good AA. That's life. Let them complain to ATI; maybe they will react one day by releasing a driver that actually works. Or better, let them buy a good graphics card instead. Sounds harsh? Maybe. But I'm just sick of having to resort to bad workarounds just because of ATI.

I don't suppose you have anything to share about the 'best workaround' to do this in OpenGL on ATI cards?

Quote:
Original post by dmatter
I don't suppose you have anything to share about the 'best workaround' to do this in OpenGL on ATI cards?

Don't do any advanced AA at all on ATI (advanced in this context means anything other than standard LDR AA on the framebuffer). Show a popup saying that advanced AA options are only supported on NVidia at this time, and explain why in layman's terms. Suggest updating the ATI drivers, just in case ATI/AMD has fixed it by then (although the chances that our universe suddenly quantum-tunnels into a world where ATI doesn't exist are probably much higher than that...)

I should add that I am trying to do this through JOGL. I am having trouble with extensions such as GL_ARB_multisample and anything involving WGL; I cannot say for sure, but I do not think these are accessible through the JOGL API. I'm new to OpenGL, and I don't want to start learning shader languages if I can help it. If all I can reasonably get with supersampling is 2x, that isn't the approach I want to take. Also, unless someone can challenge this, I am under the impression that using the accumulation buffer is slow, which means it isn't acceptable for real-time AA.

I would like the antialiasing to be compatible with whatever platform an average person could be expected to use. This means it should run on NVidia/ATI and Windows/Mac OS at least. It's not that critical for it to be compatible with other configurations.

I hope my situation doesn't eliminate all the available AA methods. If you know how to use the multisampling extensions from JOGL, I would appreciate the help.

Quote:
This is not a valid reason. Why would you discard a perfectly fine, pretty much industry-standard method of doing FSAA just because one manufacturer is too incompetent to add it to their drivers?


No, I would NOT discard it. Someone needs to contact ATI about support and discuss this with them.

Quote:
So ATI users won't get good AA. That's life. Let them complain to ATI; maybe they will react one day by releasing a driver that actually works. Or better, let them buy a good graphics card instead. Sounds harsh? Maybe. But I'm just sick of having to resort to bad workarounds just because of ATI.


I too am getting sick of their drivers and OpenGL support; I'm really thinking about buying NVidia next generation (after many years).

Quote:
Modern large-tap multisampling is generally much higher quality than any type of practical supersampling. Realistically, you can only do 2x2 supersampling at today's resolutions, maybe 3x3 (at an extremely high performance cost). That's not enough for good quality. True, SS will also antialias shaders and such, but other methods exist for this, used in conjunction with MS, that generally offer much higher quality and better performance.


Well, we can discuss quality here. I can do multisampling with, for example, an 8x filter in a standard scene (with normal mapping, HDR, complex models, reflections, ...) at around 50-70 fps. I can also do supersampling with an 8x filter (this would probably need some tricky setup, but it's possible) at around 5-15 fps. Supersampling has much higher quality, but very, very bad performance. I'm most interested in raytracing, realtime raytracing, and there supersampling is the way to go (a very slow way, but one with decent quality).

Quote:
Transparency works fine with multisampling (alpha to coverage).


It works fine using sample-alpha-to-coverage. I'm now much more interested in an antialiasing technique that wouldn't be as slow as supersampling, so it would be achievable in realtime raytracing (which means no graphics card for me, because I'm using just CPUs).
