Non-fullscreen Anti-Aliasing with OpenGL and SDL

cosmiczilch
I have been looking at various anti-aliasing options in OpenGL, and none of them seems to provide what I want.
The ARB_multisample extension, AFAIK, is only for /full screen/ anti-aliasing, whereas the core OpenGL polygon anti-aliasing requires me to sort my polygons in depth order.
Is there a way to combine the two? I.e., a non-fullscreen anti-aliasing technique that *just works*?

Thanks in advance.

P.S.: I use OpenGL with SDL.

Yann L
[quote name='cosmiczilch' timestamp='1310924568' post='4836430']
Is there a way to combine the two? I.e., a non-fullscreen anti-aliasing technique that *just works*?
[/quote]
Can you be more precise about what you mean by "non-fullscreen anti-aliasing"? Anti-aliasing only part of a window? Only one viewport? Only selected primitives?

MSAA and the old-style line/polygon smoothing are two totally independent techniques, each with their respective limitations. First off, you should probably rethink your approach. Why can't you just antialias everything and be done with it? That would certainly be the most efficient way.

Now, it is possible to have parts of your scene rendered with antialiasing and parts without by using multiple FBOs with different multisampling settings. It's rather trivial if they affect distinct parts of the screen. It's harder if MSAA settings are to be mixed within the same render area. Except for a few special cases (e.g. having a reflection rendered without AA, but the rest of the scene with AA), this approach is not usually advisable.
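For illustration, here is a minimal sketch (not a drop-in implementation) of the first case: one part of the scene rendered into a multisampled FBO and then resolved into the default framebuffer. It assumes the framebuffer object, multisample and blit extensions are available (older drivers expose these under EXT-suffixed names); sizes and sample count are placeholders:

/* create a 4x multisampled FBO with color and depth renderbuffers */
GLuint fbo, colorRb, depthRb;
const int width = 800, height = 600, samples = 4;

glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

/* ... draw the antialiased part of the scene here ... */

/* resolve the multisampled image into the default framebuffer */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* ... then draw the non-antialiased parts directly into the default framebuffer ... */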

O-san
I think when he says fullscreen he means not windowed mode? In other words, the viewport covers the entire display (I could be wrong). I think multisample anti-aliasing should work in fullscreen as well as in windowed mode... again, I could be wrong...

I have noticed that some nVidia chipsets have had problems with MSAA; it might require some driver tinkering to get it working.

capricorn
[quote name='O-san' timestamp='1310933965' post='4836473']
I think when he says fullscreen he means not windowed mode?
[/quote]


Yeah, that's a common misunderstanding, that the "FS" in "FSAA" means "full-screen" (it's actually "full-scene"). There is no difference whether the window covers the entire screen or not.

cosmiczilch
Oh, OK. Sorry for the confusion.
I assumed the FS in FSAA meant full-screen since it didn't work for me in windowed mode. Well, it didn't work in fullscreen mode either...
Please tell me what I'm doing wrong:

I am using SDL with OpenGL on Ubuntu 10.10.
I have an ATI Radeon card with fglrx installed.

After SDL_Init, I do:

SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

and after SDL_SetVideoMode,

glEnable( GL_MULTISAMPLE );

But I don't see any multisampling going on.

What's worse is that if I run the same program on my Intel laptop with integrated graphics, the program /segfaults/ on the first GL call following SDL_SetVideoMode.
If I free the SDL_Surface immediately after SDL_SetVideoMode and call SDL_SetVideoMode again, I don't see any segfault, but I don't see any AA either.

P.S.: glGetString(GL_EXTENSIONS) does print GL_ARB_multisample on both my laptops.

Please tell me what I'm missing.
Thank you.

capricorn
[quote name='cosmiczilch' timestamp='1310965996' post='4836646']


After SDL_Init, I do:

SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

and after SDL_SetVideoMode,

glEnable( GL_MULTISAMPLE );

But I don't see any multisampling going on.
[/quote]

And what does SDL_GL_GetAttribute() say for those attributes after SDL_SetVideoMode? Or glGetIntegerv() with GL_SAMPLE_BUFFERS_ARB / GL_SAMPLES_ARB?
Also, are there any visuals supporting multisampling? Check the output of glxinfo: in the table of visuals, look for the "ms" column; "ns" there is the number of samples, and "b" is the number of buffers.
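For instance, a quick check could look like this (just a sketch; it assumes SDL.h plus the GL and glext.h headers are included and the context is already current):

int sdlBuffers = 0, sdlSamples = 0;
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLEBUFFERS, &sdlBuffers);
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLESAMPLES, &sdlSamples);
printf("SDL reports: %d multisample buffer(s), %d sample(s)\n", sdlBuffers, sdlSamples);

GLint glBuffers = 0, glSamples = 0;
glGetIntegerv(GL_SAMPLE_BUFFERS_ARB, &glBuffers);
glGetIntegerv(GL_SAMPLES_ARB, &glSamples);
printf("GL reports: %d sample buffer(s), %d sample(s)\n", glBuffers, glSamples);

If both come back zero, the visual you actually got has no multisample buffers, no matter what you requested.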

[quote]
What's worse is that if i run the same program on my intel laptop w/ inbuilt graphics card, the program /segfaults/ on the first gl call following SetVideoMode.
If i free the sdl_surface immediately after SetVideoMode and call SetVideoMode again, i don't see any segfault, but i don't see any AA either.
[/quote]

What do you mean, "I free the sdl_surface"? You should never free the surface returned by SDL_SetVideoMode. If you need another video mode, just call SDL_SetVideoMode again.

cosmiczilch
Capricorn,
On my Intel laptop (no dedicated graphics card), I tried glxinfo. It showed me no visuals with non-zero ms and b. Does this mean I can't have multisampling on this laptop, or is there something I can try to make it happen? And yeah, SDL_GL_GetAttribute returned 0 and 0 too. Also, are all OpenGL games I run on this laptop doomed to look jaggy?
I'll try the same on my ATI Radeon laptop when I get home.

Thanks!

capricorn
Short answer: yes, you can't have multisampling. Long answer: check your video driver documentation for clues on enabling FSAA. nVidia drivers, for example, allow you to set the __GL_FSAA_MODE environment variable prior to launching the application, thus overriding any AA settings. Even then, if you don't have any ms-capable visuals, it's up to the driver. Though I must say I have little faith in both Intel's and ATI's Linux drivers anyway.
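For example, launching like this from a shell (the mode value and program name here are just placeholders; check the driver README for the values your release supports):

__GL_FSAA_MODE=4 ./mygame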

cosmiczilch
Oh... that's sad.
By the way, on the ATI Radeon laptop, I do see ms-capable visuals. I tried using the buffer sizes mentioned in the glxinfo listing with multisampling, but I still see no AA going on.
Also, SDL_GL_GetAttribute returns 4 and 1 for samples and buffers respectively.

What's happening?

Thank you.

P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back to? A software implementation, perhaps? Something...

capricorn
[quote name='cosmiczilch' timestamp='1311018502' post='4836986']
Oh... that's sad.
By the way, on the ATI Radeon laptop, I do see ms-capable visuals. I tried using the buffer sizes mentioned in the glxinfo listing with multisampling, but I still see no AA going on.
Also, SDL_GL_GetAttribute returns 4 and 1 for samples and buffers respectively.
What's happening?

[/quote]

Are you sure? Take two identical screenshots, with and without AA, and compare them. If SDL reports that MS buffers are present, it should work (with glEnable(GL_MULTISAMPLE)). Try drawing some simple lines, or a triangle with a severely extruded angle; it's easier to compare that way.
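For example, a thin, slightly tilted triangle makes stairstepping obvious. A minimal immediate-mode sketch, assuming an SDL 1.2 GL window is already set up:

/* near-horizontal edges show jaggies clearly if MSAA is not active */
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_MULTISAMPLE);
glLoadIdentity();
glRotatef(5.0f, 0.0f, 0.0f, 1.0f);  /* slight tilt so no edge is axis-aligned */
glBegin(GL_TRIANGLES);
glVertex2f(-0.9f, -0.05f);
glVertex2f( 0.9f, -0.02f);  /* very shallow apex angle */
glVertex2f(-0.9f,  0.05f);
glEnd();
SDL_GL_SwapBuffers();

Run it once with glEnable and once with glDisable(GL_MULTISAMPLE) and compare the edges pixel by pixel.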

[quote]
P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back to? A software implementation, perhaps? Something...
[/quote]

Well, older techniques include anti-aliasing using the accumulation buffer. The idea is to render a frame several times into the accumulation buffer, introducing some jitter into the projection transformation (so that each resulting frame is shifted by a very small amount in a different direction). Google for that, there's plenty of info out there. But keep in mind that this method comes with significant overhead. It's for you to decide if the goal is worth the hassle :)
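The loop is roughly this (a sketch only: SetJitteredProjection() and DrawScene() are hypothetical helpers, and the window must have been created with accumulation buffer bits, e.g. via SDL_GL_ACCUM_RED_SIZE and friends):

/* render N jittered frames and average them in the accumulation buffer */
#define N 4
static const float jitter[N][2] = {
    {  0.25f,  0.25f }, { -0.25f,  0.25f },
    {  0.25f, -0.25f }, { -0.25f, -0.25f }   /* sub-pixel offsets */
};

glClear(GL_ACCUM_BUFFER_BIT);
for (int i = 0; i < N; ++i) {
    SetJitteredProjection(jitter[i][0], jitter[i][1]);  /* shift the projection by a fraction of a pixel */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    DrawScene();
    glAccum(GL_ACCUM, 1.0f / N);  /* accumulate 1/N of this frame */
}
glAccum(GL_RETURN, 1.0f);  /* write the averaged image back to the color buffer */
SDL_GL_SwapBuffers();

Note that this multiplies your fill cost by N, which is exactly the overhead mentioned above.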

Yann L
[quote name='cosmiczilch' timestamp='1311018502' post='4836986']
P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back to? A software implementation, perhaps? Something...
[/quote]
I'm afraid you won't get anywhere with an Intel graphics chipset:

Source: [url="http://www.intel.com/support/graphics/sb/cs-012644.htm#8"]Intel[/url]
[quote]
Intel chipsets with integrated graphics do not support full scene anti-aliasing. Anti-aliased lines are supported in OpenGL* applications.
[/quote]

Now, alternatives do exist. The traditional approaches include supersampling, or the accumulation (or FBO) based jittering methods that were already mentioned. None of these will realistically work on Intel GPUs, as they require vast amounts of memory and rendering performance. In fact, MSAA (multisampling AA, the most common form of FSAA) was developed precisely to counter the huge resource requirements of those two algorithms.

Recently, a number of shader-based post-processing FSAA algorithms have been developed. The basic idea is to detect edges and blur them in a post-process. Some AAA games use these techniques to some extent, including Crysis, AFAIR. The intent is to reduce the memory consumption of typical MSAA and to circumvent MSAA's limitations with respect to deferred buffers. However, even though these algorithms require less memory than large-kernel MSAA, they are still very shader intensive. It is rather unlikely that they will function properly on very low-end chips such as Intel GPUs.
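To give a flavour of the idea, here is a toy fragment shader (not FXAA or any shipping algorithm) that estimates edge strength from neighbouring luminance differences and blends towards a small blur only where edges are found. It assumes the scene was first rendered to a texture; the "scene" sampler and "texelSize" (1.0/resolution) uniforms are placeholders for illustration:

static const char *edgeBlurFrag =
"uniform sampler2D scene;\n"
"uniform vec2 texelSize;\n"
"void main() {\n"
"    vec2 uv = gl_TexCoord[0].xy;\n"
"    vec3 c = texture2D(scene, uv).rgb;\n"
"    vec3 l = texture2D(scene, uv - vec2(texelSize.x, 0.0)).rgb;\n"
"    vec3 r = texture2D(scene, uv + vec2(texelSize.x, 0.0)).rgb;\n"
"    vec3 u = texture2D(scene, uv + vec2(0.0, texelSize.y)).rgb;\n"
"    vec3 d = texture2D(scene, uv - vec2(0.0, texelSize.y)).rgb;\n"
"    /* edge strength from horizontal and vertical contrast */\n"
"    float edge = abs(dot(l - r, vec3(0.333))) + abs(dot(u - d, vec3(0.333)));\n"
"    vec3 blur = (c + l + r + u + d) * 0.2;  /* cheap cross blur */\n"
"    gl_FragColor = vec4(mix(c, blur, clamp(edge * 4.0, 0.0, 1.0)), 1.0);\n"
"}\n";

Run as a full-screen pass, this trades MSAA's extra memory for a few texture fetches per pixel, which is exactly why it still struggles on weak shader hardware.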
