cosmiczilch

Non-fullscreen Anti-Aliasing with OpenGL and SDL


I have been looking at various anti-aliasing options in OpenGL, and none of them seems to provide what I want.
The ARB_multisample extension -AFAIK- is only for /full screen/ anti-aliasing, whereas core OpenGL polygon antialiasing requires me to sort my polygons in depth order.
Is there a way to combine the two? I.e. a non-fullscreen anti-aliasing technique that *just works*?

Thanks in advance.

P.S.: I use OpenGL with SDL.

[quote name='cosmiczilch' timestamp='1310924568' post='4836430']
Is there a way to combine the two? I.e. a non-fullscreen anti-aliasing technique that *just works*?
[/quote]
Can you be more precise about what you mean by "non-fullscreen anti-aliasing"? Antialiasing only part of a window? Only one viewport? Only selected primitives?

MSAA and the old-style line/polygon smoothing are two totally independent techniques, each with its own limitations. First off, you should probably rethink your approach. Why can't you just antialias everything and be done with it? That would certainly be the most efficient way.

Now, it is possible to have parts of your scene rendered with antialiasing and parts without by using multiple FBOs with different multisampling settings. It's rather trivial if they affect distinct parts of the screen. It's harder if MSAA settings are to be mixed within the same render area. Except for a few special cases (e.g. having a reflection rendered without AA, but the rest of the scene with AA) this approach is not usually advisable.
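Roughly, the FBO route looks like this (just a sketch; it assumes a GL 3.0 context or ARB_framebuffer_object, and width, height and the draw calls are placeholders for your own code):

/* Sketch: render part of the scene into a multisampled FBO, then resolve it
   into the default framebuffer with a blit. */
GLuint msFbo, msColor, msDepth;
glGenFramebuffers(1, &msFbo);
glGenRenderbuffers(1, &msColor);
glGenRenderbuffers(1, &msDepth);

glBindRenderbuffer(GL_RENDERBUFFER, msColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, msDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msColor);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msDepth);

/* ... draw the part of the scene that should be antialiased ... */

/* Resolve the multisampled image into the (non-multisampled) default framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* ... draw the non-antialiased part directly into the default framebuffer ... */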

I think when he says fullscreen he means not windowed mode? In other words the viewport covers the entire display (I could be wrong). I think multisample antialiasing should work in fullscreen as well as in windowed mode... again, I could be wrong...

I have noticed that some nVidia chipsets have had problems with MSAA; it might require some driver tinkering to get it working.

[quote name='O-san' timestamp='1310933965' post='4836473']
I think when he says fullscreen he means not windowed mode?
[/quote]


Yeah, it's a common misunderstanding that the "FS" in "FSAA" means "full-screen" (it's "full-scene"). There is no difference whether the window covers the entire screen or not.

Oh, OK. Sorry for the confusion.
I assumed the FS in FSAA meant full-screen, since it didn't work for me in windowed mode. Well, it didn't work in full-screen mode either...
Please tell me what I'm doing wrong:

I am using SDL with OpenGL on Ubuntu 10.10.
I have an ATI Radeon card with fglrx installed.

After SDL_Init, I do:

SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

and after SDL_SetVideoMode,

glEnable( GL_MULTISAMPLE );

But I don't see any multisampling going on.

What's worse, if I run the same program on my Intel laptop with an integrated graphics chip, the program /segfaults/ on the first GL call following SetVideoMode.
If I free the SDL_Surface immediately after SetVideoMode and call SetVideoMode again, I don't see any segfault, but I don't see any AA either.

P.S.: glGetString(GL_EXTENSIONS) does list GL_ARB_multisample on both my laptops.

Please tell me what I'm missing.
Thank you.

[quote name='cosmiczilch' timestamp='1310965996' post='4836646']


After SDL_Init, I do:

SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

and after SDL_SetVideoMode,

glEnable( GL_MULTISAMPLE );

But I don't see any multisampling going on.
[/quote]

And what does SDL_GL_GetAttribute() report for those attributes after SDL_SetVideoMode? Or glGetIntegerv with GL_SAMPLE_BUFFERS_ARB/GL_SAMPLES_ARB?
Also, are there any visuals supporting multisampling? Check the output of glxinfo: in the table of visuals, look at the "ms" columns; "ns" there is the number of samples and "b" is the number of buffers.
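Something like this, right after SDL_SetVideoMode (untested sketch; assumes stdio.h, the SDL header and the GL/glext headers are included):

/* Sketch: check what was actually granted. */
int sdlBuffers = 0, sdlSamples = 0;
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLEBUFFERS, &sdlBuffers);
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLESAMPLES, &sdlSamples);
printf("SDL: %d multisample buffers, %d samples\n", sdlBuffers, sdlSamples);

GLint glBuffers = 0, glSamples = 0;
glGetIntegerv(GL_SAMPLE_BUFFERS_ARB, &glBuffers);
glGetIntegerv(GL_SAMPLES_ARB, &glSamples);
printf("GL:  %d sample buffers, %d samples\n", (int)glBuffers, (int)glSamples);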

[quote]
What's worse, if I run the same program on my Intel laptop with an integrated graphics chip, the program /segfaults/ on the first GL call following SetVideoMode.
If I free the SDL_Surface immediately after SetVideoMode and call SetVideoMode again, I don't see any segfault, but I don't see any AA either.
[/quote]

What do you mean, "I free the SDL_Surface"? You should never free the surface returned by SDL_SetVideoMode; it is owned by SDL. If you need another video mode, just call SDL_SetVideoMode again.
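In other words (a minimal sketch; the sizes and flags below are just for illustration):

/* The surface returned by SDL_SetVideoMode() is owned by SDL and is cleaned
   up by SDL_Quit(); never pass it to SDL_FreeSurface(). To change the mode,
   just call SDL_SetVideoMode() again and use the new pointer. */
SDL_Surface *screen = SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);
/* ... later, switching modes: */
screen = SDL_SetVideoMode(1024, 768, 32, SDL_OPENGL | SDL_FULLSCREEN);

Note that on some platforms switching the mode of an SDL_OPENGL window recreates the GL context, so GL objects may need to be reloaded afterwards.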

Capricorn,
On my Intel laptop (no dedicated graphics card) I tried glxinfo. It showed no visuals with non-zero ms and b. Does this mean I can't have multisampling on this laptop, or is there something I can try to make it happen? And, yeah, SDL_GL_GetAttribute returned 0, 0 too. Also, are all OpenGL games I run on this laptop doomed to look jaggy?
I'll try the same on my ATI Radeon laptop when I get home.

Thanks!

The short answer is yes, you can't have multisampling. The long answer is: check your video driver documentation for clues on enabling FSAA. nVidia drivers, for example, let you set the __GL_FSAA_MODE environment variable before launching the application, thus overriding the application's AA settings. Even then, if you don't have any ms-capable visuals, it's up to the driver. Though I must say I have little faith in both Intel's and ATI's Linux drivers anyway.
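For example, launching with something like __GL_FSAA_MODE=4 ./yourapp forces a particular AA mode; the exact number-to-mode mapping is listed in the nVidia driver README and varies between driver versions, and the variable has no effect on Intel or ATI drivers.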

Oh... that's sad.
Btw, on the ATI Radeon laptop I do see ms-capable visuals. I tried using the buffer sizes mentioned in the glxinfo listing with multisampling, but I still see no AA going on.
Also, SDL_GL_GetAttribute returns 4 and 1 for samples and buffers respectively.

What's happening?

Thank you.

P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back to, a software implementation perhaps? Something...

[quote name='cosmiczilch' timestamp='1311018502' post='4836986']
Oh... that's sad.
Btw, on the ATI Radeon laptop I do see ms-capable visuals. I tried using the buffer sizes mentioned in the glxinfo listing with multisampling, but I still see no AA going on.
Also, SDL_GL_GetAttribute returns 4 and 1 for samples and buffers respectively.
What's happening?

[/quote]

Are you sure? Take two identical screenshots with and without AA and compare them. If SDL reports that MS buffers are present, it should work (with glEnable(GL_MULTISAMPLE)). Try drawing some simple lines, or a triangle with a severely acute angle; those make the difference easier to see.
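Something simple like this makes it obvious (immediate mode, just a sketch; it assumes an identity projection and the coordinates are arbitrary):

/* Quick visual test: a nearly-horizontal line and a long, thin triangle
   show stair-stepping very clearly when multisampling is off. */
glEnable(GL_MULTISAMPLE);
glClear(GL_COLOR_BUFFER_BIT);

glBegin(GL_LINES);
glVertex2f(-0.9f, -0.50f);
glVertex2f( 0.9f, -0.45f);   /* slight slope -> obvious jaggies without AA */
glEnd();

glBegin(GL_TRIANGLES);
glVertex2f(-0.9f, 0.00f);
glVertex2f( 0.9f, 0.02f);    /* very acute angle */
glVertex2f( 0.9f, 0.10f);
glEnd();

SDL_GL_SwapBuffers();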

[quote]
P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back to, a software implementation perhaps? Something...
[/quote]

Well, older techniques include antialiasing using the accumulation buffer. The idea is to render the frame several times into the accumulation buffer, introducing some jitter into the projection transformation (so that each pass is shifted by a very small amount in a different direction), and then averaging the results. Google for that, there's plenty of info out there. But keep in mind that this method comes with significant overhead: you are effectively rendering every frame several times over. It's for you to decide whether the goal is worth the hassle :)
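Very roughly it looks like this (only a sketch; winWidth, winHeight, set_projection() and draw_scene() stand for your own code, and you have to request accumulation bits at startup with SDL_GL_SetAttribute(SDL_GL_ACCUM_RED_SIZE, 16) and friends):

/* Accumulation-buffer antialiasing: render the frame N times with the
   projection shifted by sub-pixel amounts, then average the passes. */
#define N_JITTER 4
static const float jitter[N_JITTER][2] = {   /* offsets in pixels */
    { -0.25f, -0.25f }, { 0.25f, -0.25f },
    { -0.25f,  0.25f }, { 0.25f,  0.25f }
};

glClear(GL_ACCUM_BUFFER_BIT);
for (int i = 0; i < N_JITTER; ++i) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* shift the image by a fraction of a pixel (NDC spans 2 units per axis) */
    glTranslatef(jitter[i][0] * 2.0f / winWidth,
                 jitter[i][1] * 2.0f / winHeight, 0.0f);
    set_projection();                    /* must multiply the current matrix,
                                            e.g. a plain gluPerspective() call */

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene();                        /* your normal rendering */

    glAccum(GL_ACCUM, 1.0f / N_JITTER);  /* add this pass, weighted */
}
glAccum(GL_RETURN, 1.0f);                /* write the average to the color buffer */
SDL_GL_SwapBuffers();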

[quote name='cosmiczilch' timestamp='1311018502' post='4836986']
P.S.: I am still not able to convince myself that I have no anti-aliasing solution available.
Isn't there some crude method to fall back to, a software implementation perhaps? Something...
[/quote]
I'm afraid you won't get anywhere with an Intel graphics chipset:

Source: [url="http://www.intel.com/support/graphics/sb/cs-012644.htm#8"]Intel[/url]
[quote]
Intel chipsets with integrated graphics do not support full scene anti-aliasing. Anti-aliased lines are supported in OpenGL* applications.
[/quote]

Now, alternatives do exist. The traditional approaches include supersampling, or the accumulation-buffer (or FBO) based jittering methods that were already mentioned. None of these will realistically work on Intel GPUs, as they require vast amounts of memory and rendering performance. In fact, MSAA (multisample AA, the most common form of FSAA) was developed precisely to counter the huge resource requirements of those two approaches.

More recently, a number of shader-based post-processing FSAA algorithms have been developed. The basic idea is to detect edges and blur them in a post-process pass. Some AAA games use these techniques to some extent, including Crysis AFAIR. The intent is to reduce the memory consumption of typical MSAA and to circumvent MSAA's limitations with respect to deferred buffers. However, even though these algorithms require less memory than large-kernel MSAA, they are still very shader intensive. It is rather unlikely that they will perform acceptably on very low-end chips such as Intel GPUs.
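Just to illustrate the idea (a toy sketch, not FXAA or MLAA; it assumes the scene has already been rendered into a texture bound to the "scene" sampler, that the shader is applied while drawing a fullscreen quad, and that "texel" is set to 1/width and 1/height; compiling and linking the program is omitted):

/* Minimal edge-detect-and-blur post-process fragment shader (GLSL 1.20),
   embedded as a C string. */
static const char *edge_blur_frag =
    "#version 120\n"
    "uniform sampler2D scene;   /* the rendered frame */\n"
    "uniform vec2 texel;        /* 1.0/width, 1.0/height */\n"
    "void main() {\n"
    "    vec2 uv = gl_TexCoord[0].st;\n"
    "    vec3 c = texture2D(scene, uv).rgb;\n"
    "    vec3 l = texture2D(scene, uv - vec2(texel.x, 0.0)).rgb;\n"
    "    vec3 r = texture2D(scene, uv + vec2(texel.x, 0.0)).rgb;\n"
    "    vec3 d = texture2D(scene, uv - vec2(0.0, texel.y)).rgb;\n"
    "    vec3 u = texture2D(scene, uv + vec2(0.0, texel.y)).rgb;\n"
    "    float edge = length(r - l) + length(u - d);   /* crude edge measure */\n"
    "    vec3 blur = (c + l + r + d + u) * 0.2;        /* small box blur */\n"
    "    gl_FragColor = vec4(mix(c, blur, clamp(edge, 0.0, 1.0)), 1.0);\n"
    "}\n";

Even this trivial kernel costs five texture fetches per pixel, which is already asking a lot of an Intel IGP.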

