GeForce 2 MX Anti-Aliasing

10 comments, last by zedzeek 18 years, 4 months ago
I have a GeForce 2 MX video card (yeah, old) and I'd like to enable anti-aliasing on it. I've seen that all tutorials use the ARB_multisample extension, but my card doesn't support it. It does support anti-aliasing, though (I can force it through the drivers, and it works). Looking around the net a bit, I found out that it uses supersampling for anti-aliasing. Any ideas how to enable anti-aliasing from my program? I didn't find any supersampling extension. By the way, the OpenGL programming guide does anti-aliasing by rendering the scene multiple times into the accumulation buffer. Is that still used nowadays?
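
For reference, what the tutorials do is roughly this; it only works if the pixel format was created with sample buffers (WGL_ARB_multisample), which is exactly the part my card's drivers don't offer:

/* Sketch of the usual ARB_multisample path. Only meaningful if the
 * context's pixel format was created with sample buffers
 * (WGL_ARB_multisample), which GeForce 2 MX drivers don't expose. */
#include <GL/gl.h>

#ifndef GL_MULTISAMPLE_ARB
#define GL_MULTISAMPLE_ARB 0x809D
#endif

void enableMultisample(void)
{
    glEnable(GL_MULTISAMPLE_ARB);
}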
If AA is important to you, just force it with the drivers.
I had a GF2 MX until 18 months ago and then got a GeForce FX 5200 (a very cheap card; you should be able to buy one for under 20 big macs).
People complain about how slow the FX 5200 is (and it is), but after a GF2 MX it feels about 4x quicker (not to mention you can do heaps more).
Quote:Original post by zedzeek
If AA is important to you, just force it with the drivers.
I had a GF2 MX until 18 months ago and then got a GeForce FX 5200 (a very cheap card; you should be able to buy one for under 20 big macs).
People complain about how slow the FX 5200 is (and it is), but after a GF2 MX it feels about 4x quicker (not to mention you can do heaps more).


I know I can force it with the drivers, but the whole idea is to do it from the program. I want to make use of the available technology and make a game that runs with all its features even on a GeForce 2 MX. It would be nice to make a game that uses advanced features on a good card, but also uses whatever an old one can do.

But really, is there no way to enable supersampling from OpenGL? And is the accumulation buffer technique still in use (I've read it was slow)?

For myself I've got a Radeon X800GTO2, but that's on the other computer (the one I don't program on). And I had an FX 5200 for one day to test it, and decided against buying it. It simply wasn't worth spending more money; it didn't give me much (I don't use shaders now).
I thought the extensions were by driver, not card. I never tested that theory, though. I would suggest printing off the list of extensions and then going through them one by one. My guess would be that it's ARB_multisample. That's a 1999 extension, and there is an NV_multisample_filter_hint from 2001 that was apparently for the GeForce 3. The date seems about right for a GeForce 2. The stated intent is full-screen anti-aliasing done transparently to the application.
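
Something along these lines will do the check; just a sketch, but the token walk matters because a plain strstr() can match a prefix of a longer extension name:

#include <string.h>
#include <GL/gl.h>

/* Returns 1 if `name` appears as a complete token in GL_EXTENSIONS.
 * The token walk avoids matching a prefix of a longer extension name. */
int hasExtension(const char *name)
{
    const char *start = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = start;
    size_t len = strlen(name);

    while (p && (p = strstr(p, name)) != NULL) {
        int atStart = (p == start) || (p[-1] == ' ');
        int atEnd   = (p[len] == ' ') || (p[len] == '\0');
        if (atStart && atEnd)
            return 1;
        p += len;
    }
    return 0;
}

/* e.g. if (hasExtension("GL_ARB_multisample")) ... */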
Keys to success: Ability, ambition and opportunity.
Quote:Original post by LilBudyWizer
I thought the extensions were by driver, not card. I never tested that theory, though. I would suggest printing off the list of extensions and then going through them one by one. My guess would be that it's ARB_multisample. That's a 1999 extension, and there is an NV_multisample_filter_hint from 2001 that was apparently for the GeForce 3. The date seems about right for a GeForce 2. The stated intent is full-screen anti-aliasing done transparently to the application.


I think the extensions are by driver, but does that really matter? Unless I'm doing some real tweaking, the driver won't give me something the card doesn't support.

OK, you're right, it's the drivers. I ran my game through 3D-Analyze. It said the driver was by 3D-Analyze (obviously :P) and showed me a list of extensions including GL_ARB_multisample. (It added its own opengl32.dll to the game dir, and that plus ForceDLL.dll made it detect more extensions.) I'll experiment later with what happens if I actually try to use that extension through emulation.

Here is the list of extensions my card supports with nVidia drivers:

Extensions: GL_ARB_imaging GL_ARB_multitexture GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_shader_objects GL_ARB_shading_language_100 GL_ARB_texture_compression GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_dot3 GL_ARB_texture_mirrored_repeat GL_ARB_texture_rectangle GL_ARB_transpose_matrix GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_window_pos GL_S3_s3tc GL_EXT_texture_env_add GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_EXT_Cg_shader GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_multi_draw_arrays GL_EXT_packed_pixels GL_EXT_paletted_texture GL_EXT_pixel_buffer_object GL_EXT_point_parameters GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_stencil_wrap GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic GL_EXT_texture_lod GL_EXT_texture_lod_bias GL_EXT_texture_object GL_EXT_vertex_array GL_IBM_rasterpos_clip GL_IBM_texture_mirrored_repeat GL_KTX_buffer_region GL_NV_blend_square GL_NV_fence GL_NV_fog_distance GL_NV_light_max_exponent GL_NV_packed_depth_stencil GL_NV_pixel_data_range GL_NV_point_sprite GL_NV_register_combiners GL_NV_texgen_reflection GL_NV_texture_env_combine4 GL_NV_texture_rectangle GL_NV_vertex_array_range GL_NV_vertex_array_range2 GL_NV_vertex_program GL_NV_vertex_program1_1 GL_SGIS_generate_mipmap GL_SGIS_multitexture GL_SGIS_texture_lod GL_SUN_slice_accum GL_WIN_swap_hint WGL_EXT_swap_control

System extensions: WGL_ARB_buffer_region WGL_ARB_extensions_string WGL_ARB_make_current_read WGL_ARB_pbuffer WGL_ARB_pixel_format WGL_ARB_render_texture WGL_EXT_extensions_string WGL_EXT_swap_control WGL_NV_render_texture_rectangle

No multisampling here. But no supersampling either.
Note that supersampling might be of interest anyway, as I've found it should give better quality than multisampling (although at a high performance cost). Thus it might still be useful on newer cards (if they support it).

By the way, does it support shaders? (It has GL_NV_vertex_program, GL_NV_vertex_program1_1 and GL_NV_register_combiners.)

The only ARB-related place where I've found something on supersampling is here: http://www.opengl.org/about/arb/notes/meeting_note_2001-06-12.html
But I couldn't work out what the actual decision was.

Another thing I've found:
http://www.beyond3d.com/interviews/gffxqa/
Quote:
We've seen that the 'xS' FSAA modes (4xS and 6xS) are not available under OpenGL because OpenGL doesn't natively support the mixed Supersample / Multisample modes these are based on. However, 8X is also said to be mixing both multisampling and super sampling because NV30's pipelines are still only able to produce 4 AA samples per pipeline, so how does 8X differ from 4xS/6xS that enables it to operate under OpenGL?

You can mix multisampling and supersampling in OpenGL modes as long as you meet some very specific restrictions. Our 8X mode meets those restrictions, while our “xS” modes do not.

Why can't they just say how to do it?

Just wondering: how does the original accumulation-buffer jittering compare to multi/supersampling in terms of speed and quality?
There is a demo (with source code) that implements supersampling using OpenGL here: Link.

From what I understand, you basically render the scene multiple times, jittering the projection matrix a little each time. You add every generated image into the accumulation buffer using glAccum(GL_ACCUM, factor) with an appropriate factor (for example, if you have 4 renders you use 0.25), and in the end you display the contents of the buffer on screen with glAccum(GL_RETURN,...).
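
In rough outline, something like this (just a sketch, not the demo's actual code; drawScene() and the gluPerspective values are placeholders):

/* A rough sketch of accumulation-buffer anti-aliasing: 4 passes,
 * each with the projection shifted by a sub-pixel amount.
 * Assumes the pixel format was created with accumulation bits. */
#include <GL/gl.h>
#include <GL/glu.h>

extern void drawScene(void);  /* your normal rendering */

/* pixel-space jitter offsets around the pixel center */
static const float jitter[4][2] = {
    { -0.25f, -0.25f }, {  0.25f, -0.25f },
    { -0.25f,  0.25f }, {  0.25f,  0.25f }
};

void renderAntialiased(int width, int height)
{
    int i;
    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < 4; ++i) {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* a shift of d pixels is 2*d/size in normalized device coords */
        glTranslatef(jitter[i][0] * 2.0f / width,
                     jitter[i][1] * 2.0f / height, 0.0f);
        gluPerspective(60.0, (double)width / height, 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene();
        glAccum(GL_ACCUM, 0.25f);   /* add this pass with weight 1/4 */
    }
    glAccum(GL_RETURN, 1.0f);       /* write the averaged image back */
}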

The problem is, the accumulation buffer has always been very slow. That demo only reaches 12fps on my FX 5200. I haven't tried anything with supersampling yet, but I'm thinking: is there any reason we can't accumulate the images using render-to-texture extensions instead?

Quote:Original post by mikeman
There is a demo (with source code) that implements supersampling using OpenGL here: Link.

From what I understand, you basically render the scene multiple times, jittering the projection matrix a little each time. You add every generated image into the accumulation buffer using glAccum(GL_ACCUM, factor) with an appropriate factor (for example, if you have 4 renders you use 0.25), and in the end you display the contents of the buffer on screen with glAccum(GL_RETURN,...).

The problem is, the accumulation buffer has always been very slow. That demo only reaches 12fps on my FX 5200. I haven't tried anything with supersampling yet, but I'm thinking: is there any reason we can't accumulate the images using render-to-texture extensions instead?


Well, the FX 5200 isn't really a fast card...
I can't run this demo; it requires GL_ARB_fragment_shader (what for?) and I don't have it. But that isn't important. What is important is that you're saying supersampling IS the accum buffer jittering. There's code for that in the red book.

I'll try it out. However, it's a bit unclear what I jitter: the modelview or the projection matrix?

I'll try to find out. Thanks for the help.

BTW, does it support shaders (and if so, which), according to the extensions I posted? And what exactly is render-to-texture (and is it supported)?
Quote:
I can't run this demo; it requires GL_ARB_fragment_shader (what for?) and I don't have it. But that isn't important. What is important is that you're saying supersampling IS the accum buffer jittering. There's code for that in the red book.

I'll try it out. However, it's a bit unclear what I jitter: the modelview or the projection matrix?


Well, actually supersampling is the method where you have multiple samples for all pixels. Jittering & accumulation is one way of doing it (you get as many samples as you have rendering passes). Another way is to render the scene into a buffer bigger than your real one (say, 4x its size), downsample it and display it - which is probably how most graphics cards do it.

OTOH, multisampling is an optimization that takes multiple samples only for pixels that are not completely covered - basically it's supersampling performed only at polygon edges. Supersampling affects the whole buffer, including the interior of polygons, so it can anti-alias textures too, or edges inside alpha-tested polygons, which multisampling can't. This article explains it better and compares the two methods.

As for the jitter, I believe it's the projection matrix. I think jittering the modelview matrix would have unwanted side effects, like altering the fog calculation. Check the example's source code to see how it's done in detail.
OK, I tried out that accumulation buffer jittering.

It is SOOOOO slow. Small window, ortho projection, some text: normally it runs at screen frequency (I keep vsync on). It ran at 2 FPS with only 2x (!!!) AA (i.e. I rendered only twice). When I increased the window size, it seemed to stop responding completely (maybe it would have rendered a frame if I had waited longer :P). The accumulation buffer looks pretty worthless.

And not only was it slow, the results were also pretty bad. It didn't look like anti-aliasing, but more like some weird combination of color shadow and blur. The edges were still noticeable (even when I rendered one frame at 8x) but the image was somewhat blurred.

My drivers definitely don't do it that way. 4x AA forced through the drivers still runs at screen frequency, and it definitely looks better. Edges are much less noticeable. Overall, it looks like it should.

I guess supersampling isn't the accumulation buffer after all. I wonder what the drivers use for AA.
Looks like the only real anti-aliasing techniques are supersampling and multisampling. The accumulation buffer is probably just for some kinds of view effects that aren't really anti-aliasing (and it's quite useless anyway because of its slowness).

From that small article, it looks like supersampling is simply high-res rendering scaled down (through filters) to fit in the window. That doesn't sound too hard. Now, is there any way to do this in OpenGL (so it works at a speed similar to driver-forced AA)?
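
The obvious sketch would be something like this (untested; drawScene() is a placeholder, the sizes must be powers of two on a GeForce 2 unless you use NV_texture_rectangle, and the back buffer has to be big enough, otherwise you'd need a WGL_ARB_pbuffer):

/* A rough, untested sketch of "render big, scale down" supersampling
 * on pre-FBO hardware: draw into a 2x-sized region, copy it into a
 * texture, then stretch it over the window with bilinear filtering. */
#include <GL/gl.h>

extern void drawScene(void);  /* your normal rendering */

void renderSupersampled(GLuint tex, int winW, int winH)
{
    int bigW = winW * 2, bigH = winH * 2;   /* 4x the samples */

    /* 1. Render the scene oversized. */
    glViewport(0, 0, bigW, bigH);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();

    /* 2. Copy the oversized image into a texture. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, bigW, bigH, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* 3. Draw it across the real window; GL_LINEAR minification acts
     *    as a crude 2x2 box filter at an exact 2:1 scale. */
    glViewport(0, 0, winW, winH);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
}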

