render to depth texture on ATi cards
The WGL_ARB_render_texture extension doesn't support the rendering of depth textures, which makes it useless for shadow maps. Nvidia has helpfully provided WGL_NV_render_depth_texture, however ATi doesn't support this.
glCopyTexSubImage2D isn't really good enough. It's slow, since it involves an unnecessary copy, textures are limited to the current window size, and if antialiasing is enabled, the performance drain for rendering a depth texture becomes unacceptable.
Does anyone know if there's a way around this that I haven't yet discovered? If not I may have to contact ATi's developer relations and whinge.
____________________________________________________________
www.elf-stone.com | Automated GL Extension Loading: GLee
Well, what about GL_ARB_depth_texture? It's part of the ARB extensions suite now, but if you can't find it (on older cards), there's always GL_SGIX_depth_texture (the predecessor of GL_ARB_depth_texture). Both extensions add a new depth texture format, which you can specify to glCopyTexImage2D(). This means you don't need to use glReadPixels() to get the depth buffer, as glCopyTexImage2D() retrieves it and makes it a texture automatically…
Coding Stuff -> [ iNsAn1tY Games | DarkVertex | How To Do CSG | Direct3D Vs. OpenGL | Google ]
Fun Stuff -> [ Evil T-Shirts | Stick-Based Comedy | You're Already Here | The Best Film Reviews ]
[edited by - iNsAn1tY on September 1, 2003 5:48:09 AM]
I'm already using glCopyTexSubImage2D() with a depth texture format (from ARB_depth_texture) but it's too slow, particularly in AA modes. I want to render directly to the depth texture.
quote:Original post by benjamin bunny
I'm already using glCopyTexSubImage2D() with a depth texture format (from ARB_depth_texture) but it's too slow, particularly in AA modes. I want to render directly to the depth texture.
Ah, you didn't mention that, but I should have guessed. You're saying that even with ARB_depth_texture, the depth buffer copy is too slow. Yeah, go and give ATi an ear-full...
Coding Stuff -> [ iNsAn1tY Games | DarkVertex | How To Do CSG | Direct3D Vs. OpenGL | Google ]
Fun Stuff -> [ Evil T-Shirts | Stick-Based Comedy | You're Already Here | The Best Film Reviews ]
[edited by - iNsAn1tY on September 1, 2003 11:21:15 AM]
quote:Original post by benjamin bunny
The WGL_ARB_render_texture extension doesn't support the rendering of depth textures, which makes it useless for shadow maps. Nvidia has helpfully provided WGL_NV_render_depth_texture, however ATi doesn't support this.
Does anyone know if there's a way around this that I haven't yet discovered? If not I may have to contact ATi's developer relations and whinge.
Note to self: Do not flame ATI drivers. . . . must . . . control . . . bad attitude about ATI drivers.
If you whinge to them about this issue, please also whinge to them about how the Mobility FireGL 9000 falls to software acceleration on both monitors when using OpenGL in extended desktop mode. Thanks!
Would rendering to a P-Buffer help you? I guess you still need to get the pixels, though.
I have included a link to the tutorial in case this will do you any good:
nVidia P-Buffer Tutorial
Jeeky: WGL_ARB_render_texture uses a pbuffer. Pbuffers are no use unless I can render depth textures with them, which apparently I can't.
quote:You're saying that even with ARB_depth_texture, the depth buffer copy is too slow. Yeah, go and give ATi an ear-full...
Actually the main problem is not the speed of the copy, but the fact that if FSAA is enabled in the driver, shadowmaps are rendered with FSAA, which is much slower, and completely unnecessary. The resolution limit is a problem too.
I'll contact ATi and see what they say.
B, afair you are using ARB_fragment_program anyway? In that case, nobody forces you to actually encode the light depth into a depth texture. You can use a wide variety of more or less obscure encodings, from RGB encodings, over 16bit to floating point single component textures (depending on your hardware support). You adapt the per-pixel shadowmap comparison accordingly, and you're done. It will probably be a bit slower (possibly needing a lookup indirection), but since you're doing the comparison operation manually anyway (see your shadowmap filtering thread), I don't think the drop will be very noticeable.
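The encoding Yann describes can be sketched in plain C. This is only an illustration of the arithmetic, not GL code: the names encode_depth_rgb, decode_depth_rgb and in_shadow are made up for this sketch, and in a real fragment program the decode would be a dot product with (1, 1/256, 1/65536) scaled into the right range.

```c
#include <math.h>

/* Sketch: pack a light-space depth in [0,1) into an ordinary RGB8
   texel, giving 24 bits of depth precision without a depth texture.
   All names here are illustrative, not from any GL API. */

typedef struct { unsigned char r, g, b; } RGB8;

/* Split depth into three 8-bit channels (24-bit fixed point). */
static RGB8 encode_depth_rgb(double depth)
{
    unsigned long fixed = (unsigned long)(depth * 16777216.0); /* 2^24 */
    RGB8 c;
    c.r = (unsigned char)((fixed >> 16) & 0xFF);
    c.g = (unsigned char)((fixed >> 8) & 0xFF);
    c.b = (unsigned char)(fixed & 0xFF);
    return c;
}

/* Reassemble the 24-bit depth from the three channels. */
static double decode_depth_rgb(RGB8 c)
{
    return (c.r * 65536.0 + c.g * 256.0 + c.b) / 16777216.0;
}

/* The manual per-pixel shadow test: the fragment is shadowed if its
   light-space depth lies beyond the stored depth, plus a small bias
   to avoid surface acne. */
static int in_shadow(RGB8 stored, double fragment_depth, double bias)
{
    return fragment_depth > decode_depth_rgb(stored) + bias;
}
```

The round-trip error is at most one part in 2^24, which is the same precision as a 24-bit depth buffer, so the comparison behaves the same as a hardware depth texture lookup would.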
Yann: That might be an option, but my current fragment program code is already at the instruction limit of my card, so I don't think I can push it any further without adding another pass. I'll give it a try though.
You can always do a glCopyTexSubImage2D() from a pbuffer, which lets you use whatever resolution you want as well as control whether AA occurs on the depth map.