Dynamic simple shadow map for ATI?
Hi,
I am looking for a way to implement some dynamic shadow maps for ATI cards.
For nVidia I use the render-to-depth-texture extension, but I couldn't find an equivalent that works on ATI cards.
So if someone has a link to some sample code or knows how to deal with it, that would be great.
Thanks,
-uto-
Hm, that's basically the only way to do the basic shadow maps. Render the depth to a texture, and then project.
It should work on both ATI and nVidia cards, unless you use some specific nVidia extensions. But that shouldn't be necessary because there are ARB extensions that do the job nicely. (WGL_ARB_pbuffer, WGL_ARB_render_texture, GL_ARB_shadow...)
There is a good tutorial that doesn't use any card specific extensions that will get you started, and you can modify it to suit your needs. You can find it here.
Rados
Quote:Original post by rados
... Render the depth to a texture, and then project ...
The problem with ATI cards is that they don't support rendering directly to a depth texture via a pbuffer, so you would need to render it normally and use glCopyTexSubImage2D() to get a depth texture.
A workaround for this, if you are using fragment programs, is to render the depth into a regular RGB texture by packing it into the RGB components. Or, if your hardware supports it, render into a one-channel 16- or 32-bit float texture.
For now that's the best you've got.
Here's a much better one (that uses ARB_shadow) and works on all cards without problems, at delphi3d.net. (It's the very last entry on this page, Aug 11th 2002. I know, it's Delphi code, but it's dead simple to convert to C++ because all the OpenGL calls are identical!)
Thanks for your replies,
Rados:
As Kalidor said, render-to-depth-texture is not supported on ATI cards. I was actually rendering to a pbuffer and copying its depth buffer into a depth texture, as in Paul's tutorial. The issue with this approach is that it's not really suitable for a dynamic scene.
But I am trying to understand why the call to glCopyTexSubImage2D() is 10 times slower if I initialize my texture as:
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_LUMINANCE16, m_width, m_height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0L);
than
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_DEPTH_COMPONENT, m_width, m_height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0L);
(In the first case it copies the frame buffer to the texture; in the second case it should copy the depth buffer to the texture.)
Kalidor:
I'll try to work with your solution, thanks.
Yes, the final outcome and setup are the same, but while Paul's projects are all great, I find they clash with a lot of people's coding styles and contain a lot of unnecessary steps/code. The Delphi demo is short, sweet and self-commenting. It may just be personal preference, but I think the delphi3d.net demo is clearer.
Quote:Original post by uto314
But I am trying to understand why the call glCopyTexSubImage2D() is 10 times slower if I initialize my texture as :
*** Source Snippet Removed ***
than
*** Source Snippet Removed ***
Well... in the first example the OpenGL driver has to convert between two formats on the fly, and that makes the copy much slower.
The best solution for ATI cards seems to be either to render to an R32 texture, or to encode the depth in an RGBA texture... and do the compare yourself. ATI cards lack hardware r-compare. :/
Thanks Gulgi.
As you might have seen, I use rectangle textures.
And I can't perform (I mean, it's not working) a rectangle texture lookup from a GLSL fragment shader with a sampler2DRect and the texture2DRect lookup function on an ATI card. I don't know why, because it's supposed to be supported.
If you have a piece of code that accesses a rectangle texture through a sampler2DRect and works on ATI, it would be very welcome :)!
Thanks,
-uto-
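A minimal sketch of a rectangle-texture lookup in a GLSL fragment shader, assuming ARB_texture_rectangle support in the driver (the uniform names are hypothetical). One common cause of a rect lookup appearing broken is the coordinates: texture2DRect takes unnormalized texel coordinates in [0, width] x [0, height], not the usual [0, 1] range.

```glsl
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect shadowMap;  // hypothetical uniform name
uniform vec2 mapSize;             // texture dimensions in texels

void main()
{
    // gl_TexCoord[0] is assumed to carry normalized coordinates,
    // so scale them up to texel coordinates before the rect lookup.
    vec2 texel = gl_TexCoord[0].st * mapSize;
    gl_FragColor = texture2DRect(shadowMap, texel);
}
```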
This topic is closed to new replies.