render z-value to texture

Started by helenj
9 comments, last by YengaMatiC 18 years, 11 months ago
Hi, I want to render the depth value to a texture using a vertex shader. How can I read from the depth buffer and render it to a texture in Cg? Thanks a lot. helen
Do you really need to do it in Cg? How about this:

// create texture
unsigned int id;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

// to save the depth values to the texture (copies from the current depth buffer)
glBindTexture(GL_TEXTURE_2D, id);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

AxoDosS: Your method isn't quite as fast.

helenj: It needs to be done in the pixel shader, not the vertex shader. Just return the fragment depth.
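A minimal sketch of what that looks like in Cg (my own hypothetical example, not code from this thread), assuming an arbfp1/fp30-style fragment profile where the window-space position is bound to WPOS:

// Hypothetical Cg fragment program: write the fragment's window-space depth
// (wpos.z, which is already in the [0,1] range) into all four color channels.
float4 main(float4 wpos : WPOS) : COLOR
{
    return wpos.zzzz;
}

You would render the scene with this program bound and then get the color buffer into your texture (by copying it, or by rendering to a texture directly).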
I also think it should be implemented with a pixel shader, but I read a SIGGRAPH 2002 sketch paper by Jason L. Mitchell, "Real-Time Image-Space Outlining for Non-Photorealistic Rendering", which says:
For the cases of silhouette and crease outlining, we render the full
scene’s world-space normals and eye-space depths into an RGBA
(nxworld, nyworld, nzworld, deptheye) texture using a vertex shader to
populate the color channels.

I don't quite understand why they used the vertex shader, could someone explain it to me?
Thanks.
I fail to see why you *need* a vertex or fragment shader at all.
Quote: Original post by zedzeek
I fail to see why you *need* a vertex or fragment shader at all.


You don't need a fragment shader, but you do need a vertex shader, since the goal is to encode the values (normx, normy, normz, eyedepth) into the RGBA channels of the buffer. Those values are output into the primary color per-vertex and interpolated across the triangles, filling the buffer with the correct values. You need either a vertex shader for that, or to calculate those values yourself in software and pass them with glColor4f. Obviously it's best to use vertex shaders.

Notice that the eyedepth is not the value used in the depth buffer; it is just the z coordinate of the vertex position in eye space (properly scaled down, I assume, in order to be packed into the [0,1] range).
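For illustration, here is a minimal Cg vertex program along those lines (my own sketch, not the paper's actual code); the model (object-to-world) matrix and the farPlane uniform used to scale the eye-space depth into [0,1] are assumptions, and remapping the normal into [0,1] is just one possible packing convention:

struct appdata
{
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct v2f
{
    float4 position : POSITION;
    float4 color    : COLOR0;
};

v2f main(appdata IN,
         uniform float4x4 modelViewProj,
         uniform float4x4 modelView,
         uniform float4x4 model,      // assumed: object-to-world matrix
         uniform float    farPlane)   // assumed: distance used to normalize eye depth
{
    v2f OUT;
    OUT.position = mul(modelViewProj, IN.position);

    // World-space normal, remapped from [-1,1] to [0,1] so it fits in a color channel.
    float3 nWorld  = normalize(mul((float3x3)model, IN.normal));
    float3 nPacked = nWorld * 0.5 + 0.5;

    // Eye-space depth: the -z of the eye-space position, scaled into [0,1].
    float eyeDepth = -mul(modelView, IN.position).z / farPlane;

    // Pack (nx, ny, nz, eyedepth) into the primary color; the hardware interpolates
    // it across each triangle, filling the RGBA buffer.
    OUT.color = float4(nPacked, eyeDepth);
    return OUT;
}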

Anyway, you can use the render-to-texture extensions to get the buffer into a texture without a copy, but they're a pain in the ass. glCopyTexSubImage2D will work fine unless your program is too performance-demanding.
Quote: Original post by mikeman
glCopyTexSubImage2D will work fine unless your program is too performance-demanding.


As I understand it, the copy call absolutely massacres the X800s and other cards on the same chipset series.
mikeman: Thanks so much for your detailed explanation.
Promit: I am using an nVidia 5700 card; will the glCopyTexSubImage2D call work on that? Anyway, I'll try it later.

Quote: Original post by helenj
I am using an nVidia 5700 card; will the glCopyTexSubImage2D call work on that? Anyway, I'll try it later.


Yes, it will work (I have the same card). There is nothing wrong with using glCopyTexSubImage2D.
what does "populate the color channels" mean here? I don't quite understand

Thanks

This topic is closed to new replies.
