Using an HLSL shader to store a position in a texture.

13 comments, last by MarkusK 18 years, 8 months ago
I'm having a problem with storing a position in a texture through an HLSL shader. Basically I'm doing deferred lighting and need to pass my object's verts in world coords. I think DirectX clamps the values to 0 through 1, and I need them to stay in world coords (any float value). Is there a way I could tell DirectX to do this? Thanks, Brad
--X
1: Texture coordinates are not clamped when the card transfers data between vertex and pixel shaders. You can copy the world-space position to one outgoing texture coordinate register, so it is transferred to the pixel shader with the maximum range and precision that your card can handle.

-and-

2: The receiving surface must be of floating-point format so as to "remember" float values at full range and precision. Also, the intermediate variables in the shaders must be of floating-point type instead of half precision.
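
As a rough sketch of both points together (the names here are only illustrative, not taken from anyone's actual shader), it could look something like this:

float4x4 World;
float4x4 WorldViewProj;

struct VS_OUT
{
    float4 Pos      : POSITION;
    float2 Tex      : TEXCOORD0;
    float4 WorldPos : TEXCOORD1;   // carried to the pixel shader at full float range
};

VS_OUT VS(float4 pos : POSITION, float2 tex : TEXCOORD0)
{
    VS_OUT o;
    o.Pos      = mul(pos, WorldViewProj);
    o.Tex      = tex;
    o.WorldPos = mul(pos, World);  // world-space position, any float value
    return o;
}

struct PS_OUT
{
    float4 Albedo   : COLOR0;      // e.g. a D3DFMT_A8R8G8B8 target
    float4 WorldPos : COLOR1;      // needs a float target (e.g. D3DFMT_A16B16G16R16F)
};

PS_OUT PS(float2 tex : TEXCOORD0, float4 worldPos : TEXCOORD1)
{
    PS_OUT o;
    o.Albedo   = float4(1, 1, 1, 1);  // whatever the material outputs
    o.WorldPos = worldPos;            // survives outside [0,1] only on a float target
    return o;
}

On the application side, the target bound to COLOR1 has to be created as a floating-point render target; an A8R8G8B8 surface will clamp and quantize everything to [0,1].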

Niko Suni

Quote:Original post by Nik02
1: Texture coordinates are not clamped when the card transfers data between vertex and pixel shaders. You can copy the world-space position to one outgoing texture coordinate register, so it is transferred to the pixel shader with the maximum range and precision that your card can handle.


Well, between the vertex shader and the pixel shader I can do that, but when the pixel shader outputs to the 4 render targets, one of them is my world coords, and it is getting clamped to 0 to 1. How do I go about getting it out of the pixel shader and into the texture as a number larger than 1?


Quote:
2: The receiving surface must be of floating-point format so as to "remember" float values at full range and precision. Also, the intermediate variables in the shaders must be of floating-point type instead of half precision.



the targets are of type D3DFMT_A8R8G8B8. Enough precision for now.



EDIT: Ah, I see what you're saying now about the floating point; I got out my ShaderX2 book on that. It recommends D3DFMT_A16B16G16R16F for world coords. The problem is, isn't that 64-bit? And it can't be bigger than the back buffer, which is at 32-bit? And I don't know of a 64-bit format for that... I guess I could use some help here. Thanks again.

I could use one 32-bit format for XY and a 16-bit one for Z... unfortunately that causes a problem: I use all 4 targets already.

[Edited by - xsirxx on August 10, 2005 4:12:02 PM]
--X
Quote:Original post by xsirxx

the targets are of type D3DFMT_A8R8G8B8. Enough precision for now.



Clearly, no. In practice, you will need floating-point targets to accomplish what you want.

Back buffer format has very little to do with possible render target formats. It is entirely possible to create a 128-bit floating point surface, even if the backbuffer is only of 32-bit or even 16-bit integer format (provided the card supports float rt formats altogether).

64-bit floating point (half precision) format may well be precise enough, depending on the scene.

Niko Suni

Quote:Original post by Nik02

Back buffer format has very little to do with possible render target formats. It is entirely possible to create a 128-bit floating point surface, even if the backbuffer is only of 32-bit or even 16-bit integer format (provided the card supports float rt formats altogether).

64-bit floating point (half precision) format may well be precise enough, depending on the scene.


I assumed the depth stencil has to be the same size as the render targets; that's what I thought I read on MSDN. I create the render targets at 32-bit float and I get weird artifacts all over the screen. Not triangle artifacts, I mean weird dots everywhere.

--X
Quote:Original post by xsirxx
I assumed the depth stencil has to be the same size as the render targets; that's what I thought I read on MSDN. I create the render targets at 32-bit float and I get weird artifacts all over the screen. Not triangle artifacts, I mean weird dots everywhere.


I fixed the weird artifacts by creating the depth buffer with D3DFMT_D24X8 instead of D3DFMT_D32 (which kept failing).

Now I need to find a way to get the tangents into a target (no targets left).
--X
Quote:Original post by xsirxx

I assumed the depth stencil has to be the same size as the render targets; that's what I thought I read on MSDN. I create the render targets at 32-bit float and I get weird artifacts all over the screen. Not triangle artifacts, I mean weird dots everywhere.


The "size" as mentioned in MSDN means the width and height of the surfaces in pixels, not the size in bytes per pixel [smile]

Niko Suni

Quote:Original post by xsirxx
Now I need to find a way to get the tangents into a target (no targets left).


You usually don't want to output the tangents in the render target with deferred shading - you only need to output the surface normal at this point, which, in the case of normal mapping, can (has to) be calculated when you output to the offscreen buffers in the first pass.
If you're not using the tangents for normal mapping though, just forget what I said.
Quote:Original post by bleyblue2
You usually don't want to output the tangents in the render target with deferred shading - you only need to output the surface normal at this point, which, in the case of normal mapping, can (has to) be calculated when you output to the offscreen buffers in the first pass.
If you're not using the tangents for normal mapping though, just forget what I said.


My normals are just regular vertex normals... I also have tangents that I want to pass through to do the lighting with, to build tangent space... unless somehow I can get bump mapping and lighting to work without them?
--X
Deferred shading, as you probably know, is done in several passes.
The first pass always outputs position, normal, albedo, etc. for each visible pixel on screen to several off-screen render targets.
The other passes compute lighting for each pixel: they take the previous render targets as input and, for each lit pixel, output that pixel's color to the back buffer.
Let's assume you want to do classic N.L lighting for a pixel.
Whether this pixel is bump-mapped or not, you only need this pixel's normal and the light direction (in the proper spaces) to do your lighting. You don't need any tangent-space information when you already have the pixel's normal (in view or world space, so that you can have a unique light direction vector).
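Roughly, a lighting-pass pixel shader could look like the sketch below; the sampler and variable names are made up for illustration, and it assumes the world-space position and normal were written to float targets in the first pass:

sampler PositionRT : register(s0);  // world-space position from the first pass
sampler NormalRT   : register(s1);  // world-space normal from the first pass
sampler AlbedoRT   : register(s2);

float3 LightPos;
float3 LightColor;

float4 PS_Light(float2 uv : TEXCOORD0) : COLOR0
{
    float3 worldPos = tex2D(PositionRT, uv).xyz;
    // if the normal were packed into an integer target it would need a *2-1 unpack here
    float3 N        = normalize(tex2D(NormalRT, uv).xyz);
    float3 albedo   = tex2D(AlbedoRT, uv).rgb;

    float3 L     = normalize(LightPos - worldPos);
    float  NdotL = saturate(dot(N, L));

    return float4(albedo * LightColor * NdotL, 1.0f);
}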
Therefore, you want to output the normal corresponding to each pixel to your normal offscreen render target (the NRT).
For non-normal-mapped pixels, the normal is interpolated from vertex normals and stored in the NRT (in the first pass!).
For normal-mapped pixels, the normal is also computed in the first pass, so you don't need any tangent-space vectors in the lighting passes. You only need them in the first pass, when you render your objects to the offscreen buffers: you compute the normal there, transforming it from tangent space (your normal texture) to view/world space; the computed view/world-space normal goes to the NRT, but remember that lighting is not applied yet.
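And a sketch of the corresponding first-pass pixel shader (again, the names and the exact G-buffer layout are only illustrative, not from anyone's real code):

sampler NormalMap : register(s0);

struct PS_GBUF_OUT
{
    float4 WorldPos : COLOR0;
    float4 Normal   : COLOR1;   // the NRT
    float4 Albedo   : COLOR2;
};

PS_GBUF_OUT PS_GBuffer(float2 tex       : TEXCOORD0,
                       float3 normalWS  : TEXCOORD1,   // world-space vertex normal
                       float3 tangentWS : TEXCOORD2,   // world-space tangent
                       float3 binormWS  : TEXCOORD3,   // world-space binormal
                       float4 worldPos  : TEXCOORD4)
{
    // sample the tangent-space normal map and expand from [0,1] to [-1,1]
    float3 nTS = tex2D(NormalMap, tex).xyz * 2.0f - 1.0f;

    // rows of the TBN matrix: tangent, binormal, normal (all already in world space)
    float3x3 TBN = float3x3(tangentWS, binormWS, normalWS);

    PS_GBUF_OUT o;
    o.WorldPos = worldPos;
    o.Normal   = float4(normalize(mul(nTS, TBN)), 0.0f);  // world-space normal into the NRT
    o.Albedo   = float4(1, 1, 1, 1);                       // material color goes here
    return o;
}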

I hope I've explained it enough; good luck anyway.

