
DaJudge

Pixel shader problem (probably simple)


Hi all,
I just got my hands on a GeForce FX, which (embarrassing but true) is my first graphics board that supports pixel shaders. Naturally I tried to write my first pixel shader, and I've run into a probably simple problem. I have a vert2frag struct containing a pos element that binds to the POSITION semantic (which, as far as I understand, is the clip-space position). I saw this all over the NVIDIA samples, so I thought it shouldn't be a problem at all. However, the compiler tells me that this binding is not visible in the profile I selected. I played around with different ps profiles, which always resulted in the same compiler output. I am fairly comfortable with vertex shaders, since I have been using them for quite some time now, and I just don't see what the problem with this pixel shader might be. I googled around a bit but did not find anything... Maybe one of you can help me.

Thanks,
Alex

Sure...

nothing special.


struct v2f
{
    float4 pos : POSITION;
};

float4 main(v2f IN) : COLOR
{
    return IN.pos;
}


As stated, this is the first pixel shader I have ever written, so I started with the KISS principle in mind...

The compiler command-line is:

cgc -profile ps_1_1 -q $(InputPath)

Alexander Stockinger
Programmer

Hi,
As far as I know, you cannot access the POSITION element in the pixel shader.
You have to export the position from the vertex program as a texture coordinate as well.



struct v2f
{
    float4 pos  : POSITION;
    float4 tpos : TEXCOORD0;
};

float4 main(v2f IN) : COLOR
{
    return IN.tpos;
}



I don't know if this exact code works, but I expect it to.
The only thing you have to do in your vertex program is to copy pos to tpos.
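
For reference, a minimal vertex shader along those lines might look something like this (untested sketch; the a2v input struct and the ModelViewProj parameter are just illustrative assumptions, not taken from the posts above):

struct a2v
{
    float4 pos : POSITION;
};

struct v2f
{
    float4 pos  : POSITION;
    float4 tpos : TEXCOORD0;
};

v2f main(a2v IN, uniform float4x4 ModelViewProj)
{
    v2f OUT;

    // Transform to clip space as usual...
    OUT.pos = mul(ModelViewProj, IN.pos);

    // ...and duplicate the clip-space position into a texture
    // coordinate so the pixel shader can actually read it.
    OUT.tpos = OUT.pos;

    return OUT;
}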

EDIT: tags, typos

[edited by - hWnd on October 7, 2003 4:30:26 PM]

Oh, that actually makes sense: a pixel shader just gets texture coordinates and colors as input. Sounds reasonable. And the struct definition is still allowed because I can write a vertex shader and a pixel shader in the same file and use the same structure as the vertex shader's output and the pixel shader's input, right?

Cheers,
Alex

Okay, now that I've sorted that out, I face my next problem:

I'm trying to output an image that encodes the depth of each pixel in its RGB values. I want to do depth shadow mapping, and the purpose of this shader combination is to verify and visualize the depth rendering of the light's view frustum (a spotlight). I did some testing, but nothing really gave me the results I expected...

Currently I use the following combination of shaders:

Vertex shader (excerpt):

// Encode the (post-divide) depth of the vertex into the RGB channels
pos = pos / pos.w;
pos.x = min(pos.z-.66, .33)*3;
pos.y = min(pos.z-.33, .33)*3;
pos.z = min(pos.z-0.0, .33)*3;
OUT.color.rgb = pos;
OUT.color.a = 1;


The pixel shader then just passes the interpolated color straight through to the output.

Unfortunately, all I get is a white rendering of the scene, except for VERY near pixels, which come out in some (not quite right looking) colors. Not exactly what I want.

Any ideas?

Please keep in mind that I am very new to pixel shaders...

Thanks,
Alex

[edited by - dajudge on October 8, 2003 1:22:14 PM]

Hi again,
I think it would be much easier to use a backbuffer format like D3DFMT_R16F or a luminance format, where you only have to output to one channel (assuming you are using DirectX, but this should work in GL too).
Then you do not have to do this ugly encoding, and it could well end up being much faster. Keep in mind, though, that floating-point formats can be very slow and are NOT supported by the GFFX's driver, if I remember correctly. Maybe a format like D3DFMT_G16R16 would do quite well, too.
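
As a rough sketch of that idea (not tested code; it assumes the vertex shader has already written the normalized depth into TEXCOORD0 and that one of the single-channel formats mentioned above is actually available as a render target), the pixel shader could be as simple as:

struct v2f
{
    float4 pos   : POSITION;
    float4 depth : TEXCOORD0;   // x = normalized depth, filled in by the vertex shader
};

float4 main(v2f IN) : COLOR
{
    // Write the depth value straight into the single channel of the target.
    return float4(IN.depth.x, 0, 0, 1);
}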

Unfortunately that is not possible. I am working with DirectX 8 (for XBOX compatibility), so floating-point surface formats are out of the question.

And the maximum render target format the GeForce FX supports there is D3DFMT_A8R8G8B8...

Any ideas? This problem is really starting to bother me. We have GPUs with transistor counts four times that of a P4, yet we can't display greyscale images at a reasonable color depth?
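
For what it's worth, the plain greyscale variant of the vertex shader excerpt from earlier in the thread would look roughly like this (an illustrative sketch only; it sidesteps the channel splitting entirely, at the cost of being limited to the 8 bits of precision a single channel of an A8R8G8B8 target gives you):

struct a2v
{
    float4 pos : POSITION;
};

struct v2f
{
    float4 pos   : POSITION;
    float4 color : COLOR0;
};

v2f main(a2v IN, uniform float4x4 ModelViewProj)
{
    v2f OUT;
    OUT.pos = mul(ModelViewProj, IN.pos);

    // Normalized depth after the perspective divide, written as greyscale.
    float d = OUT.pos.z / OUT.pos.w;
    OUT.color.rgb = float3(d, d, d);
    OUT.color.a = 1;

    return OUT;
}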
